Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen TU Dortmund University, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany
5966
Tomasz Janowski Hrushikesha Mohanty (Eds.)
Distributed Computing and Internet Technology 6th International Conference, ICDCIT 2010 Bhubaneswar, India, February 15-17, 2010 Proceedings
Volume Editors Tomasz Janowski Center for Electronic Governance United Nations University International Institute for Software Technology P.O. Box 3058, Macau E-mail:
[email protected] Hrushikesha Mohanty University of Hyderabad Department of Computer and Information Sciences Central University PO, AP, Hyderabad 500 046, India E-mail:
[email protected]
Library of Congress Control Number: 2009943913
CR Subject Classification (1998): C.2.4, D.4.2, D.4.3, D.4.7, H.2.4, H.5, K.6.5
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
ISSN 0302-9743
ISBN-10 3-642-11658-2 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-11658-2 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12984480 06/3180 543210
Preface
This volume contains the papers presented at the 6th International Conference on Distributed Computing and Internet Technology (ICDCIT 2010) held during February 15–17, 2010 in Bhubaneswar, India. The conference was organized by Kalinga Institute of Industrial Technology (KIIT) University, Bhubaneshwar, India, www.kiit.org, and co-organized by the Center for Electronic Governance at United Nations University - International Institute for Software Technology (UNU-IIST-EGOV), Macao, www.egov.iist.unu.edu. In the tradition of the ICDCIT conference series, ICDCIT 2010 welcomed presentations of research ideas and results in theory, methodology and applications of distributed computing and Internet technology. In addition, the conference emphasized that research in this area can play an important role in building a foundation for the development of e-Society Applications (e-Applications) and at the same time, that e-Applications can provide the relevance and context for such research. Establishing a connection between foundational and applied research in distributed systems and Internet technology with software and services that enable e-Applications was a key feature of ICDCIT 2010. A total of 91 papers were submitted for ICDCIT 2010. Each paper was reviewed by at least two members of the Program Committee or additional referees. As a result of the review process, 12 papers were accepted as long papers, 9 as short papers and 5 as extended abstracts. All accepted papers were divided into five categories— Networking, Grid Computing and Web Services, Internet Technology and Distributed Computing, Software Engineering and Secured Systems, and Societal Applications—and included as part of the proceedings. In addition, the conference enjoyed five invited talks by Jim Davies, Manish Gupta, Maurice Herlihy, Krithi Ramamritham and R. K. Shyamasundar, with four invited papers received. The current proceedings are divided into six corresponding sections: 1. Section 1 - Invited Papers includes four papers. The paper “Transactional Memory Today” by Maurice Herlihy presents the author’s perspective on the nature of the transactional memory mechanism and the problem it is expected to address. The paper “Maintaining Coherent Views over Dynamic Distributed Data” by Krithi Ramamritham presents a scalable technique for executing continuous queries over dynamic data while making sure that correct results are always presented to users. The paper “Malware: From Modelling to Practical Detection” by R.K. Shyamasundar, co-authored with Harshit Shah and N.V. Narendra Kumar, surveys a theory behind malware in order to identify the challenges behind its detection, and proposes a new approach to malware detection. The paper “Semantic Frameworks—Meanings in the Architecture” by Jim Davies, co-authored with Jeremy Gibbons,
explains how the use of semantic technologies and model-driven engineering can greatly reduce the cost of consistency and coordination in software development, in system integration and in continuous interactions with users. 2. Section 2 - Networking comprises five papers. The paper “Fuzzy-Controlled Source-Initiated Multicasting (FSIM) in Ad Hoc Networks” by Anuradha Banerjee and Paramartha Dutta proposes an intelligent multicast routing scheme that takes into account estimated network evolution in terms of residual energy, link stability, position of receivers in the multicast tree, etc. The paper “On Optimal Space Tessellation with Deterministic Deployment for Coverage in Three-Dimensional Wireless Sensor Networks” by Manas Kumar Mishra and M. M. Gore identifies a model for optimal coverage in three-dimensional wireless sensor networks, with the best trade-off between performance and the number of nodes. The paper “Seamless Handoff between IEEE 802.11 and GPRS Networks” by Dhananjay Kotwal Maushumi Barooah and Sukumar Nandi proposes a new scheme that provides seamless mobility between GPRS and IEEE 802.11 (WiFi) networks. The paper “A Tool to Determine Strategic Location and Ranges of Nodes for Optimum Deployment of Wireless Sensor Network” by Amrit Kumar, Mukul Kumar and Kumar Padmanabh describes a tool developed by the authors for strategic localization and optimization of wireless sensor networks. The paper “An Efficient Hybrid Data-Gathering Scheme in Wireless Sensor Networks” by Ayon Chakraborty, Swarup Kumar Mitra and M.K. Naskar proposes a new data-gathering scheme for remote wireless sensor networks that offers the best energy and delay performance compared with existing popular schemes. 3. Section 3 - Grid Computing and Web Services comprises five papers. The paper “Introducing Dynamic Ranking on Web Pages Based on Multiple Ontology Supported Domains” by Debajyoti Mukhopadhyay, Anirban Kundu and Sukanta Sinha proposes a data structure and a pair of algorithms for efficient ranking and retrieval of Web pages. The paper “Multi-criteria Service Selection with Optimal Stopping in Dynamic Service-Oriented Systems” by Oliver Skroch presents a service selection algorithm for dynamic service composition, which is optimized on service quality and cost. The paper “Template-Based Process Abstraction for Reusable Interorganizational Applications in RESTful Architecture” by Cheng Zhu, Hao Yu, Hongming Cai and Boyi Xu proposes a template-based mechanism to abstract the similarities between business processes based on REST (Representational State Transfer) architectures. The paper “Enhancing the Hierarchical Clustering Mechanism of Storing Resources Security Policies in a Grid Authorization System” by Mustafa Kaiiali, Rajeev Wankar, C.R. Rao and Arun Agarwal enhances the use of the hierarchical clustering mechanism to store security policies for the resources available on dynamically changing grids. The paper “A Framework for Web-Based Negotiation” by Hrushikesha Mohanty, Rajesh Kurra and R.K. Shyamasundar proposes a process view on Web-based negotiations, and presents an implementation framework for such processes.
4. Section 4 - Internet Technology and Distributed Computing comprises seven papers. The paper “Performance Analysis of a Renewal Input Bulk Service Queue with Accessible and Non-accessible Batches” by Yesuf Obsie Mussa and P. Vijaya Laxmi conducts performance analysis of finite buffer queues with accessible and non-accessible batches wherein inter-arrival time of customers and service times of batches are, respectively, arbitrarily and exponentially distributed. The paper “A Distributed Algorithm for Pattern Formation by Autonomous Robots, with No Agreement on Coordinate Compass” by Swapnil Ghike and K. Mukhopadhyaya addresses the problem of coordinating a set of autonomous, mobile robots for cooperatively performing a task, proposing a distributed algorithm for pattern formation when no agreement exists on coordinate axes. The paper “Gathering Asynchronous Transparent Fat Robots” by Sruti Gan Chaudhuri and Krishnendu Mukhopadhyaya proposes a new distributed algorithm for gathering a set of n (n ≥ 5) autonomous, mobile robots into a small region, assuming that the robots are transparent in order to achieve full visibility. The paper “Development of Generalized HPC Simulator” by W. Hurst, S. Ramaswamy, R. Lenin and D. Hoffman presents preliminary efforts to design an open-source, adaptable, free-standing and extendible high-performance computer simulator. The paper “Performance Analysis of Finite Buffer Queueing System with Multiple Heterogeneous Servers” by C. Misra and P.K. Swain conducts performance analysis of a finite buffer steady-state queueing system for three heterogeneous servers. The paper “Finite Buffer Controllable Single and Batch Service Queues” by J.R. Mohanty conducts performance analysis of a finite buffer discrete-time single and batch service queueing system, based on the use of recursive methods. The paper “Enhanced Search in Peerto-Peer Networks Using Fuzzy Logic” by Sirish Kumar Balaga, Haribabu K and Chittaranjan Hota proposes a fuzzy logic-based lookup algorithm for peer-to-peer file sharing networks, where each node is assigned probabilities based on its content. 5. Section 5 - Software Engineering and Secured Systems comprises seven papers. The paper “UML-Compiler: A Framework for Syntactic and Semantic Verification of UML Diagrams” by Jayeeta Chanda, Ananya Kanjilal and Sabnam Sengupta proposes a UML compiler that takes a context-free grammar for UML diagrams, and verifies the syntactic correctness of individual diagrams and their consistency with other diagrams, focusing on class and sequence diagrams. The paper “Evolution of Hyperelliptic Curve Cryptosystems” by Kakali Chatterjee and Daya Gupta explores efficient methods for selecting secure hyperelliptic curve cryptosystems and for carrying out fast operations on them. The paper “Reliability Improvement Based on Prioritization of Source Code” by Mitrabinda Ray and Durga Prasad Mohapatra proposes a metric to compute the potential of an object to cause failures in an object-oriented program, in order to prioritize the testing of such objects depending on their criticality. The paper “Secure Dynamic Identity-Based Remote User Authentication Scheme” by Sandeep K. Sood, Anil K. Sarje
and Kuldip Singh demonstrates new vulnerabilities of a well-known scheme for efficient smart card-based remote user authentication, and proposes an improved scheme to address them. The paper “Theoretical Notes on Regular Graphs as Applied to Optimal Network Design” by Sanket Patil and Srinath Srinivasa presents some theoretical results concerning regular graphs and their applications to the problem of optimal network design. The paper “Formal Approaches to Location Management in Mobile Communications” by Juanhua Kang, Qin Li, Huibiao Zhu and Wenjuan Wu presents a formal model of mobile communication systems, and carries out formal analysis of two properties related to location management on it. The paper “Automated Test Scenario Selection Based on Levenshtein Distance” by Sapna P.G. and Hrushikesha Mohanty proposes a method for selecting test scenarios generated from UML activity diagrams using the Levenshtein distance. 6. Section 6 - Societal Applications comprises two papers. The paper “Study of Diffusion Models in an Academic Social Network” by Vasavi Junapudi, Gauri K. Udgata and Siba K. Udgata proposes a framework for determining the influence of nodes in an academic social network, and the spread of such influence. The paper “First Advisory and Real-Time Health Surveillance to Reduce Maternal Mortality Using Mobile Technology” by Satya Swarup Samal, Arunav Mishra, Sidheswar Samal, J.K. Pattnaik and Prachet Bhuyan presents a risk assessment method to provide advisory and referral services to patients in rural areas through SMS or WAP devices. Many people and organizations contributed to the ICDCIT 2010 conference. We wish to thank Achuyta Samanta, Founder, KIIT University, for his continuing support of the conference. We also wish to thank the General Co-chairs and all members of the Advisory and Conference Committees for their support and help at various stages of the conference. Our most sincere thanks go to the Program Committee and additional reviewers whose cooperation in carrying out quality reviews was critical for establishing a strong conference program. We express our sincere thanks to the invited speakers—Jim Davies, Manish Gupta, Maurice Herlihy, Krithi Ramamritham and R. K. Shyamasundar—for their contributions to the program. We also sincerely thank Rilwan Basanya for his help in the editing process. We are particularly grateful for the financial and logistics support of KIIT University and UNU-IIST for hosting the conference, and we acknowledge the provision of infrastructural support to carry out this editorial work by the University of Hyderabad and UNU-IIST. We also remember, with gratitude, the assistance provided by our students: Supriya Vaddi, Rajesh Kurra and Venkatswamy. Lastly, we would like to thank Springer for publishing the ICDCIT proceedings in its Lecture Notes in Computer Science series. February 2010
Tomasz Janowski Hrushikesha Mohanty
Organization
ICDCIT 2010 was organized by Kalinga Institute of Industrial Technology (KIIT) University, Bhubaneshwar, India, www.kiit.org, and co-organized by the Center for Electronic Governance at United Nations University - International Institute for Software Technology (UNU-IIST-EGOV), Macao, www.egov.iist.unu.edu.
Patrons
Achuyta Samanta, KIIT University, India
Advisory Committee
Maurice Herlihy, Brown University, USA
Gerard Huet, INRIA, France
David Peleg, Weizmann Institute of Science, Israel
R.K. Shyamasundar, TIFR, India
Conference Committee
Jim Davies, University of Oxford, UK (General Co-chair)
S. C. De Sarkar, KIIT University, India (General Co-chair)
Tomasz Janowski, United Nations University, Macao (Program Co-chair)
Hrushikesha Mohanty, University of Hyderabad, India (Program Co-chair)
D. N. Dwivedy, KIIT University, India (Finance Chair)
Samaresh Mishra, KIIT University, India (Organization Chair)
Animesh Tripathy, KIIT University, India (Publicity Chair)
Program Committee Arun Agarwal Rahul Banerjee Nikolaj Bjorner Goutam Chakraborty Antonio Cerone Venkatesh Choppella Mainak Chaudhuri Yannis Charalabidis Zamira Dzhusupova Gillan Dobbie Elsa Estevez Simeon Fong
University of Hyderabad, India BITS Pilani, India Microsoft Corporation, USA Iwate Pref. University, Japan United Nations University, Macao IIITM-K Thiruvananthapuram, India IIT, Kanpur, India National Technical University of Athens, Greece United Nations University, Macao Univeristy of Auckland, New Zealand United Nations University, Macao University of Macau, Macao
Pablo Fillottrani N. Ganguly Manoj M Gore Veena Goswami Diganta Goswami Manish Gupta Steve Harris Chittaranjan Hota Dang Van Hung Kamal Karlapalem Michiharu Kudo Sandeep Kulakarni Vijay Kumar Ralf Klischewski Padmanabhan Krishnan Euripidis Loukis Anirban Mahanti Sanjay Madria Rajib Mall Pramod K Meher Debajyoti Mukhopadhyay G.B. Mund Sagar Naik Adegboyega Ojo Manas Ranjan Patra Srinivas Padmanabhuni Radha Krishna Pisipati Anupama Potluri Sirni Ramaswamy R. Ramanujam Abhik Roychoudhury V.N. Sastry Vivek Sarkar Pallab Saha Jaydip Sen Ashutosh Saxena Andrzej Szepietowski Maurice Tchuente A. Min Tjoa O.P. Vyas Krzysztof Walkowiak
Universidad Nacional del Sur, Argentina IIT, Kharagpur, India MNIT, Allahabad, India KIIT University, India IIT, Guwahati, India IBM Research Lab, India University of Oxford, UK BITS Pilani, India Vietnam National University, Vietnam IIIT Hyderabad, India IBM Research, Japan Michigan State University, USA University of Missouri - Kansas City, USA German University in Cairo, Egypt Bond University, Australia University of the Aegean, Greece NCIT, Australia Missouri Univ. of Science and Technology, USA IIT Kharagpur, India NTU, Singapore Calcutta Business School, India KIIT University, India University of Waterloo, Canada United Nations University, Macao Berhampur University, India Infosys, India Infosys, India University of Hyderabad, India University of Arkansas, USA IMSC Chennai, India National University of Singapore, Singapore IDRBT, India Rice University, USA National University of Singapore, Singapore TCS, India Infosys, India University of Gdansk, Poland University of Yaounde I, Cameroon Vienna University of Technology, Austria Ravishankar University, India Wroclaw University of Technology, Poland
Additional Referees
Prasant K. Pattnaik
Srinivas Prasad
Surampudi Bapi Raju
C. Raghavendra Rao
Tripathy Somanath
Ashok Singh
Siba K. Udgata
Table of Contents
Section 1 – Invited Papers
Transactional Memory Today . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maurice Herlihy
1
Maintaining Coherent Views over Dynamic Distributed Data . . . . . . . . . . Krithi Ramamritham
13
Malware: From Modelling to Practical Detection . . . . . . . . . . . . . . . . . . . . . R.K. Shyamasundar, Harshit Shah, and N.V. Narendra Kumar
21
Semantic Frameworks—Meanings in the Architecture . . . . . . . . . . . . . . . . . Jim Davies and Jeremy Gibbons
40
Section 2 – Networking
Fuzzy-Controlled Source-Initiated Multicasting (FSIM) in Ad Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anuradha Banerjee and Paramartha Dutta
55
On Optimal Space Tessellation with Deterministic Deployment for Coverage in Three-Dimensional Wireless Sensor Networks . . . . . . . . . . . . . Manas Kumar Mishra and M.M. Gore
72
Seamless Handoff between IEEE 802.11 and GPRS Networks . . . . . . . . . . Dhananjay Kotwal, Maushumi Barooah, and Sukumar Nandi
84
A Tool to Determine Strategic Location and Ranges of Nodes for Optimum Deployment of Wireless Sensor Network . . . . . . . . . . . . . . . . . . . Amrit Kumar, Mukul Kumar, and Kumar Padmanabh
91
An Efficient Hybrid Data-Gathering Scheme in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ayon Chakraborty, Swarup Kumar Mitra, and M.K. Naskar
98
Section 3 – Grid Computing and Web Services
Introducing Dynamic Ranking on Web Pages Based on Multiple Ontology Supported Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Debajyoti Mukhopadhyay, Anirban Kundu, and Sukanta Sinha
104
Multi-criteria Service Selection with Optimal Stopping in Dynamic Service-Oriented Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Oliver Skroch
110
Template-Based Process Abstraction for Reusable Inter-organizational Applications in RESTful Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cheng Zhu, Hao Yu, Hongming Cai, and Boyi Xu
122
Enhancing the Hierarchical Clustering Mechanism of Storing Resources’ Security Policies in a Grid Authorization System . . . . . . . . . . . . . . . . . . . . . Mustafa Kaiiali, Rajeev Wankar, C.R. Rao, and Arun Agarwal
134
A Framework for Web-Based Negotiation . . . . . . . . . . . . . . . . . . . . . . . . . . . Hrushikesha Mohanty, Rajesh Kurra, and R.K. Shyamasundar
140
Section 4 – Internet Technology and Distributed Computing
Performance Analysis of a Renewal Input Bulk Service Queue with Accessible and Non-accessible Batches . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yesuf Obsie Mussa and P. Vijaya Laxmi
152
A Distributed Algorithm for Pattern Formation by Autonomous Robots, with No Agreement on Coordinate Compass . . . . . . . . . . . . . . . . . Swapnil Ghike and Krishnendu Mukhopadhyaya
157
Gathering Asynchronous Transparent Fat Robots . . . . . . . . . . . . . . . . . . . . Sruti Gan Chaudhuri and Krishnendu Mukhopadhyaya
170
Development of Generalized HPC Simulator . . . . . . . . . . . . . . . . . . . . . . . . . W. Hurst, S. Ramaswamy, R. Lenin, and D. Hoffman
176
Performance Analysis of Finite Buffer Queueing System with Multiple Heterogeneous Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C. Misra and P.K. Swain
180
Finite Buffer Controllable Single and Batch Service Queues . . . . . . . . . . . J.R. Mohanty
184
Enhanced Search in Peer-to-Peer Networks Using Fuzzy Logic . . . . . . . . . Sirish Kumar Balaga, K. Haribabu, and Chittaranjan Hota
188
Section 5 – Software Engineering: Secured Systems
UML-Compiler: A Framework for Syntactic and Semantic Verification of UML Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jayeeta Chanda, Ananya Kanjilal, and Sabnam Sengupta
194
Evolution of Hyperelliptic Curve Cryptosystems . . . . . . . . . . . . . . . . . . . . . Kakali Chatterjee and Daya Gupta
206
Reliability Improvement Based on Prioritization of Source Code . . . . . . . Mitrabinda Ray and Durga Prasad Mohapatra
212
Secure Dynamic Identity-Based Remote User Authentication Scheme . . . Sandeep K. Sood, Anil K. Sarje, and Kuldip Singh
224
Theoretical Notes on Regular Graphs as Applied to Optimal Network Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sanket Patil and Srinath Srinivasa
236
Formal Approaches to Location Management in Mobile Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juanhua Kang, Qin Li, Huibiao Zhu, and Wenjuan Wu
243
Automated Test Scenario Selection Based on Levenshtein Distance . . . . . Sapna P.G. and Hrushikesha Mohanty
255
Section 6 – Societal Applications
Study of Diffusion Models in an Academic Social Network . . . . . . . . . . . . Vasavi Junapudi, Gauri K. Udgata, and Siba K. Udgata
267
First Advisory and Real-Time Health Surveillance to Reduce Maternal Mortality Using Mobile Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . Satya Swarup Samal, Arunav Mishra, Sidheswar Samal, J.K. Pattnaik, and Prachet Bhuyan
279
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
283
Transactional Memory Today

Maurice Herlihy
Computer Science Department
Brown University
Providence, RI, USA 02912
Abstract. The term “Transactional Memory” was coined back in 1993, but even today, there is a vigorous debate about what it means and what it should do. This debate sometimes generates more heat than light: terms are not always well-defined and criteria for making judgments are not always clear. This article will try to impose some order on the conversation. TM itself can encompass hardware, software, speculative lock elision, and other mechanisms. The benefits sought encompass simpler implementations of highly-concurrent data structures, better software engineering for concurrent platforms and enhanced performance.
1 Introduction
In 1993, Eliot Moss and I wrote a paper entitled Transactional Memory: Architectural Support for Lock-Free Data Structures, which appeared in that year's International Symposium on Computer Architecture (ISCA) [16]. This paper mostly passed unnoticed. As illustrated in Figure 1, it had few citations for the next ten years. Then, around 2005, something changed. Today, a Google™ search for "Transactional Memory" yields about 80,000 hits, while Bing™ yields about 2.7 million hits. This article addresses two questions: what transactional memory (TM) is, and which problems it is expected to address. This article reflects the author's opinions only. It makes no pretense of being objective, nor of providing a comprehensive survey of current work. Readers interested in up-to-date transactional memory research should consult the Transactional Memory Online web page at: http://www.cs.wisc.edu/trans-memory/
2 Background
The major chip manufacturers have, for the time being, given up trying to make processors run faster. Moore’s law has not been repealed: each year, more and more transistors fit into the same space, but their clock speed cannot be increased without overheating. Instead, attention has turned toward chip multiprocessing (CMP), in which multiple computing cores are included on each processor chip.
Supported by NSF Grant 0811289.
Fig. 1. Yearly citation count for Herlihy and Moss [16]
In the medium term, advances in technology will provide increased parallelism, but not increased single-thread performance. As a result, system designers and software engineers can no longer rely on increasing clock speed to hide software bloat. Instead, they must learn to make more effective use of increasing parallelism. This adaptation will not be easy. In today's programming practices, programmers typically rely on combinations of locks and conditions, such as monitors, to prevent concurrent access by different threads to the same shared data. While this approach allows programmers to treat sections of code as "atomic", and thus simplifies reasoning about interactions, it suffers from a number of severe shortcomings. First, programmers must decide between coarse-grained locking, in which a large data structure is protected by a single lock, and fine-grained locking, in which a lock is associated with each component of the data structure. Coarse-grained locking is simple, but permits little or no concurrency, thereby preventing the program from exploiting multiple processing cores. By contrast, fine-grained locking is substantially more complicated because of the need to ensure that threads acquire all necessary locks (and only those, for good performance), and because of the need to avoid deadlock when acquiring multiple locks. The decision is further complicated by the fact that the best engineering solution may be platform-dependent, varying with different machine sizes, workloads, and so on, making it difficult to write code that is both scalable and portable. Second, conventional locking provides poor support for code composition and reuse. For example, consider a lock-based queue that provides atomic enq() and
deq() methods. Ideally, it should be easy to move an element atomically from one queue to another, but this kind of composition simply does not work. If the queue methods synchronize internally, then there is no way to acquire and hold both locks simultaneously. If the queues export their locks, then modularity and safety are compromised. Finally, such basic issues as the mapping from locks to data, that is, which locks protect which data, and the order in which locks must be acquired and released, are all based on convention, and violations are notoriously difficult to detect and debug. For these and other reasons, today’s software practices make concurrent programs too difficult to develop, debug, understand, and maintain.
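To make the composition difficulty concrete, consider the coarse-grained locked queue sketched below. This is an illustrative example added here (not from the paper), in ordinary Java: each method is atomic on its own, but the transfer between two queues is not, because the queues' internal locks cannot be held across both calls without exporting them.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative coarse-grained locked queue; names are ours.
    class LockedQueue<T> {
      private final Deque<T> items = new ArrayDeque<>();

      public synchronized void enq(T x) { items.addLast(x); }
      public synchronized T deq() { return items.pollFirst(); }

      // NOT atomic: between deq() and enq() the element is in neither queue,
      // so another thread can observe (or interleave with) the intermediate state.
      static <T> void transfer(LockedQueue<T> from, LockedQueue<T> to) {
        T x = from.deq();
        if (x != null) to.enq(x);
      }
    }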
3 Transactional Model
A transaction is a sequence of steps executed by a single thread. Transactions are atomic: each transaction either commits (it takes effect) or aborts (its effects are discarded). Transactions are linearizable [17]: they appear to take effect in a one-at-a-time order. Transactional memory (TM) supports a computational model in which each thread announces the start of a transaction, executes a sequence of operations on shared objects, and then tries to commit the transaction. If the commit succeeds, the transaction's operations take effect; otherwise, they are discarded. Sometimes we refer to these transactions as memory transactions. Memory transactions satisfy the same formal serializability and atomicity properties as the transactions used in conventional database systems, but they are intended to address different problems. Unlike database transactions, memory transactions are short-lived activities that access a relatively small number of objects in primary memory. Database transactions are persistent: when a transaction commits, its changes are backed up on a disk. Memory transactions need not be persistent, and involve no explicit disk I/O. To illustrate why memory transactions are attractive from a software engineering perspective, consider the problem of constructing a concurrent FIFO queue that permits one thread to enqueue items at the tail of the queue at the same time another thread dequeues items from the head of the queue, at least while the queue is non-empty. Any problem so easy to state, and that arises so naturally in practice, should have an easily-devised, understandable solution. In fact, solving this problem with locks is quite difficult. In 1996, Michael and Scott published a clever and subtle solution [24]. It speaks poorly for fine-grained locking as a methodology that solutions to such simple problems are challenging enough to be publishable. By contrast, it is almost trivial to solve this problem using transactions. Figure 2 shows how the queue's enqueue method might look in a language that provides direct support for transactions (for example, see Harris [13]). It consists of little more than enclosing sequential code in a transaction block. In practice, of course, a complete implementation would include more details (such as how to respond to an empty queue), but even so, this concurrent queue implementation is a remarkable achievement: it is not, by itself, a publishable result.

    class Queue<T> {
      QNode head;
      QNode tail;
      public void enq(T x) {
        atomic {
          QNode q = new QNode(x);
          tail.next = q;
          tail = q;
        }
      }
      ...
    }

Fig. 2. Transactional queue code fragment
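By contrast with the lock-based version, composing operations on two transactional queues requires nothing more than nesting both calls inside one atomic block. The sketch below uses the same hypothetical atomic-block syntax as Figure 2 and assumes a deq() method symmetric to enq(); it is an illustration added here, not part of the original figure.

    <T> void transfer(Queue<T> from, Queue<T> to) {
      atomic {
        T x = from.deq();   // both operations take effect together,
        to.enq(x);          // or neither does
      }
    }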
4 Hardware Transactional Memory
Most hardware transactional memory (HTM) proposals are based on straightforward modifications to standard multiprocessor cache-coherence protocols. When a thread reads or writes a memory location on behalf of a transaction, that cache entry is flagged as being transactional. Transactional writes are accumulated in the cache or write buffer, but are not written back to memory while the transaction is active. If another thread invalidates a transactional entry, a data conflict has occurred, and that transaction is aborted and restarted. If a transaction finishes without having had any of its entries invalidated, then the transaction commits by marking its transactional entries as valid or as dirty, and allowing the dirty entries to be written back to memory in the usual way.
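The bookkeeping described above can be caricatured in plain software. The toy model below is only an illustration added here: real HTMs keep these flags in cache lines and detect conflicts through the coherence protocol, and all names are ours, not an actual hardware interface.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Toy model of one transaction's HTM-style bookkeeping.
    class ToyHtmTransaction {
      private final Map<Long, Long> writeBuffer = new HashMap<>(); // addr -> buffered value
      private final Set<Long> readSet = new HashSet<>();
      private boolean aborted = false;

      long read(Memory mem, long addr) {
        if (writeBuffer.containsKey(addr)) return writeBuffer.get(addr);
        readSet.add(addr);                 // flag the entry as transactionally read
        return mem.load(addr);
      }

      void write(long addr, long value) {
        writeBuffer.put(addr, value);      // accumulated, not written back yet
      }

      // Invoked when another thread's write invalidates one of our entries.
      void onRemoteInvalidate(long addr) {
        if (readSet.contains(addr) || writeBuffer.containsKey(addr)) {
          aborted = true;                  // data conflict: abort and restart
        }
      }

      boolean commit(Memory mem) {
        if (aborted) return false;
        writeBuffer.forEach(mem::store);   // dirty entries written back as usual
        return true;
      }
    }

    interface Memory {
      long load(long addr);
      void store(long addr, long value);
    }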
4.1 Unbounded HTM
One limitation of HTM is that in-cache transactions are limited in size and scope. Most hardware transactional memory proposals require programmers to be aware of platform-specific resource limitations such as cache and buffer sizes, scheduling quanta, and the effects of context switches and process migrations. Different platforms will provide different cache sizes and architectures, and cache sizes are likely to change over time. Transactions that exceed resource limits or are repeatedly interrupted will never commit. Ideally, programmers should be shielded from such complex, platform-specific details. Instead, TM systems should provide full support even for transactions that cannot execute directly in hardware. Techniques that substantially increase the size of hardware transactions include signatures [32] and permissions-only caches [3]. Other proposals support
(effectively) unbounded transactions by allowing transactional metadata to overflow caches, and for transactions to migrate from one core to another. These proposals include TCC [12], VTM [28], OneTM [3], UTM [1], TxLinux [29], and LogTM [32].
5 Software Transactional Memory
An alternative to providing direct hardware support for TM is to provide software transactional memory (STM), a software system that provides programmers with a transactional model. In this section, we describe some of the questions that arise when designing an STM system. Some of these questions concern semantics, that is, how the STM behaves, and others concern implementation, that is, how the STM is structured internally.
5.1 Weak vs. Strong Isolation
How should threads that execute transactions interact with threads executing non-transactional code? One possibility is strong isolation [4] (sometimes called strong atomicity), which guarantees that transactions are atomic with respect to non-transactional accesses. The alternative, weak isolation (or weak atomicity), makes no such guarantees. HTM systems naturally provide strong atomicity. For STM systems, however, strong isolation may be too expensive. The distinction between strong and weak isolation leaves unanswered a number of other questions about STM behavior. For example, what does it mean for an unhandled exception to exit an atomic block? What does I/O mean if executed inside a transaction? One appealing approach is to say that transactions behave as if they were protected by a single global lock (SGL) [21,23,12]. One limitation of the SGL semantics is that it does not specify the behavior of zombie transactions, transactions that are doomed to abort because of synchronization conflicts, but continue to run for some duration before the conflict is discovered. In some STM implementations, zombie transactions may see an inconsistent state before aborting. When a zombie aborts, its effects are rolled back, but while it runs, observed inconsistencies could provoke it to pathological behavior that may be difficult for the STM system to protect against, such as dereferencing a null pointer or entering an infinite loop. Opacity [11] is a correctness condition that guarantees that all uncommitted transactions, including zombies, see consistent states.
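One concrete way to read the SGL guarantee mentioned above is as a reference implementation: every atomic block simply runs under one process-wide lock. The sketch below is only illustrative (it specifies the semantics rather than giving a scalable STM), and its names are ours, not from the paper.

    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.Supplier;

    // Reference semantics: transactions behave as if protected by a single global lock.
    final class SingleGlobalLockTM {
      private static final ReentrantLock GLOBAL = new ReentrantLock();

      static <T> T atomic(Supplier<T> body) {
        GLOBAL.lock();
        try {
          return body.get();   // transactions appear to run one at a time
        } finally {
          GLOBAL.unlock();
        }
      }
    }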
5.2 Eager vs. Lazy Update
There are two basic ways to organize transactional data. In an eager update system, the data objects are modified in place, and the transaction maintains an undo log allowing it to undo its changes if it aborts. The dual approach is lazy (or deferred) update, where each transaction computes optimistically on its local copy of the data, installing the changes if it commits, and discarding them if it aborts. An eager system makes committing a transaction more efficient, but makes it harder to ensure that zombie transactions see consistent states.
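The two policies can be contrasted on a single transactional field. The sketch below is an illustration added here; TxBox, the undo log, and the write buffer are our own names rather than any particular STM's interface.

    import java.util.Deque;
    import java.util.Map;

    // One transactional cell, written under the two update policies.
    class TxBox<T> {
      T value;

      // Eager update: modify in place, remember how to undo the change on abort.
      void eagerWrite(T newValue, Deque<Runnable> undoLog) {
        final T old = value;
        undoLog.push(() -> value = old);   // replayed only if the transaction aborts
        value = newValue;
      }

      // Lazy (deferred) update: buffer the write; install it only if the transaction commits.
      void lazyWrite(T newValue, Map<TxBox<T>, T> writeBuffer) {
        writeBuffer.put(this, newValue);   // simply discarded on abort
      }
    }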
5.3 Eager vs. Lazy Conflict Detection
STM systems differ according to when they detect conflicts. In eager conflict detection schemes, conflicts are detected before they arise. When one transaction is about to create a conflict with another, it may consult a contention manager, defined below, to decide whether to pause, giving the other transaction a chance to finish, or to proceed and cause the other to abort. By contrast, a lazy conflict detection scheme detects conflicts when a transaction tries to commit. Eager detection may abort transactions that could have committed lazily, but lazy detection discards more computation, because transactions are aborted later.
5.4 Contention Managers
In many STM proposals, conflict resolution is the responsibility of a contention manager [15] module. Two transactions conflict if they access the same object and one access is a write. If one transaction discovers it is about to conflict with another, then it can pause, giving the other a chance to finish, or it can proceed, forcing the other to abort. Faced with this decision, the transaction consults a contention management module that encapsulates the STM's conflict resolution policy. The literature includes a number of contention manager proposals [10,15,19,2], ranging from exponential backoff to priority-based schemes. Empirical studies have shown that the choice of a contention manager algorithm can affect transaction throughput, sometimes substantially.
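As an illustration of the simplest point in that design space, the sketch below shows a minimal randomized exponential-backoff manager; the interface, constants, and names are ours and do not correspond to any specific published policy.

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.TimeUnit;

    // Minimal backoff-style contention manager (illustrative only).
    class BackoffContentionManager {
      private int attempts = 0;

      // Called when this transaction is about to conflict with another one.
      // Returns true if the caller should proceed and force the other to abort.
      boolean resolveConflict() throws InterruptedException {
        if (attempts < 10) {
          long bound = 1L << Math.min(++attempts, 20);   // exponentially growing window
          TimeUnit.NANOSECONDS.sleep(ThreadLocalRandom.current().nextLong(bound));
          return false;   // pause, giving the other transaction a chance to finish
        }
        return true;      // waited long enough: proceed and abort the adversary
      }

      void reset() { attempts = 0; }   // call on commit or abort
    }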
5.5 Visible vs. Invisible Reads
Early STM systems [15] used either invisible reads, in which each transaction maintains per-read metadata to be revalidated after each subsequent read, or visible reads, in which each reader registers its operations in shared memory, allowing a conflicting writer to identify when it is about to create a conflict. Invisible read schemes are expensive because of the need for repeated validation, while visible read schemes were complex, expensive, and not scalable. More recent STM systems such as TL2 [9] or SKYSTM [22] use a compromise solution, called semi-visible reads, in which read operations are tracked imprecisely. Semi-visible reads conservatively indicate to the writer that a read-write conflict might exist, avoiding expensive validation in the vast majority of cases.
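To make the invisible-read idea concrete, a reader can record the version of each location it reads and revalidate that version later, so the writer never learns about the reader. The sketch below is in the spirit of such schemes but is an illustration added here, with names of our own.

    // A versioned location and the per-read metadata an invisible reader keeps.
    class VersionedCell {
      volatile long version;   // bumped by every committed writer
      volatile long value;
    }

    class InvisibleRead {
      final VersionedCell cell;
      final long observedVersion;

      InvisibleRead(VersionedCell cell) {
        this.cell = cell;
        this.observedVersion = cell.version;   // remember what we read against
      }

      // Re-checked after later reads (or at commit) to detect a conflicting writer.
      boolean stillValid() {
        return cell.version == observedVersion;
      }
    }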
5.6 Privatization
It is sometimes useful for a thread to privatize [31] a shared data structure by making it inaccessible to other threads. Once the data structure has been privatized, the owning thread can work on the data structure directly, without incurring synchronization costs. In principle, privatization works correctly under SGL semantics, in which every transaction executes as if it were holding a “single global lock”. Unfortunately, care is required to ensure that privatization works correctly. Here are two possible hazards. First, the thread that privatizes the
data structure must observe all changes made to that data by previously committed transactions, which is not necessarily guaranteed in an STM system where updates are lazy. Second, a doomed ("zombie") transaction must not be allowed to perform updates to the data structure after it has been privatized.
6 Motivation
Reading the TM literature, it becomes clear that TM is used to address three distinct problems: first, a simple desire to make highly-concurrent data structures easy to implement; second, a more ambitious desire to support well-structured large-scale concurrent programs; and third, a pragmatic desire to make conventional locking more concurrent. We will examine each of these areas in turn.
6.1 Lock-Free Data Structures
A data structure is lock-free if it guarantees that infinitely often some method call finishes in a finite number of steps, even if some subset of the threads halt in arbitrary places. A data structure that relies on locking cannot be lock-free because a thread that acquires a lock and then halts can prevent non-faulty threads from making progress. Lock-free data structures are often awkward to implement using today's architectures, which typically rely on compare-and-swap for synchronization. The compare-and-swap instruction takes three arguments: an address a, an expected value e, and an update value u. If the value stored at a is equal to e, then it is atomically replaced with u, and otherwise it is unchanged. Either way, the instruction sets a flag indicating whether the value was changed. Often, the most natural way to define a lock-free data structure is to make an atomic change to several fields. Unfortunately, because compare-and-swap allows only one word (or perhaps a small number of contiguous words) to be changed atomically, designers of lock-free data structures are forced to introduce complex multi-step protocols or additional levels of indirection that create unwelcome overhead and conceptual complexity. The original TM paper [16] was primarily motivated by a desire to circumvent these restrictions.
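As a concrete instance of the single-word compare-and-swap pattern, the sketch below shows the push operation of a classic lock-free (Treiber-style) stack using Java's AtomicReference. It is included only to illustrate how the instruction is used; it is not taken from the paper.

    import java.util.concurrent.atomic.AtomicReference;

    class LockFreeStack<T> {
      private static final class Node<E> {
        final E item;
        Node<E> next;
        Node(E item) { this.item = item; }
      }

      private final AtomicReference<Node<T>> top = new AtomicReference<>();

      public void push(T item) {
        Node<T> node = new Node<>(item);
        while (true) {
          Node<T> oldTop = top.get();
          node.next = oldTop;                     // expected value e = oldTop
          if (top.compareAndSet(oldTop, node)) {  // update value u = node
            return;                               // the single word changed atomically
          }
          // CAS failed: another thread changed top in the meantime; retry.
        }
      }
    }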
6.2 Software Engineering
TM is appealing as a way to help programmers structure concurrent programs because it allows the programmer to focus on what the program should be doing, rather than on the detailed synchronization mechanisms needed. For example, TM relieves the programmer of tasks such as devising specialized locking protocols for avoiding deadlocks, and conventions associating locks with data. A number of programming languages and libraries have emerged to support TM. These include Clojure [18], .Net [7], Haskell [14], Java [20,25], C++ [6], and others. Several groups have reported experiences converting programs from locks to TM. The TxLinux [29] project replaced most of the locks in the Linux kernel with
transactions. Syntactically, each transaction appears to be a lock-based critical section, but that code is executed speculatively as a transaction (see Section 6.3). If an I/O call is detected, the transaction is rolled back and restarted using locks. Using transactions primarily as an alternative way to implement locks minimized the need to rewrite and restructure the original application. Damron et al. [8] transactionalized the Berkeley DB lock manager. They found the transformation more difficult than expected because simply changing critical sections into atomic blocks often resulted in a disappointing level of concurrency. Critical sections often shared data unnecessarily, usually in the form of global statistics or shared memory pools. Later on, we will see other work that reinforces the notion that the need to avoid gratuitous conflicts means that concurrent transactional programs must be structured differently than concurrent lock-based programs. Pankratius et al. [26] conducted a user study where twelve students, working in pairs, wrote a parallel desktop search engine. Three randomly-chosen groups used a compiler supporting TM, and three used conventional locks. The best TM group was much faster to produce a prototype, the final program performed substantially better, and they reported less time spent on debugging. However, the TM teams found performance harder to predict and to tune. Overall, the TM code was deemed easier to understand, but the TM teams did still make some synchronization errors. Rossbach et al. [30] conducted a user study in which 147 undergraduates implemented the same programs using coarse-grained and fine-grained locks, monitors, and transactions. Many students reported they found transactions harder to use than coarse-grain locks, but slightly easier than fine-grained locks. Code inspection showed that students using transactions made many fewer synchronization errors: over 70% of students made errors with fine-grained locking, while less than 10% made errors using transactions.
6.3 Lock Elision
Transactions can also be used as a way to implement locking. In lock elision [27], when a thread requests a lock, rather than waiting to acquire that lock, the thread starts a speculative transaction. If the transaction commits, then the critical section is complete. If the transaction aborts because of a synchronization conflict, then the thread can either retry the transaction, or it can actually acquire the lock. Here is why lock elision is attractive. Locking is conservative: a thread must acquire a lock if it might conflict with another thread, even if such conflicts are rare. Replacing lock acquisition with speculative execution enhances concurrency if actual conflicts are rare. If conflicts persist, the thread can abandon speculative execution and revert to using locks. Lock elision has the added advantage that it does not require code to be restructured. Indeed, it can often be made to work with legacy code. Azul Systems [5] has a JVM that uses (hardware) lock elision for contended Java locks, with the goal of accelerating “dusty deck” Java programs. The run-
time system keeps track of how well the HTM is doing, and decides when to use lock elision and when to use conventional locks. The results work well for some applications, modestly well for others, and poorly for a few. The principal limitation seems to be the same as observed by Damron et al. [8]: many critical sections are written in a way that introduces gratuitous conflicts, usually by updating performance counters. Although these are not real conflicts, the HTM has no way to tell. Rewriting such code can be effective, but requires abandoning the goal of speeding up “dusty deck” programs.
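The control flow just described can be sketched as follows. The Speculation API here is a hypothetical placeholder for a hardware or JVM facility (such as the one mentioned above); real lock elision happens transparently, without changing application code, and the names below are ours.

    import java.util.concurrent.locks.ReentrantLock;

    // Lock elision control flow: try speculatively, fall back to the real lock.
    class ElidableLock {
      private static final int MAX_SPECULATIVE_TRIES = 3;
      private final ReentrantLock fallback = new ReentrantLock();

      void runCriticalSection(Runnable body) {
        for (int i = 0; i < MAX_SPECULATIVE_TRIES; i++) {
          if (Speculation.begin()) {               // start a speculative transaction
            if (fallback.isLocked()) {             // someone really holds the lock:
              Speculation.abort();                 // give up this attempt
            } else {
              body.run();
              if (Speculation.commit()) return;    // critical section completed
            }
          }
        }
        fallback.lock();                           // conflicts persist: actually acquire the lock
        try { body.run(); } finally { fallback.unlock(); }
      }
    }

    // Hypothetical stand-in for a speculation facility; plain Java has none,
    // so begin() reports failure and the code above always falls back to the lock.
    final class Speculation {
      static boolean begin()  { return false; }
      static boolean commit() { return false; }
      static void abort()     { }
    }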
7 Conclusions
Over time, transactional memory has been the subject of various criticisms. One criticism is that STM systems are too inefficient. While it is true that early STM systems had high overhead, they have improved substantially, and it remains to be seen how efficient they will eventually become. In the long run, the advent of HTM may help speed up STM, the same way hardware assists helped to speed up virtual memory. Another criticism is that TM requires writing in a different style than the one programmers are used to. This claim is supported by the evidence, described above, that it is not always easy to convert lock-based programs to use transactions because lock-based programs often contain gratuitous data conflicts. Nevertheless, such conversion does provide benefits in some cases, and user studies
suggest that moving forward, transactional programs may be easier to write correctly than lock-based programs.

Fig. 3. The Gartner Hype Cycle

Many of the criticisms directed toward TM can be explained by the "Gartner Hype Cycle" illustrated schematically in Figure 3 (image from http://en.wikipedia.org/wiki/Hype_cycle). At the "technology trigger", the community becomes aware of an idea. At the "peak of Inflated Expectations", the promise of the idea generates unrealistic expectations. When problems and limitations emerge, and the idea fails to meet these expectations, the "trough of disillusionment" generates a reaction. Researchers quietly continue to develop the idea and address these problems during the "slope of enlightenment." Finally, an idea reaches the "plateau of productivity" as its benefits become demonstrated and accepted. Today, I think TM is half-way up the slope of enlightenment.
References 1. Ananian, C.S., Asanovi´c, K., Kuszmaul, B.C., Leiserson, C.E., Lie, S.: Unbounded transactional memory. In: Proceedings of the 11th International Symposium on High-Performance Computer Architecture (HPCA 2005), San Franscisco, CA, February 2005, pp. 316–327 (2005) 2. Attiya, H., Epstein, L., Shachnai, H., Tamir, T.: Transactional contention management as a non-clairvoyant scheduling problem. In: PODC 2006: Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, pp. 308–315. ACM, New York (2006) 3. Blundell, C., Devietti, J., Lewis, E.C., Martin, M.: Making the fast case common and the uncommon case simple in unbounded transactional memory. In: International Symposium on Computer Architecture (June 2007) 4. Blundell, C., Lewis, E.C., Martin, M.M.K.: Deconstructing transactions: The subtleties of atomicity. In: Fourth Annual Workshop on Duplicating, Deconstructing, and Debunking (June 2005) 5. Click, C.: Experiences with hardware transactional memory, http://blogs.azulsystems.com/cliff/2009/02/and-now-some-hardwaretransactional-memory-comments.html 6. Corporation, I.: C++ stm compiler, http://software.intel.com/en-us/articles/intel-c-stm-compilerprototype-edition-20/ 7. Corporation, M.: Stm.net, http://msdn.microsoft.com/en-us/devlabs/ee334183.aspx 8. Damron, P., Fedorova, A., Lev, Y., Luchangco, V., Moir, M., Nussbaum, D.: Hybrid transactional memory. In: ASPLOS-XII: Proceedings of the 12th international conference on Architectural support for programming languages and operating systems, pp. 336–346. ACM, New York (2006) 9. Dice, D., Shalev, O., Shavit, N.: Transactional locking ii. In: Proc. of the 20th Intl. Symp. on Distributed Computing (2006) 10. Guerraoui, R., Herlihy, M., Pochon, B.: Toward a theory of transactional contention managers. In: PODC 2005: Proceedings of the twenty-fourth annual ACM symposium on Principles of distributed computing, pp. 258–264. ACM, New York (2005) 1
11. Guerraoui, R., Kapalka, M.: On the correctness of transactional memory. In: PPoPP 2008: Proceedings of the 13th ACM SIGPLAN Symposium on Principles and practice of parallel programming, pp. 175–184. ACM, New York (2008) 12. Hammond, L., Carlstrom, B.D., Wong, V., Hertzberg, B., Chen, M., Kozyrakis, C., Olukotun, K.: Programming with transactional coherence and consistency (TCC). ACM SIGOPS Operating Systems Review 38(5), 1–13 (2004) 13. Harris, T., Fraser, K.: Language support for lightweight transactions. In: Proceedings of the 18th ACM SIGPLAN conference on Object-oriented programing, systems, languages, and applications (OOPSLA 2003), pp. 388–402. ACM Press, New York (2003) 14. Harris, T., Marlow, S., Peyton-Jones, S., Herlihy, M.: Composable memory transactions. In: PPoPP 2005: Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, pp. 48–60. ACM, New York (2005) 15. Herlihy, M., Luchangco, V., Moir, M., Scherer, W.: Software transactional memory for dynamic-sized data structures. In: Symposium on Principles of Distributed Computing (July 2003) 16. Herlihy, M., Moss, J.E.B.: Transactional memory: Architectural support for lockfree data structures. In: International Symposium on Computer Architecture (May 1993) 17. Herlihy, M.P., Wing, J.M.: Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems (TOPLAS) 12(3), 463–492 (1990) 18. Hickey, R.: The clojure programming language. In: DLS 2008: Proceedings of the 2008 symposium on Dynamic languages, pp. 1–1. ACM, New York (2008) 19. Scherer III, W.N., Scott, M.L.: Contention management in dynamic software transactional memory. In: PODC Workshop on Concurrency and Synchronization in Java Programs (2004) 20. Korland, G.: Deuce stm, http://www.deucestm.org/ 21. Larus, J.R., Rajwar, R.: Transactional Memory. Morgan and Claypool (2006) 22. Lev, Y., Luchangco, V., Marathe, V., Moir, M., Nussbaum, D., Olszewski, M.: Anatomy of a scalable software transactional memory. In: TRANSACT 2009 (2009) 23. Menon, V., Balensiefer, S., Shpeisman, T., Adl-Tabatabai, A.-R., Hudson, R.L., Saha, B., Welc, A.: Single global lock semantics in a weakly atomic stm. SIGPLAN Not. 43(5), 15–26 (2008) 24. Michael, M.M., Scott, M.L.: Simple, fast, and practical non-blocking and blocking concurrent queue algorithms. In: PODC, pp. 267–275. ACM Press, New York (1996) 25. S. Microsystems. Dstm2, http://www.sun.com/download/products.xml?id=453fb28e 26. Pankratius, V., Adl-Tabatabai, A.-R., Otto, F.: Does transactional memory keep its promises? results from an empirical study. Technical Report 2009-12, University of Karlsruhe (September 2009) 27. Rajwar, R., Goodman, J.R.: Speculative lock elision: enabling highly concurrent multithreaded execution. In: MICRO 34: Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, Washington, DC, USA, pp. 294–305. IEEE Computer Society, Los Alamitos (2001) 28. Rajwar, R., Herlihy, M., Lai, K.: Virtualizing Transactional Memory. In: International Symposium on Computer Architecture (June 2005)
29. Rossbach, C.J., Hofmann, O.S., Porter, D.E., Ramadan, H.E., Aditya, B., Witchel, E.: Txlinux: using and managing hardware transactional memory in an operating system. In: SOSP 2007: Proceedings of twenty-first ACM SIGOPS symposium on Operating systems principles, pp. 87–102. ACM, New York (2007) 30. Rossbach, C.J., HofmannD, O.S., Witchel, E.: Is transactional memory programming actually easier? In: Proceedings of the 8th Annual Workshop on Duplicating, Deconstructing, and Debunking (WDDD) (June 2009) 31. Spear, M.F., Marathe, V.J., Dalessandro, L., Scott, M.L.: Privatization techniques for software transactional memory. In: PODC 2007: Proceedings of the twentysixth annual ACM symposium on Principles of distributed computing, pp. 338–339. ACM, New York (2007) 32. Yen, L., Bobba, J., Marty, M.R., Moore, K.E., Volos, H., Hill, M.D., Swift, M.M., Wood, D.A.: Logtm-se: Decoupling hardware transactional memory from caches. In: HPCA 2007: Proceedings of the 2007 IEEE 13th International Symposium on High Performance Computer Architecture, Washington, DC, USA, pp. 261–272. IEEE Computer Society, Los Alamitos (2007)
Maintaining Coherent Views over Dynamic Distributed Data

Krithi Ramamritham
Department of Computer Science and Engineering, Indian Institute of Technology Bombay
[email protected]

Abstract. Data delivered today over the web reflects rapid and unpredictable changes in the world around us. We are increasingly relying on content that provides dynamic, interactive, personalized experiences. To achieve this, the content of most web pages is created dynamically, by executing queries dynamically, using data that changes dynamically from distributed sources identified dynamically. A typical web page presents a "view" of the world constructed from multiple sources. In this paper, we examine the nature of dynamics of distributed data and discuss fresh approaches to maintain the coherency of the views seen by the users. Achieving such coherency while developing scalable low-overhead solutions poses challenges in terms of delivering data with the required fidelity in spite of data dynamics. How these challenges can be met by the judicious design of algorithms for data dissemination, caching, and cooperation forms the crux of our work.

Keywords: Views, Dynamic Data Dissemination, Web, Internet Scale Algorithms, Data Coherency.
1 Introduction

The web is becoming a universal medium for information publication and usage. Such information is becoming more and more dynamic and usage is varying from simple tracking to online decision making in real time. Applications include auctions, personal portfolio valuations for financial decisions, route planning based on traffic information, etc. For such applications, data from one or more independent data sources may be aggregated to determine if some action is warranted. Given the increasing number of such applications that make use of highly dynamic data, there is significant interest in systems that can efficiently deliver the relevant updates automatically. In this paper we summarize our work related to the topic of executing continuous queries over dynamic data so that correct results are always presented to users. Specifically, we present low-cost, scalable techniques to answer continuous aggregation queries using a Content Distribution Network (CDN) of dynamic data items. In such a network of data aggregators (DAs), each data aggregator serves a set of data items at specific coherencies. Just as various fragments of a dynamic web-page are served by one or more nodes of a CDN, serving a query involves decomposing a client query into sub-queries and executing sub-queries on judiciously chosen data aggregators with their individual sub-query incoherency bounds.
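To make the notion of serving a data item at a specified coherency concrete, the sketch below (an illustration added here, not taken from the paper) shows push-based refresh of a single numeric item: the source sends a new value to the data aggregator only when the update would make the aggregator's copy differ from the source by more than the item's incoherency bound.

    // One data item disseminated to a data aggregator under an incoherency bound.
    class DisseminatedItem {
      private final double incoherencyBound;        // maximum tolerated |source - cached copy|
      private volatile double valueAtAggregator;    // copy used to answer client (sub-)queries
      private double lastPushedValue;               // last value the source pushed

      DisseminatedItem(double initialValue, double incoherencyBound) {
        this.valueAtAggregator = initialValue;
        this.lastPushedValue = initialValue;
        this.incoherencyBound = incoherencyBound;
      }

      // Invoked at the source on every update of the underlying item.
      void onSourceUpdate(double newValue) {
        if (Math.abs(newValue - lastPushedValue) > incoherencyBound) {
          lastPushedValue = newValue;
          valueAtAggregator = newValue;   // refresh the aggregator's copy
        }
        // Otherwise the aggregator's copy is still within the bound, so no
        // message is sent; this is what keeps the scheme scalable and low-overhead.
      }

      double readAtAggregator() { return valueAtAggregator; }
    }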
2 Caches and Views

Seen in the Web context, websites have transitioned from a static content model, where content is served from ready-made files, to a dynamic content model, where content is generated on demand. Dynamic content generation allows sites to offer a wider variety of services and content. But, as Internet traffic continues to grow and websites become increasingly complex, performance and scalability are major issues for website owners and users. Web 3.0 techniques provide website visitors with dynamic, interactive, and personalized experiences. However, while serving dynamic content, two types of latencies are encountered:

1) Network Latency -- latency due to communication between users and data sources and numerous nodes in the network (e.g., routers, switches), and
2) Content Creation Latency -- latency due to computationally-intensive logic executed at multiple tiers within the sources or within the nodes that lie between the client and the sources.
A widely used approach to addressing WWW performance problems is based on the notion of content caching. Content caching approaches store content at various locations outside the site infrastructure and can improve website performance by reducing content generation delays. Dynamic pages are typically designed according to a template, which specifies the layout and content of the page: pages are decomposed into fragments, fragments and templates are cached at the edge, and pages are assembled from fragments on demand. Alternatively, the page layout can itself be generated at run-time [1]. Thus, caching can be done at either (a) the page level, which does not guarantee that correct pages are served and provides very limited reusability, or (b) the fragment level, which is associated with several design-level and runtime scalability issues. To address these issues, several back-end caching approaches have been proposed, including query result caching and fragment-level caching. In general, cached content includes media files (pictures, audio, video) or dynamically generated page fragments; content generated for one user is saved and used to serve subsequent requests for the same content. A fragment is a portion of a web page, possibly at the granularity of a programmatic object. This type of cache works with a dynamic content application to reduce the computational and communication resources required to build the page on the site, thus reducing server-side delays. The holy grail of dynamic content caching is the ability to cache dynamic content at finer granularities outside the site's infrastructure. Such an approach would provide the benefits of caching finer granularities of content (e.g., greater reusability), while simultaneously achieving the benefits associated with proxy-based caching (e.g., reduced bandwidth, reduced firewall processing). In databases, views are created to avoid repeated execution of sub-expressions that occur in queries. Which views to create and how to maintain them, i.e., how to make sure that the materialized forms of the views are up-to-date, thereby reflecting the current values of the base relations that contribute to the views, are part of the query optimization problem. In the systems context, cached values represent values evaluated once, but used multiple times. Whereas cache units have historically been physical entities like pages or cache lines, more recently the notion of logical caches has become commonplace, making a cache element similar in scope to a
view. In a distributed systems context, view maintenance, or keeping a cache element up-to-date, involves making sure that the "value" of a view is the same irrespective of whether the cached/materialized form of the view is used/seen or the view/cache is constructed at the time it is used/seen. The challenge in ensuring this arises due to the distributed nature of the data sources, which in turn implies communicating changes in the sources to the views so that the views are up-to-date. In this paper, we offer a set of techniques for view/cache maintenance wherein

o instead of invalidating (typically, cache maintenance is invalidation-based since it is simple to just mark a cached element invalid whenever the source has changed), we refresh a cached fragment:
  • by continuous refresh of the cache fragment, the cache/view can always be kept in a usable state;
o instead of refreshing after each update, we refresh in a use-specific manner:
  • by developing selective refresh mechanisms, overheads of view maintenance can be minimized;
o instead of a dynamic fragment being refreshed by the source, we use a peer cache's content for refresh:
  • by constructing data dissemination networks, data accuracy requirements of a large number of clients can be satisfied in a scalable manner.
3 Data Coherency

Not every update at the source of data d leads to a refresh message being sent to a node that caches d. This is because data usage can usually tolerate some incoherency. In most real-world applications, strong coherency, whereby a client and source are always in sync with each other, is neither required nor affordable. Several relaxations of strong coherency have been proposed:

o Time domain: Δt-coherency
  • The client is never out of sync with the source by more than Δt time units, e.g., traffic data not stale by more than a minute.
o Value domain: Δv-coherency
  • The difference in the data values at the client and the source is bounded by Δv at all times, e.g., the user is only interested in temperature changes > 1 degree, or in changes in traffic load > 100 messages/sec.

Formally, let si(t) denote the value of the ith data item at the data source at time t. Let the value of the ith data item known to the user be di(t). Then the data incoherency is given by |si(t) - di(t)|. For a data item which needs to be refreshed at an incoherency bound C, a data refresh message is sent to the user if the data incoherency |si(t) - di(t)| > C.
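As a minimal sketch of this refresh rule (purely illustrative Python; the function and variable names are ours, not part of the systems described in this paper), a source pushes a refresh only when the incoherency bound is violated:

def maybe_refresh(source_value, client_value, bound_c, send):
    # Push source_value to the client only if |s_i(t) - d_i(t)| > C.
    if abs(source_value - client_value) > bound_c:
        send(source_value)        # refresh message to the client
        return source_value
    return client_value           # client's view is still coherent enough

# Example: a temperature stream with C = 1 degree.
known = 25.0
for reading in [25.4, 25.9, 26.2, 26.3, 27.5]:
    known = maybe_refresh(reading, known, 1.0, lambda v: print("refresh ->", v))

Only the readings 26.2 and 27.5 trigger refresh messages in this run; the other updates are absorbed by the incoherency bound.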
4 Dynamic Content Distribution Network of Data Aggregators

Refreshes occur from data sources to the clients through a network of aggregators. In a hierarchical data dissemination network, a higher-level aggregator guarantees a tighter incoherency bound (i.e., a lower value of C) compared to a lower-level aggregator. For maintaining a certain incoherency bound, a data aggregator (DA) gets data updates from the data source or some higher-level DA so that the achieved data incoherency is at least as stringent as the specified data incoherency bound. From a data dissemination capability point of view, each DA is characterized by a set of (Si, ci) pairs, where Si is a data item which the DA can disseminate at an incoherency bound ci. Each DA in the network can be seen as providing a view over the dynamic data provided by the sources.

[Figure: a data dissemination network, with the sources at the top, the network of data aggregators in the middle, and the clients at the bottom.]

Consider a network of data aggregators managing data items S1-S4. The aggregators can be characterized as

  d1: {(S1, 0.5), (S3, 0.2)}
  d2: {(S1, 1.0), (S2, 0.1), (S4, 0.2)}

Aggregator d1 can serve values of S1 with an incoherency bound greater than or equal to 0.5, whereas d2 can disseminate the same data item at a looser incoherency bound of 1.0 or more. In such a dissemination network of multiple data items, all the nodes can be considered as peers, since a node d1 can be a child of another node d2 for a data item S1 (the incoherency bound at d1 is greater than that at d2) but the node d1 can be the parent of d2 for another data item S2.

Given the data and coherency needs of DAs, how should they configure themselves and cooperate to satisfy these needs? How should DAs refresh each other's data such that data coherency requirements are satisfied? Consider the design and building of a dynamic data distribution system that is coherence-preserving, i.e., the delivered data must preserve the associated coherence requirements (the user-specified bound on tolerable imprecision) and must be resilient to failures. To this end, we consider a system in which a set of DAs cooperate with each other and the sources, forming a peer-to-peer network. In this system, necessary changes are pushed to the users so that they are automatically informed about changes of interest. In [3] we present techniques to determine when to push an update from one repository to another for coherence maintenance, to construct an efficient dissemination tree for propagating
changes from sources to cooperating repositories, and to make the system resilient to failures. We experimentally demonstrate that 1) careful dissemination of updates through a network of cooperating repositories can substantially lower the cost of coherence maintenance, 2) unless designed carefully, even push-based systems experience considerable loss in fidelity due to message delays and processing costs, 3) the computational and communication cost of achieving resiliency can be made low, and 4) surprisingly, adding resiliency can actually improve fidelity even in the absence of failures.

Now we move our attention from individual data items to queries over these data items.

Continuous aggregate queries over dynamic data. Consider the following specific applications involving continuous queries used to monitor changes to time-varying data and to provide results useful for online decision making. These aggregation queries are long-running queries, as the data is continuously changing and the user is interested in notifications when certain conditions hold. Thus, responses to these queries are to be refreshed continuously, respecting the coherency requirements associated with the queries.

For tracking portfolios, we execute the following types of queries:

o a portfolio query involving shares in stock exchanges in a country:
    Σ (number of shares of company i × current price of a share of company i)
o a global portfolio query involving stocks in exchanges in different countries:
    Σ (number of shares of company i × current price of a share of company i in country j × currency exchange rate with country j)

where both the stock price in the foreign currency and the currency exchange rate change continuously.

For monitoring network traffic to capture unusual activity such as network bombardment or denial of service attacks, given a condition involving data from certain sources, or certain destinations, each monitoring node evaluates

    Σ frequency count of messages satisfying the condition

Finally, consider monitoring dynamic physical phenomena using sensor networks. Suppose a disaster management team is interested in tracking an oil spill with the help of sensors. Sensors track the perimeter of the "circular" spill. The center of the spill (x0, y0) can be approximated by the average of the points on the perimeter. By tracking points (xi, yi) on the perimeter, the area under the spill can be calculated by continuously evaluating the expression

    π ((xj - x0)² + (yj - y0)²).

Given a data dissemination network and the above queries over the dynamic data contained in the nodes, the following interrelated questions arise:
Given a set of continuous queries at a DA, how should we assign accuracy bounds for each data item? Whereas the above work considers networking DAs with specific data, [4] considers the problem of assigning data accuracy bounds to these data items. Assigning data accuracy bounds for data used by non-linear queries poses special challenges. Unlike linear queries, data accuracy bounds for non-linear queries depend on the current values of data items and hence need to be recomputed frequently. So, we seek an assignment such that a) if the value of each data item at C is within its data accuracy bound then the value of each query is also within its accuracy bound, b) the number of data refreshes sent by sources to C to meet the query accuracy bounds is as low as possible, and c) the number of times the data accuracy bounds need to be recomputed is as low as possible. In [4], we couple novel ideas with existing optimization techniques to derive such an assignment. Specifically, we make the following contributions: (i) we propose a novel technique that significantly reduces the number of times data accuracy bounds must be recomputed; (ii) we show that a small increase in the number of data refreshes can lead to a large reduction in the number of re-computations, and we introduce this as a tradeoff in our approach; (iii) we give principled heuristics for addressing negative-coefficient polynomial queries, where no known optimization techniques can be used, and we also prove that under many practically encountered conditions our heuristics can be close to optimal; and (iv) we experimentally demonstrate the efficacy of our techniques in handling a large number of polynomial queries.

Given the data and the coherency available at DAs, and a set of continuous queries, how do we plan the query executions, that is, how do we divide the query into sub-queries and allocate the query incoherency bound among them? In the case of a CDN, a web page's division into fragments is a page design issue, whereas, for continuous aggregation queries, this issue of dividing a query into sub-queries has to be handled on a per-query basis by considering the data dissemination capabilities of DAs. Consider a portfolio query Q1 = 50 S1 + 200 S2 + 150 S3, with a required incoherency bound of 80 (in a stock portfolio, S1, S2, S3 can be different stocks and the incoherency bound can be $80). We want to execute this query over the DAs, minimizing the number of refreshes. There are various options for the client to get the query result:

1) The client may get the data items S1, S2 and S3 separately. The query incoherency bound can be divided among the data items in various ways, ensuring that the query incoherency is below the incoherency bound. In this paper, we show that getting data items independently is a costly option. This strategy ignores the fact that the client is interested only in the aggregated value of the data items and that various aggregators can disseminate more than one data item.

2) If a single DA can disseminate all three data items required to answer the client query, the DA can construct a composite data item corresponding to the client query (Sq = 50 S1 + 200 S2 + 150 S3) and disseminate the result to the client so that the query incoherency bound is not violated. It is obvious that if we get the query result from a single DA, the number of refreshes will be minimal (as data item updates may cancel each other out, thereby maintaining the query result within the incoherency bound).
As different DAs disseminate different subsets of data items, no DA may have all the data items required to execute the client query. Further, even if an aggregator can refresh all the data items, it may not be able to satisfy the query
coherency requirements. In such cases the query has to be executed with data from multiple aggregators.

3) Divide the query into a number of sub-queries and get their values from individual DAs. In that case, the client query result is obtained by combining the results of multiple sub-queries. For the DAs given earlier, Q1 can be divided in two alternative ways:

Plan 1: the result of the sub-query 50 S1 + 150 S3 is served by d1, whereas the value of S2 is served by d2.
Plan 2: the value of S3 is served by d1, whereas the result of the sub-query 50 S1 + 200 S2 is served by d2.

In both plans, combining the sub-query values at the client gives the query result. Selecting the optimal plan among the various options is non-trivial. Intuitively, we should select the plan with the smaller number of sub-queries, but that is not guaranteed to be the plan with the least number of messages. Further, we should select the sub-queries such that updates to the various data items appearing in a sub-query have a greater chance of canceling each other, as that will reduce the need for refreshes to the client. In the above example, if updates to S1 and S3 are such that when S1 increases, S3 decreases, and vice versa, then selecting Plan 1 may be beneficial. In [2] we give a method to select the query plan based on these observations.

While solving the above problem, we ensure that each data item for a client query is disseminated by one and only one DA. A query could be divided in such a way that a single data item is served by multiple DAs (e.g., 50 S1 + 200 S2 + 150 S3 divided into the two sub-queries 50 S1 + 130 S2 and 70 S2 + 150 S3), but in doing so the same data item needs to be processed at multiple aggregators, adding unnecessary processing load. By dividing the client query into disjoint sub-queries we ensure that a data item update is processed only once for each query (for example, in the case of paid data subscriptions it is not prudent to get the same data item from multiple sources). Sub-query incoherency bounds are to be derived from the query incoherency bound such that, besides satisfying the client coherency requirements, the chosen DA (where the sub-query is to be executed) is capable of satisfying the allocated sub-query incoherency bound. For example, in Plan 1, the incoherency bound allocated to the sub-query 50 S1 + 150 S3 should be greater than 55 (= 50*0.5 + 150*0.2), as that is the tightest incoherency bound which the aggregator d1 can satisfy. Clearly, the number of refreshes will depend on the division of the query incoherency bound among the sub-query incoherency bounds. In [2] we provide a technique for getting the optimal query plan (i.e., the set of sub-queries with their incoherency bounds and the DAs where the sub-queries will be executed) which satisfies the client query's coherency requirement with the least cost, measured in terms of the number of refresh messages sent from aggregators to the client. For estimating the query execution cost, we build a continuous query cost model which can be used to estimate the number of messages required to satisfy the client-specified incoherency bound. Performance results using real-world traces show that our cost-based query planning leads to queries being executed using less than one third the number of messages required by existing schemes.
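As a small illustration of the bound computation used in the example above (a Python sketch; the data structures are invented for exposition and this is not the planning algorithm of [2]), the tightest incoherency bound a DA can guarantee for a weighted sub-query served entirely by that DA can be taken as the weighted sum of its per-item bounds:

# Each DA advertises (data item, tightest incoherency bound) pairs. For a
# sub-query sum_i w_i * S_i served by one DA, the tightest bound it can
# guarantee is sum_i |w_i| * c_i, so the allocated bound must be at least that.

DAS = {
    "d1": {"S1": 0.5, "S3": 0.2},
    "d2": {"S1": 1.0, "S2": 0.1, "S4": 0.2},
}

def tightest_bound(da, subquery):
    # subquery maps data items to weights, e.g. {"S1": 50, "S3": 150}.
    caps = DAS[da]
    if not all(item in caps for item in subquery):
        return None                      # this DA cannot serve the sub-query
    return sum(abs(w) * caps[item] for item, w in subquery.items())

# Plan 1 for Q1 = 50*S1 + 200*S2 + 150*S3 (query incoherency bound 80):
print(tightest_bound("d1", {"S1": 50, "S3": 150}))   # 55.0, so allocate >= 55
print(tightest_bound("d2", {"S2": 200}))             # 20.0, so allocate >= 20

Under this rough model, Plan 1 is feasible for the bound of 80 since 55 + 20 does not exceed it; the actual cost-based choice among feasible plans is the subject of [2].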
5 Conclusions

Users today are increasingly relying on content that provides dynamic, interactive, personalized experiences. A typical response from the web today presents a "view" of the world constructed from multiple sources. We discussed several challenges and associated approaches to maintain the coherency of the views seen by the users. The approaches involve the judicious design of algorithms for view identification, data dissemination, caching, and cooperation. Details of our work related to dynamic data dissemination can be found at http://www.cse.iitb.ac.in/~krithi/ddd.html.

Acknowledgments. I would like to thank all my Masters and Ph.D. students, especially Shetal Shah and Rajeev Gupta, for their contributions to this project.
References

1. VanderMeer, D., Datta, A., Dutta, K., Thomas, H., Ramamritham, K.: Proxy-Based Acceleration of Dynamically Generated Content on the World Wide Web. ACM Transactions on Database Systems (TODS) 29 (June 2004)
2. Gupta, R., Ramamritham, K.: Optimized Query Planning of Continuous Aggregation Queries in Dynamic Data Dissemination Networks. In: WWW 2007, Banff, Canada (May 2007)
3. Shah, S., Ramamritham, K., Shenoy, P.: Resilient and Coherency Preserving Dissemination of Dynamic Data Using Cooperating Peers. IEEE Transactions on Knowledge and Data Engineering 16(7), 799–812 (2004)
4. Shah, S., Ramamritham, K.: Handling Non-linear Polynomial Queries over Dynamic Data. In: Proc. of IEEE International Conference on Data Engineering (ICDE) (April 2008)
Malware: From Modelling to Practical Detection

R.K. Shyamasundar, Harshit Shah, and N.V. Narendra Kumar
School of Technology and Computer Science
Tata Institute of Fundamental Research
Homi Bhabha Road, Mumbai 400005, India
{shyam,harshit,naren}@tcs.tifr.res.in

* The work was partially supported under the Indo-Trento Promotion for Advanced Research.
Dedicated to the memory of Amir Pnueli (1941-2009), a Computer Science Pioneer and a Great Human Being.

Abstract. Malicious software, referred to as malware, is software that has infiltrated a computer without the authorization of the computer's owner. Typical categories of malicious code include Trojan horses, viruses, worms, etc. Malware has been a major cause of concern for information security. With the growth in complexity of computing systems and the ubiquity of information due to the WWW, detection of malware has become horrendously complex. In this paper, we shall survey the theory behind malware to bring out the challenges behind its detection. It is of interest to note that the power of malware (or, for that matter, computer warfare) can be seen in the theories proposed by the iconic scientists Alan Turing and John von Neumann. The malicious nature of malware can be broadly categorized, analogously to the epidemiological framework, as injury and infection. On the same lines, the remedies can also be thought of through analogies with epidemiological notions like disinfection, quarantine, environment control, etc. We shall discuss these aspects and relate the above to notions of computability. Adleman, in his seminal paper, has extrapolated protection mechanisms such as quarantine, disinfection and certification. It may be noted that most of the remedies are in general undecidable. We shall discuss remedies that are being used and contemplated. One of the well-known restricted kinds of remedy is to search for signatures of possible malware and detect them before they get through to the computer. A large part of the current remedies rely on signature-based approaches, that is, on heavy reliance on the detection of syntactic patterns. Recent trends in security incidence reports show a huge increase in obfuscated exploits; note that with the majority of obfuscators, the execution behaviour remains the same while the code can escape syntactic recognition. Further, malware writers are using a combination of features from various types of classic malware such as viruses and worms. Thus, it has become all the more necessary to
take a holistic approach and arrive at detection techniques that are based on characterizations of malware behaviour that include the environment in which it is expected to execute. In the paper, we shall first survey various approaches to behavioural characterization of malware, difficulties of virus detection, practical virus detection techniques and protection mechanisms against viruses. Towards the end of the paper, we shall briefly discuss our new approach of detecting malware via a new method of validation in a quarantine environment and show our preliminary results for the detection of malware on systems that are expected to carry an a priori known set of software.
1 Introduction
Malicious code is any code that has been modified with the intention of harming its usage/user. Informally, the destructive capability of malware is assessed based on how well it can hide itself, how much loss it can inflict on the owner/user of the system, how rapidly it can spread, etc. Malware attacks result in tremendous costs to an organization in terms of cleanup activity, degraded performance, damage to its reputation, etc. Among the direct costs incurred are labour costs to analyze and clean up infected systems, loss of user productivity, loss of revenue due to direct losses or degraded performance of the system, etc. According to a report on the financial impact of malware attacks [35], the direct damages incurred in 2006 were USD 13 billion. Although the trend shows a decline in direct damages since 2004 (direct damages in 2004 and 2005 were USD 17.5 billion and USD 14.2 billion respectively), this decline is attributed to a shift in the focus of malware writers from creating damaging malware to creating stealthy, fast-spreading malware, so that infected machines can be used for sending spam, stealing credit-card numbers, displaying advertisements or opening a backdoor to an organization's network. This increase in the indirect and secondary damages (that are difficult to quantify) explains why the malware threat has worsened in recent years despite a decline in direct damages.

Malware can be primarily categorized [15] as follows:

– Virus - Propagates by infecting a host file.
– Worm - Self-propagates through e-mail, network shares, removable drives, file sharing or instant messaging applications.
– Backdoor - Provides functionality for a remote attacker to log on and/or execute arbitrary commands on the affected system.
– Trojan - Performs a variety of malicious functions such as spying, stealing information, logging key strokes and downloading additional malware; several further sub-categories follow, such as infostealer, downloader, dropper, rootkit, etc.
– Potentially Unwanted Programs (PUP) - Programs which the user may consent to being installed but which may affect the security posture of the system or may be used for malicious purposes. Examples are Adware, Dialers and Hacktools/hacker tools (which include sniffers, port scanners, malware constructor kits, etc.)
– Other - Unclassified malicious programs not falling within the other primary categories.

The breakdown of malware, as per the study reported in [15], is summarized in Table 1. It shows that Trojans comprised a major class of malware in 2008. This is indicative of attackers' preference to infect machines so that a host of malicious activities (advantageous to the attacker) can be launched from them. Signature-based scanning techniques for malware detection are highly inadequate. With increased sophistication in detection techniques, the stealth techniques employed by malware writers have also grown far more sophisticated.

Table 1. 2008 Malware trends

  Malware Class   Percentage
  Trojan          46
  Other           17
  Worm            14
  Backdoor        12
  PUP              6
  Virus            5
The theory of computer viruses was developed much before actual instances of them were seen in the wild. The foundations were laid by Cohen's formal study [10] of computer viruses, based on the seminal theory of self-reproducing automata invented by John von Neumann [26]. In this paper, we first survey¹ some of the important theoretical results on computer viruses, several detection techniques, and protection mechanisms. Finally, we briefly discuss our recent study for detecting the presence of malware in systems, like embedded systems, where we know a priori the software that is loaded in them.
2 Formal Approaches to Virus Characterization
Informally, a virus can be defined as any program that propagates itself by infecting a host file (trusted by the user). The infected host file, when executed, in addition to performing its intended job, also selects another program and infects it, thereby spreading the infection. Cohen [10] was the first to formalize and study the problem of computer viruses. The following definition intuitively captures the essence of a virus as defined by Cohen [10].

Definition 1. A computer virus is a program that can infect other programs, when executed in a suitable environment, by modifying them to include a possibly evolved copy of itself.
¹ For an excellent detailed exposition of computer viruses from theory to practice, the reader is referred to [14].
We now present the pseudo-program of a possible structure of a computer virus, as presented in [9].

program virus :=
  {1234567;

   subroutine infect-executable :=
     {loop: file = get-random-executable-file;
      if first-line-of-file = 1234567 then goto loop;
      prepend virus to file;
     }

   subroutine do-damage :=
     {whatever damage is to be done}

   subroutine trigger-pulled :=
     {return true if some condition holds}

   main-program :=
     {infect-executable;
      if trigger-pulled then do-damage;
      goto next;}

   next:}

Although the above pseudo-program of a virus includes a subroutine to perform damage, the ability to perform damage is not considered a vital characteristic of a virus by Cohen. Cohen's formal definition of a virus was based on the Turing machine computing model. The possibility of a virus infection comes from the theory of self-reproducing automata defined by John von Neumann [26]. Every program that gets infected can also act as a virus and thus the infection can spread throughout a computer system or a network. Cohen identifies the ability to infect as the key property of a virus, thus allowing it to spread to the transitive closure of information sharing. Given the widespread use of sharing in current computer systems, viruses can potentially damage a large portion of the network. Recovering from such damage will be extremely hard and perhaps often impossible. Thus, it is of great importance to detect and protect systems from viruses. Cohen's formalization was remarkable because it captures the intuition that a program is a virus only when executed in a suitable environment. However, Cohen's formal definition does not fully capture the relationship between a virus and a program infected by that virus. Soon after Cohen's formalization of viruses, Adleman (Cohen's Ph.D. advisor) proposed a formalization of viruses using the recursive functions computing model. Before providing Adleman's definition of viruses, let us set up some basic notation and concepts as given in Adleman [1].
A virus can be thought of as a program that transforms (infects) other programs.

Definition 2. If v is a virus and i is any program, v(i) denotes the program i upon infection by virus v.

A system on which a program is executing can be characterized by giving the set of data and programs that are present in the system. A program can be thought of as a state transformer. If i is a program, d is a sequence of numbers that denotes the data in a system and p is a sequence of numbers that denotes the programs in a system, then i(d, p) denotes the state resulting when program i executes in the system. Together, d and p tell us the state of the system.

Definition 3. We say that state (d1, p1) is v-related to state (d2, p2), denoted (d1, p1) ≅v (d2, p2), iff

– d1 = d2 and p1 ≠ p2, i.e., p1 and p2 differ,
– the number of programs in p1 and p2 are the same, and
– either the ith program in p1 and the ith program in p2 are the same, or
– the ith program in p2 results when the ith program in p1 is infected by virus v.
Intuitively, Adleman's definition of a virus can be stated as follows:

Definition 4. A program v, that always terminates, is called a virus iff for all states s either

1. Injure: all programs infected by v behave the same when executed in state s, or
2. Infect or Imitate: for every program p, the state resulting when p is executed in s is v-related to the state resulting when v(p) is executed in s.

Adleman's definition of a virus characterizes the relationship between a virus and a program infected by it. However, there is no quantification or characterization of injury and infection. Although the notion of injury appears explicitly in his definition of a virus, Adleman considers the ability to infect to be the core of a virus (Remark 2 in Adleman [1]). Based on this definition he arrives at a classification of viruses² into the four disjoint classes benign, Epeian, disseminating and malicious.

Intuitively, benign viruses are those which never injure the system nor infect other programs. Consider a program p that compresses programs to save disk space and adds a decompression routine to their binaries so that a program gets decompressed during execution; p is an example of a benign virus. Epeian viruses cause damage in certain conditions but never infect other programs. Boot sector viruses and other programs that delete some key files of the system are examples of Epeian viruses.
² For immunological analogies between computer and biological viruses, the reader is referred to [17].
Disseminating viruses spread by infecting other programs but never injure the system. Internet worms like Netsky, Bagle, Sobig, Sober, MyDoom, Conficker etc., are examples of disseminating viruses. Malicious viruses are those programs that cause injury in certain conditions and propagate themselves by infecting other programs in certain conditions.
3 Techniques for Detecting Viruses
Cohen [9] considers the problem of detecting a computer virus and proves that it is undecidable in general to detect a virus by its appearance (static analysis). We now give an intuitive (informal) account of Cohen's result.

Theorem 1. Detecting a virus by its appearance is undecidable.

Proof. The proof is by contradiction. Let us assume to the contrary that there exists a procedure D that can tell whether a program is a virus or not. Let there be a program p which infects other programs if and only if D would have called it benign. Given p as input, if D calls it benign, then p infects, thus leading to a contradiction. If, on the other hand, D calls it a virus, then p does not infect, again leading to a contradiction. All the possibilities lead to a contradiction. Thus, it is not possible to have a procedure such as D.

The above theorem illustrates that if we know what defense mechanism is used by the defender, we can always build a virus that infiltrates the system, i.e., fools the defender. No defense is perfect. Similarly, given a virus, there is always a defense system that defends against that particular virus. Adleman considers the problem of detecting whether a program is a virus or not and proves it undecidable. We present the theorem as in [1].

Theorem 2. For all Gödel numberings of the partial recursive functions {φi}, the set V = {i | φi is a virus} is Π₂-complete.

What this theorem implies is that it is impossible to always correctly tell whether a given program is a virus or not. However, it is very important to be able to detect viruses and nullify them before they carry out their damage. Having had a glance at the possibilities, let us look at widely used methods for detecting viruses.
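To make the contradiction in the proof of Theorem 1 concrete, the following Python sketch (ours, purely illustrative; the detector D and the infection stub are hypothetical) shows the shape of the program p, which inverts the verdict of any assumed detector about itself:

def infect_some_executable():
    # Stand-in for the infection step; no real infection is performed.
    print("p would now infect another executable")

def D(program):
    # Hypothetical detector assumed by the proof; any fixed, total verdict
    # (here: "benign") is inverted by the program below.
    return "benign"

def p():
    if D(p) == "benign":
        infect_some_executable()   # declared benign, so p behaves as a virus
    else:
        pass                       # declared a virus, so p behaves benignly

p()

Whatever verdict D returns about p, p's behaviour contradicts it, so no total and correct detector D can exist.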
3.1 Signature Based Detection
In signature-based detection of viruses, we have a database of known malicious patterns of instructions. Whenever a file is scanned, the detection algorithm compares the sequence of symbols present in the file with the database of known malicious patterns. If the algorithm finds a match, it declares the file to be a virus.
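A minimal sketch of such a scanner is given below (Python; the signature names, pattern bytes and file path are invented for illustration and do not correspond to any real anti-virus product):

# Report which known byte patterns occur in a file; the signatures are made up.

SIGNATURE_DB = {
    "example-virus-a": bytes.fromhex("deadbeef4242"),
    "example-virus-b": b"\x90\x90\xcc\xcc",
}

def scan_file(path):
    with open(path, "rb") as f:
        content = f.read()
    return [name for name, pattern in SIGNATURE_DB.items() if pattern in content]

# matches = scan_file("/tmp/suspect.bin")   # non-empty list => file is flagged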
Note that the signature-based detection algorithm critically depends on the database of known malicious patterns. This database is created by analyzing known viruses, extracting sequences of instructions present in them and removing any sequences that are typical of benign programs. If we remove very little, we are in danger of not being able to identify even minor modifications of the virus. If we remove too much, then we are in danger of marking genuine programs as viruses. Since a lot of discretion is needed to remove benign patterns, this process of building the database is very hard to automate and is typically compiled by human experts. For this reason it takes considerable time (even 4 to 5 days) to add the signature of a newly detected virus to the database, and by this time the virus would have spread wide and caused damage. Thus, signature-based detection techniques work well for known viruses but cannot handle new viruses. This inability limits their effectiveness in controlling the spread of new malware. Because of their syntactic nature, these techniques are susceptible to various program obfuscation techniques that preserve the program behaviour but change the program code in such a way that the original program and the modified program look very different. In [6], Christodorescu et al. reveal gaping holes in the signature-based malware detection techniques employed by several popular commercial anti-virus products. In their technique, a large number of obfuscated versions of known viruses are created and tested on several anti-virus products. The results demonstrate that these tools are severely lacking in their ability to detect obfuscated versions of known malware. An algorithm to generate the signatures used by different anti-virus products to detect known malware is also presented. The results show that in many cases the whole body of the malware is used as a signature for detection. This inability to capture malware behaviour comprehensively explains their failure to detect obfuscated versions of malware. The IBM Internet Security Systems X-Force 2009 Mid-Year Trend and Risk Report [16] states that the level of obfuscation found in Web exploits and in PDF files continues to increase, while some of these obfuscation techniques are even being applied to multimedia files. The amount of suspicious obfuscated content nearly doubled from Q1 to Q2 of 2009. In the following, we discuss obfuscation techniques.

Program Obfuscation Techniques. Program obfuscation techniques modify a program in such a way that its behaviour remains the same but analysis of the obfuscated program becomes difficult. The main objective of such transformations is to prevent or hinder reverse engineering, in order to protect intellectual property or to prevent illegal program modifications. Several popular techniques for program obfuscation are code reordering, instruction substitution, variable renaming, garbage insertion, and data and code encapsulation. An exhaustive list of obfuscating transformations and measures to gauge their efficacy are presented in [11]. Obfuscation techniques are heavily used by malware programs to evade detection.
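The toy Python example below (our own, deliberately harmless illustration) shows the effect of three of the transformations just listed, variable renaming, instruction substitution and garbage insertion: the two routines are behaviourally equivalent, yet an exact byte-pattern signature derived from one will not match the other.

# Two behaviourally equivalent routines whose textual representations differ;
# this is what obfuscators exploit against purely syntactic signatures.

def payload_original(xs):
    total = 0
    for x in xs:
        total += x              # accumulate directly
    return total

def payload_obfuscated(xs):
    junk = 0                    # garbage insertion: dead variable
    acc = 0                     # variable renaming: total -> acc
    for value in xs:
        acc = acc + value       # instruction substitution: += -> explicit add
        junk ^= value           # dead computation, result never used
    return acc

assert payload_original([1, 2, 3]) == payload_obfuscated([1, 2, 3])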
Substantial research has been done on the theoretical aspects of program obfuscators. In [2], Barak et al. present impossibility results for program obfuscation with respect to the "virtual black box" intuition, which states that anything that can be computed efficiently from an obfuscated version of a program can be computed efficiently with just oracle access to the program. On the positive side, much of the interesting information about programs is hard to compute. In fact, Rice's theorem [29] states that any non-trivial property of partial recursive functions is undecidable. In [11], Collberg et al. use the result that different versions of precise static alias analysis are NP-hard [19] or even undecidable [28] to construct opaque predicates, whose values are known to the obfuscator but are hard to compute by static analysis. In [4], the authors present a control-flow obfuscation technique that implants an instance of a hard combinatorial problem into a program.

Polymorphism Generators / Mutation Engines. A better technique to evade detection is adopted by polymorphic and metamorphic viruses. Polymorphic viruses exhibit a robust form of self-encryption wherein the encryption and decryption routines are themselves changed. Thus, the virus body is the same in all the versions but the encrypted versions appear different. In metamorphic viruses, the contents of the virus itself are changed rather than the encryption and decryption routines. In order to achieve this objective, metamorphic viruses make use of obfuscation techniques like code reordering, instruction substitution, variable renaming and garbage insertion. For example, Chameleon, the first polymorphic virus, infects COM files in its directory. Its signature changes every time it infects a new file. This makes it difficult for anti-virus scanners to detect it. Other polymorphic viruses are Bootache, CivilWar, Crusher, Dudley, Fly, Freddy, etc.
3.2 Static Analysis of Binaries
Static analysis techniques are used to determine important properties like control flow safety (i.e., programs jump to and execute valid code), memory safety (i.e., a program only uses allocated memory) and abstraction preservation (i.e., programs use abstract data types only as far as their abstractions allow). Type systems are usually employed to enforce these properties. Although the Java Virtual Machine Language (JVML) also has the ability to type-check low-level code, it has several drawbacks, like semantic errors between the verifier and its English-language specification, and also the difficulty of compiling high-level languages other than Java. In [23], the authors present a Typed Assembly Language (TAL), which is a low-level statically-typed target language that supports memory safety even in the presence of advanced structures and optimizations; TALx86 is targeted at Intel architectures. In [34], the authors present a technique to instrument well-typed programs with security checks and typing annotations so that the resulting programs obey the policies specified by security automata [30] and can be mechanically checked for safety. This technique allows enforcement of virtually all safety policies, since it uses a security automaton for policy specification. In
[24], the authors present a technique for protecting the privacy and integrity of data by extending the Java language with statically-checked information flow annotations. An interesting approach to establishing the safety of un-trusted programs is presented in [25]. In this approach, referred to as Proof Carrying Code (PCC), the code producer provides a proof along with the program. The consumer checks the proof along with the program to ensure that its safety requirements are met. The main objective of PCC was to be able to extend a system with a piece of software that was certified to obey certain memory safety properties. Though proof generation is quite complex and is often done by hand, the proof verifier is relatively small and easy to implement. This technique has been applied to ensure the safety of network packet filters that are downloaded into the operating system kernel. An application to ensure resource usage and data abstraction, in addition to memory safety, for un-trusted mobile agents was also provided.

An interesting approach to detecting variants of a known virus by performing static analysis on virus code and abstracting out its behaviour is presented in [5]. We now describe their architecture for detecting variants of a known virus [5]:

1. Generalize the virus code into a virus automaton with uninterpreted symbols to represent data dependencies between variables,
2. Pattern-definitions are internal representations of abstraction patterns used as the alphabet by the virus automaton,
3. The executable loader transforms the executable into a collection of control flow graphs (CFGs), one for each procedure,
4. The annotator takes a CFG from the executable and the set of abstraction patterns and produces an annotated CFG as an abstract representation of a program procedure, and
5. The detector computes whether the virus (represented by the virus automaton) appears in the abstract representation of the executable (represented as a collection of annotated CFGs), using tools for language containment and unification.

Note that for the above algorithm to work, abstraction patterns have to be provided (manually constructed) for each kind of transformation (code transposition, dead-code insertion, etc.). Once these are provided, the rest of the algorithm is automatic. Thus, such an architecture works well for recognizing those variants of known viruses for which abstraction patterns are provided. In conclusion, this approach works better than signature-based matching, but still has the drawback that it cannot recognize new viruses.
3.3 Semantics Based Detection
In [8], the authors formalize the problem of determining whether a program exhibits a specified malicious behaviour and present an algorithm for handling a limited set of transformations. Malicious behaviour is described using templates, which are instruction sequences in which variables and symbolic constants are used. They
abstract away the names of specific registers and symbolic constants in the specification of the malicious behaviour, thus becoming insensitive to simple transformations such as register renaming. They formalize the notion of when an instruction sequence contains a behaviour specified by a template, to match the intuition that the two should have the same effect on memory upon execution. A program is said to satisfy a template iff the program contains an instruction sequence that contains a behaviour specified by the template. The algorithm to check whether a program satisfies a given template proceeds by finding, for each template node, a matching node in the program. Once two matching nodes are found, one needs to check whether the define-use relationships that hold between template nodes also hold between the corresponding program nodes. If all the nodes in the template have matching counterparts under these conditions, the algorithm is said to have found a program fragment that satisfies the template. The above approach performs better than the static analysis based approach, since it incorporates the semantics of instructions. Templates are a better form of abstraction than abstraction patterns. However, note that templates still have to be constructed by hand by looking at known malicious programs.

In [7] the authors present a way of automatically generating malware specifications by comparing the execution behaviour of a known malware against the execution behaviours of a set of benign programs. The algorithm of [7] for extracting malicious patterns (malspecs) proceeds as follows:

1. Collect execution traces. In this step, traces are collected by passively monitoring the execution of each program.
2. Construct dependence graphs. In this step, they construct dependence graphs from the traces to include def-use dependence and value-dependence.
3. Compute contrast subgraphs. In this step, they extract the minimal connected subgraphs of the malware dependence graph which are not isomorphic to any subgraph of the benign dependence graphs.

For using malspecs for malware detection, they consider the semantics-aware malware detector described above and convert malspecs into templates that can then be used by the detection algorithm.
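As a rough, much-simplified illustration of step 3 (our own Python sketch, not the algorithm of [7]; the edge labels are invented), one can model dependence graphs as sets of directed edges between trace events and approximate the contrast by the malware edges appearing in no benign graph. The real algorithm extracts minimal connected subgraphs up to isomorphism rather than taking a plain set difference.

def contrast_edges(malware_graph, benign_graphs):
    # Edges seen in the malware dependence graph but in no benign graph.
    benign_edges = set()
    for g in benign_graphs:
        benign_edges |= g
    return malware_graph - benign_edges

malware = {("read", "decrypt"), ("decrypt", "connect"), ("read", "write")}
benign = [{("read", "write")}, {("read", "close")}]
print(contrast_edges(malware, benign))   # edges unique to the malware trace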
4 Protection Mechanisms
In the preceding section, we have studied the problem of detecting viruses. We have seen theoretical results indicating that the problem is undecidable in general. We have also seen some algorithms that can detect particular viruses. In this section, we will study the possibility of restricting the extent of damage due to an undetected virus.

Cohen [9] identifies three important properties of usable systems: sharing, transitivity of information flow, and generality of interpreting information. Sharing is necessary if we want others to use the information produced by us. For sharing information, there must be a well-defined path between the users. Once we have a copy of the information, it can be used in any way; in particular, we can pass it
on to others. Thus, information flow is transitive. By generality of information, we mean that information can be treated as data or as program. If there is no sharing, there can be no information flow across boundaries and hence viruses cannot spread outside a partition. This is referred to as isolationism. Cohen [9] presents partition models that limit the flow of information. These models are based on well-studied security properties like integrity and confidentiality/secrecy. These properties, when enforced in tandem, result in partitioning the set of users into subsets closed under transitivity of information flow. However, precise implementations are computationally very hard (NP-complete). Isolationism is unacceptable in this inter-networked world. Without generality of interpreting information, viruses cannot spread, since infection requires altering the interpreted information. However, for building useful systems we must allow generality of interpreting information.

Adleman [1] studies the conditions under which a virus can be isolated from the computing environment. We present an intuitive account of his study.

Definition 5. The infected set of a virus v, denoted Iv, is defined as the set of those programs which result from v infecting some program.

Definition 6. A virus v is said to be absolutely isolable iff Iv is decidable.

If a virus v is absolutely isolable, then we can detect a program as soon as it gets infected by v and remove it. Thus, if a virus is absolutely isolable then we can neutralize it. For example, using this method we can neutralize viruses which always increase the size of a target program upon infection. Unfortunately, not all viruses are absolutely isolable, as noted in the theorem below [1].

Theorem 3. There exists a program v which always terminates such that
1. v is a malicious virus, and
2. Iv is semi-decidable.

To deal with viruses of the kind described in the above theorem, Adleman introduces the notion of the germ set of a virus.

Definition 7. The germ set of a virus v, denoted Gv, is defined as the set of those programs which behave (functionally) the same as a program infected by v.

Germs of a virus are functionally the same as infected programs, but are syntactically different. They do not result from infections, but they can infect other programs.

Definition 8. A virus v is said to be isolable within its germ set iff there exists a set of programs S such that:
1. Iv ⊆ S ⊆ Gv
2. S is decidable
If a virus v is isolable within its germ set by a decidable set S, then not allowing programs in the set S to be written to storage or to be communicated will stop the virus from infecting. Moreover, isolating some uninfected germs is an added benefit. Unfortunately, not all viruses are isolable within their germ set.

Adleman [1] suggested the notion of a quarantine as another protection mechanism. In this method, one executes a program in a restricted environment and observes its behaviour under various circumstances. After one gains sufficient confidence in its genuineness, it can be introduced into the real environment. Several techniques have been developed based on this idea. Application sandboxing and virtualization are some of the widely studied methods.

Virtualization Techniques and Sandboxing. The most important aspect of malware detection is being able to observe malware behaviour without being detected by the malware. A malware detector typically runs on the same machine/environment as the malware. Thus, it is susceptible to subversion by the malware. To overcome this limitation, several techniques use a virtual machine for malware detection. In this scenario, the host OS runs a Virtual Machine Monitor (VMM) that provides a virtual machine with a guest OS on top of it. Another alternative is to have the VMM run directly on the hardware instead of on the host OS. The VMM provides strong isolation and protects the host from damage. Therefore, the malware detector can be run outside the virtual machine without the fear of being compromised by the malware. In [20], the authors present a technique for detecting malware by running it in a virtual environment and observing its behaviour from outside. In order to reconstruct the semantic view from outside, the guest OS data structures and functions are cast on the VMM state (which is visible to the malware detector). Using this technique, the authors were able to view the volatile state (e.g., the list of running processes) and the persistent state (e.g., files in a directory) of the virtual machine. This allowed them to detect rootkits like FU³, NTRootkit⁴ and Hacker Defender⁵ that hide their own files and processes. By comparison, the malware detection program running inside the virtual machine could not detect the rootkits. Another approach for malware detection via hardware virtualization is presented in [12]. In this approach, the authors present requirements for transparent malware analysis and present a malware detection mechanism that relies on hardware virtualization extensions. The advantage of using hardware virtualization extensions is that they provide features like higher privilege, use of shadow page tables and privileged access to sensitive CPU registers. These techniques offer better transparency. However, virtual machine techniques are not completely foolproof. Malware writers employ sophisticated techniques to detect whether the program is running inside a virtual machine. Whenever a virtual environment is detected, the malware can modify its behaviour and go undetected. In [18] and [27], the authors outline various anomalies that exist in virtualization techniques that can be exploited
³ http://www.rootkit.com/board project fused.php?did=proj12
⁴ http://www.megasecurity.org/Tools/Nt rootkit all.html
⁵ http://hxdef.czweb.org
by malware to detect the presence of a virtual environment. Among the various anomalies presented are: discrepancies between the interfaces of real and virtual hardware, inaccuracies in the execution of some non-virtualized instructions, inaccuracy due to the difficulty of modeling complex chipsets, discrepancies arising out of physical resources shared between guests, and timing discrepancies. Even with hardware virtualization extensions, several timing anomalies can be easily detected. Thus, VM techniques do not offer complete transparency. In [21] an application of virtualization to launch a rootkit is presented. This rootkit installs a virtual machine under the current OS and then hoists the original OS into this virtual machine. These Virtual Machine Based Rootkits (VMBRs) can support other malicious programs in a separate OS that is isolated from the target system. VMBRs are difficult to detect because software running on the target system cannot access their state.

Sandboxes are security mechanisms that separate running programs by providing a restricted environment in which certain functions are prohibited (e.g., the chroot utility in UNIX that allows one to set the root directory for a process). In this sense, sandboxes can be viewed as a specific example of virtualization (e.g., the Norman Sandbox⁶ that analyzes programs in secure, emulated environments). Sandboxes restrict the effects of a program to within a specific boundary. For example, CWSandbox⁷ uses an API-hooking technique to re-route system calls through monitoring code. Thus, all relevant system calls made by un-trusted programs are monitored at run-time, their behaviour is analyzed, and automated reports are generated. We have developed a similar sandboxing technique for the Linux OS [32] wherein we monitor the system calls made by an un-trusted program and restrict its activities at run-time. We provide a guarded-command based policy specification language to encode security policies. This language is as expressive as a security automaton [30] and can express complex policies that depend on temporal aspects of the system call trace. Whenever a system call is intercepted, it is allowed to go through only when it is deemed safe with respect to the policy at hand. Sandboxes also suffer from the limitations described above for virtualization techniques.
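The following Python sketch (purely illustrative; it is not the policy specification language of [32], and the call names are simplified) conveys the flavour of such guarded, trace-dependent system-call policies: each guard inspects the intercepted call together with the history of calls seen so far, and the call is allowed only if every guard accepts it.

# Illustrative guarded policy over a system-call trace: block creation of a
# network socket after /etc/passwd has been opened.

def deny_socket_after_passwd(call, history):
    touched_passwd = any(c[0] == "open" and "/etc/passwd" in c[1] for c in history)
    return not (call[0] == "socket" and touched_passwd)

POLICY = [deny_socket_after_passwd]

def allowed(call, history):
    return all(guard(call, history) for guard in POLICY)

trace = [("open", "/etc/passwd"), ("read", "fd3"), ("socket", "AF_INET")]
history = []
for call in trace:
    print(call, "allowed" if allowed(call, history) else "blocked")
    history.append(call)

In this run, the first two calls are allowed and the socket call is blocked because the guard's temporal condition (a prior open of /etc/passwd) has become true.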
5 A New Approach: Validating Behaviours of Programs
In this section, we present a promising future direction [33] for protection from computer viruses. Our idea is based on validation, as opposed to verification, in the context of compilers [3,22]. The validation approach is based on comparing whether one program is as good or as bad as another. Such an approach relies on the notion of bisimulation due to David Park and Robin Milner; these concepts are very widely used in the context of process algebra and protocol verification. Recollect that viruses spread by infecting trusted programs. When an infected program is executed, the program, in addition to performing its intended job
⁶ http://www.norman.com/technology/norman sandbox/
⁷ http://www.cwsandbox.org/
(observable by the user), also results in damage (which happens in the background without the user's knowledge/consent). Based on this observation, we divide the behaviour of a program into two parts: external behaviour (behaviour observable from outside) and internal behaviour (behaviour observable from inside the system with the help of a monitor). We further note that if the infection caused by the virus modifies the external behaviour, then the user will suspect and remove the program. Therefore, it is reasonable to assume that infection modifies only the internal behaviour of programs. We therefore suggest that we must monitor the internal behaviour of trusted programs and validate them against their intended behaviour in an environment.

For the class of transactional reactive programs, we can define the external behaviour as the sequence of interactions that happen between the user and the program. For example, if we consider a vending machine as our system, one possible external behaviour is given by place-coin ˆ choose-item ˆ receive-item. We also observe that during the execution of a program p with external behaviour t, the main process may spawn child processes internally (not necessarily observable to the user) for modularly achieving/computing the final result. Thus, the total (internal + external) behaviour can be denoted by a tree with processes, data operations, etc. denoted as nodes and directed edges. We can now define the internal behaviour of a program as the process tree generated during execution, together with the associated system calls made by each process (vertex/node) in the tree. Based on this model of the internal behaviour, we can derive an algorithm for model-checking whether a program behaviour simulates a given behaviour. If the observed behaviour of a program simulates its intended behaviour, then we say that the program is uninfected; else we say that it is infected. We can use this method to detect if and when a trusted program is infected.

We have performed a lot of experiments and obtained encouraging results. We present some of the experiments and our observations in this section. We performed our experiments on a machine with a Linux (Ubuntu distribution) OS. We monitored (unobtrusively) the sequence of system calls made (using the strace tool) when the genuine text editor nano is used to edit a file. Note that system calls act as an interface between the application and the underlying hardware devices (they can be thought of as services). We have also noted the percentage of time spent in various system calls, the number of processes created during execution, the total running time, and the CPU and other resources used during this operation. We have collected similar information for the genuine ssh program, starting from the time the service is started to the time the user logged in and completed the session.

We executed an infected version of the nano program and collected the observable information during its execution. We then compared it with its intended behaviour, and we easily concluded from the observations made that the version of nano we executed was infected. Moreover, we were also able to identify the instructions added to the program due to the infection. Summary of differences in the system call profiles of the genuine nano vs the infected nano:
1. The original program made 18 different system calls, whereas the infected version made 48.
2. The infected program made network-related system calls like socket, connect, etc., whereas the original program made none.
3. The infected program spawned 3 processes, whereas the original program did not spawn any process.
4. There is a huge difference in the number of read and write system calls.
5. We observed a difference in the timing information provided by the strace summary (when both versions were run only for a few seconds). The original program spent around 88% of its time on the execve system call and 12% on stat64, whereas the infected version spent 74.17% on waitpid, 10.98% on write, 6.28% on read, 4.27% on execve and negligible time on stat64. This indicates that the infected program spent more time waiting on children than in execution. The increased percentage of time spent on writing and reading by the infected program indicates malfunction.

We executed an infected ssh program and collected the observable information during its execution. We then compared it with the intended behaviour and found that the infected program modified the authentication module of the ssh program. The infected ssh would enable an attacker to successfully log in to our host using a valid username with a magic-pass. In this case the infection removed certain instructions from the program. At a high level we can describe the expected behaviour of ssh as follows:
1. start sshd service
2. wait for a connection and accept a connection
3. authenticate the user
4. prepare and provide a console with appropriate environment
5. manage user interaction and logout
6. stop sshd
Summary of differences in behaviour between the genuine ssh and the infected ssh:
1. start sshd service
   – Genuine sshd uses the keys and config files from /etc/ssh, whereas the infected ssh obtains these from a local installation directory
2. authenticate the user
   – Genuine sshd used Kerberos, crypto utilities and PAM modules, which the infected ssh does not use
   – The infected ssh uses the config and sniff files (local/untrusted resources), which the genuine sshd does not use

To test the resilience of our approach to the simple syntactic transformations that virus writers resort to in order to evade detection, we compiled the virus responsible for the infection under various levels of optimization. The gcc compiler performs several simple syntactic transformations like loop unrolling, function inlining, register reassignment, etc. We executed the infected programs nano (similarly ssh) compiled under different optimization levels and collected the
observable information during execution. We observed that, barring very minor changes, these programs produced the same traces of system calls. One difference we observed was the way in which the contents of a file were buffered into and out of memory: the optimized program read in chunks of size 4096, whereas lower levels of optimization resulted in reading smaller chunks. These experiments demonstrate that the various obfuscators would have little impact on our approach and that we would still be able to catch infections.

To summarize, we have presented an approach in which we benchmark the intended behaviours of trusted programs in an execution environment; whenever we want to validate whether an installation of a trusted program in a similar environment has been tampered with, we collect the observable information at runtime and compare it with the intended behaviour. If there is a significant difference between the two, then we say that the program is infected. We have also shown that the method is resilient to obfuscation. We are conducting experiments to study the effect of polymorphic and metamorphic viruses on our approach. Note that the method we presented will also be very useful for validating embedded systems, because the software and hardware configurations of a typical embedded system are very few in number. In our study so far, the above approach appears very fruitful for checking that network communication devices and automobile software have not been tampered with [31]. In fact, our study shows that our approach does not need the constraints imposed in [31] or in the related work on the Pioneer and SWATT protocols. This work is in progress and will be reported elsewhere.
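To make the kind of profile comparison used in these experiments concrete, the following sketch collects a system-call profile with strace and flags deviations from a benchmarked run. It is a simplified illustration rather than our actual tool: the profile format, the set of suspicious calls and the thresholds shown are purely indicative, and it assumes a Linux host with the strace tool installed.

from collections import Counter
import re
import subprocess

def syscall_profile(command):
    """Run `strace -c -f` on a command and return {syscall_name: call_count}."""
    result = subprocess.run(["strace", "-c", "-f"] + command,
                            capture_output=True, text=True)
    profile = Counter()
    # The strace -c summary table is written to stderr; data rows start with a
    # percentage and end with the syscall name.
    for line in result.stderr.splitlines():
        fields = line.split()
        if (len(fields) >= 5 and re.match(r"^\d+\.\d+$", fields[0])
                and fields[-1] != "total"):
            profile[fields[-1]] += int(fields[3])
    return profile

def looks_infected(intended, observed,
                   suspicious=("socket", "connect", "fork", "clone", "vfork")):
    """Flag the observed run if it makes system calls absent from the benchmark
    (especially network or process-creation calls), or if read/write counts
    diverge sharply from the benchmarked intended behaviour."""
    extra = set(observed) - set(intended)
    bad = extra.intersection(suspicious)
    if bad:
        return True, f"unexpected system calls: {sorted(bad)}"
    for name in ("read", "write"):
        # Illustrative threshold: ten times the benchmarked count.
        if observed.get(name, 0) > 10 * max(intended.get(name, 0), 1):
            return True, f"{name} count far above benchmark"
    return False, "observed behaviour is consistent with the benchmark"

# Illustrative use: benchmark a genuine editor once, then re-validate it later.
# benchmark = syscall_profile(["nano", "--version"])
# verdict = looks_infected(benchmark, syscall_profile(["nano", "--version"]))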
6 Discussion
Malware has been a major cause of concern for information security. Today malicious programs are very widespread, and the losses incurred due to malware are very high (in some cases they result in financial loss, while in other cases they spoil the reputation of an organization). It is argued in [13] that financial gain from criminal enterprise has led to large investments of funds in developing tools and operational capabilities for attackers on net transactions. Thus, it is imperative to develop mechanisms to defend ourselves from malware attacks. To develop sound defense mechanisms, it becomes necessary that we understand the fundamental capabilities of viruses. In this paper, we have surveyed various efforts to formalize the notion of a virus/malware and to characterize the damage due to them. We presented theoretical results based on these formalizations to show that detection of viruses in general is an undecidable problem. We presented a variety of detection mechanisms which are widely deployed to detect viruses in limited cases. We also established the shortcomings of these detection mechanisms and discussed different techniques which virus writers resort to in order to evade detection. We then went on to survey some methods to limit the damage due to unidentified viruses. Unfortunately, these methods end up being too complex to implement, or force us towards unusable systems (systems with no sharing, for example). It may be pointed out that such malware can be exploited for criminal activities.
We have also presented a promising new direction for protection against malware. Our approach is based on the observation that malware infects trusted programs to include a subroutine whose damage is carried out in the background without the user's consent. We proposed a framework based on runtime monitoring of trusted programs and the notion of bisimulation to validate their behaviour. We presented various experimental results to demonstrate the efficacy of the approach and showed its resilience to program obfuscation techniques.
Acknowledgement
The authors are grateful to L.M. Adleman and F. Cohen, who founded the excellent theory of computer viruses. One of the authors (Harshit Shah) was supported under the ITPAR II project from DST, Govt. of India.
References
1. Adleman, L.M.: An abstract theory of computer viruses. In: Goldwasser, S. (ed.) CRYPTO 1988. LNCS, vol. 403, pp. 354–374. Springer, Heidelberg (1990)
2. Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S.P., Yang, K.: On the (im)possibility of obfuscating programs. In: Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139, pp. 1–18. Springer, Heidelberg (2001)
3. Bhattacharjee, A.K., Sen, G., Dhodapkar, S.D., Karunakar, K., Rajan, B., Shyamasundar, R.K.: A system for object code validation. In: Joseph, M. (ed.) FTRTFT 2000. LNCS, vol. 1926, pp. 152–169. Springer, Heidelberg (2000)
4. Chow, S., Gu, Y., Johnson, H., Zakharov, V.A.: An approach to the obfuscation of control-flow of sequential computer programs. In: Davida, G.I., Frankel, Y. (eds.) ISC 2001. LNCS, vol. 2200, pp. 144–155. Springer, Heidelberg (2001)
5. Christodorescu, M., Jha, S.: Static analysis of executables to detect malicious patterns. In: SSYM 2003: Proceedings of the 12th Conference on USENIX Security Symposium, Berkeley, CA, USA, pp. 12–12. USENIX Association (2003)
6. Christodorescu, M., Jha, S.: Testing malware detectors. In: ISSTA 2004: Proceedings of the 2004 ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 34–44. ACM, New York (2004)
7. Christodorescu, M., Jha, S., Kruegel, C.: Mining specifications of malicious behavior. In: Crnkovic, I., Bertolino, A. (eds.) ESEC/SIGSOFT FSE, pp. 5–14. ACM, New York (2007)
8. Christodorescu, M., Jha, S., Seshia, S.A., Song, D.X., Bryant, R.E.: Semantics-aware malware detection. In: IEEE Symposium on Security and Privacy, pp. 32–46. IEEE Computer Society, Los Alamitos (2005)
9. Cohen, F.: Computer viruses: theory and experiments. Comput. Secur. 6(1), 22–35 (1987)
10. Cohen, F.: Computer Viruses. PhD thesis, University of Southern California (1986)
11. Collberg, C., Thomborson, C., Low, D.: A taxonomy of obfuscating transformations. Technical Report 148, Department of Computer Science, University of Auckland (July 1997), http://www.cs.auckland.ac.nz/~collberg/Research/Publications/CollbergThomborsonLow97a/index.html
12. Dinaburg, A., Royal, P., Sharif, M., Lee, W.: Ether: malware analysis via hardware virtualization extensions. In: CCS 2008: Proceedings of the 15th ACM Conference on Computer and Communications Security, pp. 51–62. ACM, New York (2008)
13. Dittrich, D.: Malware to crimeware: How far have they gone, and how do we catch up? Login, The USENIX Magazine 34(4), 35–44 (2009)
14. Filiol, E.: Computer Viruses from Theory to Applications. IRIS International Series. Springer, France (2005)
15. IBM X-Force Threat Reports: IBM Internet Security Systems X-Force trend and risk report (2008), http://www-935.ibm.com/services/us/iss/xforce/trendreports/
16. IBM X-Force Threat Reports: IBM Internet Security Systems X-Force mid-year trend and risk report (2009), http://www-935.ibm.com/services/us/iss/xforce/trendreports/
17. Forrest, S., Hofmeyr, S.A., Somayaji, A.: Computer immunology. CACM 40(10), 88–96 (1997)
18. Garfinkel, T., Adams, K., Warfield, A., Franklin, J.: Compatibility is not transparency: VMM detection myths and realities. In: HOTOS 2007: Proceedings of the 11th USENIX Workshop on Hot Topics in Operating Systems, Berkeley, CA, USA, pp. 1–6. USENIX Association (2007)
19. Horwitz, S.: Precise flow-insensitive may-alias analysis is NP-hard. ACM Trans. Program. Lang. Syst. 19(1), 1–6 (1997)
20. Jiang, X., Wang, X., Xu, D.: Stealthy malware detection through VMM-based "out-of-the-box" semantic view reconstruction. In: CCS 2007: Proceedings of the 14th ACM Conference on Computer and Communications Security, pp. 128–138. ACM, New York (2007)
21. King, S.T., Chen, P.M., Wang, Y.-M., Verbowski, C., Wang, H.J., Lorch, J.R.: SubVirt: Implementing malware with virtual machines. In: SP 2006: Proceedings of the 2006 IEEE Symposium on Security and Privacy, Washington, DC, USA, pp. 314–327. IEEE Computer Society, Los Alamitos (2006)
22. Kundaji, R., Shyamasundar, R.: Refinement calculus: A basis for translation validation, debugging and certification. Theoretical Computer Science 354, 156–168 (2006)
23. Morrisett, G., Crary, K., Glew, N., Grossman, D., Samuels, R., Smith, F., Walker, D., Weirich, S., Zdancewic, S.: TALx86: A realistic typed assembly language. In: Second Workshop on Compiler Support for System Software, pp. 25–35 (1999)
24. Myers, A.C.: JFlow: practical mostly-static information flow control. In: POPL 1999: Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 228–241. ACM, New York (1999)
25. Necula, G.C.: Proof-carrying code. In: POPL 1997: Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 106–119. ACM, New York (1997)
26. von Neumann, J.: Theory of Self-Reproducing Automata. University of Illinois Press, Champaign (1966)
27. Raffetseder, T., Krügel, C., Kirda, E.: Detecting system emulators. In: Garay, J.A., Lenstra, A.K., Mambo, M., Peralta, R. (eds.) ISC 2007. LNCS, vol. 4779, pp. 1–18. Springer, Heidelberg (2007)
28. Ramalingam, G.: The undecidability of aliasing. ACM Trans. Program. Lang. Syst. 16(5), 1467–1471 (1994)
29. Rice, H.G.: Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society 74(2), 358–366 (1953)
30. Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3(1), 30–50 (2000)
31. Seshadri, A., Luk, M., Perrig, A., van Doorn, L., Khosla, P.: Externally verifiable code execution. CACM 49(9), 45–49 (2006)
32. Shah, H.J., Shyamasundar, R.K.: On run-time enforcement of policies. In: Cervesato, I. (ed.) ASIAN 2007. LNCS, vol. 4846, pp. 268–281. Springer, Heidelberg (2007)
33. Shyamasundar, R., Shah, H., Kumar, N.N.: Checking malware behaviour via quarantining (abstract). In: Int. Conf. on Information Security and Digital Forensics, City University of London (September 2009); full manuscript under submission
34. Walker, D.: A type system for expressive security policies. In: POPL 2000: Proceedings of the 27th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 254–267. ACM, New York (2000)
35. Computer Economics: 2007 malware report: The economic impact of viruses, spyware, adware, botnets, and other malicious code, http://www.computereconomics.com/page.cfm?name=Malware%20Report
Semantic Frameworks—Meanings in the Architecture
Jim Davies and Jeremy Gibbons
Computing Laboratory, University of Oxford
http://www.comlab.ox.ac.uk
Abstract. In most applications of information technology, the limiting factor is neither computational power nor storage capacity; nor is it connectivity. Hardware can be obtained at commodity prices, and software infrastructure can be downloaded free of charge. The limiting factor is the cost of consistency and coordination: in software development, in systems integration, and in continuing interaction with users. This paper explains how the use of semantic technologies and model-driven engineering can greatly reduce this cost, and thus increase the quality, interoperability, and suitability of information technology applications.
1 Introduction
The prediction made in 1965 by Gordon Moore, later enshrined as “Moore’s Law”, still holds true: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... there is no reason to believe it will not remain nearly constant for at least ten years” [10]. This complexity, and hence the power, capacity, and connectivity of our computing infrastructure, continues to increase at an exponential rate, and is likely to do so for at least another ten years [4].

This law will one day cease to apply. As Moore himself has said, “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” [2]. However, we have already reached the point where every piece of information that we might hope to record using pen and paper could easily be stored and processed in computing systems. We could be living in the world of the “paperless office”, predicted in an editorial for Business Week back in 1975 [16]. We could be living in a world in which information technology allowed scientists, businesses, and governments to work together, seamlessly, and effectively, combining their resources to address the challenges of a new century.

Instead, we are still living in a world in which most scientific data is unshared, many business systems are incompatible, and ‘joined up government’ is little more than an electoral promise [8]. The reason for this—the limiting factor on progress and application—is neither a lack of computational power nor one of storage capacity; nor is it a lack of connectivity. It is, quite simply, the cost of consistency and coordination: in software development, in systems integration, and in continuing interaction with users.
It can be expensive to develop software whose behaviour is consistent with requirements. A typical ‘requirements process’ will involve analysis of existing systems, processes, and constraints, and agreement upon a specification for the new system. This specification then forms a basis for software design and development, and also for subsequent evaluation of the delivered system. All too often, it is only at the point of delivery that we discover that requirements have been inadequately specified, misinterpreted, or misunderstood. It can be expensive also to reconcile requirements, or to coordinate software development, so that different systems are able to work together. Inconsistency at lower levels of abstraction—hardware, networking protocols, file formats—may be irritating, but is easily accommodated through software drivers, wrappers, and converters. At a higher, semantic level, however, differences in approach and interpretation can lead to situations in which meaningful integration—of systems or data—would be impossible. Finally, it can be expensive to provide appropriate training and documentation, or to develop a sufficiently intuitive interface, that people will be consistent in their use of the system: recording information in the same way, making use of the same functions, and adopting compatible interpretations of the data presented. And yet if they do not, then the meaning of the data in the system may be quite inconsistent with any specification, and may depend upon information that has not been recorded. This can greatly reduce the value and utility of the system, and the data it contains. In each case, we can reduce costs by making interpretations and meanings more accessible and more computable: taking them out of the heads of the customers, developers, and users, and recording them in a way that supports not only manual inspection, but also automatic incorporation in the design, integration, and operation of our systems. To make this possible, we require a structure for the creation and management of terms, definitions, and usages—a standard means of describing semantics.
2 Semantics
In computing, the notion of ‘semantics’ is used in two senses. When we talk of programming language semantics, or the semantics of a modelling notation, we mean the intended interpretation of the language features—often expressed in terms of some other notation whose meaning is already understood. When we talk of semantic integrity, or of the semantic web, we mean the intended interpretation of data or behaviour in terms of entities in the real world. For example: we might give a semantics to the statement x := x + 1 in a programming language in terms of a mathematical function upon relations representing states; and if a travel booking system were to record both currentDate and dateOfTravel information, we should not be surprised to find a semantic integrity constraint stating that the values of these attributes for the same booking enquiry should satisfy the < (less than) relation.
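A small illustrative sketch, with names of our own choosing, makes the two senses concrete: the first function is a state-transformer reading of x := x + 1, and the second is the booking integrity constraint just mentioned.

from datetime import date

# 'Language' semantics: the statement  x := x + 1  read as a function from
# states to states, where a state maps variable names to values.
def assign_x_plus_one(state):
    new_state = dict(state)
    new_state["x"] = state["x"] + 1
    return new_state

# 'Real world' (integrity) semantics: a booking enquiry is consistent only if
# currentDate precedes dateOfTravel.
def booking_is_consistent(current_date, date_of_travel):
    return current_date < date_of_travel

assert assign_x_plus_one({"x": 41})["x"] == 42
assert booking_is_consistent(date(2010, 2, 15), date(2010, 3, 1))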
These two usages of the word are closely related. A computable representation of a ‘real world’ semantics will be expressed using modelling notations and programming languages, and the ‘language’ semantics will tell us how to break down that expression into simpler, component parts. However, while we might expect a language semantics to be a single, static document, generally agreed, we should expect our real world semantics to be a varying composition of documents and models, specific to a particular situation or purpose. Furthermore, whereas a language semantics might hope to present a total, complete account of the meaning of a particular construct, expressed still in some abstract, discrete mathematical domain, a real world semantics will always be partial and incomplete. Consider, for example, the semantics of a data item collected on the US I-94W visa waiver form [14], against the question: “have you ever been arrested. . . for a violation related to a controlled substance. . . ?”. The value of the data item might be “yes” or “no”, or some value indicating that the question was not properly answered. Its semantics may include the text of the question, the version of the form concerned, any guidance notes supplied, and the date of completion. It may include also the nationality of the individual named on the form, as this might conceivably influence the interpretation of the phrase “controlled substance”. In general, there will be no limit upon the amount of contextual information that might have some bearing upon our interpretation of a data item or function. And as that context is constantly changing, we might sensibly regard any real world semantics as transient, consisting simply of whatever additional information we present for the purpose in hand. This information we will refer to as the semantic metadata for the item concerned. To ensure that the behaviour of our systems is consistent with requirements, we can: provide a rich collection of semantic metadata for the specification document, and make this accessible to both customers and developers; make use of this semantic metadata in the automatic generation of part (or even all) of the implementation. We refer to this as semantics-driven development. To coordinate the design of different systems, we can develop each with reference to common semantic metadata: that is, with reference to some collection of ‘standard’ interpretations. The transient, unbounded nature of real world semantics means that any standard at this level will be limited in scope, applicable only to a certain audience, and only for a certain time. We must therefore support multiple, evolving ‘standards’, by developing a standard means of publishing, managing, and relating semantic metadata: a semantic meta-standard. To ensure that the operation of our system is consistent with the designers’ intentions, and to coordinate the actions and interpretations of the users, we can make the semantic metadata available at run-time: not merely by incorporating it into the design of the system, but also by making it accessible through the user interface. In every interaction with the system, a user’s actions may be informed—and related computations may be initiated, or even automatically performed—on the basis of the semantics of the data items involved. We refer to this principle of transparency as open semantics.
In each case, we are aiming to ensure that each and every artifact present in our system—an attribute name in a design document, a method name in an interface, a question on a screen, an element of a drop-down list—comes with a reference to some useful semantic metadata. Such a reference may be accessed by a computer program: for example, to examine an XML description of a particular web service. Or it may be followed by a human, who wishes to learn more about the interpretation of an answer before they select it from a list. To make this work in practice, we require an implementation of a semantic meta-standard: a standard way for developers, customers, and users to publish, manage, and relate semantic metadata. We require also, for any particular application domain, a collection of metadata conforming to that standard, comprising both the constructive descriptions needed for semantics-driven development, but also the documentation needed for an open semantics. We will refer to such a collection as a semantic framework.
3 Semantic Frameworks
At the heart of any semantic framework will be a collection of value domains, each describing a range of possible values that might be associated with a data item. The importance of agreement upon the range and interpretation of values, including the units employed, cannot be overstated. As an example, consider the case of the NASA Mars Climate Orbiter [11], which was lost after messages between two different systems were misinterpreted: the first was sending values in US units (lbf-s), the second was expecting values in SI units (Ns); the result was an orbit 170km lower than planned—23km below survivable height. A further, cautionary example is provided by the range of different interpretations for the same ‘10 codes’ across the emergency services in the United States. Almost any viewer of US television will be aware that ‘10–4’ means ‘okay’ or ‘received and understood’ (although, as an aside, we might observe that this code has been reported as meaning ‘please repeat previous message’ in New Zealand [17]). However, other codes have widely differing meanings, even for agencies with overlapping jurisdictions. The code ‘10-47’ means ‘nuclear substance incident’ for New York City emergency response, but ‘road repair’ for New York State. While this might be cause for amusement, the fact that ‘10-33’ means ‘traffic delays’ for the highway patrol in Missouri, but ‘officer down’ to local police in the state could have had fatal consequences for an injured officer [13]. Following reviews of emergency services coordination, the US Department of Homeland Security has asked emergency services not to use these and other codes in radio communication. It is inevitable that we should wish to place differing interpretations upon the same codes. The effort required to achieve and maintain a consistent interpretation increases rapidly with the number of codes involved, and yet each agency or community will have its own priorities regarding standardisation: an agency working in a cold climate might agree a set of terms pertaining to snow-related incidents, but these terms would be irrelevant elsewhere.
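The following sketch is a purely hypothetical illustration of the problem: two value domains assign meanings to the same codes, and a simple check reveals the codes whose interpretations conflict. The data structure is ours; the codes and meanings are those quoted above.

# Two (hypothetical) registry entries giving agency-specific meanings to the
# same radio codes, and a check for shared codes with conflicting meanings.
nyc_codes = {"10-4": "received and understood",
             "10-47": "nuclear substance incident"}
nys_codes = {"10-4": "received and understood",
             "10-47": "road repair"}

def conflicting_codes(domain_a, domain_b):
    """Return codes present in both value domains but with different meanings."""
    return {code: (domain_a[code], domain_b[code])
            for code in domain_a.keys() & domain_b.keys()
            if domain_a[code] != domain_b[code]}

print(conflicting_codes(nyc_codes, nys_codes))
# {'10-47': ('nuclear substance incident', 'road repair')}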
Fig. 1. BA Special Meals
In business, organisations will adopt new terms and new meanings as part of a process of differentiation, in seeking competitive advantage. Or they may adopt a different interpretation out of necessity, being unable to find an existing set of values that meets their operational needs. As an example, consider the value domain shown in Figure 1, which describes the list of special meals that can be requested by British Airways (BA) passengers. Each item of text, such as “Asian vegetarian”, comes with a brief explanation of its intended meaning: a description of a particular class of meal. A full, computable definition of this value domain would specify also the representation (a text string of a particular length), and may be associated with other resources, such as documents or even menus: this would allow the open presentation of the definition at the point where the customer or their agent makes a selection. Each usage of a value domain is associated with one or more data elements: templates for the collection of data against a specific value domain. A data element must have a unique identifier; it may have also a set of names and an informal description, elaborating upon the semantics provided by the value domain. For example, a data element for meal choices might be defined as

identifier: 1777777-14-72-131-424
name: special meal option
description: [ ... ]
value domain: 1777777-14-72-131-001
Fig. 2. Using a Common Data Element (UML class diagram relating SpecialMealRequest, PassengerNameRecord and FrequentFlyerPreferences, with mealRequest and mealPreference tagged <<1777777-14-72-131-424>>)
where 1777777-14-72-131-001 corresponds to the definition of value domain shown in Figure 1. Note that value domains and data elements are both classifiers, in that our documents, programs, and systems include items identified as instances of this value domain, or that data element. For example, a UML model showing a particular usage of the data element introduced above is shown in the class diagram of Figure 2, a partial description of the information architecture used by the airline. In the diagram, a UML class is tagged with a reference to the registered data element, indicating that it should implement the corresponding data type, and that its usage should be consistent with the corresponding semantics. In this model, both mealRequest and mealPreference are instances of the data element 1777777-14-72-131-424. They are also elaborations upon its meaning: we may regard this UML model as an additional item of semantic metadata, telling us more about how the BA meal choice data element is used, and thus more about its meaning. A developer working on a new system for the company or for one of its business partners might achieve greater consistency by re-using, or making reference to, this model, as well as to the definitions of the data element and value domain. We may expect any semantic framework to include a number of models that represent classifications of data elements and value domains—or even of other models. These models support the discovery and re-use of common data elements in different contexts, and may add to their semantics by declaring and constraining relationships between them. Some of these will have a specific, administrative role: indicating which of a range of elements may be preferred for a specific purpose, or recording versioning and supercession information. The most important kind of model, in terms of potential reductions in the cost of consistency and coordination, is the domain metamodel. This is a model of models, a pattern for software development, or systems integration and operation, within a particular domain. For example, we might imagine a metamodel for airline booking systems, which includes an element describing special meal requests. Different airlines might instantiate that element in different ways—see,
Religious Meals
– Kosher Meal: Pre-packed and sealed; contains meat
– Muslim Meal: No alcohol/pork/ham/bacon
– Hindu Meal: No beef, veal, pork, smoked or raw fish, but can contain other types of meat
Vegetarian Meals
– Raw Vegetarian Meal: Only raw fruits and vegetables
– Vegetarian Oriental Meal: No meat or seafood of any sort; can contain dairy products; cooked Chinese-style
– Vegetarian Indian Meal (non-strict): No meat of any sort; can contain dairy products; cooked Indian-style
– Vegetarian Jain Meal (strict; suitable for Jain): No meat of any sort; no onion, garlic, ginger and all root vegetables; cooked Indian-style
– Western Vegetarian (non-strict; ovo-lacto): No meat of any sort; can contain dairy products; cooked Western-style
– Vegetarian Vegan Meal (strict): No meat of any sort; no dairy products; cooked Western-style

Fig. 3. SQ Special Meals
for example, Figure 3, which describes the meal choices available on Singapore Airlines—but we would still be able to locate and read, automatically, the value domain corresponding to meal selections. Furthermore, a metamodel provides a basis for defining transformations at the model level. We might register, as part of our semantic framework, a model transformation that translates between the BA and Singapore domains; this could then be automatically discovered and applied on demand. Of course, we might expect to choose between one of several possible transformations, according to the specific context or purpose. For example, if we were interested in vegetarians who consume dairy products but are strict in their avoidance of seafood, then we might wish to ignore all but the first two categories of vegetarian meals from the Singapore Airlines list: these are the only two that explicitly exclude seafood. Alternatively, we might assume that the phrase “no meat of any sort” excludes seafood, and hence that any of the categories could apply. Any integration or analysis represents a context in itself, an additional, specific semantics for the information; we may wish to record this to explain our actions or results—in scientific terms, to provide for the repeatability of the experiment. If we wished to conduct a number of similar analyses, then we might define a metamodel for the analysis of data in this domain, and instantiate this with each specific set of assumptions.
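The sketch below illustrates, in purely hypothetical form, how such a transformation might be registered twice, once for each reading of the Singapore list; the target category label is an assumption introduced for the example, and the point is that each mapping is itself a registrable artifact recording the assumptions under which it is valid.

# Two alternative (hypothetical) transformations from the Singapore Airlines
# value domain of Figure 3 towards a generic 'vegetarian, no seafood' category.
STRICT = {  # only categories that explicitly exclude seafood
    "Raw Vegetarian Meal": "vegetarian-no-seafood",
    "Vegetarian Oriental Meal": "vegetarian-no-seafood",
}

LENIENT = dict(STRICT, **{  # assume 'no meat of any sort' also excludes seafood
    "Vegetarian Indian Meal (non-strict)": "vegetarian-no-seafood",
    "Vegetarian Jain Meal (strict; suitable for Jain)": "vegetarian-no-seafood",
    "Western Vegetarian (non-strict; ovo-lacto)": "vegetarian-no-seafood",
    "Vegetarian Vegan Meal (strict)": "vegetarian-no-seafood",
})

def translate(coded_value, mapping, default="unmapped"):
    """Apply a registered value-domain transformation to one coded value."""
    return mapping.get(coded_value, default)

print(translate("Vegetarian Vegan Meal (strict)", STRICT))   # 'unmapped'
print(translate("Vegetarian Vegan Meal (strict)", LENIENT))  # 'vegetarian-no-seafood'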
4 Application
This notion of semantic frameworks has been informed by several years of work in a particular application domain: that of clinical research informatics. The wide variations in biology and physiology within a population, and the variety of factors affecting health, mean that it is difficult to infer anything based upon one person’s response to a particular treatment. As a result, clinical trials, systematic studies of large populations, are essential to progress in medicine. In the case of the anti-cancer treatment Tamoxifen [3], the integration of data from 400 different trials allowed researchers to identify the subset of the population responsive to the drug, and indicated the optimum period of treatment. The adoption of compatible experimental protocols, with a common set of observations—data elements and value domains—produced an evidence base that changed clinical practice in the UK and elsewhere, and reduced mortality from operable breast cancer by 24%. This kind of integration is more often impossible: different studies make different observations, record the same information in different ways, or leave the precise meaning of the data unclear. A recent attempt at integrating 75 studies in ovarian cancer [5] was more representative in its outcome, failing due to inconsistencies in the choice of clinical and molecular variables, together with a lack of critical information about the experimental processes. It is unrealistic to expect different trials to be conducted to the same design: each represents a fresh context for observations, and we would expect designs to change as scientific knowledge advances, novel techniques are developed, and new technologies become available. However, not only can we identify standard collections of data elements and value domains for use across multiple trials—as in the case of Tamoxifen—but we can identify also commonalities between trial designs, patterns that every trial of a certain kind must conform to. The Consolidated Standards of Reporting Trials (CONSORT) statement [9] is an account of best practice in trials design. As part of the UK CancerGrid project [1], we were able to develop a domain metamodel for trials based upon this statement: a template that could be instantiated to produce a specific trial design. The process of instantiation consists in selecting data elements and value domains for the trial in question, some of which might be novel, and can be published for future reference. As clinical trials are experiments upon humans, a design document must describe the proposed trial in considerable detail, sufficient to allow a committee to determine it would be ethical to proceed. An instantiated trial model can be used as the basis for the automatic generation of the software needed to support trial execution: the forms for data entry, the services for subject and data management, and the transformations needed to extract the information needed to meet particular reporting requirements. The automatic generation of software artifacts can be crucial to the success of a semantic framework: it reduces the cost of software production to the point where we can overcome the inertia of existing systems and processes, replacing manual production of systems that are not interoperable with the automatic
production of systems that are. Furthermore, the models used as the basis for automatic generation are necessarily accurate depictions of usage, and hence are themselves accurate, up-to-date representations of semantic information. The semantic framework produced by the CancerGrid project is now being extended as part of the US cancer Biomedical Informatics Grid (caBIG) initiative [15], linking clinical trial designs, data sets, and analytical tools produced by a global community of cancer researchers. The cancer Data Standards Registry and Repository (caDSR) that sits at the heart of caBIG is the largest example of a semantic framework currently in use: it contains more than 12,000 data elements, whose definitions are linked to an underlying thesaurus of more than 60,000 biomedical terms. Other examples include the Metadata Online Registry (METeOR) maintained by the Australian Institute of Health and Welfare, the Canadian Institute for Health Information, the National Information Exchange Model (NIEM) from the US Departments of Homeland Security and Justice, and the US Health Information Knowledgebase. All of these, and the CancerGrid framework, implement the ISO 11179 [6] standard for metadata registries—an example of the kind of meta-standard required for the description of semantic information. However, the use of semantic frameworks in the design and development of software systems is still in its infancy, and there are many engineering issues to be addressed. Indeed, semantic frameworks may be seen as posing a collection of ‘grand challenges’: one for each application domain, in defining an appropriate base set of semantic metadata for collaboration, including domain metamodels for software generation; and one for computing itself, in providing an appropriate architecture for the realisation of these frameworks.
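As a simplified illustration of this kind of generation (and not the CancerGrid implementation itself, which is far richer), the sketch below renders a data-entry field directly from registered metadata. The ValueDomain and DataElement structures, and the additional permitted values shown, are assumptions standing in for full ISO 11179 registry entries.

from dataclasses import dataclass
from typing import List

@dataclass
class ValueDomain:
    identifier: str
    permitted_values: List[str]

@dataclass
class DataElement:
    identifier: str
    name: str
    value_domain: ValueDomain

def render_form_field(element: DataElement) -> str:
    """Emit an HTML <select> for one data element, tagged with its registry
    identifier so the collected data stays linked to its semantic metadata."""
    options = "\n".join(f'  <option value="{v}">{v}</option>'
                        for v in element.value_domain.permitted_values)
    return (f'<label for="{element.identifier}">{element.name}</label>\n'
            f'<select id="{element.identifier}" data-element="{element.identifier}">\n'
            f'{options}\n</select>')

# Hypothetical registry content; the identifiers are those used in Section 3.
meal = DataElement("1777777-14-72-131-424", "special meal option",
                   ValueDomain("1777777-14-72-131-001",
                               ["Asian vegetarian", "Kosher", "Vegan"]))
print(render_form_field(meal))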
5 A Domain Challenge
As an example of a domain-specific grand challenge, we may consider that of information-driven health. Advances in analysis and imaging techniques, and progress in genomic, proteomic, and metabonomic science, allow us to obtain detailed information about the health of an individual. The automatic integration of this data, based upon a computable representation of its meaning or semantics, will revolutionise both medical and clinical research, and the impact on healthcare delivery will be dramatic. This revolution will come not only in personalised medicine, informed by the new biology, but also in the very nature of national and international healthcare systems. Agencies will be able to work faster and more effectively in adapting to changes in circumstances, from advances in the body of medical knowledge to public health emergencies. Their agility will be limited only by the human capacity for ideas and understanding. To explain what it may be realistic to expect, within the next decade, we present three simple scenarios, each addressing a different aspect of information-driven health:
5.1 Scenarios
On-demand Collaboration. A researcher in breast cancer receives automatic notification of a paper in their specific topic of interest. The paper reports the results of a meta-analysis, a synthesis of independent but related studies, looking at the effectiveness of two different therapies. The two therapies appear equally effective, and a clinician would thus have no clear basis for selecting one over the other, in the treatment of a particular patient. However, the researcher believes that they may have access to additional data—genetic profiles and details of serious adverse events—that could contribute to the analysis. This is confirmed by means of a query to the relevant systems, which establishes a match—at the necessary level of detail—between this additional data and the datasets used in the meta-analysis. Automatically performing a simulation based upon the metadata, the system suggests that the combined data would be enough for a statistically significant result. The researcher contacts the authors of the study, who are excited at the prospect of refining their meta-analysis with the new hypothesis. It turns out that they do not have access to enough genetic data to complete the combined analysis, but they work together on a proposal for retrieving and testing the samples required. The creation, submission, and review of the proposal are expedited by the automatic generation of the study protocol, the evidence of the statistical power of the calculation, an account of the potential benefit to health, and a projection of the costs. Provisional ethical approval is established automatically by reference to previous decisions, and this is countersigned by chairs of the respective ethics committees. Upon approval, the proposal is forwarded to the funding body, who agree to pay for the tissue processing costs. As the potential impact of the research can be quantified, various journals make offers to publish the results; on this occasion, the researchers accept the conditional offer from Nature Publishing. The results of the additional analysis show a clear correlation between specific gene expression, efficacy and adverse responses. The researchers upload their paper to Nature, together with the datasets, automatically processed to conform to the ethical approval. Much of the paper has been generated automatically from the hypotheses and design encoded in the proposal.

Control of Infectious Diseases. A person suffering from flu-like symptoms accesses the national web portal for advice, and is guided through a quick checklist to establish whether or not they should seek medical attention. Based upon the severity and precise nature of the symptoms, there appears to be cause for concern, and the person is offered a choice of local appointments. The reason for prioritisation is that the symptoms appear to be indicative of H1N1. The portal has been automatically configured to take account of up-to-date information about this and other diseases, including the geographical location of recent, confirmed cases. The questions asked are targeted; the responses, drawn from a controlled vocabulary, add to the information base.
Upon arrival at the clinic, the subject is seen by a physician already primed with information regarding the suspected diagnosis. As a result of the consultation, the physician requests urgent tests, and an ambulance is called to take the subject into precautionary quarantine. Real-time case reporting for these incidents is provided, with data parameterised by levels of certainty. As a result of the new case, the portal and other information systems are reconfigured to ask questions, based upon the reported symptoms, characteristic of a particular strain. Clinical testing equipment, including hand-held diagnostic devices and interfaced systems, will be automatically configured to look for this disease even when testing for other conditions. The national health protection agency obtains information about the subject, including any information about recent movements. If consent and authority are available, then the agency will access and integrate data from a variety of sources, including telecommunications, financial, and immigration records. The results of the subsequent analysis will further inform the dynamic re-configuration of the healthcare information systems. Improving Research Efficiency. An oncologist wishes to start trials on a new treatment for pancreatic cancer. Positive results have been obtained in early phase studies, and the next step is to propose a randomised Phase III trial. However, with an incidence of 1 in 9000, the target recruitment of 1000 represents one-tenth of the cases reported annually; it may be impossible to initiate more than one such trial in pancreatic cancer each year. As part of the reviewing process, the research council asks a collaborative group to consider the data requirements of the proposal in the context of other research on this disease. The group proposes a number of modifications—extended eligibility criteria, some additional items of data, and the different value domain for one of the data items—that would allow the same data to be used to support other research: in particular, genetic studies for the untreated disease. The group is able to do this, quickly and reliably, because the proposal includes a detailed, computable specification of the trial protocol. The protocol includes detailed descriptions of standard operating procedures for the collection of tissues and body fluids, together with downstream assays, to be used for pharmacodynamic endpoints. These endpoints may be used as assays (predictive biomarkers) for future individualisation of therapy if validated in the trial. An international portal for pancreatic cancer provides services for registering studies, datasets, and protocols along with the precise definitions of the data elements involved; it is thus possible to identify, automatically, aspects of the proposed trial that overlap with or complement other work. When the research council decides to fund the proposed trial, the majority of the infrastructure required is generated automatically from the final, approved version of the proposal: forms for registration, eligibility, serious adverse event reporting, treatment, and follow-up; supporting services for randomisation, sample handling, patient timetabling, clinician management; documentation for procedures and workflows.
The automatic generation of supporting software makes it easier to guarantee the quality and compatibility of the data collected, and greatly reduces the cost of trial start-up. Furthermore, as the proposal includes a computable description of the proposed analysis, the necessary statistical tools can be automatically configured and tested. As a result, interim and final analyses can be performed as soon as the actual data is available. Data from this study can then be combined with data from two similar randomized studies elsewhere in the world, allowing the biomarkers to be robustly validated.

5.2 Challenges
The automatic integration of data from different studies, and the facilitation of collaborative science based upon shared metadata—models of the studies themselves—has benefits across the whole of medical research and practice. For example, in vaccinology, if variations in a gene that codes for one part of the immune system are found to strongly influence the response to a vaccine, then we know that that part of the immune system is fundamental in the response, and that could help target that part of the immune system for future vaccine development. This is an area where large numbers of subjects are needed, and effective collaboration between researchers is essential. Increasing travel, global trade, and changing climate have meant that infectious diseases, along with other threats to public health, are emerging, and spreading, faster than ever before. The World Health Report 2007 [19] from the World Health Organisation, argues strongly that effective data sharing across organisational and national boundaries is essential to global public health. The International Health Regulations (2005) [18] “provide the framework for improved international public health security, laying out new obligations devised to collectively respond to international public health challenges of the 21st Century, taking advantage of new developments in [. . . ] information technology, such as rapid data sharing. Unprecedented tools are now available to rapidly detect, assess, and respond to events. . . ”. These regulations entered into force in June 2007, and should be fully implemented by June 2012: by this date, systems must be in place for timely and reliable notification, consultation, and verification; in most cases, a period of 24 hours is specified. This will not be achieved without advances in enabling, software technologies for automatic, metadata-driven data sharing and integration. Proactive management of a research portfolio requires detailed information not only about results obtained, but also about studies that are still at the design stage. The size of the portfolio, the complexity of the research, and the dynamic nature of the translational process mean that effective information technology support is essential. The availability of detailed, computable information about study design supports not only operational portfolio management but also the evaluation and demonstration of impact. The grand challenge for information-driven health is to make semantics-driven management of data standard practice across the whole spectrum of healthcare and medical research.
6 A Grand Challenge in Informatics
There has been considerable emphasis upon the computational challenges encountered in the analysis of data, and much progress has been made in this respect: in particular, within the projects funded as part of the various national “e-Science” programmes. However, there is growing recognition that the large-scale sharing and integration of data from dynamic, heterogeneous sources requires computable representations of the semantics of data, and it is here that a significant challenge lies. Natural language or informal understanding is sufficient for such a semantics only when the concepts are straightforward, the community is small or homogeneous, and the period of time over which understanding must be maintained is short. For problems of any complexity, communities of any size, or initiatives that are intended to last for many years, a formal approach is required. The semantics has to be amenable to automatic processing, and this processing has to be automatically linked to the processing of the data itself.

Semantic frameworks of the kind described above, whether for information-driven health research and practice or for any other domain, present significant software engineering challenges. These challenges fall into two categories: those of data semantics in representing models, and those of model-driven development in generating system artifacts—queries, scripts, programs, services, forms, and interfaces—from these models. There are solid foundations on which to build, many developed under the banner of the Semantic Web: notations for representing models, such as the Extensible Markup Language (XML), Resource Description Framework (RDF), and description logics; and notations for expressing model transformations, such as the Structured Query Language (SQL), Extensible Stylesheet Language Transformations (XSLT), and Gleaning Resource Descriptions from Dialects of Languages (GRDDL). However, much remains to be done. In particular, a continuously evolving context forces a raising of the level of abstraction, above that achieved in current implementations of the Model-Driven Architecture (MDA) approach [7]. It has been a tenet of this approach that system artifacts are inherently transient, and should be generated automatically from models rather than laboriously crafted by hand, but in a dynamic context even the transformations that generate these artifacts are transient: these too should be constructed by automatic refinement of models into code. The ‘transformations of transformations’ that we require are most naturally expressed in terms of higher-order and datatype-generic functional programming, but these techniques need further development.

The research problems in the area of data semantics involve metadata, ontologies, models, security, and analysis. To support diverse and dynamic data communities, techniques are needed for the construction, federation and versioning of metadata repositories, and the establishment and maintenance of multiple, purposed ontologies over the same vocabulary and mappings between them. In order to model sophisticated standards such as the CONSORT [9] statement faithfully, progress is required in expressing and composing workflow fragments—the
dynamic aspects of a model, as well as the static aspects. To derive the full benefit from the semantic framework, the facility is needed for semantics-driven data analysis: metamodels of analytic techniques, enabling the automatic configuration of analytic software. Finally, sensitive domains such as healthcare have stringent security requirements, particularly regarding privacy, and these could be significantly enhanced by exploiting the semantics of data, rather than mere syntax. Furthermore, critical software systems require validation and testing, even if they are generated automatically, and work is needed in exploiting the data semantics to facilitate and enrich this process.
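As a toy illustration of why higher-order constructions are natural here, the sketch below takes a small mapping specification (a model of a transformation) and returns the executable transformation it describes; the field names and codes used are invented for the example and are not drawn from any particular registry.

from typing import Any, Callable, Dict

Record = Dict[str, Any]

def make_transformation(field_map: Dict[str, str],
                        value_maps: Dict[str, Dict[Any, Any]]) -> Callable[[Record], Record]:
    """Refine a mapping specification (a small 'model of a transformation')
    into the executable transformation it describes."""
    def transform(record: Record) -> Record:
        out = {}
        for old_name, new_name in field_map.items():
            value = record.get(old_name)
            out[new_name] = value_maps.get(old_name, {}).get(value, value)
        return out
    return transform

# The generated transformation is as transient as any other artifact: when the
# specification changes, it is simply regenerated.
to_common_model = make_transformation(
    {"mealRequest": "special_meal_option"},
    {"mealRequest": {"VGML": "Vegetarian Vegan Meal (strict)"}})
print(to_common_model({"mealRequest": "VGML"}))
# {'special_meal_option': 'Vegetarian Vegan Meal (strict)'}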
Acknowledgements
The authors would like to thank Charles Crichton, Steve Harris, Aadya Shukla, and Andrew Tsui for their contributions to work on semantic technologies at Oxford. The airline meals example used in Section 3 is a refinement of an example presented in [12]. The semantic framework developed for the CancerGrid project of Section 4 is explained in more detail in [1]. The scenarios in Section 5 were developed in discussion with James Brenton, Steve Harris, and Matthew Snape. We are grateful to the UK Engineering and Physical Sciences Research Council and to Microsoft External Research for their support.
References
1. Crichton, C., Davies, J., Gibbons, J., Harris, S., Tsui, A., Brenton, J.: Metadata-driven software for clinical trials. In: ICSE Workshop on Software Engineering in Health Care. IEEE, Los Alamitos (2009)
2. Dubash, M.: Moore's Law is dead, says Gordon Moore. Techworld (April 2005)
3. Early Breast Cancer Trialists' Collaborative Group: Tamoxifen for early breast cancer: An overview of the randomised trials. Lancet 351 (1998)
4. Geelan, J.: Moore's Law: "We See No End in Sight", Says Intel's Pat Gelsinger. Java Developer's Journal (May 2008)
5. Hall, J., Paul, J., Brown, R.: Critical evaluation of p53 as a prognostic marker in ovarian cancer. Expert Reviews in Molecular Medicine 6(12) (2004)
6. International Organization for Standardization (ISO): Information technology — specification and standardization of data elements (2009), http://www.iso.org/
7. Kleppe, A., Warmer, J., Bast, W.: MDA Explained, The Model Driven Architecture: Practice and Promise. Addison-Wesley, Reading (2003)
8. Di Maio, A.: Move 'joined-up government' from theory to reality. Gartner Research (2004)
9. Moher, D., Schulz, K.F., Altman, D.G.: The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 357 (2001)
10. Moore, G.: Cramming more components onto integrated circuits. Electronics 38(8) (April 1965)
11. NASA: Mars Climate Orbiter Mishap Investigation Board Phase I Report (1999)
12. Shukla, A., Harris, S., Davies, J.: Semantic interoperability in practice. In: 43rd Hawaii International Conference on System Sciences. IEEE, Los Alamitos (2010)
13. Sullivan, L.: Plain talk eases police codes off the air. US National Public Radio (October 2009)
14. US Customs and Border Protection: Arrival-Departure Record, CBP Form I-94W, for Visa Waiver Program (October 2009), www.cbp.gov
15. US National Cancer Institute: Cancer Biomedical Informatics Grid (caBIG) (October 2009), cabig.nci.nih.gov/
16. Business Week: The Office of the Future. Executive Briefing (June 1975)
17. Wikipedia: Ten-code (article) (October 2009)
18. World Health Organisation: International Health Regulations (2005). WHO (2005), http://www.who.int/csr/ihr
19. World Health Organisation: World Health Report — A Safer Future: Global Public Health Security in the 21st Century. WHO (2007), http://www.who.int/whr
Fuzzy-Controlled Source-Initiated Multicasting (FSIM) in Ad Hoc Networks
Anuradha Banerjee¹ and Paramartha Dutta²
¹ Kalyani Govt. Engg. College, Kalyani, Nadia, West Bengal, India
[email protected]
² Visva-Bharati University, Santiniketan, West Bengal, India
[email protected]
Abstract. Ad hoc networks are collections of mobile nodes communicating using wireless media without any fixed infrastructure. Existing multicast protocols fall short in a harsh ad hoc mobile environment, since node mobility causes a conventional multicast tree to become rapidly outdated. The amount of bandwidth required for building up a multicast tree is less than that required for other delivery structures, since a tree avoids unnecessary duplication of data. In this article, we propose FSIM, a fuzzy-controlled source-initiated intelligent multicast routing scheme that takes into account estimated network evolution in terms of residual energy, link stability, position of receivers in the multicast tree, etc. Extensive simulation experiments have been conducted to compare the performance of FSIM with state-of-the-art tree-based and mesh-based multicast protocols. The results, over a wide range of input parameters, show that FSIM attains a significantly higher packet delivery ratio at much lower cost than its competitors.
Keywords: Ad hoc networks, multicasting, pivot node, stability, tree.
1 Introduction
Ad hoc networks are collections of mobile nodes communicating using wireless media without any fixed infrastructure. Conventional multicast routing protocols are inadequate in a harsh mobile environment, as mobility can cause rapid and frequent changes in network topology [1-14]. Frequent state changes require constant updates, reducing the already limited bandwidth available for data and possibly never converging to accurately portray the current topology. Mobility represents the most challenging issue to be addressed by multicast routing protocols. Broadly, the class of multicast protocols in ad hoc networks can be categorized into tree-based and mesh-based approaches. ODMRP (On-demand multicast routing protocol [1]) is the most popular representative of mesh-based ones, whereas MAODV (Multicast on-demand distance vector [2]) and ITAMAR (Independent tree ad hoc multicast routing [3]) are significant among tree-based protocols. PAST-DM (Progressively adapted sub-tree in dynamic mesh [4]) is a special multicast routing protocol which inherits the flavour of both tree-based and mesh-based protocols. ODMRP requires control packets originating at each source of a multicast group to be flooded
The control packet flood helps in repairing the link breaks that occur between floods. The limitations of ODMRP are its network-wide control packet floods and its sender-initiated construction of the mesh. This method of mesh construction results in a much larger mesh as well as numerous unnecessary packet transmissions. DCMP [7] and NSMP [9] are extensions of ODMRP that aim to restrict the flood of control packets to a subset of the entire network. However, both of them fail to eliminate entirely ODMRP's drawback of multiple control packets per group. From the point of view of bandwidth efficiency, tree-based protocols are better than mesh-based protocols, since only one path exists between any pair of nodes [10-15]. However, a multicast tree is more subject to disruption due to link failure and node mobility than meshed structures. It has already been established that although the performance of MAODV is very good for small groups, low mobility and high traffic loads, its performance degrades sharply once the group size, mobility or traffic load crosses a threshold, the reason being a sharp increase in the number of control packets transmitted to maintain the structure. ITAMAR and PAST-DM also suffer from these problems. In this paper, we propose FSIM, a fuzzy-controlled source-initiated intelligent multicast routing algorithm for ad hoc networks, which takes into account the estimated network state evolution in terms of residual energy of nodes, velocity affinity between the sender and receiver of each wireless link, position of receivers, etc. Relative movements among nodes are statistically tracked to capture their dynamics, in order to construct a long-lasting multicast tree. Three fuzzy controllers, namely PPE (Pivot Performance Evaluator), LSI (Link Stability Indicator) and MRS (Multicast Route Selector), are embedded in each node to inculcate rationality in the selection of an optimal multicast route to each receiver.
2 Description of FSIM

2.1 Overview

According to the distributed algorithm implemented in FSIM, the source of the multicast group issues route-request messages to each member of the group independently. Utilizing the rationality incorporated in nodes by the fuzzy controllers PPE, LSI and MRS, each receiver evaluates the performance of all the paths through which route-requests have arrived from the multicast source, selects the best two of them and embeds the corresponding sequences of intermediate nodes (routers) in its route-reply messages. Since the radio-ranges of different nodes differ, the traversal path of route-reply messages may be different from the path of the route-requests. For each multicast receiver, the source node selects the best two paths for acknowledgements according to the logic of the above-mentioned fuzzy controllers; the corresponding sequences of intermediate nodes are transmitted back to the receivers along with the first multicast information message. In this context, FSIM proposes the concepts of pivot nodes and of independent and dependent multicast receivers. Their definitions are given below.

Definition 1: Pivot node
A pivot node is one which has at least two downlink neighbors such that either both of them are multicast receivers, or one of them is a multicast receiver while the other is a pivot node or a predecessor of some pivot node at a lower level.
A receiver may itself be a pivot node.

Definition 2: Dependent multicast receiver
A dependent multicast receiver is one which is connected to the source through some other multicast receiver of the same group.

Definition 3: Independent multicast receiver
In a multicast tree, all multicast receivers that are not dependent are called independent multicast receivers.

Definition 4: Ideal multicast tree
Assuming that H is the maximum allowable hop count in the present ad hoc network and that the multicast source resides at level 1 of the multicast tree, the tree is termed ideal provided it satisfies the following conditions:
i) There exists at least one level l in the multicast tree, with l < √H/2, at which all non-receiver nodes are pivots.
ii) The difference between the levels of two consecutive pivots along each branch must be less than √H.
iii) At least one pivot node must exist in the last but one level.
Fig. 1. Demonstration of pivot nodes, independent and dependent multicast receivers
For an illustration of all the above-mentioned definitions, please refer to Fig. 1. The root of the multicast tree, i.e., the node with identification number 1, is the multicast sender. The receivers are the nodes with identification numbers 7, 12, 13, 14, 16, 17, 18, 20, 22, 23, 24, 25 and 26. All other nodes are intermediate nodes. The pivot nodes are those with identification numbers 8, 9, 10, 11, 18 and 19. Node 19 is a pivot since it is directly connected to three multicast receivers; a similar reason applies to nodes 9, 11 and 18. Node 10 has two downlink neighbors, nodes 15 and 16.
Node 16 is itself a multicast receiver, whereas node 15 is a predecessor of node 19, which is a pivot; hence, node 10 is also a pivot. Node 8 is a pivot for three reasons: node 11 is a pivot, node 12 is a receiver, and node 10 is a predecessor of the pivot 19. Please note that, among all these, node 18 is a pivot as well as a multicast receiver. Assuming that the maximum hop count of the underlying ad hoc network is 25, the multicast tree in Fig. 1 is an ideal one because all non-receiver nodes at level 4 (the level of the root node is 1) are pivots and 4 < √25. At level 6, which is the last but one, the pivot nodes 18 and 19 exist. The last level in a multicast tree is also denoted as level 0. The input and output parameters of the fuzzy controllers LSI, PPE and MRS are presented in Table 1.

Table 1. Parameters of LSI, PPE and MRS

Name of fuzzy controller | Object node / link | Input parameters | Output parameter
LSI | link from node nls to node nlr, link cardinality up to nls | residual energy impact of nlr; velocity affinity of the link; proximity of the link; bidirectional characteristic of the link | stability of the link
PPE | pivot node np | independent receiver cardinality impact of np; dependent receiver distance-cardinality impact of np; reliability of np | pivot performance of np
MRS | route S | output of LSI of all links in S; link cardinality ratio of the route up to nlr; output of PPE of all pivots in S | route performance of S
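Definition 1 can be checked mechanically on a candidate tree. The following is a minimal sketch (not part of FSIM itself) of a bottom-up pass that marks pivot nodes, assuming the multicast tree is available as a child-list dictionary and the receiver set is known; the tiny example tree at the end is purely hypothetical.

```python
def mark_pivots(children, receivers, root):
    """Mark pivot nodes (Definition 1) in a multicast tree.

    children : dict mapping a node to the list of its downlink neighbours
    receivers: set of multicast receiver identifiers
    A node is a pivot if at least two of its downlink neighbours are each
    either a receiver, a pivot, or a predecessor of a pivot at a lower level.
    """
    pivots, subtree_has_pivot = set(), {}

    def visit(v):
        qualifying = 0
        for c in children.get(v, []):
            visit(c)
            if c in receivers or subtree_has_pivot[c]:
                qualifying += 1
        if qualifying >= 2:
            pivots.add(v)
        subtree_has_pivot[v] = (v in pivots) or any(
            subtree_has_pivot[c] for c in children.get(v, []))

    visit(root)
    return pivots

# Purely hypothetical example: node 1 is the source, 4 and 5 are receivers.
tree = {1: [2, 3], 2: [4, 5], 3: []}
print(mark_pivots(tree, receivers={4, 5}, root=1))   # -> {2}
```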
2.2 Description of Input Parameters of LSI

1. Residual energy impact – According to the discharge curve of batteries heavily used in ad hoc networks [17], the following two observations are very important from the point of view of multicast communication:
1. At least 40% of the total charge is required for a node to remain in operable condition.
2. A residual charge of 65% and above provides sufficient support for communication.
The energy requirement for pivot nodes is stricter than for other nodes because they have at least two downlink neighbors to which information messages must be transmitted. So, the residual energy impact ei(t) of any node ni at time t is formulated differently for pivot and ordinary nodes. The relevant mathematical expressions appear in (1), (2) and (3).
ei(t) = eipv(t)   if ni is a pivot node
        eior(t)   otherwise                                          (1)

eipv(t) = (εi(t) / (εimax+1)) exp Γt(i,j,k)   if εj(t) < εi(t) ≤ εk(t)
          (εi(t) / εimax)                     if εj(t) = εk(t)
          (εi(t) / (εimax+1)) exp |N′p(t)|    if εi(t) = εj(t) < εk(t)   (2)

where Γt(i,j,k) = (εk(t) − εi(t)) / (εk(t) − εj(t)) and
N′(i,t) = (2 + |Ni(t)|) eipv(t) / (1 + |Ni(t)|)

eior(t) = N′(i,t)   if N′(i,t) ≤ 1
          1         otherwise                                        (3)
Here εi(t) and εimax indicate the remaining charge of node ni at time t and the maximum battery charge of that node, respectively. np specifies the predecessor of ni in the multicast tree and Np(t) is the set of downlink neighbors of np at time t. N′p(t) is the set of nodes nv ∈ {Np(t) – ni} such that (εv(t) / εvmax) ≥ 0.65 holds if nv is a pivot, and (εv(t) / εvmax) ≥ 0.4 holds if nv is an ordinary node. εj(t) = min{εq(t) : nq ∈ Np(t)} and εk(t) = max{εq(t) : nq ∈ Np(t)}. Determination of the residual energy impact depends on two decisions – the first is whether the best node among all available downlink neighbors has been selected, and the second is what percentage of energy remains in that node compared to its own battery capacity. In expression (2), the part ((εi(t) − εj(t)) / (εk(t) − εj(t))) establishes the first decision while the exponentiated portion represents the second one. The situation εi(t) = εj(t) indicates that ni is the worst possible choice among all alternatives. But it may happen that the worst possible choice is still sufficient for communication, i.e., (εi(t) / εimax) ≥ 0.65 if ni is a pivot node and (εi(t) / εimax) ≥ 0.4 if ni is ordinary. So, ni deserves a non-zero weight directly proportional to (εi(t) / (εimax+1)) and inversely related to the number of better choices turned down, i.e., |N′p(t)|. 1 is added to εimax so that the ratio remains below 1 and the exponent retains its effect even when εi(t) = εimax. On the other hand, the situation εj(t) = εk(t) denotes that no alternative better than ni exists; in that case the weight of ni should be proportional to (εi(t) / εimax) only. Please note that if ni is a pivot node, it may have to forward the information message to at least 2 and at most |Ni(t)| downlink neighbors, i.e., on an average (2 + |Ni(t)|)/2 downlink neighbors. If ni is an ordinary node, it may transmit the information message to at least 1 and at most |Ni(t)| downlink neighbors, i.e., on an average (1 + |Ni(t)|)/2 downlink neighbors. So, eior(t) is (2 + |Ni(t)|) eipv(t) / (1 + |Ni(t)|) or 1, whichever is lesser. For each node, the residual energy impact ranges between 0 and 1.
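As a reading aid for expressions (1)-(3), the sketch below evaluates the residual energy impact for one node, assuming "exp" denotes exponentiation (as it does in expression (5)) and that the crisp charge values of all downlink neighbors of the predecessor are available; all variable names are illustrative only.

```python
def residual_energy_impact(eps_i, eps_i_max, sibling_eps, is_pivot,
                           n_better, num_downlink):
    """Residual energy impact e_i(t) per expressions (1)-(3).

    eps_i, eps_i_max : remaining and maximum charge of node n_i
    sibling_eps      : remaining charges of all downlink neighbours of the
                       predecessor n_p (used for the min/max comparison)
    n_better         : |N'_p(t)|, number of better alternatives turned down
    num_downlink     : |N_i(t)|, number of downlink neighbours of n_i
    """
    eps_j, eps_k = min(sibling_eps), max(sibling_eps)
    if eps_j == eps_k:                       # no better alternative exists
        e_pv = eps_i / eps_i_max
    elif eps_i == eps_j:                     # worst choice, but maybe usable
        e_pv = (eps_i / (eps_i_max + 1)) ** n_better
    else:                                    # eps_j < eps_i <= eps_k
        gamma = (eps_k - eps_i) / (eps_k - eps_j)
        e_pv = (eps_i / (eps_i_max + 1)) ** gamma
    if is_pivot:
        return e_pv
    # Ordinary node: scale by the average fan-out, capped at 1 (expression (3)).
    return min(1.0, (2 + num_downlink) * e_pv / (1 + num_downlink))
```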
2. Velocity affinity – Assuming that ni is a downlink neighbor of np at time t, the velocity affinity a′p,i(t) of the link from np to ni at time t is expressed in (4). Affinity is also determined from two decisions – whether it is the best option and how good the option actually is.

a′p,i(t) = (vp,k(t) − vp,i(t)) / (vp,k(t) − vp,j(t) + 1)
               if ni may remain a downlink neighbor of np up to σ′p,i m(t)
           ((vp,k(t) − vp,i(t)) / (vp,k(t) − vp,j(t) + 1)) exp ((1 – 1/up,i(t)) (σp,i / σ′p,i m(t)))
               otherwise                                             (4)
For each node np, υp(t) denotes its velocity at time t, and up,i(t) is the time duration for which ni has been within the neighborhood of np in the current session up to time t. The coordinate positions of both ni and np are statistically estimated for the timestamps t+1, t+2, t+3, …, up to σ′p,i m(t), where m(t) is the remaining number of packets to be sent by the multicast sender and σ′p,i is the maximum transmission delay among all the routes from the sender to the multicast receivers through ni. If ni is statistically estimated not to stay within the neighborhood of np throughout the interval t to σ′p,i, then let σp,i be the timestamp within the above-mentioned interval up to which ni will remain a downlink neighbor of np. As per the formulation in (4), 0 ≤ a′p,i(t) < 1, and high values indicate high link stability as far as the relative velocities of the involved nodes are concerned.

3. Proximity – Assuming that ni is a downlink neighbor of np at time t, the proximity xp,i(t) of ni w.r.t. np at time t is expressed in (5). Like the above-mentioned parameters, proximity is also determined from two factors – the nearness between the nodes at time t, and the radio-range of np compared to the minimum and maximum radio-ranges in the network.

xp,i(t) = (1 – Dp,i(t)/Rp) exp ((Rmax – Rp) / (Rmax – Rmin + 1))       (5)
Dp,i(t) specifies the distance between ni and np at time t. Rmax and Rmin are the maximum and minimum radio-ranges in the network, respectively. Proximity lies between 0 and 1. High proximity of ni w.r.t. np increases the likelihood of ni remaining a downlink neighbor of np. In spite of a low velocity affinity, a link may survive for some time provided its proximity is close to 1.

4. Bidirectional characteristic – Let ni be a downlink neighbor of np at time t. The link is termed bidirectional provided np is a downlink neighbor of ni at that time. The bidirectional characteristic b′p,i(t) of the link from np to ni at time t is expressed in (6).

b′p,i(t) = 1   if ni ∈ Np(t) and np ∈ Ni(t)
           0   otherwise                                             (6)
A bidirectional link may be used during traversal of acknowledgements.
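Expressions (5) and (6) are purely geometric and translate directly into code. The sketch below assumes node coordinates, radio-ranges and neighbor sets are known, and again reads "exp" as exponentiation; the clamp to zero for out-of-range neighbors is an added safeguard, not part of the original formulation.

```python
import math

def proximity(pos_p, pos_i, R_p, R_min, R_max):
    """Proximity x_p,i(t) of n_i w.r.t. n_p, expression (5)."""
    d = math.dist(pos_p, pos_i)
    # Clamp the base at 0 in case n_i has drifted outside the radio-range.
    return max(0.0, 1 - d / R_p) ** ((R_max - R_p) / (R_max - R_min + 1))

def bidirectional(neigh_p, neigh_i, node_p, node_i):
    """Bidirectional characteristic b'_p,i(t), expression (6)."""
    return 1 if node_i in neigh_p and node_p in neigh_i else 0
```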
2.3 Description of Input Parameters of PPE

1. Independent receiver cardinality impact – Assuming that ni is a pivot node, its independent receiver cardinality impact Ii(t) at time t is given by

Ii(t) = φi(t) / |G|                                                  (7)
φi(t) is the number of independent multicast receivers connected to ni as direct downlink neighbors, and G is the group of multicast receivers. From the formulation of Ii(t) in (7), it is evident that 1/|G| ≤ Ii(t) ≤ 1. Values closer to 1 increase the importance of ni in the multicast tree and also prioritize those routes which pass through its receiver successors.

2. Dependent receiver distance-cardinality impact – Given that ni is a pivot node, its dependent receiver distance-cardinality impact Lλi(t) in route λ at time t is given by

Lλi(t) = (|δλi(t)| / |G|)² ( ∑ nj ∈ δλi(t) (h~ − ηλj,k(t)) ) / (h~ |δλi(t)|)   (8)
δλi(t) is the set of dependent receivers connected to ni through an independent receiver nk (nk ∈ Ni(t)) via multi-hop paths in route λ. For each nj ∈ δλi(t), ηλj,k(t) denotes the number of intermediate nodes between nj and nk. G is the set of all multicast receivers and h~ indicates the level of the farthest multicast receiver in the multicast tree. 0 ≤ Lλi(t) < 1, and values closer to 1 increase the credit of ni and also of route λ. Please note that, unlike the independent receiver cardinality impact, (|δλi(t)| / |G|) has been squared here. The reason is that if a pivot node is connected to multiple dependent receivers, the situation is not as good as being connected to the same number of independent receivers. It is desirable in multicast communication that all receivers receive the same multicast message at approximately the same time, which is badly hampered by long-distance dependency among the receivers.

3. Reliability – For a pivot node ni and its predecessor np in the multicast tree, the reliability Ai(t) of the link from np to ni is set to 1 provided np has stored active paths, excluding ni, to all successors of ni at time t (this condition is referred to as Cp,i(t) in expression (9)); otherwise it is 0.

Ai(t) = 1   if Cp,i(t) is true
        0   otherwise                                                (9)
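The two cardinality impacts of expressions (7) and (8) can be computed once the pivot's receiver relationships in route λ are known. The sketch below is a minimal illustration with hypothetical argument names; it assumes the η value is given per dependent receiver and that h~ (the level of the farthest receiver) is known.

```python
def independent_receiver_impact(num_independent_children, group_size):
    """I_i(t), expression (7): number of independent multicast receivers that
    are direct downlink neighbours of the pivot, normalised by |G|."""
    return num_independent_children / group_size

def dependent_receiver_impact(intermediate_counts, group_size, h_far):
    """L_lambda_i(t), expression (8).

    intermediate_counts : for every dependent receiver n_j in delta, the number
                          eta of intermediate nodes between n_j and its
                          independent anchor receiver n_k
    h_far               : level of the farthest multicast receiver (h~)
    """
    delta = len(intermediate_counts)
    if delta == 0:
        return 0.0
    closeness = sum(h_far - eta for eta in intermediate_counts) / (h_far * delta)
    return (delta / group_size) ** 2 * closeness

# Illustrative values only: a pivot with 2 direct independent receivers in a
# group of 10, and 3 dependent receivers at 1, 2 and 4 intermediate hops.
print(independent_receiver_impact(2, 10))            # 0.2
print(dependent_receiver_impact([1, 2, 4], 10, 7))   # 0.06
```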
2.4 Description of Input Parameters of MRS

1. Stability of component links – This is the output of LSI for all links in the multicast tree.
2. Link cardinality ratio – The link cardinality ratio Uλ(t) of a route λ at time t is mathematically expressed in (10).

Uλ(t) = gλ(t) / H                                                    (10)
Here gλ(t) is the total number of communication links in route λ at time t and H is the maximum allowable hop count in the underlying network. Please note that gλ(t) is output of LSI. The sender initializes it to 0, and each node on route λ increments it until the lowest level of the route is reached. From the formulation in (10), it is evident that 0 ≤ Uλ(t) ≤ 1. Values closer to 0 increase the agility of transmission through the associated route.

3. Overall pivot performance – The overall pivot performance Vλ(t) of a route λ at time t is given by

Vλ(t) = ( ∑ nj ∈ ψλ(t) βj(t) ) / |ψλ(t)|                              (11)
ψλ(t) is the set of pivot nodes in route λ at time t, and βj(t) is the output of PPE for nj. Values of Vλ(t) lie between 0 and 1; values closer to 1 indicate reusability of a certain portion of this route for multiple multicast receivers.

2.5 Fuzzy Rule Bases of LSI

Table 2 shows the crisp ranges of the input parameters of LSI with the corresponding fuzzy variables. According to the discharge curves of batteries heavily used in ad hoc networks, at least 40% of the total battery power is required to remain in operable condition; this is represented by the fuzzy variable a. The range from 40% to 65% (fuzzy variable b) is considered satisfactory, while 65% to 85% (fuzzy variable c) provides adequate support for communication. Fuzzy variable d indicates the best option, i.e., 85% up to full battery capacity. Although both residual energy and velocity affinity are equally significant and indispensable in determining the life of a wireless link, we are stricter in the range distribution of velocity affinity (fuzzy variables a and b represent the ranges 0–0.50 and 0.50–0.70 for velocity affinity, compared to 0–0.40 and 0.40–0.65 for residual energy). The reason is that several statistical estimations are involved in the determination of velocity affinity, compared to none in the case of residual energy; these estimations may introduce errors, and to reduce their effects we follow a more restricted range division for velocity affinity. As far as the computation of the proximity of a link is concerned, no statistical predictions are involved. Moreover, proximity is less important than residual energy and velocity affinity, because two lively nodes having excellent velocity affinity may be able to maintain their neighborhood relation even if their proximity is close to 0. Fuzzy variable a represents the range 0–0.30; the ranges 0.30–0.60 and 0.60–0.80 are denoted by fuzzy variables b and c, respectively; and fuzzy variable d represents the best possible proximity range, i.e., 0.80–1. The output parameter, link stability, is assigned a uniform distribution between 0 and 1. The effects of residual energy e and velocity affinity a′ are combined in Table 3, producing the temporary variable t1. Table 4 unifies the impact of t1 and proximity X, generating the next temporary output t2, which is then united with the bidirectional characteristic b′ in Table 5, giving rise to the link stability α. Please note that Table 4 gives more priority to t1 because t1 is produced by two parameters, both of which are more important than proximity.
Table 2. Crisp ranges of parameters of LSI and corresponding fuzzy variables

Fuzzy Variable | Residual Energy | Velocity Affinity | Proximity | Link Stability
a | 0–0.40 | 0–0.50 | 0–0.30 | 0–0.25
b | 0.40–0.65 | 0.50–0.70 | 0.30–0.60 | 0.25–0.50
c | 0.65–0.85 | 0.70–0.85 | 0.60–0.80 | 0.50–0.75
d | 0.85–1.00 | 0.85–1.00 | 0.80–1.00 | 0.75–1.00
Table 3. Fuzzy combination of e and a′ producing t1

e↓ \ a′→ | a | b | c | d
a | a | a | a | a
b | a | b | b | b
c | a | b | c | c
d | a | b | c | d
Table 4. Fuzzy combination of t1 and X producing t2

X↓ \ t1→ | a | b | c | d
a | a | b | c | d
b | a | b | c | d
c | a | b | c | d
d | b | c | d | d
Table 5. Fuzzy combination of t2 and b′ producing α

b′↓ \ t2→ | a | b | c | d
0 | a | b | c | d
1 | b | c | d | d
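A table-driven implementation of LSI then amounts to fuzzifying the four crisp inputs with the cut points of Table 2 and chaining the lookups of Tables 3-5. The sketch below does exactly that; note that the row/column orientation of Tables 3-5 is taken from the reconstruction above and should be treated as an assumption, and the helper names are illustrative only.

```python
LEVELS = "abcd"

def fuzzify(value, cuts):
    """Map a crisp value in [0,1] to a fuzzy variable a/b/c/d, given the
    upper cut points of the first three ranges (Table 2)."""
    for level, cut in zip(LEVELS, cuts):
        if value <= cut:
            return level
    return "d"

ENERGY_CUTS    = (0.40, 0.65, 0.85)
AFFINITY_CUTS  = (0.50, 0.70, 0.85)
PROXIMITY_CUTS = (0.30, 0.60, 0.80)

# Table 3 behaves like min(e, a'); Table 4 follows t1, moving one level up
# only when proximity is 'd' (both as reconstructed above).
T1 = {e: {a: min(e, a, key=LEVELS.index) for a in LEVELS} for e in LEVELS}
T2 = {t1: {x: (LEVELS[min(3, LEVELS.index(t1) + 1)] if x == "d" else t1)
           for x in LEVELS} for t1 in LEVELS}

def link_stability(e, a_prime, x, b_prime):
    """LSI output alpha for one link (Tables 3-5)."""
    t1 = T1[fuzzify(e, ENERGY_CUTS)][fuzzify(a_prime, AFFINITY_CUTS)]
    t2 = T2[t1][fuzzify(x, PROXIMITY_CUTS)]
    if b_prime:   # Table 5: a bidirectional link moves one level up, capped at d
        return LEVELS[min(3, LEVELS.index(t2) + 1)]
    return t2

print(link_stability(0.9, 0.8, 0.95, 1))   # -> 'd'
```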
2.6 Fuzzy Rule Bases of PPE

Table 6 shows the crisp ranges of the input parameters of PPE with the corresponding fuzzy variables. According to the definition of a pivot node, at least one multicast receiver must be directly connected to it; hence, assuming that G is the set of all multicast receivers, the minimum and maximum values of I are 1/|G| and 1, and the intermediate values are distributed uniformly. As far as the maximum number of dependent receivers is concerned, it is |G|−2, because one receiver is connected to the current pivot node through a single or multi-hop path while another child of the pivot is an independent receiver on whom the others depend. According to the definition of the dependent receiver distance-cardinality impact and its expression in (8), its lowest and highest values are 0 and ((|G|−2)/|G|)². Values within this range are divided uniformly. The output β of PPE is assigned a uniform distribution between 0 and 1.
Table 6. Crisp ranges of parameters of PPE and corresponding fuzzy variables

Fuzzy Variable | Independent receiver cardinality impact | Dependent receiver distance-cardinality impact | Output
a | 1/|G| – (|G|+3)/4|G| | 0 – ((|G|−2)/|G|)²/4 | 0–0.25
b | (|G|+3)/4|G| – (|G|+1)/2|G| | ((|G|−2)/|G|)²/4 – ((|G|−2)/|G|)²/2 | 0.25–0.50
c | (|G|+1)/2|G| – 3(|G|+1)/4|G| | ((|G|−2)/|G|)²/2 – 3((|G|−2)/|G|)²/4 | 0.50–0.75
d | 3(|G|+1)/4|G| – 1.00 | 3((|G|−2)/|G|)²/4 – ((|G|−2)/|G|)² | 0.75–1.00
Effects of the independent receiver cardinality impact I and the dependent receiver distance-cardinality impact L are combined in Table 7, producing the temporary variable t3. Fuzzy composition of t3 and the reliability A in Table 8 produces the output β of PPE.

Table 7. Fuzzy combination of I and L producing t3

I↓ \ L→ | a | b | c | d
a | a | b | c | d
b | a | b | c | d
c | b | c | d | d
d | c | c | d | d
Table 8. Fuzzy combination of t3 and A producing β

A↓ \ t3→ | a | b | c | d
0 | a | b | c | d
1 | b | b | c | d
2.7 Fuzzy Rule Bases of MRS

Consider a multicast communication path source = ni → ni+1 → … → nj = destination. Let the fuzzy performance of the link from ni+k′−1 to ni+k′ (k′ ≤ j−i) be denoted as αi+k′−1,i+k′ and the fuzzy performance of the route up to ni+k′ be γi+k′. Table 9 combines the effects of αi,i+1 and αi+1,i+2, producing γi+1, while Table 10 accepts αj−1,j and γj−1 as inputs for the generation of γj. All links of the route, except the first and the last, influence the stability of the route as per Table 11, which is actually a general form of j−i−2 different tables. Entries in Table 11 are weighted averages of those in Tables 9 and 10. Assuming that ι = (k − 1) / (j − i − 2), the weight is measured in terms of the length of the path already traversed relative to its full length from source to destination. The stability of a link nearer to the source contributes more to the performance of the path than a link nearer to the destination; closeness of a link to the destination denotes that the major portion of the path has already been traversed, its strength in terms of stability is almost decided, and hence it is difficult at that point to change the stability of the route. Please note that γ follows the uniform distribution between 0 and 1 shown in the link-stability column of Table 2.
Table 9. Fuzzy combination of αi,i+1 and αi+1,i+2 producing γi+1

αi+1,i+2↓ \ αi,i+1→ | a | b | c | d
a | a | a | b | c
b | a | b | c | c
c | b | c | d | d
d | c | c | d | d
Table 10. Fuzzy combination of γj−1 and αj−1,j producing γj

αj−1,j↓ \ γj−1→ | a | b | c | d
a | a | b | c | c
b | a | b | c | d
c | a | b | c | d
d | a | b | c | d
Table 11. Fuzzy combination of γi+k−1 and αi+k−1,i+k producing γi+k (2 ≤ k ≤ (j−i−2))

αi+k−1,i+k↓ \ γi+k−1→ | a | b | c | d
a | a | ƒ(0.125 + 0.25ι) | ƒ(0.375 − 0.25ι) | c
b | a | b | c | ƒ(0.625 − 0.25ι)
c | ƒ(0.375 − 0.25ι) | ƒ(0.625 − 0.25ι) | ƒ(0.875 − 0.25ι) | d
d | ƒ(0.625 − 0.50ι) | ƒ(0.625 − 0.25ι) | ƒ(0.875 − 0.25ι) | d
Please note that at the intersection of the c-th row and the a-th column, Table 9 contains b while Table 10 contains a. Since there are j−i−2 intermediate tables, b changes to a in j−i−2 steps. The function ƒ fuzzifies a value for γ as per the link-stability column of Table 2; according to it, fuzzy variable a ranges from 0 to 0.25, whereas b ranges from 0.25 to 0.50. Assuming that amid and bmid denote the exact middle values of the respective ranges, their values come out to be 0.125 and 0.375, in that order. The numeric deviation between those middle values over j−i−2 steps is 0.25. Hence the deviation from 0.375 in the k-th table, i.e., after k−1 steps, is 0.25ι, where ι = (k − 1) / (j − i − 2), which leads to the resultant value (0.375 − 0.25ι) with corresponding fuzzified version ƒ(0.375 − 0.25ι). Other entries in Table 11 can be explained similarly. In Table 12, γj is combined with U, generating the temporary output t4. The composite effect of V and t4 is presented in Table 13, deriving the ultimate performance Z of the route.

Table 12. Fuzzy combination of γj and U producing t4

U↓ \ γj→ | a | b | c | d
a | b | b | c | d
b | a | b | c | d
c | a | b | c | d
d | a | b | c | d
Table 13. Fuzzy combination of t4 and V producing Z

V↓ \ t4→ | a | b | c | d
a | a | a | b | c
b | a | b | c | d
c | a | b | d | d
d | b | c | d | d
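The ƒ(·) entries of Table 11 can be generated on the fly rather than stored. The sketch below follows the worked example in the text: slide the crisp mid-value of the Table 9 entry towards that of the Table 10 entry by ι = (k − 1)/(j − i − 2) and re-fuzzify with the link-stability ranges of Table 2. Treat it as an interpretation of the prose, not a verbatim transcription of every printed cell; the function names are illustrative.

```python
LEVELS = "abcd"
MID = {"a": 0.125, "b": 0.375, "c": 0.625, "d": 0.875}   # range mid-points (Table 2)

def refuzzify(value):
    """f(.): map a crisp value back to a fuzzy variable using the link-stability
    ranges of Table 2 (0-0.25 -> a, 0.25-0.50 -> b, ...)."""
    return LEVELS[min(3, int(value / 0.25))]

def table11_entry(entry9, entry10, k, path_len):
    """Entry of the k-th intermediate table: interpolate from the Table 9 entry
    (k = 1) towards the Table 10 entry as iota grows."""
    iota = (k - 1) / (path_len - 2)          # path_len stands for j - i
    crisp = MID[entry9] + (MID[entry10] - MID[entry9]) * iota
    return refuzzify(crisp)

# The worked example from the text: Table 9 gives 'b', Table 10 gives 'a',
# so the entry is f(0.375 - 0.25*iota); for k = 2 on a 6-hop path this is 'b'.
print(table11_entry("b", "a", k=2, path_len=6))   # -> 'b'
```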
3 Simulation Results

We compared the performance of FSIM against that of ODMRP [1], MAODV [2], ITAMAR [3] and PAST-DM [4], which are well-known representatives of state-of-the-art multicast routing protocols for ad hoc networks. Qualnet 3.5 has been used for the simulations, and Table 14 describes the simulation environment. The metrics used in our evaluation are packet delivery ratio, agility and control overhead. Packet delivery ratio is defined as the number of data packets successfully delivered divided by the total number of data packets actually transmitted. Agility indicates the total time delay required to deliver packets to each receiver divided by the total number of packets. Control or message overhead is defined as the number of control packets transmitted divided by the number of data packets transmitted. Figures 2-7 graphically illustrate the benefit of our model and emphasize that FSIM produces greatly improved throughput and agility at a much lower message cost.

Table 14. Simulation Environment

Total number of nodes: 100, 500, 1000, 2500, 5000
Simulation time for each experiment: 1000 sec, 2000 sec, 4000 sec
Simulation area: 1000 m x 1000 m
Node placement: Random
Mobility model: Random waypoint
Radio-range: 25 m – 100 m, 100 m – 350 m, 5 m – 500 m
Channel capacity: 66 Mbps, 100 Mbps
MAC protocol: IEEE 802.11g
Data packet size: 512 bytes, 1024 bytes
Number of simulation runs: 30
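For reference, the three evaluation metrics defined above reduce to simple ratios over the simulation counters; the function names below are illustrative only.

```python
def packet_delivery_ratio(data_delivered, data_transmitted):
    """Data packets successfully delivered / data packets transmitted."""
    return data_delivered / data_transmitted

def agility(total_delivery_delay, total_packets):
    """Total delay to deliver packets to every receiver / number of packets."""
    return total_delivery_delay / total_packets

def control_overhead(control_packets, data_packets):
    """Control packets transmitted / data packets transmitted."""
    return control_packets / data_packets
```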
Among the above-mentioned competitors of FSIM, ODMRP is mesh-based whereas all the others are tree-based. None of these consider the stability of component links. Hence, the possibility of link failures, and consequently of flooding of route-request packets, is very high for them. The high message cost results in increased packet collisions, so the percentage of packets successfully delivered at the multicast destinations automatically reduces, and additional delays are introduced while repairing link breaks. Source-initiated tree-based protocols are, in general, better than destination-initiated mesh-based protocols. The reason is that tree-based protocols maintain only one route between the source and each multicast destination. On the other hand, in mesh-based protocols the multicast destinations create a mesh and elect a core, which directly communicates with the source.
Fig. 2. Graphical demonstration of packet delivery ratio vs. number of senders (packet load = 10 pkts/sec)
Fig. 3. Graphical demonstration of packet delivery ratio vs. traffic load (number of senders = 10)
Fig. 4. Graphical demonstration of agility vs. number of senders (packet load = 10 pkts/sec)
Fig. 5. Graphical demonstration of agility vs. traffic load (number of senders = 10)
Fig. 6. Graphical demonstration of control overhead vs. number of senders (packet load = 10 pkts/sec)
Fig. 7. Graphical demonstration of control overhead vs. traffic load (number of senders = 10)
(In all figures the compared protocols are MAODV, ODMRP, PAST-DM, ITAMAR and FSIM.)
More than one path may exist between each receiver and the core due to the underlying mesh structure. Along with that, an additional route is required between the core and the sender of the multicast message. If the receivers are far from one another, the mesh structure does not help, even in the case of multiple senders. FSIM rigorously analyzes the performance of all available routes at the associated receivers in parallel. The nodes are equipped with fuzzy controllers to take intelligent routing decisions. FSIM also proposes the concepts of the pivot node and the ideal multicast tree. All of these help to restrict the message overhead significantly compared to the others, and the improvement also propagates to the packet delivery ratio and the agility of transmission.
4 Mathematical Analysis

Lemma 1. In an ideal multicast tree, the number of intermediate nodes (excluding the sender and the multicast receivers) is O(√H|G|), where H is the maximum possible hop count in the network and G is the set of multicast receivers.

Proof: Let the pivot nodes of an ideal multicast tree T exist at the levels ω1, ω2, ω3, …, ωl, and let the corresponding numbers of pivot nodes at those levels be ζ1, ζ2, ζ3, …, ζl, respectively. In the worst case, the following scenario prevails:
i) Only one receiver is connected as a direct downlink neighbor to each pivot node.
ii) Reuse of the links from the source to a pivot node is minimal.
iii) (ω2 − ω1), (ω3 − ω2), …, (ωl − ωl−1) are all less than or equal to √H.
For an illustration of the above-mentioned situations, please refer to Fig. 8. It shows a portion of an ideal multicast tree starting from a pivot at level ωj, where 0 < j < l. The pivot node ni is at level ωj. Since the tree is ideal and we are considering the maximum number of intermediate nodes, the pivot successor np of ni is at level ωj+k, where (j+k) ≤ l and (ωj+k − ωj) ≤ √H. Please note that the receivers in Fig. 8 are childless; this enforces minimal reuse of the path from the source of the multicast operation to ni. Since exactly one receiver is a downlink neighbor of each pivot node, the total number of pivot nodes is equal to the number of multicast receivers. This relation is expressed in (12).

ζ1 + ζ2 + ζ3 + … + ζl = |G|                                          (12)
The maximum number of intermediate nodes between the levels ω1 and (ω1 + √H) in the ideal multicast tree is ζ1(ω1 + √H − ω1 − 1), i.e., ζ1(√H − 1). Similarly, the number of intermediate nodes for the other levels can be computed. Hence, the total number of intermediate nodes IR(ω1,0,T) in the portion of the tree T starting from level ω1 till the end (0 indicates the last level of the tree, while the level of the root is assumed to be 1 in FSIM) is given by

IR(ω1,0,T) = ζ1(√H − 1) + ζ2(√H − 1) + … + ζl−1(√H − 1), i.e.,
IR(ω1,0,T) = (√H − 1)(|G| − ζl)                                       (13)
Since T is ideal and its pivoting starts at a level ω1 < √H/2, the maximum number of intermediate nodes IR(1,ω1,T) from the multicast source up to that level is ω1|G| < √H|G|/2. Therefore, the total number of intermediate nodes IR(1,0,T) is given by

IR(1,0,T) = IR(1,ω1,T) + IR(ω1,0,T)
          < √H|G|/2 + (√H − 1)(|G| − ζl)
          < 3√H|G|/2, i.e., IR(1,0,T) is O(√H|G|)                     (14)
Fig. 8. Demonstration of the maximum possible intermediate-node situation in an ideal multicast tree
Hence, it is proved that the total number of intermediate nodes in an ideal multicast tree is O(√H|G|). Please note that we have not restricted the length of the path from the root of an ideal multicast tree to a receiver; it may be as large as the maximum allowable hop count H. It is also evident from this lemma that pivot nodes alone can impose an upper limit on the number of routers.

Lemma 2. The advantage of maintaining a stable link, compared to the message cost of recovering from a link breakage, is O(C(H,2)(μ−1) + C(H,3)(μ−1)² + … + (μ−1)^(H−1)), where the average number of downlink neighbors at each step is μ and C(H,r) denotes the binomial coefficient.
Proof: In the worst case, the length of a stable path is as large as H. As far as a link breakage is concerned, flooding is the only process to recover from it, so the cost of a link breakage is actually the cost of flooding. For the estimation of the message cost of flooding, we assume that the average number of downlink neighbors of each node is (1+θ), where, from a practical point of view, θ > 0; otherwise the situation would be equivalent to unicasting. Moreover, in today's era of highly dense ad hoc networks, the probability that a node has exactly one downlink neighbor is negligible. Hence, a tree is created during flooding in which each node floods control messages to (1+θ) other nodes, and each of them repeats the process until the hop count reaches H. The resultant message cost M is given by

M = (1+θ) + (1+θ)² + (1+θ)³ + … + (1+θ)^(H−1), i.e., M = ((1+θ)^H − 1) / θ

The advantage ϑ of maintaining a stable link compared to flooding is given by (M − H), so that

ϑ = ((1+θ)^H − 1) / θ − H, i.e., ϑ = C(H,2)θ + C(H,3)θ² + … + θ^(H−1)   (15)

Putting (μ−1) in place of θ in (15), we get

ϑ = C(H,2)(μ−1) + C(H,3)(μ−1)² + … + (μ−1)^(H−1)

Hence the lemma is proved.
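The closed form used in the proof and the binomial expansion of Lemma 2 can be cross-checked numerically; the sketch below does so for an arbitrary pair of H and μ (the chosen values are only an example).

```python
from math import comb

def flooding_cost(theta, H):
    """Message cost M of recovering from a link break by flooding, with an
    average fan-out of (1 + theta) and a maximum hop count H."""
    return ((1 + theta) ** H - 1) / theta

def stable_link_advantage(mu, H):
    """Advantage of keeping a stable path instead of re-flooding (Lemma 2),
    written as the binomial expansion with theta = mu - 1."""
    theta = mu - 1
    return sum(comb(H, r) * theta ** (r - 1) for r in range(2, H + 1))

# Example values only: mu = 3 downlink neighbours on average, H = 6 hops.
mu, H = 3.0, 6
assert abs(stable_link_advantage(mu, H) - (flooding_cost(mu - 1, H) - H)) < 1e-9
print(stable_link_advantage(mu, H))   # 358.0
```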
5 Conclusion

FSIM is a fuzzy-controlled multicast scheme that aims at constructing an ideal multicast tree, with as many pivot nodes and independent receivers as possible, along with links that are stable in terms of residual energy, velocity affinity, reliability and other characteristics of the nodes. Pivot nodes help to restrict the number of ordinary routers to within a certain threshold. Extensive simulation experiments have been conducted to establish its supremacy over state-of-the-art competitors.
References
[1] Lee, S.J., Gerla, M., Chiang, C.C.: On-Demand Multicast Routing Protocol. In: Proceedings of WCNC (September 1999)
[2] Royer, E., Perkins, C.: Multicast operation of the ad hoc on-demand distance vector routing protocol. In: Proceedings of Mobicom (August 2003)
[3] Haas, Z., Sajama: Independent Tree Ad Hoc Multicast Routing (ITAMAR). Mobile Networks and Applications 8(5), 551–566
[4] http://www.springerlink.com/index/475127G162353467.pdf
[5] Ji, L., Corson, M.S.: A light-weight adaptive multicast algorithm. In: IEEE GlobeCom (December 2006)
[6] Jetcheva, J.G., Johnson, D.B.: Adaptive demand-driven multicast routing in multi-hop wireless networks. In: ACM MobiHoc (October 2004)
[7] Das, S.K., Manoj, B.S., Rammurthy, C.S.: A dynamic core-based multicast routing protocol for ad hoc wireless networks. In: ACM MobiHoc (June 2002)
[8] Sinha, P., Shivkumar, R., Bhargavan, V.: MCEDAR: Multicast core extraction in distributed ad hoc routing. In: WCNC, September 1999, pp. 1317–1319 (1999)
[9] Lee, S., Kim, C.: Neighbor supporting ad hoc multicast routing protocol. In: MobiHoc (August 2002)
[10] Lee, S.J., Su, W., Gerla, M.: A performance comparison study of wireless multicast protocols. In: Proceedings of IEEE Infocom, Tel Aviv, Israel (March 2002)
[11] Wu, C.W., Tay, Y.C.: AMRIS: A multicast protocol for ad hoc wireless networks. In: Proceedings of IEEE Milcom (October 1999)
[12] Devarapalli, V., Sidhu, D.: MZR: A multicast protocol for mobile ad hoc networks. In: Proceedings of ICC 2001 (2001)
[13] Xie, J., McAuley, A.: AMRoute: Ad Hoc Multicast Routing Protocol. MONET (December 2002)
[14] Jie, L., Corson, M.S.: Differential destination multicast, a MANET multicast routing protocol for small groups. International Journal on Systems Science and Applications 3, 51–57 (2003)
[15] Scalable Network Technologies, Qualnet 3.5, http://www.scalable-networks.com/
[16] Banerjee, A., Dutta, P.: Fuzzy-controlled energy-efficient management of selfish users in mobile ad hoc networks. In: ICIIS 2006 (2006)
[17] Ross, T.J.: Fuzzy Logic with Engineering Applications. Tata McGraw-Hill Publications (1997)
[18] Banerjee, A., Dutta, P.: Fuzzy Rule Based Cost-Effective Communication in a Mobile Environment. In: CSI-EAIT (2006)
[19] Kawasaki, N.: Design of adaptive fuzzy controllers, M.S. Thesis, Dept. of Electronics, Osaka
[20] Chang, J.H.: Energy-conserving routing in wireless ad hoc networks. In: Proc. of INFOCOM, Tel Aviv, Israel (March 2000)
[21] Ibrahim, A.M.: An Introduction to Applied Fuzzy Electronics. PHI India
[22] Freeman, J.A.: Neural Networks. Pearson Education, India
[23] Zurada, J.M.: Introduction to Artificial Neural Systems. Jaico Publishing House (1999)
[24] Kov, D.: An Introduction to Fuzzy Control. PHI India
On Optimal Space Tessellation with Deterministic Deployment for Coverage in Three-Dimensional Wireless Sensor Networks

Manas Kumar Mishra and M.M. Gore

Department of Computer Science & Engineering
Motilal Nehru National Institute of Technology, Allahabad, India
{manasmishra,gore}@mnnit.ac.in
Abstract. Coverage is one of the fundamental issues in Wireless Sensor Networks. It reflects how well the service volume is monitored or tracked by its participating sensors. Sensing ranges of nodes are assumed to be spherical in shape, and spheres do not tessellate space. To address this problem, we need to model the sensing range of nodes as space-tessellating polyhedra. In this paper, we analyze four such polyhedra, the Cube, Hexagonal Prism, Rhombic Dodecahedron, and Truncated Octahedron, based on the number of nodes needed for tessellation, the amount of overlapping achieved, and the symmetry of the lattice. We define a trade-off ratio between the amount of overlapping achieved and the number of nodes deployed, and we use this ratio to identify the Rhombic Dodecahedron as the polyhedron model for optimal 1-coverage. We also show the scalability of this polyhedron model to K-coverage with deterministic deployment.
1 Introduction

Wireless Sensor Networks (WSNs) have attracted a great deal of research attention due to their potential in a wide range of applications. They provide a new class of computing environment and drastically expand our ability to interact with the physical world [8]. Thus, WSNs can substantially transform the way we manage our homes, factories, and the environment without being on-site. The primary objective of a WSN is to sense the phenomena occurring in the vicinity of the deployed sensor nodes, and to communicate these facts to the base station. The ability to sense and communicate the parameters primarily depends on the sensing range and the transmitting range of a sensor node, respectively. But the deployment strategy is also an important factor in both these issues. Most WSN research assumes that the sensor nodes are deployed on a two-dimensional (2D) plane. In these models, the height of the network is considered negligible compared to its length and breadth. This approximation holds true for applications where sensor nodes are deployed on the earth's surface and the height of the network to be established is relatively smaller than the transmission radius of a node. Further, the phenomena that can be observed by the network are restricted to those occurring predominantly along the surface area. However, in cases where the implemented WSN is intended to observe
a phenomenon that requires a significant height of coverage, the assumptions related to the 2D nature of the network become a handicap. Underwater, atmospheric and space applications are examples of such cases. Underwater WSNs can be applied for oceanographic data collection, pollution monitoring, offshore exploration, disaster prevention, assisted navigation, tactical surveillance, and mine reconnaissance applications. Atmospheric applications include fire prevention in natural surroundings and monitoring of rain forests; space applications include surveillance of airspace with unmanned aerial vehicles of limited sensing range. These applications require the observation to be performed in a three-dimensional (3D) space rather than on a 2D plane. Although such application scenarios are relatively rare at present, their increasing commonness cannot be ignored in the near future.

The prime issue in any sensor network is the ability to sense data over an area/volume. This aspect is called the coverage of the WSN, and it is addressed by the sensing ranges of the participating nodes. A WSN is considered fully covered when every point in the target area (2D) or volume (3D) falls under the sensing range of some node. These points may be either 1-covered or K-covered, depending on the number of independent nodes that cover the point under consideration. The sensing ranges of the nodes are considered to be spherical in the case of a 3D WSN. But, as spheres do not tessellate space, the sensing ranges need to overlap with each other to cover the entire volume. An alternative is to model the sensing range as one of the space-tessellating polyhedra; the sensing range of a node is then treated as the circumsphere of the polyhedron used. In the case of a 1-covered WSN, the sensing ranges of the nodes are allowed to have only the minimum overlapping needed for tessellation, so that the number of nodes required to fully cover the network is minimum. On the other hand, for a K-covered WSN, the objective is to cover each point with an optimum number of nodes, the value K, which requires the maximum possible common overlapping volume among these nodes. Apart from this, the accuracy in estimating a phenomenon largely depends on the precision and reliability of the sensed data. Having a K-coverage model rather than a 1-coverage model for a WSN operating in 3D space is advocated for the following reasons [13]:
1. The estimation of the location of an event being detected largely depends on the number of nodes covering that event; this estimation becomes relatively easy with more than one node covering the event.
2. The detection of an event and its occurrence can be more precisely estimated with a highly covered WSN, and the accuracy of tracking can also be improved.
3. The high packet loss of multi-hop WSNs requires a K-covered model; the detection of an event by more sensors compensates for the loss incurred due to the multi-hop nature of transmission.
4. Erroneous detection of an event can also be reduced by a K-covered model.
Therefore, it is not only important to achieve full coverage with the minimum number of nodes, but also to achieve the maximum possible overlapping between the sensing ranges of nodes in order to gain accuracy in the detection of an event. Hence,
for all practical purposes, the model to be adopted for space tessellation and the subsequent coverage of a 3D WSN needs a trade-off between the gain in detection accuracy obtained by maximizing the possible overlap and the minimum number of nodes required for full coverage. We compare four space-tessellating polyhedra as unit cells representing the sensing range of a node, and identify the polyhedron which offers the best trade-off. In [14,15], the authors proposed models for 3D WSNs using body-centric nodes and advocated the Truncated Octahedron as the most efficient space-filling polyhedron for the unit cell. We demonstrate that this polyhedron fails to retain its most-efficient status when we take into account the accuracy and reliability of the sensed data, as well as the probability of generation of coverage holes due to boundaries and obstacles. The organization of the rest of the paper is as follows. Section 2 discusses some similar works presented in the literature. Section 3 elaborates the problem addressed in this work. The proposed perspective for an optimal coverage model is presented in Section 4. Section 5 presents the analysis based on the proposed perspective. Section 6 presents the conclusions drawn, to end the paper.
2 Related Work

The coverage and connectivity issues in 2D WSNs have been addressed in the literature, e.g., [1,5,6,11]. A few papers, such as [2,3,9,10,14], have focused on solving the coverage problem for 3D WSNs. Solutions for the coverage and connectivity issues are closely related to the mode of deployment of the nodes, which can be random or deterministic. Next we discuss a few solutions to the coverage problem with random deployment. In [2], the authors modeled and simulated a 3D WSN. They presented the analysis and design of networks of wireless sensors, analyzed the behavior of these networks for different cover and connectivity ranges, and demonstrated how the lifetime of a network depends on the number of nodes being used. The authors in [3] formulated the coverage problem as a decision problem whose goal is to determine whether every point in the service area of the sensor network is covered by at least α sensors, where α is a given parameter and the sensing regions of the sensors are modeled by balls (not necessarily of the same radius). The authors in [9] used a 3D network model to determine the sensor density λ in terms of the volume of the spherical sensing range and the probability f of any point being covered by a node; they also presented an approach to detect redundant nodes and put them to sleep to save energy. In [10], the authors considered the spherical model and proposed and analyzed coverage based on a distributed algorithm with random deployment. The optimal number of active nodes is determined based on the thickness of the intersection between sensing ranges: if the sensing range of a node is fully covered by other nodes, that node can remain deactivated to make the coverage energy-efficient.
Though random deployment is generally used in the analysis of coverage and connectivity and in the evaluation of various algorithms, it is considered too expensive compared to optimal deterministic deployment patterns when actually deploying sensors [7,12]. The work in [14] is based on deterministic deployment; it suggests a sensor deployment pattern that creates a Voronoi tessellation of Truncated Octahedral cells in 3D space, a suggestion that follows directly from Kelvin's conjecture. The numerical data in [14] illustrate that the Truncated Octahedron tessellation is better than the tessellations based on the Cube, Hexagonal Prism, and Rhombic Dodecahedron. Further, in [15], the authors extended their work to address the connectivity issue as well.
3 Problem Elaboration

The focus of this work is to identify the polyhedron model for optimal 1-coverage of a 3D WSN with the best trade-off between the gain in performance due to overlapping and the number of nodes required to tessellate the service space.

3.1 Objectives
We focus on two aspects: one, to define a unit cell which will tessellate the space with a minimum number of nodes; and the other, to maximize the overlapping of the sensing ranges of the nodes in each cell so as to improve performance. The objectives of the work are therefore defined as follows:
– To compare the space-filling polyhedra on the basis of the number of nodes required to tessellate, the amount of overlapping achieved, the symmetry of the lattice, and the scalability to K-coverage.
– To identify the most suitable polyhedron for space tessellation for a 1-covered 3D WSN with a trade-off among the above-mentioned criteria.
These objectives have been worked upon based on the following assumptions.

3.2 Assumptions
Homogeneous Nodes: All nodes are homogeneous in nature. This means that all the nodes have identical sensing ability, computational ability, and ability to communicate. We also assume that the initial battery power of the nodes is identical at deployment.
Spherical Sensing Range: All nodes have an identical sensing range R. Sensing is omnidirectional, and the sensing region of each node can be represented by a sphere of radius R with the sensor node at its center.
Arbitrarily placeable: For static deployment we assume node locations to be arbitrary. For mobile nodes, the nodes are randomly deployed initially and their movement is unrestricted.
4 Proposed Perspective for Optimal Coverage
With deterministic deployment, the nodes are placed at predefined positions; the selection of these positions is therefore critical for the coverage problem of a WSN. In general, solutions to the coverage problem of a 3D WSN focus on finding the minimum number of nodes required to fully cover the service volume. The authors in [14] were the first to propose a solution to the coverage problem in 3D WSNs in terms of models using space-filling polyhedra. The Truncated Octahedral unit-cell model was identified as the best model for a 1-covered 3D WSN. This selection was made based on a metric called the Volumetric Quotient, which indicates the portion of the volume of the spherical sensing range occupied by the polyhedral unit cell. The Truncated Octahedral unit cell occupies 68.329% of the spherical sensing range and was thus identified as the best model. To achieve full coverage of the service volume with the minimum number of nodes, the spherical sensing ranges of nodes are allowed to have minimum overlapping. But in all cases there will be patches of volume which are common to more than one node. These K-covered (K ≥ 2) patches enhance the tracking of an event by a factor of √K [4] in the volume of multi-coverage. Further, the assumed spherical shape of the sensing range of a node may be deformed due to boundaries and obstacles. Such deformations largely affect the extreme points to be covered by a node, i.e., the surface points of the sensing range, either on the sphere or on the modeled polyhedral unit cell, and can produce coverage holes in the WSN. The presence of overlapped volume in the sensing ranges provides a padding volume against the generation of such holes. Hence, the existence of these patches and their impact on the gain in performance cannot be ignored. But with an increase in the volume of such patches, the number of nodes required to tessellate the service volume also increases. Therefore, the optimal solution to the coverage problem in a 3D WSN should strike a balance between the gain in performance and the number of nodes required to space-tessellate any given target volume. Apart from this, for critical applications the WSN to be deployed needs to be K-covered. Though the solution for a 1-covered 3D WSN is able to provide patches of K-covered volume, to make the entire service volume K-covered the model used for 1-coverage should be scalable to higher degrees of coverage. Once K-coverage is achieved, the power consumption at the nodes can be reduced by implementing a duty cycle that keeps the number of nodes active at any instance between 1 and K. This duty cycle has a simpler implementation if the polyhedron used as the unit cell generates a symmetric lattice in the space tessellation. Besides coverage, connectivity is another issue related to the deployment of nodes: the more a node is connected with other nodes, the better the data transfer. Therefore, we propose to analyze the space-filling polyhedra on these issues based on the following observations, for newer and greater insight.

Observation 1: Does the polyhedral model provide a trade-off between the gain in performance metrics due to optimum overlapping and the number of nodes required for space tessellation?
Observation 2: Is the proposed model for space tessellation the best scalable to K-coverage with deterministic deployment?
Observation 3: If any duty cycling is applied to the nodes, does the proposed model have the lattice symmetry to facilitate a simpler implementation?
Observation 4: Does the proposed model provide any resilience to coverage-hole formation?
Observation 5: How does the model scale on the connectivity issue?
In the next section, we analyze various space-filling polyhedra and provide justification for the optimal model based on the above-mentioned observations.
5 Analysis

In this section, we analyze the results to identify the most stable polyhedron for a 1-covered 3D WSN. The parameters we concentrate on are:
– the number of nodes required to tessellate any given space to make a 1-covered WSN;
– the overlapping volume and its gain in terms of multi-covered patches and resilience to reduction in sensing range;
– the trade-off between the patches of overlapping volume and the number of nodes deployed;
– the scalability to K-coverage with deterministic deployment;
– the symmetry of the lattice, to simplify duty cycling with K-coverage;
– the transmission radius needed for connectivity with all neighbor nodes.
The polyhedron model for the unit cell having the major share in these parameters qualifies as the optimal and stable model for space tessellation in 1-covered WSNs. The space-filling property of a polyhedron is its ability to tessellate a 3D space. The Cube is the only Platonic solid possessing this property. However, there are five space-filling convex polyhedra with regular faces: the Triangular Prism, Hexagonal Prism, Cube, Truncated Octahedron, and Gyrobifastigium. The Rhombic Dodecahedron and Elongated Dodecahedron are also space-fillers. The Cube, Hexagonal Prism, Rhombic Dodecahedron, Elongated Dodecahedron, and Truncated Octahedron are all "primary" parallelohedra. The Elongated Dodecahedron, also known as the Extended Rhombic Dodecahedron, is constructed by stretching a Rhombic Dodecahedron until the middle ring of rhombi becomes regular hexagons. As the Elongated Dodecahedron is constructed from the Rhombic Dodecahedron, we consider the Cube, Hexagonal Prism, Rhombic Dodecahedron, and Truncated Octahedron for our analysis. For each polyhedron unit cell, we consider that the node is placed at the body center of the polyhedron and that the sensing range of the node is equivalent to its circumscribing sphere. However, for the analysis we consider the sensing range to be spherical in shape.
The analysis is independent of the placement strategy to be followed. However, the placement of nodes must follow an approach with one of the four polyhedra as the unit cell, such that it provides complete space tessellation of the target volume. One such approach has been discussed in [14].

5.1 Trade Off between the Gain due to Overlapping and the Number of Nodes Required to Space Tessellate
The Volumetric Quotient in [14] is defined as the ratio between the volume of the polyhedron unit cell and that of the corresponding circumsphere, the sensing range of the node. It represents the effectiveness of any polyhedron unit cell in uniquely occupying the volume of the corresponding spherical sensing range. Table 1 presents the quotient values from [14] for the four polyhedron models. It shows an increase of 43.25%, 43.25%, and 85.9% in the number of nodes required for tessellation, with respect to the Truncated Octahedron model, for the models using the Rhombic Dodecahedron, Hexagonal Prism, and Cube, respectively. This means that the Truncated Octahedron model is able to capture and utilize 68.329% of the spherical sensing volume of the participating sensor nodes; hence the minimum requirement of nodes for coverage. Thus, the model using the Truncated Octahedron is selected as the most efficient one if the objective is to deploy the minimum number of nodes to fully cover a 3D WSN target volume.

Table 1. Volumetric Quotient for all polyhedron models, and the number of nodes required as a percentage of the number of nodes required by the Truncated Octahedron model as unit cell

Polyhedron model | Volumetric Quotient | Number of nodes required to tessellate a given volume (%)
Cube | 0.36755 | 185.9
Hexagonal Prism | 0.477 | 143.25
Rhombic Dodecahedron | 0.477 | 143.25
Truncated Octahedron | 0.68329 | 100
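The Volumetric Quotient values of Table 1 can be re-derived from elementary geometry, taking each polyhedron inscribed in a unit sensing sphere (circumradius 1). The sketch below does this; the hexagonal-prism dimensions (hexagon circumradius √(2/3), half-height 1/√3) are the volume-maximizing choice and are an assumption made here only to reproduce the printed quotient.

```python
import math

SPHERE = 4 * math.pi / 3          # volume of the unit sensing sphere (R = 1)

# Polyhedron volumes at unit circumradius.
cube        = (2 / math.sqrt(3)) ** 3                          # edge 2/sqrt(3)
hex_prism   = 3 * math.sqrt(3) * (2 / 3) * (1 / math.sqrt(3))  # r^2 = 2/3, half-height 1/sqrt(3)
rhombic_dod = 2.0                                              # volume 2*R^3 at R = 1
trunc_oct   = 8 * math.sqrt(2) * (2 / math.sqrt(10)) ** 3      # edge 2/sqrt(10)

for name, vol in [("Cube", cube), ("Hexagonal Prism", hex_prism),
                  ("Rhombic Dodecahedron", rhombic_dod),
                  ("Truncated Octahedron", trunc_oct)]:
    print(f"{name:22s} volumetric quotient = {vol / SPHERE:.5f}")
# -> 0.36755, 0.47746, 0.47746, 0.68329 (Table 1 rounds the middle two to 0.477)
```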
However, because of the inability of the spherical sensing volumes to tessellate space, there will be patches of multi-covered volume in the WSN due to the overlapping in each polyhedron model. These patches are ignored when deriving the minimum number of nodes required for a 1-covered 3D WSN; however, their presence adds to the performance of the WSN. Table 2 presents the overlapping volume as a fraction of the sensing range of a node. It also tabulates these gains as a percentage of the gain of the model using the Truncated Octahedron as unit cell. The overlapping volume presented in Table 2 is actually doubled for each pair of neighbor nodes sharing a face, because the penetration of one sphere into another results in a convex-lens-shaped intersection volume [16], which is bisected by the chord – the common face in this case. Hence, the effective volume with multi-coverage is doubled.

Table 2. Volume of the spherical sensing range of a node overlapped by neighboring nodes' sensing ranges for different polyhedra as unit cell

Polyhedron model | Overlapping volume as fraction of sensing volume | Gain as percentage of the gain of the Truncated Octahedron model
Cube | 0.63245 | 199.694
Hexagonal Prism | 0.523 | 165.135
Rhombic Dodecahedron | 0.523 | 165.135
Truncated Octahedron | 0.31671 | 100
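The overlap fractions of Table 2 and the trade-off ratios of Table 3 below follow from the volumetric quotients alone, assuming the overlapping volume is taken as the complement of the quotient (1 − VQ), which reproduces the printed numbers; the sketch is only a consistency check.

```python
VQ = {"Cube": 0.36755, "Hexagonal Prism": 0.477,
      "Rhombic Dodecahedron": 0.477, "Truncated Octahedron": 0.68329}
REF = "Truncated Octahedron"

for name, q in VQ.items():
    overlap = 1 - q                             # overlapping fraction (Table 2)
    gain    = 100 * overlap / (1 - VQ[REF])     # gain as % of the reference gain
    nodes   = 100 * VQ[REF] / q                 # nodes needed as % of the reference
    print(f"{name:22s} gain={gain:8.3f}  nodes={nodes:6.1f}  trade-off={gain / nodes:.5f}")
```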
with multi-coverage is doubled. Since the gain of every model is expressed as a percentage of the gain of one reference model, this does not affect the comparison. Multi-coverage of a volume results in redundant detection of any event occurring in that volume, which increases detection accuracy and provides reliability in case a node covering that volume fails. In addition, the overlapping volume offers resilience against the formation of coverage holes when boundaries and obstacles reduce the sensing range of a node. The more such overlapping patches exist, the better the gain in performance. Table 2 reveals that the Hexagonal Prism and the Rhombic Dodecahedron have the same overlapping volume, while the Cube has the highest overlapping (multi-covered) volume. However, the gain due to overlapping should not come at too high a cost in the number of nodes required for tessellation. It is therefore of interest to estimate the trade-off ratio between the gain in performance due to overlapping and the number of nodes required for space tessellation; these trade-off ratios are presented in Table 3. The trade-off ratio between the gain in performance and the number of nodes required to tessellate space indicates that the Hexagonal Prism and Rhombic Dodecahedron models are the best gainers. Moreover, in the Rhombic Dodecahedron model only 4 out of 14 vertices lie on the circumsphere surface, and every node has 12 neighboring nodes sharing a common face, so there are additional patches of volume that are 3-covered, resulting in still more gain.

Table 3. Ratio of gain in performance to number of nodes required for tessellation, for different polyhedra as unit cell

Polyhedron model        Gain (% of gain by Truncated Octahedron) (A)   Nodes required (% of nodes required by Truncated Octahedron) (B)   Trade-off ratio (A/B)
Cube                    199.694                                        185.9                                                              1.07420
Hexagonal Prism         165.135                                        143.25                                                             1.15277
Rhombic Dodecahedron    165.135                                        143.25                                                             1.15277
Truncated Octahedron    100                                            100                                                                1
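For readers who want to reproduce the arithmetic behind Tables 1-3, the short Python sketch below recomputes the relative node counts and the trade-off ratios directly from the volumetric quotients (Table 1) and the overlapping-volume fractions (Table 2). It is only an illustrative check added here, not part of the original evaluation.

# Recompute Table 3 from the volumetric quotients (Table 1) and the
# overlapping-volume fractions (Table 2), both relative to the
# Truncated Octahedron model.
quotient = {"Cube": 0.36755, "Hexagonal Prism": 0.477,
            "Rhombic Dodecahedron": 0.477, "Truncated Octahedron": 0.68329}
overlap = {"Cube": 0.63245, "Hexagonal Prism": 0.523,
           "Rhombic Dodecahedron": 0.523, "Truncated Octahedron": 0.31671}

ref = "Truncated Octahedron"
for name in quotient:
    # Nodes needed to tessellate a given volume scale inversely with the quotient.
    nodes_pct = 100.0 * quotient[ref] / quotient[name]   # column B of Table 3
    gain_pct = 100.0 * overlap[name] / overlap[ref]      # column A of Table 3
    print(f"{name:22s}  nodes: {nodes_pct:7.2f}%  gain: {gain_pct:7.2f}%  "
          f"trade-off A/B: {gain_pct / nodes_pct:.5f}")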
5.2 Scalability to K-Coverage with Deterministic Deployment
The sensing range of a node is assumed to be spherical. To make the WSN 1-covered, we tessellate space with polyhedron unit cells that are assumed to be inscribed in this sensing range, which leaves patches of multi-covered volume in the network. Whenever the network must increase its degree of coverage, these overlapping volumes become important, and an optimal-overlapping approach is generally preferred for deployment. The deployment may be random or deterministic. With random deployment, the overlapping volume and the degree of coverage can be estimated using an approach such as [3]; once estimated, the minimum common degree over the entire network is taken as the degree of coverage of that network. To achieve maximum overlapping with minimum node deployment, the deployment has to be deterministic. Deterministic deployment exploits the fact that nodes can be placed at key points of the lattice. In a deterministic approach to K-coverage, strategic positions must be identified for deploying the add-on nodes. Such positions are generally points common to neighboring nodes, so that an added node affects its neighbors equally and generates maximum symmetric overlapping. A common vertex, or the center of a common edge or common face, are examples of such potential points. These points lie on the various spheres associated with the polyhedron: the circumsphere, the midsphere, and the insphere. It is therefore of interest to analyze the circumsphere, midsphere, and insphere radii of these polyhedron models; they are presented in Table 4 for the four polyhedra. The Cube and the Hexagonal Prism have neighbors with all three types of sharing (shared vertex, shared edge, and shared face). The Rhombic Dodecahedron has neighbors with the latter two types of sharing, whereas the Truncated Octahedron has neighbors that share only faces, which may be hexagonal or square. Both the Cube and the Rhombic Dodecahedron have equidistant neighbors in all cases, reflecting lattice symmetry, while both the Hexagonal Prism and the Truncated Octahedron have neighbors at varying distances in the different cases, reflecting conditional lattice symmetry.

Table 4. Circumsphere, insphere, and midsphere radius in terms of the sensing range (R) for different polyhedra as unit cell

Polyhedron model        Distance from center to any vertex (circumsphere radius)   Distance from center to any face (insphere radius)   Distance from center to any edge (midsphere radius)
Cube                    1                                                           0.57735                                              0.8165
Hexagonal Prism         1                                                           0.70710, 0.57735                                     0.91287, 0.8165
Rhombic Dodecahedron    1, 0.866                                                    0.70710                                              0.82916
Truncated Octahedron    1                                                           0.89443, 0.7746                                      0.9487
Apart from this, the kissing number in 3D is 12 [17], which equals the number of faces of the Rhombic Dodecahedron. With the Rhombic Dodecahedron as unit cell, every node therefore has 12 neighboring nodes, matching the kissing number. Furthermore, to make the WSN K-covered, nodes can be placed at the center of each face of a unit cell; the vertices of the Cuboctahedron, the dual polyhedron of the Rhombic Dodecahedron, represent the positions of these nodes. Since the Cuboctahedron is a quasiregular polyhedron, the Rhombic Dodecahedron model scales best to K-coverage.

5.3 Symmetry in Lattice for Simple Duty Cycling with K-Coverage
With K-coverage, duty cycling is generally adopted to reduce power consumption at the nodes, so that the degree of coverage can be toggled between 1 and K whenever needed. Such a duty cycle has a simpler implementation if the polyhedron used as unit cell produces lattice symmetry in the space tessellation. The Cube and the Rhombic Dodecahedron show lattice symmetry in space tessellation; this is reflected in Table 4 by the constant distance to neighboring nodes for these polyhedra. Hence, both of these polyhedron models support a simpler implementation of a duty cycle.

5.4 Resilience to Coverage Hole Formation
All the polyhedron models achieve space tessellation with neighboring nodes sharing faces (in most cases). As a pair of neighboring nodes meet at a common face, the spherical part of each node's sensing range beyond that face penetrates into the sensing range of its neighbor, producing an intersection of the two sensing spheres. This intersection volume has the shape of a convex lens whose thickness is twice the difference between the circumsphere radius and the insphere radius of the polyhedron. When the sensing range of a node is reduced by boundaries or obstacles, the vertices lying on the circumsphere surface are truncated first and the polyhedron becomes deformed. We therefore infer that the polyhedron with the largest difference between circumsphere and insphere radius is best suited to resist the coverage holes generated by this truncation. The results in Table 4 show that the Cube has the largest difference, followed by the Hexagonal Prism and the Rhombic Dodecahedron, while the Truncated Octahedron has the smallest difference and is hence most vulnerable to such truncation holes. Furthermore, the Rhombic Dodecahedron has a specific characteristic among the four polyhedra: only 4 of its 14 vertices lie on the circumsphere surface. As the remaining 10 vertices stay well inside the sphere, this leads to patches of 3-covered volume and makes the model less prone to vertex truncation.

5.5 Connectivity
The coverage and connectivity issues in WSN go hand in hand, and both depend largely on the deployment strategy and the position of the nodes. While the
sensing range drives coverage, the transmission range dictates the connectivity of the network; the degree of coverage, however, need not equal the degree of connectivity. The higher the degree of connectivity, the faster and better the sensed data can be transferred over the network. The transmission range required for effective communication with all the neighboring nodes of a unit cell equals the distance between pairs of neighboring nodes, which is twice the insphere radius (Table 4) of the polyhedron model. Table 5 lists the transmission range required, for a given sensing range R, to communicate with all neighboring nodes of a unit cell for the different polyhedron models.

Table 5. Transmission range, in terms of the sensing range (R), needed to communicate with any neighbor node for different polyhedra as unit cell

Polyhedron model        x axis      y axis      z axis      Maximum transmission range
Cube                    1.1547 R    1.1547 R    1.1547 R    1.1547 R
Hexagonal Prism         1.4142 R    1.4142 R    1.1547 R    1.4142 R
Rhombic Dodecahedron    1.4142 R    1.4142 R    1.4142 R    1.4142 R
Truncated Octahedron    1.7889 R    1.7889 R    1.5492 R    1.7889 R
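The Table 5 entries follow directly from the insphere radii of Table 4, since face-sharing neighbours are separated by twice the node-to-face distance. The small Python check below reproduces them; it is an illustration added here, with the radii copied from Table 4.

# Required transmission range is twice the node-to-face (insphere) distance,
# because face-sharing neighbours sit symmetrically across the shared face.
# Insphere radii are the Table 4 values, in units of the sensing range R.
insphere = {
    "Cube": [0.57735],
    "Hexagonal Prism": [0.70710, 0.57735],
    "Rhombic Dodecahedron": [0.70710],
    "Truncated Octahedron": [0.89443, 0.7746],   # square faces, hexagonal faces
}
for name, radii in insphere.items():
    ranges = [2 * r for r in radii]
    print(f"{name:22s}  neighbour distances: "
          f"{', '.join(f'{d:.4f}R' for d in ranges)}  "
          f"max transmission range: {max(ranges):.4f}R")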
The transmission range requirements for the Cube and the Rhombic Dodecahedron are the same along all axes and for all types of neighbor pairs, whereas for the Hexagonal Prism and the Truncated Octahedron the requirements differ with direction. These results show that the Cube requires the smallest transmission radius, followed by the Hexagonal Prism and the Rhombic Dodecahedron, while the Truncated Octahedron needs the largest. Summarizing the preceding subsections, the Rhombic Dodecahedron and the Hexagonal Prism have the same gain from overlapping, the same trade-off ratio, and the same transmission range. The Cube has the highest gain from overlapping, is scalable to K-coverage, and has lattice symmetry, but it fails on the trade-off between gain and number of nodes. Therefore, on the basis of the trade-off between gain and the number of nodes deployed, of lattice symmetry, and of scalability to K-coverage, the Rhombic Dodecahedron emerges as the most stable and optimal polyhedron model for a 1-covered 3D WSN.
6 Conclusion
We analyzed four space-filling polyhedra as candidate unit cells for a 1-covered 3D WSN and identified the Rhombic Dodecahedron as the optimal model. We defined a trade-off ratio between the amount of overlapping achieved alongside the 1-coverage solution and the number of nodes deployed, and used this ratio to select the Rhombic Dodecahedron as the optimal model.
Future work will analyze different deterministic approaches to multi-coverage of the WSN and validate the identified model against them. Another direction is to adapt this deterministic 1-coverage model to random deployment.
References

1. Ghosh, A., Das, S.K.: Coverage and connectivity issues in Wireless Sensor Networks: A survey. Pervasive and Mobile Computing 4(3), 293–334 (2008)
2. Ortiz, C.D., Puig, J.M., Palau, C.E., Esteve, M.: 3D Wireless Sensor Network Modeling and Simulation. In: Proceedings of SensorComm 2007, pp. 297–312 (2007)
3. Huang, C.-F., Tseng, Y.-C., Lo, L.-C.: The Coverage Problem in Three-Dimensional Wireless Sensor Networks. In: Proceedings of the Global Telecommunications Conference, vol. 5, pp. 3182–3186 (2004)
4. Hall, D.L., Llinas, J.: Handbook of Multisensor Data Fusion. CRC Press, Boca Raton (2001)
5. Ammari, H.M., Das, S.K.: Clustering-Based Minimum Energy Wireless m-Connected k-Covered Sensor Networks. In: Verdone, R. (ed.) EWSN 2008. LNCS, vol. 4913, pp. 1–16. Springer, Heidelberg (2008)
6. Ammari, H.M., Das, S.K.: Promoting Heterogeneity, Mobility, and Energy-Aware Voronoi Diagram in Wireless Sensor Networks. IEEE Transactions on Parallel and Distributed Systems 19(7), 995–1008 (2008)
7. Zhang, H., Hou, J.C.: Is Deterministic Deployment Worse than Random Deployment for Wireless Sensor Networks? In: Proceedings of the 25th IEEE International Conference on Computer Communications, INFOCOM, pp. 1–13 (2006)
8. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: Wireless sensor networks: a survey. Computer Networks 38(4), 393–422 (2002)
9. Rao, L., Wenyu, L., Guo, P.: A coverage algorithm for three-dimensional large-scale sensor network. In: ISPACS 2007, pp. 420–423 (2007)
10. Watfa, M.K., Commuri, S.: The 3-Dimensional Wireless Sensor Network Coverage Problem. In: Proceedings of the 2006 IEEE International Conference on Networking, Sensing and Control, pp. 856–861 (2006)
11. Ahmed, N., Kanhere, S.S., Jha, S.: Probabilistic Coverage in Wireless Sensor Networks. In: Proceedings of the IEEE Conference on Local Computer Networks, pp. 672–681 (2005)
12. Balister, P., Kumar, S.: Random vs. Deterministic Deployment of Sensors in the Presence of Failures and Placement Errors. In: Proceedings of the 28th IEEE International Conference on Computer Communications, INFOCOM, Rio de Janeiro, Brazil, pp. 2896–2900 (2009)
13. Kumar, S., Lai, T.H., Balogh, J.: On k-coverage in a mostly sleeping sensor network. In: Proceedings of the 10th Annual International Conference on Mobile Computing and Networking, pp. 144–158 (2004)
14. Nazrul Alam, S.M., Haas, Z.: Coverage and Connectivity in Three-Dimensional Networks. In: Proceedings of ACM MobiCom 2006, pp. 346–357 (2006)
15. Nazrul Alam, S.M., Haas, Z.: Coverage and Connectivity in Three-Dimensional Underwater Sensor Networks. Wireless Communications and Mobile Computing (WCMC) Journal 8(8), 995–1009 (2008)
16. http://www.mathworld.wolfram.com/Sphere-SphereIntersection.html
17. http://en.wikipedia.org/wiki/Kissing_number_problem
Seamless Handoff between IEEE 802.11 and GPRS Networks

Dhananjay Kotwal, Maushumi Barooah, and Sukumar Nandi

Department of Computer Science and Engineering, Indian Institute of Technology Guwahati, Assam, India
Abstract. In this paper we propose a scheme which provides seamless mobility between GPRS and IEEE 802.11 (WiFi) networks. In the proposed scheme, we envision a single Service Provider, who provides Cellular (GPRS), WLAN (WiFi) as well as data (Internet) services. The Service Provider’s network consists of an ‘Intermediate Switching Network’ (ISN), placed between the data services and the GPRS-WiFi networks. The ISN uses MPLS and MP-BGP to switch data traffic between GPRS and WiFi networks, as per the movements of the user.
1 Introduction
Cellular wireless technologies offer voice and data services over large coverage areas, but at lower data rates. Wireless LAN technologies offer higher data rates, but over smaller coverage areas ('HotSpots'). Users would prefer to use WLAN whenever possible and to continue existing data sessions as they move in and out of HotSpots (ubiquitous data service). This can be achieved with seamless mobility between cellular and WLAN technologies. Existing approaches for handoff between GPRS and WLAN networks can be broadly classified into 'Loosely Coupled' and 'Tightly Coupled' approaches. In the 'Loosely Coupled' approach, handoff is carried out using technologies like Mobile IP [1]; this approach suffers from sub-optimal routing, increased handover latency and higher end-to-end delay for packets. In the 'Tightly Coupled' approach, integration is done using a translation gateway [2]; this gives faster handover but suffers from limited data rates and requires extra network equipment (the translation gateway). A study presented in [3] finds that handover from GPRS to WLAN takes less time than the reverse direction, owing to the delay in setting up the PDP Context in the GPRS network. A U.S. patent (#6,680,923) describes a hybrid communication system that provides voice, data and video over the Internet at approximately 760 kbps. It incorporates a transceiver assembly based on Bluetooth technology (2.4 GHz spread spectrum), offers auto-switching capabilities, enables wireless connection and communication via short-range ad hoc networks, and supports up to eight peripheral devices in a piconet, defined as two or more Bluetooth (or equivalent) units sharing a common channel. We need a handover scheme that avoids the drawbacks of both the 'Tightly Coupled' and 'Loosely Coupled' approaches. In this paper we achieve this goal by placing an Intermediate Switching Network
(ISN) between the data services (Internet) and the GPRS-WiFi networks. The ISN is based on a Mobility Label Based Network (MLBN), detailed in [4], and is used to switch data traffic between GPRS and WiFi. The Mobile Node is a high-end device (PDA or smart phone) equipped with two interfaces (GPRS and WiFi) that can be used simultaneously.
2 Proposed GPRS-WiFi Integration
As shown in Fig. 1 the ISN consists of MSF-LERs, which are Label Edge Routers with a Mobility Support Function (MSF) associated with them. Each of the MSF-LERs is a BGP speaker and a BGP Peer to every other MSF-LER in the ISN, thus forming a complete mesh. The MSF-LERs are connected with each other via Label Switched Path (LSP) tunnels. The integration points of ISN with GPRS and WiFi network are the GGSN (GGSN/MSF-LER) and Access Point (AP/MSF-LER) respectively. MSF-LER1 acts as the point at which IP Datagrams are entering the ISN. At GGSN, MSF operates between MPLS and GTP layers. At AP, MSF operates between MPLS and LLC layers. MN has a dual stack with MSF operating between IP and SNDCP layers for GPRS and between IP and LLC layers for WiFi.
Fig. 1. Integrating ISN, GPRS & WiFi (network diagram: the Internet and CN, the ISN of MSF-LERs and LSRs with AAA and DHCP servers, the GPRS network reached via the GGSN/MSF-LER, and the IEEE 802.11 networks reached via the AP/MSF-LERs)
Fig. 2. Handover Procedures (message sequence chart covering initial registration with the ISN, handover IEEE 802.11 -> GPRS, handover GPRS -> IEEE 802.11, and handover IEEE 802.11 -> IEEE 802.11)
2.1 Handoff between GPRS – WiFi Networks
Handover between the GPRS and WiFi networks is initiated by the MN depending on the presence of a stable WiFi network. Fig. 2 gives the handover procedures between the GPRS and WiFi networks. Initial Registration (ICMP messages) – The MN starts up in WiFi and sends a registration message containing the pair [MN IP Addr, MAC Addr] to the AP/MSF-LER. The AP/MSF-LER generates a Mobility Label (ML) and sends it back to the MN; the ML is used by the ISN to uniquely identify the MN. The AP/MSF-LER then generates a Mobility Binding [ML, MN IP Addr, AP IP Addr] and distributes it to all other MSF-LERs for storage, using a BGP update message [4]. MSF-LER1 forwards IP datagrams destined for the MN's IP address to the AP/MSF-LER over LSP tunnels, with the ML pushed at the bottom of the label stack. The AP/MSF-LER uses the ML to identify the MAC address of the MN and forwards packets to the MN over the L2 path (IEEE 802.11 MAC). Registration also includes setting up an Alternate PDP Context (lower data rates) in the GPRS network via dummy ICMP messages; the Alternate PDP Context is used to transfer data traffic to the MN temporarily until a Desired PDP Context (higher data rates) is set up. Handover IEEE 802.11 to GPRS – The MN moves out of WiFi and registers with the ISN via the GGSN/MSF-LER using the Alternate PDP Context, sending the pair [same ML, MN IP Address]. In the GPRS network the L2 path is provided by the PDP Context. The GGSN/MSF-LER associates the PDP Context information with the ML, creates the Mobility Binding [same ML, MN IP Addr, GGSN IP Addr] and sends it to the other MSF-LERs using a BGP update message. MSF-LER1 now uses LSP tunnels to redirect traffic to the GGSN. At the GGSN, the MSF uses the ML from the label stack to find the (Alternate) PDP Context and forwards the IP datagrams to the MN over it. Simultaneously, the MSF at the GGSN starts creating a Desired PDP Context (with higher data rates); once established, it is used to forward IP datagrams instead of the Alternate PDP Context. Handover GPRS to IEEE 802.11 – When the MN moves back into the WiFi network, the registration process with the AP/MSF-LER is carried out again using the same ML. The Mobility Binding sent out by the AP/MSF-LER now prompts MSF-LER1 to switch traffic to the AP/MSF-LER, which forwards packets to the MN using its WiFi MAC address.
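To make the registration and handover steps concrete, the following Python sketch models the mobility-binding state that the MSF-LERs keep and how a BGP update redirects traffic at handover. The data structure and field names are illustrative assumptions for exposition only; they are not taken from the MLBN specification [4] or from the authors' implementation.

# Illustrative sketch of the mobility-binding state kept at the MSF-LERs.
# A binding maps a Mobility Label (ML) to the MN's IP address and the
# MSF-LER currently serving it; a BGP update simply overwrites the entry,
# which is what redirects traffic at handover.
bindings = {}   # mobility label -> (mn_ip, serving_msf_ler)

def bgp_update(mobility_label, mn_ip, serving_ler):
    """Apply a received Mobility Binding update."""
    bindings[mobility_label] = (mn_ip, serving_ler)

def next_hop_for(mn_ip):
    """Ingress MSF-LER1: pick the LSP tunnel towards the MN's current LER."""
    for label, (ip, ler) in bindings.items():
        if ip == mn_ip:
            return ler, label          # tunnel endpoint + ML pushed on the stack
    return None, None

# Initial registration via the WiFi AP, then handover to GPRS:
bgp_update("ML1", "10.0.0.5", "AP1-MSF-LER")
bgp_update("ML1", "10.0.0.5", "GGSN-MSF-LER")   # same ML, new serving LER
print(next_hop_for("10.0.0.5"))                  # ('GGSN-MSF-LER', 'ML1')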
3 Simulations
The proposed scheme was implemented in the QualNet network simulator. Using the same IP address in both networks can be made possible by a central DHCP server in the ISN, with DHCP relays in the GPRS and WiFi networks (Fig. 1). In QualNet, however, both interfaces cannot be assigned the same IP address, and the change in IP address would cause existing data sessions to break. To overcome
this, the destination IP of the IP datagrams received at the MN is used as the source IP of the IP datagrams sent from the MN for that session. The MN transmits these IP datagrams over the same interface with which it is registered with the ISN. Scenario Setup – The scenario setup is as shown in Fig. 1. We study the effect on UDP traffic as the MN moves between the GPRS and WiFi networks. The behavior of UDP traffic was analyzed based on: 1) handover duration (cumulative UDP packets vs. simulation time in seconds); 2) end-to-end delay for UDP packets; 3) packet loss. (Analysis and graphs for end-to-end delay are not included in the paper.) The scenario is executed twice: 1) the path of MN1; 2) the path of MN2. The scenario parameters are given in Fig. 3(a). Handover from GPRS to WiFi – (Fig. 3(b)) During handover there is no break in the arrival of packets, as the MN keeps receiving packets on its GPRS interface. The Mobility Binding Update finishes at 1005.35 s. A surge is seen in the graph because the MN simultaneously receives en-route data packets in the GPRS network and packets from the WiFi network.
Fig. 3. (a) Scenario parameters; (b) Handover GPRS to WiFi; (c) Handover WiFi to GPRS; (d) Handover WiFi to WiFi; (e) Handover duration summary (100 kbps). Panels (b)-(d) plot cumulative UDP packets against simulation time for CBR (100 kbps) traffic over a 3-hop ISN.
In order to accommodate the en-route packets, the GPRS interface should be kept active for a certain duration. This duration can be estimated from the GPRS data rate and by sending a probe packet to the CN. Handover from WiFi to GPRS – (Fig. 3(c)) From 636.7073 s to 636.7637 s the MN scans for a new AP. From 636.7637 s to 636.7891 s, the MN carries out registration with the GGSN/MSF-LER. The GGSN/MSF-LER then starts using the Alternate PDP Context (the Desired PDP Context is not yet in place), and the first packet is received at 636.871 s. During this entire period the MN does not receive any packets (packet loss). The Alternate PDP Context is used until 637.4501 s, which causes packets to be buffered in the GPRS network (low data rates). At 637.4501 s the Desired PDP Context is set up and traffic switches over to it. A surge in data rate is then observed, as packets from both the Alternate (buffered packets) and the Desired PDP Contexts are received simultaneously. Handover from WiFi to WiFi – (Fig. 3(d)) For a horizontal handoff, the MN starts scanning, associates with the new AP/MSF-LER, and completes the registration procedure with it. For this entire period the MN does not receive any packets, which are dropped; this appears as a gap in the graph.
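As noted above for the GPRS-to-WiFi case, the GPRS interface must stay active long enough for en-route packets to drain. The snippet below shows one way such a hold time could be estimated from a probe RTT and the GPRS data rate; the numbers are invented examples, not values measured in the paper.

# Rough estimate of how long the GPRS interface should stay up after a
# GPRS -> WiFi handover so that packets already in flight over GPRS are
# not lost.  Probe RTT and data rate are example values only.
bits_in_flight = 100e3 * 0.5      # 100 kbps CBR source over a 0.5 s probe RTT
gprs_rate_bps = 40e3              # assumed effective GPRS downlink rate
hold_time_s = bits_in_flight / gprs_rate_bps
print(f"keep GPRS interface active for ~{hold_time_s:.2f} s")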
4 Conclusion
In this paper we proposed the use of an Intermediate Switching Network (an MLBN [4]) for switching traffic between GPRS and WiFi networks. The ISN was integrated with the GPRS and WiFi networks at the GGSN and the AP, respectively. The use of the switching network avoids sub-optimal routing [5] and allows the user to enjoy higher data rates when in the WiFi network. Switching traffic based on MPLS and MP-BGP reduces the handoff delay. During handoff from WiFi to GPRS, the high handoff latency caused by the need to create a PDP Context was reduced by the proactive creation of an Alternate PDP Context. The simulations show that no packet loss is experienced during handoff from GPRS to WiFi; during handoff from WiFi to GPRS or to another WiFi network, some packet loss occurs due to the scanning time taken by the MN. The proposed scheme provides seamless mobility between GPRS and WiFi networks while reducing handoff latency and packet loss, and lets the user use the network that offers the best data rates.
References

1. Chen, J.-C., Chen, W.-M., Lin, H.-W.: Design and analysis of GPRS-WLAN mobility gateway (GWMG). In: 2005 IEEE International Conference on Communications, ICC 2005, May 16-20, vol. 2, pp. 918–923 (2005)
2. Fredson, P., Murthy, M.: WLAN-GPRS Tight Coupling Based Interworking Architecture with Vertical Handoff Support. Wireless Personal Communications 40, 137–144 (2007)
3. Bernaschi, M., Cacace, F., Iannello, G., Za, S., Pescape, A.: Seamless Internetworking of WLANs and Cellular Networks: Architecture and Performance Issues in a Mobile IPv6 Scenario. IEEE Wireless Communications 12, 73–80 (2005)
4. Berzin, O., Malis, A.: Mobility Support Using MPLS and MP-BGP Signaling. Network Working Group - Internet-Draft, October 29 (2008)
5. Berzin, O., Daryoush, A.: Mobility Label Based Network: Support for Mobility using MPLS and Multi-Protocol BGP. In: 2008 IEEE Radio and Wireless Symposium, January 22-24, vol. 52(9), pp. 511–514 (2008)
6. McGuiggan, P.: GPRS in Practice: A Companion to the Specifications. Wiley, West Sussex, England (2004)
A Tool to Determine Strategic Location and Ranges of Nodes for Optimum Deployment of Wireless Sensor Network

Amrit Kumar, Mukul Kumar, and Kumar Padmanabh

Software Engineering and Technology Labs, Infosys Technologies Ltd, Bangalore, India
{amrit_kumar,mukul_kumar03,kumar_padmanabh}@Infosys.com
Abstract. The deployment of wireless sensors is very challenging, yet it is one of the most unexplored areas of this domain. In most WSN applications, the sensing locations are fixed. In this paper, we describe a deployment tool we have developed for strategic localization and network optimization. Even before going into field deployment, this tool can be used to calculate precisely the locations of the nodes and their transmission ranges, taking line of sight and obstacles into account. The JPEG image of the terrain or floor plan and the line-of-sight or obstacle information are the input parameters. The algorithm used in this tool addresses two aspects of the problem: how to minimize the range of the sensor nodes while still forming a well-connected network, and how to find the locations of the forwarding nodes such that the energy required for communication is minimum. Based on these algorithms we have developed a tool to calculate network parameters such as the minimum number of forwarding nodes and the minimum range required for each node to form a well-connected mesh network, and to minimize power consumption. Our initial experiments with Google Maps give very encouraging results. Keywords: Deployment Tool, Network Optimization, Power Management, Range Optimization.
1 Introduction

Fundamentally, any WSN architecture [1] is characterized by four key features: self-organization, local decision making, wireless communications, and traffic flow towards the 'sink'. WSNs have been widely used in habitat monitoring, surveillance systems, environmental monitoring, military applications, and building and home automation [2]. In a typical wireless sensor network the locations of the sensing points are fixed and determined by the particular application; this is a basic assumption in planning the deployment of a WSN and is adopted henceforth. Finding deployment parameters such as node locations, the number of nodes required, the transmission range of the nodes, and the maximum number of available paths manually is laborious and inefficient. In simple applications where the surveillance area is small, trial-and-error techniques can be used. However, with the increase in the 'area of monitoring', the
complexity in deployment grows, and manually determining a system to control the parameters of the network becomes practically impossible. These factors make the planning and deployment of WSNs critical for the next generation of wireless sensor network deployments. There is limited literature available on planning and deployment of WSNs. Architectures such as POWER have been suggested in [3]; however, none of them treat each individual node as a separate entity. We propose a system to deploy a wireless sensor network in any environment - industrial, field, or home automation - or wherever there is a need for scalable wireless sensor network deployment and planning. The battery life, an important consideration in a wireless sensor network, depends largely on the transmission range of the sensor nodes or motes. Our system optimizes the deployment in three phases. We first identify the sensing locations in the initialization phase. In the network formation phase, we run an indigenous algorithm to find the optimum range for each mote while ensuring the reliability of the network. In the third phase, we find the additional nodes (repeaters) required by the network and their locations as a function of the physical geography of the 'area of monitoring'. The system also determines the network parameters (e.g., minimum range, friends list or redundant links, and location coordinates) for the repeaters, and the results are then applied to the deployment. In essence, the system aims for:
• Accelerated deployment of the WSN.
• Optimization of the network parameters: the number of neighbors and, most importantly, the range of each node.
• Reduction of the cost of deployment in terms of time, effort and maintenance.
In what follows, we describe the system model and key considerations in Section 2, our proposed algorithm in Section 3, and experimental results in Section 4; Section 5 presents conclusions and future work.
2 Key Considerations and Our System Parameters

Although the maximum transmission range of a node is an inherent characteristic of the hardware, the actual transmission range is a configurable parameter. How many repeaters are required in a particular deployment depends entirely on geographical parameters (distance, atmospheric conditions). For a large deployment we cannot go to the field and determine the locations of repeaters and the transmission ranges of all individual nodes by hand; the practical approach is therefore to do this offline, obtain the coordinates and transmission ranges of the nodes, and use them in the field. We use the floor plan or terrain map of the geographical area, locate the sensing points, and then let the tool determine the network parameters. The key points to be addressed during the planning of WSNs are: Location of the nodes. To sense effectively, the sensors must be placed as close as possible to the event of interest in order to record it closely. For most practical purposes we shall assume that
d[li, le] ≤ r    (1)
where li is the location of node i, le is the location of the event of interest, and r is the maximum distance from which the event of interest can be sensed correctly. Communication range of a node (R). This is the transmission range, a configurable parameter of a node. Typically, the maximum range of a wireless sensor node is limited by the hardware used to manufacture it, and the cost of the node increases with a higher maximum range. (2) In the remainder of the paper, we refer to the communication range R simply as the 'range' of a node.
Battery life of a node. This is one of the key parameters in WSN deployment.

d = ∂E/∂t    (3)
The nodes have limited battery power. The battery drain for any operation is defined as the energy consumed by the node in that operation. A node performs three tasks that account for its battery drain: a) sensing the event of interest, b) processing the event of interest, and c) sending the generated packet over the RF channel. The battery drain due to a) and b) is negligible in comparison to c) [6].
d ∝ R^4    (4)
where d is the battery drain and R is the range of the node. It is therefore important to keep the range of the nodes at a minimum to increase their battery life [6]. Forwarding nodes. The WSN application determines the sensing locations, and hence the geographical coordinates at which ambient parameters are sensed. Due to transmission range limitations, however, nodes placed only at the sensing points may not form a connected network; additional nodes, known as forwarding nodes or repeaters, are needed to fill the gaps and form a well-connected network. Network architecture and connectivity of each node. The WSN consists of sensor nodes connected to each other in a mesh topology, and every sensor node is connected to the outside world via a powerful node known as the base station. For simplicity and without loss of generality, we consider a single base node located at the origin of the network.
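Because the drain grows with the fourth power of the range (equation (4)), even modest range reductions pay off substantially. The short Python example below illustrates the relative comparison; the proportionality constant cancels out, so none is assumed.

# Relative battery drain for two candidate ranges, using d proportional to R^4
# (equation (4)); the absolute constant is irrelevant for the comparison.
def relative_drain(r1, r2):
    return (r1 / r2) ** 4

# Halving the transmission range cuts the drain per transmission by ~16x.
print(relative_drain(30.0, 60.0))   # 0.0625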
3 Proposed Algorithm

The system workflow is shown in Fig. 1(c). The algorithm has two prime objectives: 1) to find the minimum range required for each node while ensuring the connectivity requirement specified by the WSN application; and 2) to find the optimum locations/coordinates of the forwarding nodes, with the base station at the
origin. The algorithm works in three phases: network initialization, network formation, and network optimization.

3.1 Network Initialization
The network initialization phase evaluates the sensing locations based on the location of the area of interest. These locations are the initial locations used by the deployment system and act as input to the subsequent phase; they satisfy equation (1) defined above.

3.2 Network Formation
Once the sensing locations are defined, the next task is to form a network based on the desired network redundancy and reliability and on the topology of the floor plan. The network formed should have a predictable reliability and the maximum possible battery life. Network reliability: the probability of successful communication of a packet from a node to the base station, i.e., the probability that a packet is delivered successfully to the HECN. (5) Battery life: the battery drain of a node grows with its range of communication, so the battery life of a node is inversely proportional to the range of communication of the node. (6) Thus, to obtain an increased battery life, the range of the nodes should be kept minimal. Obstacle management: a sensor network is not limited to line of sight; two nodes can communicate across a concrete wall, but the signal strength decreases, which results in a reduced transmission range in the presence of an obstacle. An obstacle is defined as any hindrance in the space between two nodes that partially or fully reduces their ability to communicate. One way to overcome this loss of communication is to increase the communication range of the nodes, and this must be taken into account while calculating the ranges. For simplicity, we state that an obstacle of thickness t introduces a factor μ such that the required range increase is proportional to μ · t. (7) The factor μ can be determined experimentally for any obstacle (glass door, cardboard, humid environment, etc.), and the range obtained by Algorithm 1 in Fig. 1(a) can then be adjusted according to the equation above. For our experiments we set μ = 1.0, without loss of generality.
In summary, the network formed must have high reliability and a long battery life; to achieve an optimum network, a trade-off between higher reliability and battery life must be made.

3.3 Network Optimization
The network optimization takes into account the maximum range permissible for any node, the minimum number of friends required, and the obstacle range enhancement. It determines the number of repeaters or forwarders required in the network and re-calculates the modified ranges for each of the nodes (Fig. 1b).

Algorithm 1: Network Formation
Base station at [0,0]; all n nodes at [x,y]; NL[] - network list; D[n,n] - distance matrix giving the distance between each pair of nodes; Q[] - queue; k - proximity minimum, i.e., the least number of friends required for each node; R[0,1,2,...,n] - range of each node.
Step 1: Arrange the n nodes in increasing order of their distance from the base station in a vector V[0,1,...,n] such that V[0] = 0 (the base station).
Step 2: Calculate the distance matrix D[n,n] for all nodes.
Step 3: For each node i in V[0,1,2,...,n]:
    Add V[i] to NL[]
    If (size(NL[]) = 1)        // when only the root is present in NL
    {
        // select the k nodes at minimum distance from V[i] using D[V[i],n]
        counter = 0, max = 0
        While (counter != k)
        {
            find the next node NN at minimum distance from V[i]
            add NN to the network list NL
            R[NN] = D[NN,V[i]]        // calculate the range for NN
            insert NN into Q[]        // enqueue side-effect queue
            max = Max(max, R[NN])
            counter <- counter + 1
        }
        R[V[i]] = max                 // calculate the range R[V[i]]
        While (Q[] is not empty)      // build the side-effect chain
        {
            SideNode = Pop(Q[])
            find the node NM such that D[NM,SideNode] is least and NM is not in NL
            add NM to Q[]; add NM to NL
        }
    }
    If (size(NL[]) != 1)              // not for the root node
    {
        // select k nodes from NL such that distance[V[i], each of the k] is least
        counter = 0, max = 0
        While (counter != k)
        {
            find the next node NN in NL at minimum distance from V[i]
            add NN to the network list NL
            max = Max(max, D[V[i],NN])
            insert NN into Q[]        // enqueue side-effect queue
            counter <- counter + 1
        }
        R[V[i]] = max
        While (Q[] is not empty)      // build the side-effect chain
        {
            SideNode = Pop(Q[])
            find the node NM such that D[NM,SideNode] is minimum and NM is not present in NL
            add NM to Q[]; add NM to NL
        }
    }
Step 4: Plot the ranges R[0,N] and the final network.
Fig. 1a. Network Formation Algorithm
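For illustration, the following Python sketch captures the core of Algorithm 1 under simplifying assumptions: it omits the side-effect chain, treats the base station as node 0 at the origin, and simply gives every node a range just large enough to reach its k nearest already-connected neighbours. It is a reading aid, not the authors' implementation.

import math

def network_formation(nodes, k):
    """Simplified sketch of Algorithm 1: visit nodes in order of distance
    from the base station (index 0, at (0, 0)) and give each node a range
    large enough to reach its k nearest already-connected neighbours."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    order = sorted(range(len(nodes)), key=lambda i: dist(nodes[i], (0.0, 0.0)))
    connected = []               # indices already in the network (NL)
    ranges = [0.0] * len(nodes)  # R[i]

    for i in order:
        if connected:
            # k closest nodes that are already part of the network
            friends = sorted(connected, key=lambda j: dist(nodes[i], nodes[j]))[:k]
            ranges[i] = max(dist(nodes[i], nodes[j]) for j in friends)
        connected.append(i)
    return ranges

# Example: base station plus four sensing points, two friends per node.
points = [(0, 0), (10, 0), (0, 12), (15, 15), (30, 5)]
print([round(r, 2) for r in network_formation(points, k=2)])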
Algorithm 2: Optimization
Base station at [0,0]; all n nodes at [x,y]; NL[] - network list; D[n,n] - distance matrix giving the distance between each pair of nodes; Q[] - queue; k - proximity minimum, i.e., the least number of friends required for each node; R[0,1,2,...,n] - range of each node; x1, x2, y1, y2.
Step 1: From the range matrix R[0,n], find the nodes i for which R[i] > Rmax and the edge e for which Ri is maximum. If i <- Null, go to Step 5. Add i and e to Q[]; x1 = ex1, x2 = ex2; y1 = ey1, y2 = ey2.
Step 2: Place a router node at the location (xr, yr), where xr = (ex1 + ex2)/2 and yr = (ey1 + ey2)/2.
Step 3: Run Algorithm 1.
Step 4: Repeat from Step 1.
Step 5: Draw the network.
Fig. 1b. Network Optimization Algorithm
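The repeater-placement step of Algorithm 2 reduces to a midpoint computation on the offending edge, as the minimal sketch below illustrates (again only as a reading aid, not the tool's actual code).

# Sketch of the repeater-placement step of Algorithm 2: if an edge forces a
# node's range above Rmax, drop a forwarding node at the midpoint of that
# edge and re-run the formation step (Algorithm 1).
def place_repeater(p1, p2):
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

print(place_repeater((0, 0), (120, 40)))   # (60.0, 20.0)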
Fig. 1c. System Workflow (Initialize Network -> Network Formation -> Network Optimization -> WSN Deployment)
4 Experimental Results

The simulator allows different geographical maps to be uploaded into the system. The system records the sensing locations through user clicks and assigns a virtual sensor node, shown in blue, to the clicked coordinates [Fig. 2a]. The base node is assumed to be at the centre of the map in this simulation, with the origin as its coordinates. The user also defines the maximum range of the nodes as constrained by the hardware (75 in this epoch) and the minimum number of friends for each node (3 in this epoch) as constrained by the application and the hardware used. In the network formation phase [Fig. 2b], Algorithm 1 [Fig. 1a] ensures that each node has a range just sufficient to connect it to a minimum number of reliable links closest to the base station, for both forward and reverse channel communication. For green links, the required network conditions have been met and the minimum range required by the adjoining node is within the maximum permissible range. Dashed red links indicate that the minimum range required by the node exceeds its maximum permissible range, so optimization is required. In the network optimization phase, Algorithm 2 [Fig. 1b] is run to find the placement of repeaters; this serves as input to Algorithm 1 [Fig. 1a] and the process is repeated until all dashed red links are removed.
Fig. 2a. Network Initialization Fig. 2b. Network Formation Fig. 2c. Network Optimization
Fig. 3a shows the results of the deployment phases (network formation and final network optimization). There is a significant reduction in the ranges of the nodes from formation to optimization, while the number of neighbouring nodes (i.e., redundant links) remains almost the same in both phases. The drain measure according to equation (6) is calculated for each node and is depicted in Fig. 3b for the formation and optimization stages. The number of forwarding nodes added to the network in the optimization phase [Fig. 1b and Fig. 1c] in this epoch is 7 (nodes 54 to 60). Fig. 3c shows the final ranges of the nodes when the application has different reliability requirements, i.e., different minimum numbers of friends; the nRange graph corresponds to a requirement of a minimum of n friends for each node.
Fig. 3a. Range and Friends Results; Fig. 3b. Battery Drain at Nodes; Fig. 3c. Range with 'n' Friends
5 Conclusions and Future F Work The deployment system pro ovides a holistic approach for any WSN planning with the evident benefits of speedy planning and deployment. The user defined reliability and the constraints of range and d location of the sensing points are taken into account whhile calculating the optimum raange for the nodes and hence optimizing the battery lifee of the nodes. The output givess the instant results for the location, range, of each senssing and repeater nodes. The algorithms [fig 1a] and a [fig 1b] are being adapted for dynamic wireless sennsor networks in which the sen nsing locations are not fixed and the nodes are mobilee. It poses new challenge of com mputation of algorithms in Fig 1a and Fig 1b at runtime.
References

1. Estrin, D., Govindan, R., Heidemann, J., Kumar, S.: Next century challenges: scalable coordination in sensor networks. In: ACM/IEEE International Conference on Mobile Computing and Networking, pp. 263–270. ACM Press, Seattle (1999)
2. Jourdan, D.B.: Wireless Sensor Network Planning with Application to UWB Localization in GPS Denied Environments. Department of Aeronautics and Astronautics, Massachusetts Institute of Technology (June 2006)
3. Li, J., Bai, Y., Ji, H., Ma, J., Qian, D.: The Architecture of Planning and Deployment Platform for Wireless Sensor Networks. In: Wireless Communications, Networking and Mobile Computing, WiCOM 2006 (2006)
4. Li, J., Bai, Y., Ji, H., Ma, J., Tian, Y., Qian, D.: POWER: Planning and Deployment Platform for Wireless Sensor Networks. In: Grid and Cooperative Computing Workshops, GCCW 2006 (2006)
5. Tang, F., Li, M., Weng, C., Zhang, C., Zhang, W., Huang, H., Wang, Y.: Combining Wireless Sensor Network with Grid for Intelligent City Traffic. In: Advances in Grid and Pervasive Computing, http://www.springerlink.com/content/r56j28x64186157r/
6. Padmanabh, K., Roy, R.: Maximum Lifetime Routing in Wireless Sensor Network by minimizing Rate Capacity Effect. In: IEEE ICPP-W 2006 (2006)
An Efficient Hybrid Data-Gathering Scheme in Wireless Sensor Networks

Ayon Chakraborty 1, Swarup Kumar Mitra 2, and M.K. Naskar 3

1 Department of CSE, Jadavpur University, Kolkata, India
2 Department of ECE, MCKV Institute of Engineering, Howrah, India
3 Department of ETCE, Jadavpur University, Kolkata, India
{jucse.ayon,swarup.subha}@gmail.com, [email protected]
Abstract. For time-sensitive applications requiring frequent data gathering from a remote wireless sensor network, it is a challenging task to design an efficient routing scheme that can minimize delay and also offer good performance in energy efficiency and network lifetime. In this paper, we propose a new data gathering scheme which is a combination of clustering and shortest hop pairing of the sensor nodes. The cluster heads and the super leader are rotated every round for ensuring an evenly distributed energy consumption among all the nodes. We have implemented the proposed scheme in nesC and performed simulations in TOSSIM. Successful packet transmission rates have also been studied using the interference-model. Compared with the existing popular schemes such as PEGASIS, BINARY, LBEERA and SHORT, our scheme offers the best “energy × delay” performance and has the capability to achieve a very good balance among different performance metrics. Keywords: Data Gathering, Network Lifetime, Interference Model, Energy x Delay.
1 Introduction

Wireless Sensor Networks (WSNs) are usually self-organized wireless ad hoc networks comprising a large number of resource-constrained sensor nodes. One of the most important tasks of these sensor nodes is the systematic collection of data and its transmission to a distant base station (BS) where it is processed. Once the nodes are deployed, however, it is often undesirable or infeasible to replace or recharge them; network lifetime therefore becomes an important parameter in the design of efficient data-gathering schemes for sensor networks. Each node is provided with transmit power control and an omnidirectional antenna and can therefore vary its area of coverage [1]. Since communication requires a significant amount of energy compared to computation, sensor nodes must collaborate in an energy-efficient manner when transmitting and receiving data, so that not only is the lifetime enhanced but a better "energy x delay" performance is also achieved. We propose and analyze in this paper a new cluster-based routing scheme called the Hybrid Data-gathering Scheme (HDS), which ensures the best "energy x delay" performance while, at the same time, achieving a good balance among other performance
metrics such as energy efficiency and network lifetime. We divide the network into clusters and subsequently apply SHORT [2] for data gathering within each cluster as well as among the cluster heads. The HDS data-gathering scheme is coded in nesC for the TinyOS [3] software platform; this not only demonstrates the coding feasibility of the scheme but also verifies that it can run on real hardware platforms (such as MicaZ or Mica2). The TOSSIM radio interference model has been used to simulate the packet reception ratio.
2 Related Works

Several cluster-based and chain-based algorithms have been proposed for efficient data gathering. The PEGASIS scheme proposed in [1] is based on a chain that starts from the node farthest from the BS. By connecting the last node on the chain to its closest unvisited neighbor, PEGASIS greatly reduces the total communication distance and achieves very good energy and lifetime performance for different network sizes and topologies. For CDMA-capable and non-CDMA-capable sensor nodes, the chain-based BINARY and 3-Level Hierarchy schemes were proposed in [4], respectively, to achieve better "energy x delay" performance than PEGASIS. In [5], a cluster-based Load Balance and Energy Efficient Routing Algorithm (LBEERA) is presented: LBEERA divides the whole network into several equal clusters, every cluster works as in PEGASIS, and a new algorithm to construct the lower chain in each cluster is proposed to reduce energy consumption. A tree-structured routing scheme called Shortest Hop Routing Tree (SHORT) [2] offers a great improvement in "energy x delay" with good network-lifetime performance. LEACH [6] rotates the role of cluster head among all the sensor nodes; in doing so, the energy load is distributed evenly across the network and the network lifetime (in units of data-collection rounds) becomes much longer than with static clustering.
3 The System Model

We consider a field containing N randomly deployed sensor nodes divided into M geographic clusters. Without loss of generality, we assume that Cluster 1 contains N1 nodes, Cluster 2 contains N2 nodes, and so on, with Cluster M containing NM nodes. Data aggregation is performed at intermediate nodes by generating a single k-bit packet from multiple incoming k-bit packets. The position information of all the nodes is known to the BS via the Global Positioning System (GPS) or other techniques [6]. For wireless communication, the simple first-order radio model is used to calculate the energy consumption for transmitting and receiving data packets. Let ξelec = 50 nJ/bit and ξamp = 100 pJ/bit/m^2 denote the energy consumption rates for operating the radio transceiver electronics and the transmitter amplifier, respectively. We assume ξelec also accounts for the energy consumed in aggregating multiple incoming data packets into a single outgoing packet of the same size (data fusion). For receiving a k-bit packet, a sensor node consumes Erx(k) Joules of energy:

Erx(k) = ξelec * k    (1)
While for transmitting a k-bit packet to another node over a distance of d meters, the energy consumption is given by

Etx(k, d) = (ξelec + ξamp * d^2) * k    (2)
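A direct transcription of equations (1) and (2) with the stated constants may help when re-deriving the energy figures later in the paper; the packet size and distance used in the example are arbitrary.

# First-order radio model from equations (1)-(2), with the constants stated
# in the text (xi_elec = 50 nJ/bit, xi_amp = 100 pJ/bit/m^2).
XI_ELEC = 50e-9      # J/bit
XI_AMP = 100e-12     # J/bit/m^2

def e_rx(k_bits):
    return XI_ELEC * k_bits

def e_tx(k_bits, d_m):
    return (XI_ELEC + XI_AMP * d_m ** 2) * k_bits

# e.g. a 2000-bit packet sent over 25 m and received at the other end
print(e_tx(2000, 25.0), e_rx(2000))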
The packet reception ratio in this scheme was simulated with the TOSSIM radio interference model, which is based on empirical data. The loss probability captures transmitter interference using the original trace that yielded the model; more detailed measurements would be required to simulate the exact transmitter characteristics, but experiments have shown the model to be quite accurate.
4 Proposed HDS Algorithm

The key idea of our approach is to divide the whole field into a number of clusters, as in LBEERA. The SHORT scheme applied in each cluster adopts centralized algorithms and relies on the powerful BS, rather than on the resource-limited sensor nodes, to manage the network topology and to calculate the routing paths and the time schedule for data collection. The cluster head in each cluster acts as a leader. HDS operates in three phases. (i) Cluster and group formation phase: in each round one leader is elected for each cluster based on the residual energy of the cluster members and their distances from the BS. (ii) Leader and super-leader selection phase: initially, in each cluster the node nearest to the BS is selected as cluster head, and among the cluster heads the one nearest to the BS is selected as super leader. From the second round onward, the BS considers two parameters when selecting cluster heads and the super leader: the distance Di between node i and the BS, and the residual energy Eresidual,i of the node, combined as Pi = Eresidual,i / Di^2. For a particular round, the cluster member with the maximum P is selected as cluster head, and the cluster head with the maximum P as super leader. The super leader and the leaders are rotated every round according to this criterion, evenly distributing the energy load among all the nodes. (iii) Data transmission phase: after the clusters are created and the cluster heads and super leader selected, the sensors start the data gathering and transmission operation.

4.1 Calculation of Delay, Message Complexity, Energy x Delay Product and Mean Delay

i) Delay calculation: In each cluster, the delay for gathering data at the individual cluster head is log2 Ni for the i-th cluster. After the data are accumulated at the M cluster heads, it takes another log2 M + 1 time slots to gather the data at the base station; the 'plus one' accounts for transmitting the final data packet from the super leader to the base station. Since the algorithm runs in parallel in all clusters, data gathering towards the cluster heads occurs simultaneously. Thus, the delay for data
gathering in each cluster is lower bounded by log2 Nmax, where Nmax = max(N1, N2, ..., NM). The overall delay is therefore
(3)
k is a constant which decreases as the distribution of the sensor nodes is more uniform, and becomes zero when N1 = N2 = ….. = NM = Nmax. . Here ceil(x) is the ceiling of x, denoting the least integer, greater than or equal to x. So, we see the delay to have a complexity be O(logN). ii) Message Complexity: In the HDS scheme, in each round, there is a single cluster and group formation phase and a single leader and super leader election phase. Assuming, each node has a packet to send in every round, total number of messages passed is N. Thus message passing complexity is linear, i.e. O(N). iii) Energy – Delay product: As we can, reasonably say that the radio transmission and reception energy is greater than CPU or processing energy by several orders of magnitude, we take message passing as a rough measure of energy consumption in the nodes. Thus ‘energy x delay’ product have a complexity of O(N logN). iv) Mean Delay: We define the mean delay as the average of the delay to the BS from each of the nodes. The network has a total of N1+N2 + … + NM (=N) nodes. In the first slot they are divided in (N1 /2 + N2 /2 + … + NM /2) groups. Now each of the (N1 /2 + N2 /2 + … + NM /2) transmitter nodes in the first slot ( 1 from each group) will have a delay of (log(MNi) + 1) time slots to the base station, where Ni denotes the nodes in the ith cluster. Similarly calculating for the tth slot each of the (N1 /2t + N2 /2t + … + NM /2t) transmitter nodes have a delay of (log(MNi) – t + 1) time slots to the base station. So, for calculating the mean delay, we go for the weighted mean,
MD =
log N
log M
i =1
j =1
∑ { ∑ ( Nj[log Ni − i + 2] / 2i )}
(4)
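As an illustration of equation (3), the following Python fragment computes the per-round gathering delay for a given clustering; the even 100-node, 5-cluster split mirrors the simulation setup of the next section, but the fragment itself is an added example, not the authors' code.

import math

def data_gathering_delay(cluster_sizes):
    """Per-round delay (in slots) from equation (3): parallel binary gathering
    inside every cluster, then gathering across the M cluster heads, plus one
    slot from the super leader to the base station."""
    n_max = max(cluster_sizes)
    m = len(cluster_sizes)
    return math.ceil(math.log2(n_max)) + math.ceil(math.log2(m)) + 1

# Example: 100 nodes split evenly over 5 clusters.
print(data_gathering_delay([20, 20, 20, 20, 20]))   # 5 + 3 + 1 = 9 slots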
5 Simulation Results

Our proposed scheme is validated by extensive computer simulations. A network of 100 homogeneous sensor nodes deployed randomly in a 50 m x 50 m field is considered for the simulation model. The BS is fixed and located at x = 50 m, y = 150 m. The network is geographically divided into 5 equal-sized clusters. HDS is compared with the classical data-gathering schemes in the literature, namely PEGASIS, LBEERA, SHORT and BINARY; it not only shows very good network lifetime compared to these schemes but also a better energy-delay product. The simulation results are as follows.
Fig. 1. Comparison of Network Lifetime and Energy-Delay Product vs Number of Nodes
From Fig. 1 we see that the network lifetime falls as the number of nodes increases, but HDS shows the best performance among the schemes and is therefore energy efficient. The better energy-delay product also signifies better throughput on top of energy efficiency. Detailed simulation results are given in Table 1.
Fig. 2. Fraction of packets successfully reaching the base station versus the number of retransmission attempts; the upper dark portion of each bar shows the range of the fraction, with the tips indicating the maximum and minimum fractions (simulated in TOSSIM)
The packet reception ratio is calculated as the ratio of packets received successfully to the total packets transmitted. In HDS we calculated the fraction of packets successfully reaching the BS while varying the number of retransmission attempts; as the number of retransmission attempts increases, the packet reception ratio also increases. Fig. 2 depicts the packet loss in HDS. This experiment introduces the interference model into our simulation, making it more realistic.
5.1 Performance Comparison

Table 1. Comparison of the various schemes (FND: First Node Dies, HND: Half Nodes Die, LND: Last Node Dies; delay calculated in average slots per round)

Performance Metrics                  PEGASIS   BINARY   SHORT   LBEERA    HDS
Network Lifetime (rounds) - FND          849      514    1427     1377   1455
Network Lifetime (rounds) - HND         2587     1744    2533     2400   2583
Network Lifetime (rounds) - LND         2945     2271    2992     2504   2989
Energy Consumption (mJ)                20.12    25.87   19.85    24.42  19.46
Mean Delay (slots per round)           66.02     7.38    7.71    17.82   7.74
6 Conclusions
Our proposed algorithm overcomes the drawbacks of the other data gathering schemes proposed in the literature. HDS strikes a good balance among network lifetime, energy cost and network throughput. It not only prolongs the network lifetime but also guarantees the best energy-delay product. The coding of HDS in nesC deserves a special mention, as it demonstrates that the scheme is feasible on real hardware platforms. The radio interference model used for simulation also helped us study the problem from the perspective of a more realistic physical layer.
References
1. Lindsey, S., Raghavendra, C.S.: PEGASIS: Power Efficient Gathering in Sensor Information Systems. In: Proceedings of IEEE ICC 2001, pp. 1125–1130 (2001)
2. Yang, Y., Wu, H.H., Chen, H.H.: SHORT: Shortest Hop Routing Tree for Wireless Sensor Networks. In: IEEE ICC 2006 Proceedings (2006)
3. Levis, P.: TinyOS Programming (2006)
4. Lindsey, S., Raghavendra, C.S., Sivalingam, K.: Data Gathering in Sensor Networks using energy*delay metric. In: Proceedings of the 15th International Parallel and Distributed Processing Symposium, pp. 188–200 (2001)
5. Yu, Y., Wei, G.: Energy Aware Routing Algorithm Based on Layered Chain in Wireless Sensor Network. IEEE (2007)
6. Heinzelman, W., Chandrakasan, A., Balakrishnan, H.: Energy-Efficient Communication Protocol for Wireless Microsensor Networks. In: IEEE Proceedings of the Hawaii International Conference on System Sciences (2000)
Introducing Dynamic Ranking on Web Pages Based on Multiple Ontology Supported Domains
Debajyoti Mukhopadhyay 1,4, Anirban Kundu 2,4, and Sukanta Sinha 3,4
1 Calcutta Business School, D.H. Road, Bishnupur 743503, India
2 Netaji Subhash Engineering College, West Bengal 700152, India
3 Tata Consultancy Services, Whitefield Rd, Bangalore 560066, India
4 WIDiCoReL, Green Tower C-9/1, Golf Green, Kolkata 700095, India
{debajyoti.mukhopadhyay,anik76in,sukantasinha2003}@gmail.com
Abstract. Search Engine ensures efficient Web-page ranking and retrieving. Page ranking is typically used for displaying the Web-pages at client-side. We are going to introduce a data structural model for retrieval of the searched Webpages. We propose two algorithms in this paper. The first algorithm constructs the Index Based Acyclic Graph generated by multiple ontologies supported crawling and the second algorithm is for calculation of ranking of the selected Web-pages from Index Based Acyclic Graph. Keywords: Search Engine, Ontology, Relevant Page Graph Model, Index Based Acyclic Graph Model, Web-page Ranking.
1 Introduction
A Search Engine exhibits a list of Web-pages as the result of a search made by the users. In this scenario, the display order of the links to Web-pages is a very important factor. Different Search Engines use several ranking algorithms to rank the Web-pages properly with respect to the users’ point of view [3]. The Relevant Page Graph Model consists of multiple domain-specific Web-pages [2]. This model takes a huge amount of time to retrieve the data. Against this background, we incorporate a new Index Based Acyclic Graph Model which provides faster access to Web-pages for the users. This paper covers the basic idea of searching Web-pages from the Index Based Acyclic Graph and also provides the order of selected Web-pages at the user end.
2 Existing Model of Relevant Page Graph Model
In this section, the Relevant Page Graph (RPaG) is described. Every Crawler [5] needs some seed URLs to retrieve Web-pages from the World Wide Web (WWW). All Ontologies [1], Weight Tables and Syntables [4, 6] are needed for the retrieval of relevant Web-pages. RPaG is generated considering only relevant Web-pages. In RPaG, each node contains Page Identifier (P_ID), Uniform Resource Locator (URL), four Parent Page Identifiers (PP_IDs), Ontology relevance value (ONT_1_REL_VAL,
ONT_2_REL_VAL, ONT_3_REL_VAL) and Ontology relevance flag (ONT_1_F, ONT_2_F and ONT_3_F) fields. A sample RPaG is shown in Fig. 1. Each node in this figure of RPaG shows four fields, i.e., Web-page URL, ONT_1_REL_VAL, ONT_2_REL_VAL and ONT_3_REL_VAL. Here, an “Ontology Relevance Value” field contains the calculated relevance value if it exceeds the “Relevance Limit Value” of the respective domain. Otherwise, the field contains “Zero (0)”.
Fig. 1. Arbitrary example of a Relevant Page Graph (RPaG)
Definition 1. Weight Table - This table contains two columns; the first column denotes Ontology terms and the second column denotes the weight value of each Ontology term. The weight value must be in the interval [0,1].
Definition 2. Syntable - This table contains two columns; the first column denotes Ontology terms and the second column denotes the synonyms of each ontology term. If more than one synonym exists for a particular ontology term, they are kept using a comma (,) separator.
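Purely as an illustration (the terms, weights and synonyms below are invented examples, not taken from the paper), the two tables can be held as simple dictionaries:

```python
# Hypothetical weight table: ontology term -> weight in [0, 1]
weight_table = {
    "cricket": 0.9,
    "batsman": 0.7,
    "stadium": 0.4,
}

# Hypothetical syntable: ontology term -> comma-separated synonyms
syntable = {
    "batsman": "batter,striker",
    "stadium": "ground,arena",
}

def synonyms(term):
    """Return the list of synonyms recorded for an ontology term."""
    return syntable[term].split(",") if term in syntable else []

print(weight_table["cricket"], synonyms("batsman"))
```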
3 Proposed Approach with Analytical Study
In our approach, we construct the IBAG from the RPaG and then search the Web-pages from the IBAG for a given “Search String”. A search string is given as input on the Graphical User Interface (GUI); as a result, the corresponding Web-page URLs are shown according to the ranking mechanism followed.
3.1 Index Based Acyclic Graph Model
In this section, the Index Based Acyclic Graph (IBAG) is described. A connected acyclic graph is known as a tree. In Fig. 2, a sample IBAG is shown. It is generated by our prescribed algorithm, which is described in Section 3.2. RPaG pages are related to some Ontologies, and the IBAG generated from this specific RPaG is also related to the same Ontologies. Each node in the figure (refer to Fig. 2) of the IBAG contains Page Identifier (P_ID), Uniform Resource Locator (URL), Parent Page Identifier (PP_ID), Mean Relevance value (MEAN_REL_VAL) and Ontology link (ONT_1_L, ONT_2_L, ONT_3_L) fields. In each level, all the Web-pages’ “Mean Relevance Values” are kept in sorted order, and the indexes which track the pages related to each domain are also stored. In Fig. 2, ‘X’ means that the ontology link does not currently exist. The calculation of MEAN_REL_VAL is described in Method 1.1 of Section 3.2. Using the “Maximum Mean Relevance Span Value” (α), “Minimum Mean Relevance Span Value” (β) and “Number of Mean Relevance Span Levels” (n) we calculate the Mean Gap Factor ρ = (α − β) / n. We then define ranges β to β+ρ, β+ρ to β+2ρ, β+2ρ to β+3ρ, and so on.
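A minimal sketch of how the span levels could be derived from α, β and n (the function and variable names are our own, not the paper's):

```python
def span_levels(alpha, beta, n):
    """Split [beta, alpha] into n equal mean-relevance span levels."""
    rho = (alpha - beta) / n          # Mean Gap Factor
    return [(beta + i * rho, beta + (i + 1) * rho) for i in range(n)]

def level_of(mean_rel_val, alpha, beta, n):
    """Index of the span level a page's MEAN_REL_VAL falls into."""
    rho = (alpha - beta) / n
    return min(int((mean_rel_val - beta) / rho), n - 1)

print(span_levels(1.0, 0.0, 4))       # [(0.0, 0.25), (0.25, 0.5), ...]
print(level_of(0.62, 1.0, 0.0, 4))    # 2
```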
Fig. 2. Index Based Acyclic Graph (IBAG)
3.2 Construction of IBAG from RPaG
In this section, the design of an algorithm is discussed. It generates the IBAG from the RPaG. Different methods are shown for a better understanding of the algorithm.

Algorithm 1. Construction of IBAG
INPUT: Relevant Page Graph (RPaG) constructed from original crawling, Number of Mean Relevance Span Levels, Maximum Mean Relevance Span and Minimum Mean Relevance Span
OUTPUT: Index Based Acyclic Graph (IBAG)
Step 1: Take the Relevant Page Graph (RPaG) constructed from original crawling, the Number of Mean Relevance Span Levels, the Maximum Mean Relevance Span and the Minimum Mean Relevance Span from the user, and generate one Dummy Page for each Mean Relevance Span Level
Step 2: Take one Page (P) from RPaG, call Cal_Mean_Rel_Val(Page P) and find its Mean Relevance Span Level
Step 3: If this Mean Relevance Span Level contains only a Dummy Page, then replace the Dummy Page and go to Step 4; otherwise go to Step 5
Step 4: For each Supported Ontology
          Set Ontology Index Field of that Level = P_ID of Page P
        End Loop
        Go to Step 6
Step 5: Insert Page (P) in IBAG as follows:
          Call Find_Location(Incomplete IBAG, Page P)
          Call Find_Parent(RPaG, Incomplete IBAG, Page P)
          Call Set_Link(RPaG, Incomplete IBAG, Page P)
Step 6: Go to Step 2 until all pages in RPaG have been traversed
Step 7: End

Method 1.1: Cal_Mean_Rel_Val
Cal_Mean_Rel_Val (Page P)
  MEAN_REL_VAL := Σ(Relevance Value for each supported Ontology) / Number of supported Ontologies
  Return MEAN_REL_VAL
END
Method 1.2: Find_Location
Find_Location (Incomplete IBAG, Page P)
  Find the Location such that every Left Side Page's Mean Relevance Value is greater than Page P's Mean Relevance Value and every Right Side Page's Mean Relevance Value is less than Page P's Mean Relevance Value, and return the Location.
END

Method 1.3: Find_Parent
Find_Parent (RPaG, Incomplete IBAG, Page P)
  If more than one parent exists in RPaG Then
    For each Parent Page
      Call Cal_Mean_Rel_Val(Parent Page of Page P in RPaG)
    End Loop
    Take the Page with maximum MEAN_REL_VAL among those Parent Pages in RPaG as the Parent of Page P in IBAG
  End If
  If Page P's Location is the left-most position Then
    For each left side page in the parent level of IBAG of the right side Parent Page of Page P
      If a parent of P in RPaG is found Then
        Add Page P as a Child of that Parent Page in IBAG and Return;
      End If
    End Loop
    Add Page P as a Child of the Right Side Page Parent in IBAG
  Else If Page P's Location is the right-most position Then
    For each right side page in the parent level of IBAG of the left side Parent Page of Page P
      If a parent of P in RPaG is found Then
        Add Page P as a Child of that Parent Page in IBAG and Return;
      End If
    End Loop
    Add Page P as a Child of the Left Side Page Parent in IBAG
  Else If Left Side Page Parent of Page P in IBAG = Parent of Page P in RPaG Then
    Add Page P as a Child of the Left Side Page Parent in IBAG
  Else If Right Side Page Parent of Page P in IBAG = Parent of Page P in RPaG Then
    Add Page P as a Child of the Right Side Page Parent in IBAG
  Else If Left Side Page Parent of Page P in IBAG != Right Side Page Parent of Page P in IBAG Then
    Find the ‘Parent Page of P in RPaG’ between those two Parents in IBAG
    If Found Then
      Add Page P as a Child of that Parent Page in IBAG
    Else
      Add Page P as a Child of the Left Side Page Parent in IBAG
    End If
  Else
    Add Page P as a Child of the Left Side Page Parent in IBAG
  End If
  Return;
END

Method 1.4: Set_Link
Set_Link (RPaG, Incomplete IBAG, Page P)
  For each Supported Ontology
    Check the Left Side Pages' Ontology Link Field until a Link is found, and then
    If the Link came from the Index Then
      Set Page P's Ontology Link Field = Ontology Index Field of that Level, and Ontology Index Field of that Level = P_ID of Page P
    Else
      Set Ontology Link Field of Page P in IBAG = Ontology Link Field of the Left Side Tracked Page in IBAG, and Ontology Link Field of the Left Side Tracked Page in IBAG = P_ID of Page P
    End If
  End Loop
END

3.3 Procedure for Web-Page Selection and Its Related Dynamic Ranking
In this section we describe an algorithm which selects Web-pages from the IBAG for the given Relevance Range and the Ontologies selected at the user side. Finally, Web-page URLs are shown based on their calculated rank.

Algorithm 2. Web-page Selection
INPUT: Relevance Range, Ontology Flags, Search String, Index Based Acyclic Graph (IBAG)
OUTPUT: Web Pages according to the Search String
Step 1: Initially take one Search String and the Index Based Acyclic Graph (IBAG)
Step 2: Parse the input Search String and find ontology terms. If no ontology term exists, then exit
Step 3: Select all Web-pages according to their Range and the selected Ontologies
Step 4: Call Cal_Rank(Input String Ontology Terms, Selected Web Pages)
Step 5: Display Web-pages according to their Rank
Step 6: End
Method 2.1: Cal_Rank
Cal_Rank (Input String Ontology Terms, Selected Web Pages)
  For each Web Page
    For each Input String Ontology Term
      RANK = RANK + (Number of occurrences of the Ontology Term in the Web-page) * (Weight Value of the Ontology Term);
    End Loop
    Set the RANK Value of the Web Page and then reset RANK = 0;
  End Loop
END
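A compact Python rendering of Method 2.1 might look as follows (the page representation and the weight table passed in are illustrative assumptions; the paper does not prescribe these data structures):

```python
def cal_rank(query_terms, selected_pages, weight_table):
    """Rank pages by weighted occurrence counts of query ontology terms.

    selected_pages: list of (url, text) tuples; weight_table: term -> weight."""
    ranked = []
    for url, text in selected_pages:
        rank = 0.0
        for term in query_terms:
            rank += text.lower().count(term.lower()) * weight_table.get(term, 0.0)
        ranked.append((url, rank))
    # Display order: highest calculated rank first
    return sorted(ranked, key=lambda item: item[1], reverse=True)

pages = [("http://example.org/a", "cricket cricket stadium"),
         ("http://example.org/b", "stadium arena")]
print(cal_rank(["cricket", "stadium"], pages, {"cricket": 0.9, "stadium": 0.4}))
```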
4 Conclusion
In this paper, a prototype of a Multiple Ontology supported Web Search Engine is shown. It retrieves Web-pages from the Index Based Acyclic Graph model. This prototype produces faster results and is highly scalable, and the ranking algorithm generates the display order of the Web-page URLs.
References
1. Heflin, J., Hendler, J.: Dynamic Ontologies on the Web. Department of Computer Science, University of Maryland, College Park, MD 20742
2. Mukhopadhyay, D., Sinha, S.: A New Approach to Design Graph Based Search Engine for Multiple Domains Using Different Ontologies. In: 11th International Conference on Information Technology, ICIT 2008 Proceedings, Bhubaneswar, India. IEEE Computer Society Press, California (2008)
3. Kundu, A., Dutta, R., Mukhopadhyay, D.: An Alternate Way to Rank Hyper-linked Webpages. In: 9th International Conference on Information Technology, ICIT 2006 Proceedings, Bhubaneswar, India. IEEE Computer Society Press, California (2006)
4. WordNet, http://en.wikipedia.org/wiki/WordNet
5. Mukhopadhyay, D., Biswas, A., Sinha, S.: A New Approach to Design Domain Specific Ontology Based Web Crawler. In: 10th International Conference on Information Technology, ICIT 2007 Proceedings, Bhubaneswar, India. IEEE Computer Society Press, California (2007)
6. WordNet, http://en.wikipedia.org/wiki/George_A._Miller
Multi-criteria Service Selection with Optimal Stopping in Dynamic Service-Oriented Systems
Oliver Skroch
Business Informatics and Systems Engineering, Universität Augsburg, Universitätsstr. 16, D-86159 Augsburg, Germany
[email protected]
Abstract. Advanced, service oriented systems can dynamically re-compose service invocations at run time. For this purpose they may opportunistically select services that are available on the Internet or on open platforms in general. In this paper, a multi criteria selection approach is developed on two optimization goals, quality of service and cost of service invocation. Also, stopping theory is applied to optimize the multi criteria service re-composition. An efficient algorithm is presented which is run time capable since it decides within a predefined time frame. The algorithm yields the best possible lower probability bound for taking an optimal choice. The applied theory is confirmed in simulation experiments. Keywords: Service Oriented Software, Software Architecture, Distributed Systems, Run Time Re-configuration, Optimal Stopping, Multi Criteria Optimization.
1 Introduction
In distributed, service oriented system architectures [1,2,3,4] components implement a number of services through well defined interfaces. Selecting suitable components and services is a decisive step in the development of service oriented software. An adequate selection of components makes up a larger service oriented application, with services that jointly perform all operations to cover a set of functional requirements. But requirements are not restricted to functionality and also include non-functional features such as reliability, performance, etc. [5]. Service selection on functional aspects takes place initially at design time, and guides static software architectures. The approach presented here does not focus on functional selection but aims at the improvement of non-functional quality properties in dynamic systems that can be re-composed while preserving their functional behavior. Such re-composition techniques are used in many engineering disciplines. One example in software engineering is Voice over IP (VoIP) clients that select their codec from a number of predefined options according to actual data transfer rates measured at run time. For such non-functional improvements the system needs service alternatives that show different quality characteristics while they are function-wise indistinguishable. In closed systems the run time alternatives can be predefined at design
time, such as different codecs in the VoIP example. But useful service alternatives may be unknown at design time and may become available only later at run time. Open systems that can opportunistically perform parts of their functionality also through externally hosted components may then profit at run time by finding and using an external functional alternative that is superior in non-functional terms (Fig. 1).

Fig. 1. Open systems with dynamic re-composition capabilities can opportunistically invoke services from external sources. The figure contrasts: Closed System / Static Composition – endemic monolith; Closed System / Dynamic Re-composition – service invocation with predefined options (parameters); Open System / Static Composition – development with explicit dependencies only (low integration efforts); Open System / Dynamic Re-composition – opportunistic service invocation with unknown external options.
Improving large and distributed software systems is challenging, because several quality requirements have to be dealt with. Also publicly available services from open platforms increasingly come with monetary costs, for example a charge per service invocation. The multi criteria optimization approach developed in this paper considers both quality of service and cost of service invocation for the selection of service options. The rest of the paper is structured as follows. Section 2 explains the concept with service oriented systems architecture, run time service re-composition with search on open platforms (the Internet), multi criteria optimization, and an optimal stopping strategy from mathematical statistics. Section 3 applies the concept to a practical scenario, develops the multi criteria utility function, algorithmically applies the optimal stopping solution with service quality and service invocation costs, and presents results from simulation experiments. Section 4 presents related work and section 5 briefly summarizes and concludes the paper.
2 Concept
2.1 Component Based and Service Oriented System Architectures
The engineering paradigm of component and service orientation provides basic means to construct flexible and distributed systems across the Internet. Fig. 2 shows a general model of component based and service oriented system architectures and how services are provided and requested through service interfaces. In these system architectures, components offer and demand services through a number of well-defined interfaces. Two principles guide the process of designing a system: composition and delegation. According to the composition principle, a provided interface can be composed with a requested interface from another component, to combine into aggregated components. According to the delegation principle, interfaces can be delegated to other interfaces of the same type, to hand over processing. See [1,6,7].
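A minimal sketch of the two design principles (our own illustration; the class and method names are invented, not taken from the paper or the cited references):

```python
class ExchangeRateService:
    """Component supplying a service through a provided interface."""
    def latest_rate(self, pair: str) -> float:
        return 1.08  # stub value

class PricingComponent:
    """Component requesting the service; composition wires provided to requested."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider   # composition
    def price_in_eur(self, usd: float) -> float:
        return usd / self.rate_provider.latest_rate("EUR/USD")

class LoggingRateService:
    """Delegation: an interface handed over to another interface of the same type."""
    def __init__(self, inner):
        self.inner = inner
    def latest_rate(self, pair: str) -> float:
        print("delegating", pair)
        return self.inner.latest_rate(pair)

pricing = PricingComponent(LoggingRateService(ExchangeRateService()))
print(pricing.price_in_eur(10.0))
```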
Fig. 2. Component based and service oriented system architecture example, based on [6]. System components provide and request services through service interfaces.
Composing a larger distributed system implies matching supplied interfaces with requested interfaces. Supplied services are available from other components, which could even be located externally on the Internet or on other open platforms. The goal is to invoke the best possible service option in terms of highest quality and lowest cost.
2.2 Run Time Service Re-composition on Open Platforms
Run time adaptation of service oriented systems aims at dynamic service re-compositions in open system architectures. Service options that are provided on open platforms may fit function-wise but can still differ in other ways, in particular in quality and cost. It is possible to improve a given service composition by selecting a provided service alternative that is better than the internally provided service (assuming a non-empty set of options, which can be guaranteed with the built-in internal option as fallback if no service is found).

Fig. 3. Possible matching schemes from the m internal compositions in the service oriented system to the n externally provided service options on the open platform (the Internet): choice (1 composition : 1 option), search (1 composition : n options), allocation (m compositions : 1 option), screening (m compositions : n options).
Fig. 3 proposes a classification of the possible schemes for matching n supplied service options in m existing compositions: screening, search, allocation and choice. Choice compares one composition – a requested service with its provided service – against the alternative composition of this same requested service with another provided service alternative. It is the fundamental decision and also the basic consideration for the other three matching schemes. Allocation determines the suitability of one particular service option in many or all compositions. It can be seen as a repetition of
choice with one option in each of the compositions. Search checks many supplied service options in one particular composition. It can be seen as a repetition of choice with each of the options in one composition. Screening compares all compositions with all service options. It is the most general approach and can be seen as search with allocation. Before a requested service interface calls its provided interface at run time, the system looks for externally supplied options and decides whether to re-compose this service call. This implies matching operations from the search scheme and excludes the allocation and screening schemes. The related computing can be done by the system itself, for example in a sub-system that orchestrates and monitors re-compositions. Validated approaches to such ‘self-adaptable’ systems reach back at least to the Viable Systems Model in the 1960s and 1970s [8].
2.3 Multi Criteria Optimization
The general multi criteria optimization problem is to optimize a vector of objectives, where each objective contains its decision variables or criteria, under additional equality and/or inequality constraints. In contrast to single criterion optimization, a multi criteria problem typically has no unique global solution; instead there will usually be a set of points that all comply with the predetermined definition of the optimal solution [9,10]. Hence a concept to define one optimal solution is required. The predominant concept in the literature is Pareto optimality; two other widely known concepts are efficiency and compromise. Three major categories of multi criteria optimization methods can be distinguished as to their articulation of preferences: a priori, a posteriori, or none. With a priori preferences (known in advance), as in this paper’s approach the preferences for the best quality of service and the lowest cost of service invocation, the most common approach is the weighted sum method [9]:
U = Σ_{i=1}^{l} w_i F_i(v)    (1)
where U is the utility function, Fi(v) are the objectives (l = 2 with two objectives), v is a vector of decision variables or criteria, and w is a vector of weights usually set such that
Σ_{i=1}^{k} w_i = 1 and w > 0. If all w_i are positive, then minimizing (1) is sufficient for Pareto optimality [9].
2.4 1/e Law of Optimal Stopping
Optimal stopping problems can be understood from the example of n options that are discovered in random sequence, where the intention is to choose a best option. When discovering an option, the final decision has to be made to either accept or reject it, and recalls are inadmissible. At first glance it seems that in this situation, regardless of the decision method, the probability of making a right decision approaches zero when the number of available options grows. But this is not true; instead, the following optimal decision strategy is known. Let F(z) be a distribution function on the real time interval [0, t], let Z1, Z2, … be i.i.d. random variables with the continuous distribution function F, where Zk is the arrival time of option k. Let N be a non-negative integer random variable independent of all Zk, so that N represents the unknown total number of options. With N = n, each arrival order ⟨1⟩, ⟨2⟩, ..., ⟨n⟩ of the options is equally likely. Now let the waiting time x be defined as the time up to which all incoming options are observed without accepting, while the value of the leading option is remembered. For any distribution g with P(N > 0) > 0 there exists an optimal waiting time x* maximizing the success probability if we accept the first leading candidate arriving after time x, if there is one, and refuse all candidates if there is none. Also, for all ε > 0 there is an integer m such that N ≥ m implies

x* ∈ [ (1/e)_F − ε ; (1/e)_F ],  where  (1/e)_F = inf { x | F(x) = 1/e }    (2)
(2) is the only waiting time policy with the best possible success probability ≥ 1/e and this is valid for any distribution of N. This is the so-called 1/e law of optimal stopping from [11].
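As a quick empirical illustration of the law (our own sketch, not part of the paper), the following Monte Carlo simulation draws uniformly distributed arrival times on [0, 1], waits until time 1/e, and then accepts the first relatively best option; the measured success rate stays near or above 1/e:

```python
import math
import random

def one_e_policy_success(n, trials=20_000, wait=1.0 / math.e):
    """Fraction of runs in which the overall best of n options is accepted."""
    wins = 0
    for _ in range(trials):
        arrivals = sorted((random.random(), rank) for rank in range(n))  # rank 0 = best
        best_seen = None   # best rank observed during the waiting phase
        chosen = None
        for time, rank in arrivals:
            if time <= wait:
                best_seen = rank if best_seen is None else min(best_seen, rank)
            elif best_seen is None or rank < best_seen:
                chosen = rank  # first relatively best option after the waiting time
                break
        wins += (chosen == 0)
    return wins / trials

for n in (5, 20, 100):
    print(n, round(one_e_policy_success(n), 3), "vs 1/e =", round(1 / math.e, 3))
```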
3 Application
3.1 Application Scenario
Software re-composition at run time is performance critical, in particular when searching for options on open platforms. Searching therefore needs to be restricted to make run time adaptation with uncontrolled external options from the Internet an applicable scenario in practice. To set one practical scenario that can realistically be applied, this approach restricts the search by limiting the run time delay that arises from searching to a predefined and fixed maximum time frame. Publicly available services from open software platforms can not be controlled by the re-composing system either. One important consequence is that any externally provided service alternative can be unavailable or changed in the next moment. Therefore it is not possible to memorize an external option and, after further unsuccessful search, get back and use this service option. This implies that the final decision whether to invoke a certain external service must be made straight away. Hence the conditions in the application scenario are:
− the decision criteria for choosing external functional service options are quality of service and service invocation cost,
− the maximum time to be spent for the decision process is limited in advance,
− any decision to accept and use an external service option, or to finally reject it without recall, has to be taken straight away,
− and the number of suitable service options is unknown or cannot be predefined.
Under these conditions, improvements are possible within the described concept, and optimal stopping rules are applicable as exact algorithmic optimizations on multiple decision criteria. The following application of stopping theory with multiple optimization goals is guided by this application scenario and optimizes the re-composition process. It determines the best point to stop the search for further alternative options, yielding the best possible lower probability bound for making an optimal choice.
3.2 Multi Criteria Utility Function
In the multi criteria approach it is assumed that the quality of service q and the cost of service invocation c are the optimization objectives. From (1) the full utility function can be obtained with

U = q + c = w1 F1(v) + w2 F2(v)    (3)

where the decision variables v ∈ E^j are j-dimensional vectors, the criteria functions F_i(v): E^j → E^1, the weights w_i > 0, and w1 = w2 to simplify the demonstration of the approach.

Quality of service. In many application areas, it can be simple to define and compare a plain quality objective F1(v) for an opportunistic service invocation. For example with Web services offering currency exchange rates, more recent rates are better, and rates with greater accuracy through more digits can be seen as better, too. The theoretical and operational complexity of the common quality function F1(v) is very high though. Universal quality of service definitions and system quality predictions are a challenging area of currently ongoing research. A concrete suggestion that can be operationalized is the generic quality framework from [5], where Encapsulated Evaluation Models (EEM) are attached to each component for each quality attribute, and operational usage profiles, composition algorithms and evaluation algorithms complement the framework. Defining an EEM as Q_v(Y), (1) can again be applied to formally yield

F1(v) = Σ_{v=1}^{6} w_v^q Q_v(Y)    (4)
where the service type Y is assumed to be constant for each decision (since functionally indistinguishable services are considered), and index v = 1, 2, ..., 6 denotes the quality attributes in the decision vector according to [5], where the six established attributes are reliability, availability, safety, security, performance and timeliness. The weights w_v^q > 0, and for ease of demonstration w_1^q = w_2^q = ... = w_6^q.

Cost of service invocation. The cost function F2(v) can often be reduced to a plain cost criterion; for example, in this application scenario a one-off charge is assumed as F2(v) = rC, where C is a fixed amount in units charged for one service invocation and r is a suitable unit conversion factor. But also with F2(v), the operational complexity grows with the number of elements in the decision variable vector, which means with the pricing models and the effective tariffing rules that are applied. It is known from other domains that a wide range of models from flat rates to highly sophisticated, multi dimensional rating and discounting can be relevant. To describe the general cost objective, Encapsulated Cost Models C_v(Y) can be defined according to the EEM from [5]. Then it is possible to once more apply (1) and define a formal cost function

F2(v) = Σ_v w_v^c C_v(Y)    (5)
where the component type Y is again constant (for the same reasons as mentioned with the quality function) and index v now denotes the cost attributes in the vector, such as for example one-off charge, time of invocation, etc. Again, the weights w_v^c > 0 and for ease of demonstration w_1^c = w_2^c = .... Quality of service and service invocation cost can thus be calculated as described, so that the values of U for a service invocation alternative are known to an orchestration and re-composition sub-system, or can be identified at least on an ordinal scale.
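A rough sketch of how such a utility value could be computed (our own assumptions: equal weights, six quality attribute scores already scaled so that lower is better, and a simple one-off invocation charge; the EEM framework of [5] is far richer than this):

```python
def utility(quality_scores, invocation_charge, rate=1.0,
            w_quality=0.5, w_cost=0.5):
    """U = w1*F1(v) + w2*F2(v), lower is better.

    quality_scores: the six EEM attribute scores (reliability, availability,
    safety, security, performance, timeliness), assumed here to be scaled so
    that lower values are better; invocation_charge: one-off charge C;
    rate: unit conversion factor r."""
    f1 = sum(quality_scores) / len(quality_scores)   # equal attribute weights
    f2 = rate * invocation_charge                    # F2(v) = r * C
    return w_quality * f1 + w_cost * f2

# Candidate with better (lower) quality scores but a higher invocation fee:
print(utility([0.2, 0.1, 0.3, 0.2, 0.4, 0.3], invocation_charge=0.5))
print(utility([0.6, 0.5, 0.7, 0.4, 0.8, 0.6], invocation_charge=0.1))
```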
3.3 Optimal Stopping
To improve the dynamic re-composition process, the optimal point at which to stop the search for further service alternatives can be determined such that U is minimized. The stopping rule from (2) is applicable. We assume x = F(z), z ∈ [0; t] with a continuous time scale x between 0 and 1, and Xk = F(Zk) uniform on [0; 1]. The stopping rule will make the optimal choice if the min-U option ⟨1⟩ arrives in (x; 1] before all other options arriving in (x; 1] which are better than the best of those which arrived in [0; x]. Among the k+1 best options, the option ⟨k+1⟩ arrives in [0; x] and the k best ones in (x; 1] with probability x(1−x)^k. Since ⟨1⟩ arrives before ⟨2⟩, ..., ⟨k⟩ with probability 1/k:
P_n(x) = x Σ_{k=1}^{n} (1/k)(1 − x)^k = x Σ_{k=1}^{n−1} (1/k)(1 − x)^k + (1/n)(1 − x)^n    (6)
with n ≥ 2. The sum term of (6) contains the Taylor expansion of −ln(x), and as n → ∞ one obtains Pn(x) → −x ln(x), which has a unique maximum at x = 1/e. The well-known value 1/e ≈ 0.368 is the asymptotically best possible lower bound. See also [11,12]. An algorithm therefore identifies externally provided service options suitable for matching with the internal composition’s requested service, calculates U and rejects the options, while memorizing the utility value of the best option found so far. With a uniform distribution of service discovery events, as soon as a proportion 1/e of the predefined time frame has passed (i.e., after time t/e), the next leading service option is chosen, if there is one within the predefined time period. Otherwise, if no option that is better than all previous ones shows up before the time frame expires, no alternative choice is made and the original internal composition remains in place. Since t is predefined in this scenario, t/e is constant. The pseudo code fragment below for the optimal stopping algorithm returns a handle to the selected service (line 15) or the null handle if no service is found (line 17). The stopping algorithm is time efficient even in the worst case, because the maximum length of the time frame t is predefined and will cut off the search. Space complexity for the stopping rule is also constant even in the worst case, because no more than one value (the best so far) is stored at any time.
Pseudo code fragment for the optimal stopping algorithm implementing the 1/e law
01 service_handle Select_Service( service_queue SQ ) {
02   // clear utilities and set cut-off times
03   u_ext = u_ext_min = 0;
04   t_opt = _Now() + ( t / 2.71828 );
05   t_end = _Now() + t;
06   // poll services, reject and keep U
07   while( _Now() <= t_opt ) {
08     u_ext = Calculate_u_ext( Poll( SQ ) );
09     if( u_ext < u_ext_min ) u_ext_min = u_ext;
10   }
11   // poll services, choose best
12   while( _Now() <= t_end ) {
13     this_service = Poll( SQ );
14     if( Calculate_u_ext( this_service ) < u_ext_min )
15       exit( this_service );  // found one
16   }
17   exit( null );  // found nothing
18 }
In the application scenario, the algorithm will statistically outperform an unsystematic service re-composition already with more than three alternative service options (since n⁻¹ < e⁻¹ for all n > 3).
3.4 Simulation Experiments
Experiments were conducted with simulated generic Web services offering different quality of services at a fee per service invocation. Table 1 shows results from the simulation.

Table 1. Simulation results for the application scenario

experiment    avg. utility achieved   best invoked   fallback invoked   avg. services searched   avg. run time (s)
1-500               12.282                 100             181                 39.6                  0.1494
501-1000            12.562                 104             186                 40.2                  0.1512
1001-1500           11.556                 106             168                 39.1                  0.1449
1501-2000           11.310                 111             164                 38.3                  0.1503
2001-2500           11.548                 103             166                 39.7                  0.1485
all                 11.8516                104.8           173.0               39.38                 0.14886
In the simulation experiments, uniformly distributed utility values U between 0 (best) and 99 (worst) were randomly assigned to the compositions with external service options. The internal composition was given an assumed fixed utility value of U = 30. This simplifies the simulation without loss of generality, and is also the basis for measuring and calculating the improvement values. The proposed optimization method works with any utility measurement function that produces at least ordinal results for the matching operation. 2500 experiments were conducted in five runs with 500 run time re-compositions each, and 100 different Web service options were available for each invocation. The options were function wise indistinguishable but had distinct utility values. The maximum run time delay t was set to 0.200 seconds.
In the run time re-composition experiments with multi criteria optimal stopping, the best available service was actually selected in 524 experiments, or 21 percent of the service invocations related to the number of options actually matched. The ratio for selecting the best available service related to the unknown number of service options available within the time frame was measured as 0.3706. This differs by only 0.7 percent from the theory, and the small deviation can be explained by minimal timing imprecision in the simulation. On the bottom line, the dynamic system performed 18.148 utility points better than the assumed static system with U = 30. The total average utility improvement was 11.852 utility points, or 39.3 percent. In 865 experiments the internal service remained in use. On average 39.4 services were evaluated, causing an average run time delay of 148.9 milliseconds. The fastest decision was made after 77.1 milliseconds and also returned the best available option.
4 Related Work Prediction and optimization of service oriented system quality, especially performance, is addressed in [13]. Algorithms for service selection under quality of service constraints are proposed and simulated in [14], and a general approach to improve software architectures with multi criteria optimization strategies is described in [15]. Two strategies for Web service selection in dynamic and failure prone environments are derived in [16]. The use of a knowledge-based network planning heuristic for the automatic composition of publicly available Web services is proposed in [17]. Already in [18] the modeling of dynamic service oriented architectures is addressed, and recently [19] reasons about a self-organized computing framework. In [20] a general approach is proposed to constantly monitor dynamically changing component architectures; in [21], dynamic monitoring of a system with its components distributed over different hardware is suggested. [22] proposes a two-phase strategy to support run time decisions in distributed systems. [23] describes how components can be combined to fulfill requirements on minimal costs. Actual application areas of dynamic software architectures include dynamic resource selection for service composition in grid computing [24], or evaluation of faster components against cheaper ones in digital signal processing [25]. An industry pilot for dynamic software architectures is described and its benefits over a static approach are observed in [26]. Stopping theory and its possible benefits seemingly have received little if any attention in literature on software quality optimization or dynamic, distributed systems yet. But stopping problems are a well known research topic in mathematical statistics. The problem class discussed in this paper has been described already in [12] and the so-called 1/e stopping law used in this paper’s application scenario is provided in [11]. Single criteria stopping theory was applied already in the author’s previous research on service oriented systems architectures [27], further related work in the area could not be found. Other areas discussing stopping rules in computer science can be found in evolutionary algorithm research [28,29,30].
5 Summary and Conclusion Multi criteria optimization has been used in combination with an optimal stopping strategy to improve distributed and dynamic, service oriented systems. The goals were to choose a best quality service at lowest invocation cost, and then opportunistically invoke the chosen external service instead of a less qualified built-in option. A realistic application scenario with a predefined maximum time frame for the service reselection was assumed. The concept was applied to the scenario and an algorithm was developed that decides within the predefined time frame and yields the best possible lower probability bound for taking the optimal choice. Simulation experiments confirmed the developed approach. The increasing use of the Internet as open platform for large and distributed, service oriented systems and mashups could drive the application of the proposed approach. Important areas of possible application examples already include grid computing, distributed multimedia, mobile computing, and self-healing software. The approach defined a multi criteria utility function for service re-composition. But there are challenging and currently unsolved research questions related to the prediction of software quality. Consequently, operational usage profiles or composition and evaluation algorithms are not explicitly considered in the quality of service objective. The simulated approach assumed a uniform distribution of service discovery events. While this is no loss of generality, future research could aim at empirically identifying actual distribution functions. It could also be of interest for upcoming research to compare the optimal stopping approach not only to a static system but to dynamic systems using other selection strategies. Finally it might be possible in further research to adapt the proposed concept with service re-compositions in distributed systems that dynamically self-adjust to changing functional requirements.
References 1. Atkinson, C., Bunse, C., Groß, H.-G., Kühne, T.: Towards a General Component Model for Web-Based Applications. Annals of Software Engineering 1-4, 35–69 (2002) 2. Bernstein, P.A., Haas, L.M.: Information Integration in the Enterprise. Communications of the ACM 9, 72–79 (2008) 3. Gamble, M.T., Gamble, R.: Monoliths to Mashups: Increasing Opportunistic Assets. IEEE Software 6, 71–79 (2008) 4. Schulte, R.W., Natis, Y.V.: Service Oriented Architectures, Part 1, SPA-401-068, Gartner Research, Stamford, USA (1996) 5. Grunske, L.: Early quality prediction of component-based systems - A generic framework. The Journal of Systems and Software 5, 678–686 (2007) 6. Shaw, M., Garlan, D.: Software Architecture: Perspectives on an Emerging Discipline. Prentice-Hall, Upper Saddle River (1996) 7. Szyperski, C., Gruntz, D., Murer, S.: Component Software: Beyond Object-Oriented Programming, 2nd edn. Addison-Wesley, London (2002) 8. Beer, S.: Brain of the firm, 2nd edn. Wiley, Chichester (1981) 9. Marler, R.T., Arora, J.S.: Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization 6, 369–395 (2004)
10. Figueira, J., Greco, S., Ehrgott, M. (eds.): Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, Boston (2005) 11. Bruss, F.T.: A unified approach to a class of best choice problems with an unknown number of options. The Annals of Probability 3, 882–889 (1984) 12. Lindley, D.V.: Dynamic programming and decision theory. Applied Statistics, 39–51 (1961) 13. Becker, S., Grunske, L., Mirandola, R., Overhage, S.: Performance Prediction of Component-Based Systems – A Survey from an Engineering Perspective. In: Reussner, R., Stafford, J.A., Szyperski, C. (eds.) Architecting Systems with Trustworthy Components. LNCS, vol. 3938, pp. 169–192. Springer, Heidelberg (2006) 14. Yu, T., Zhang, Y., Lin, K.-J.: Efficient Algorithms for Web Services Selection with Endto-End QoS Constraints. ACM Transactions on the Web 1, 1–26 (2007) 15. Grunske, L.: Identifying ‘Good’ Architectural Design Alternatives with Multi-Objective Optimization Strategies. In: Proceedings 28th International Conference on Software Engineering, Shanghai, China, May 20-28, pp. 849–852. ACM, New York (2006) 16. Hwang, S.-Y., Lim, E.-P., Lee, C.-H., Chen, C.-H.: Dynamic Web Service Selection for Reliable Web Service Composition. IEEE Transactions on Services Computing 2, 104–116 (2008) 17. Oh, S.-C., Lee, D.: Effective Web Service Composition in Diverse and Large-Scale Service Networks. IEEE Transactions on Services Computing 1, 15–32 (2008) 18. Allen, R., Douence, R., Garlan, D.: Specifying and Analyzing Dynamic Software Architectures. In: Astesiano, E. (ed.) FASE 1998. LNCS, vol. 1382, pp. 21–37. Springer, Heidelberg (1998) 19. Liu, L., Thanheiser, S., Schmeck, H.: A Reference Architecture for Self-organizing Service-Oriented Computing. In: Brinkschulte, U., Ungerer, T., Hochberger, C., Spallek, R.G. (eds.) ARCS 2008. LNCS, vol. 4934, pp. 205–219. Springer, Heidelberg (2008) 20. Muccini, H., Polini, A., Ricci, F., Bertolino, A.: Monitoring Architectural Properties in Dynamic Component-Based Systems. In: Schmidt, H.W., Crnković, I., Heineman, G.T., Stafford, J.A. (eds.) CBSE 2007. LNCS, vol. 4608, pp. 124–139. Springer, Heidelberg (2007) 21. Mikic-Rakic, M., Malek, S., Beckman, N., Medvidovíc, N.: A Tailorable Environment for Assessing the Quality of Deployment Architectures in Highly Distributed Systems. In: Emmerich, W., Wolf, A.L. (eds.) CD 2004. LNCS, vol. 3083, pp. 1–17. Springer, Heidelberg (2004) 22. Andreolini, M., Casolari, S., Colajanni, M.: Models and Framework for Supporting Runtime Decisions in Web-Based Systems. ACM Transactions on the Web 3, 17 (2008) 23. Cortellessa, V., Crnkovic, I., Marinelli, F., Potena, P.: Experimenting the Automated Selection of COTS Components Based on Cost and System Requirements. Universal Computer Science 8, 1228–1255 (2008) 24. Cheung, W.K., Liu, J., Tsang, K.H., Wong, R.K.: Dynamic Resource Selection for Service Composition in The Grid. In: Proceedings 3rd IEEE/WIC/ACM International Conference on Web Intelligence, Beijing, China, September 20-24, pp. 412–418. IEEE Computer Society, Los Alamitos (2004) 25. Bakshi, S., Gajski, D.D., Juan, H.-P.: Component Selection in Resource Shared and Pipelined DSP Applications. In: Proceedings European Design Automation Conference, Geneva, Switzerland, October 16-20, pp. 370–375. IEEE Computer Society, Los Alamitos (1996)
26. Frank, C., Holfelder, W., Jiang, D., Matlys, G., Pepper, P.: Dynamic Software Architectures for a ‘Sometimes Somewhere’ Telematics Concept. Technical Report No 2003-11, Technische Universität Berlin, Berlin, Germany (2003) 27. Skroch, O., Turowski, K.: Improving service selection in component-based architectures with optimal stopping. In: Proceedings 33rd Euromicro Conference on Software Engineering and Advanced Applications, Lübeck, Germany, August 27-31, pp. 39–46. IEEE Computer Society, Los Alamitos (2007) 28. Dimitrakakis, C., Savu-Krohn, C.: Cost-Minimising Strategies for Data Labelling: Optimal Stopping and Active Learning. In: Hartmann, S., Kern-Isberner, G. (eds.) FoIKS 2008. LNCS, vol. 4932, pp. 96–111. Springer, Heidelberg (2008) 29. Martí, L., García, J., Berlanga, A., Molina, J.M.: A Cumulative Evidential Stopping Criterion for Multiobjective Optimization Evolutionary Algorithms. In: Proceedings Genetic and Evolutionary Computation Conference, London, UK, July 7-11, pp. 2835–2842. ACM, New York (2007) 30. Polushina, T.V.: Estimating optimal stopping rules in the multiple best choice problem with minimal summarized rank via the Cross-Entropy method. In: Proceedings IEEE Congress on Evolutionary Computation, Trondheim, Norway, May 18-21, pp. 1668–1674. IEEE Computer Society, Los Alamitos (2009)
Template-Based Process Abstraction for Reusable Inter-organizational Applications in RESTful Architecture
Cheng Zhu 2, Hao Yu 2, Hongming Cai 1,2, and Boyi Xu 3
1 School of Software, Shanghai JiaoTong University, Shanghai, China
2 BIT Institute, University of Mannheim, Mannheim, Germany
3 Antai College of Economic & Management, Shanghai JiaoTong University, Shanghai, China
{zhucheng.de,hao.yu.de}@googlemail.com, {hmcai,byxu}@sjtu.edu.cn
Abstract. Currently there is rising interest in using the REST architecture to implement business processes. To avoid duplicate designs of similar processes, abstract business processes are used to support reusability. The modeling of abstract business processes based on RESTful architecture, however, has been ignored. This paper abstracts the similarities between isomorphic processes by introducing a template process in RESTful architecture. BPEL is extended by introducing the concept of resource. A case study is given to illustrate the effectiveness of the approach. This reuse mechanism has the potential to remedy the problem of duplicate designs in RESTful architecture, and can also help inter-organizational applications achieve high maintainability.
Keywords: RESTful Services, Abstract Business Process, Template-based Process, BPEL, Adaptive Workflow.
1 Introduction
The business process is one of the core concepts in enterprise information systems, and is generally accepted as an effective way to organize the resources of an enterprise, especially in a distributed, inter-organizational environment. To model and implement highly dynamic and scalable business processes in this type of environment, flexible abstraction methods and modeling approaches are necessary. Service Oriented Architecture (SOA) [1] is generally believed to be the foundation on which business processes are executed. In the SOA paradigm, both business services and business processes are mapped to IT services. Business Process Execution Language (BPEL) [2] is the standard service composition language for modeling and executing business processes. In many cases, there are many similarities between different business processes, even though they are used by different organizations and for different purposes. Thus, designing business processes often involves much duplicated work if these similarities are not taken into consideration. Abstracting and reusing existing business process models makes it possible for information systems not only to react quickly to changes in business processes, but also to reduce the cost of
modification and implementation. The concept of the BPEL Abstract Process is introduced in standard BPEL to resolve these problems. A BPEL Abstract Process defines only a partially specified process and hides some of the required concrete details, e.g. activities, information entities and virtual resources, so that the concrete BPEL can be implemented by filling in the absent content. There is rising interest in REST (Representational State Transfer) [3], which is regarded as a key success factor of the World Wide Web. RESTful architecture is based on the concept of a resource, which is an abstraction of an information entity and can be referenced with a global identifier (URI). A few papers that use REST principles to support business process modeling and execution have been published [4] [5] [6]. To fill the gap between REST and SOA, a few specifications have also appeared; e.g. WADL (Web Application Description Language) [7] and WSDL 2.0 can be used to describe web applications and also to describe RESTful services. Some papers also present different approaches to extend standard BPEL to support RESTful services [8] [9] [10]. The main problem, however, is that the modeling of abstract business processes based on RESTful architecture has been ignored. We propose designing template-based processes with extended BPEL in RESTful architecture to abstract the similarities between isomorphic processes, so that similar processes and resources with similar structures need not be redesigned. Advantages of this approach include:
• Reusability and high maintainability can be achieved, especially in a large enterprise information system based on the REST architecture with many similar processes and resources, so that duplicated design work can be saved.
• In the process template design phase, only the topological structure and the resource structure need to be taken into consideration. The semantics of a process are determined after the concrete resources are injected.
2 Related Work
In the workflow area there has been discussion for some time on adaptive workflow, which shares this paper's target of improving reusability and maintainability, but in a different way. For example, Aalst et al. [11] presented an approach to modify many workflows as a consequence of the modification of a workflow schema. Such an approach would improve the development, maintenance and usability of real-world workflow applications. Weske et al. [12] introduced an approach to enhance the flexibility of workflow management systems by providing the ability to change the structure of workflow instances dynamically. Geebelen et al. [10] presented a framework that allows the design of processes in a modular way based on reusable templates, and also introduced an approach to process templates based on parameter values. However, the abstraction of structural information of processes is not taken into consideration, and the whole approach cannot be used in a REST environment. There is now increasing interest in using the REST architecture style to support business process execution. For example, Kumaran et al. [5] presented a so-called RESTful BPM, which models business processes by an information-centric approach. A business process is viewed as the evolution of the states of a collection of business entities. Business process abstraction is not mentioned in the article, nor are
RESTful service description and composition. The client gets the next available options only at execution time and has no chance to know them beforehand. BPEL is widely accepted as the standard language to describe business process execution. BPEL can also be used to describe abstract processes, which share the same expressive power [2]. Through the use of explicit opaque tokens and omission, BPEL is capable of supporting abstract processes, but REST-based abstract processes cannot be described. Some work has been done to extend BPEL to support RESTful services. A “resource-oriented” BPEL was proposed, which is an attempt to model the internal state of the resources published by RESTful services; but this “resource-oriented” BPEL cannot invoke or compose external RESTful services [8]. In [9] BPEL is extended to support the composition of RESTful services natively. The extension is achieved in the following two aspects. Firstly, a process can publish resources dynamically by adding a new resource declaration element. Secondly, RESTful services can be invoked from BPEL by adding four activities corresponding to the standard HTTP methods. But these extensions are not able to describe the REST-based template process model because of the lack of expressive power at the abstract level. This paper builds upon this BPEL extension and introduces some new elements to support the REST-based template process.
3 Overview of Template-Based Process Abstraction
In RESTful architecture, business services interact with each other by operating on resource objects. The business process is regarded as a set of activities that use correlated resources separately. All these resource entities in the RESTful architecture are encapsulated as RESTful services. Thus, the activities of a business process can access resources by invoking operations on the resources. The isomorphic abstraction of business process models is based on the RESTful web services architecture, in which a business process can be regarded as the transformation of the resource states in a resource set. All the information entities and objects related to the process, described as resources, are regarded as a resource set. A resource set is hence the critical element to be abstracted for customizing business processes with different resources and for encapsulating the common structures of similar resources across business processes. Here, a template process is defined as a set of activities, in which an activity is defined as a series of invocations of different resources. The whole abstraction approach is summarized as a flowchart in Fig. 1. The similarities among isomorphic processes are abstracted first by analyzing the resources used by the processes in RESTful architecture. Then, we introduce a mechanism with which to abstract these similarities by designing a template process that contains the common structural information of the similar processes and is described by extended BPEL. In this phase, the analysis and abstraction of the similar processes are carried out by the process designer. The template process is described by BPEL, which is extended in this paper to support resource structures; the BPEL used to describe the template process is called template-BPEL. At compile time, the sets of concrete resources (with URLs) in these similar processes are injected into the predefined template-BPEL separately, and by a mapping mechanism the concrete BPEL is generated, which facilitates implementing
a set of similar processes described by the same template process. This can definitively improve reusability, especially in large inter-organizational information systems. Imagine that a set of similar processes needs to be changed, and the modifications are also very similar, e.g. deleting some resources in common activities, or adding an activity to all these processes at a similar position. These requirements can be met by changing the common template process of these similar processes.

Fig. 1. Template-based Process Abstraction Approach
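To make the injection step concrete, here is a minimal sketch (our own illustration; the placeholder syntax, element names and example URLs are assumptions, not the paper's template-BPEL notation) that replaces dumb-resource placeholders in a template document with the URLs of concrete resource entities at compile time:

```python
TEMPLATE = """
<process name="approval-template">
  <invoke resource="{MR1}" method="GET"/>
  <invoke resource="{MR2}" method="PUT"/>
</process>
"""

def instantiate(template: str, resource_set: dict) -> str:
    """Inject concrete resource URLs into the dumb-resource placeholders."""
    concrete = template
    for dumb_id, url in resource_set.items():
        concrete = concrete.replace("{" + dumb_id + "}", url)
    return concrete

# Two similar processes generated from the same template:
print(instantiate(TEMPLATE, {"MR1": "http://example.org/orders/42",
                             "MR2": "http://example.org/invoices/42"}))
print(instantiate(TEMPLATE, {"MR1": "http://example.org/claims/7",
                             "MR2": "http://example.org/payments/7"}))
```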
4 Template-Based Process Abstraction
4.1 Template-Based Process Abstraction Definitions
To support template-based process abstraction using the resource-centric REST principle, some work has to be done. First, a dumb-resource set should be built to enable modeling of the resources involved in the system and the business process. Second, on the basis of the dumb-resource set, the template process should be built to support the business process abstraction. Finally, the relationship between the template process and the concrete process should be defined and modeled.
Definition 1. Dumb Resource Set (MRSk). The Dumb Resource Set MRSk used by a Template Process MPk is defined as a set of Dumb Resources (MR): MRSk = {MR1k, MR2k, MR3k, …}, |MRSk| > 0.
Definition 2. Template Process (MPk). A Template Process MPk is defined as a set of an Activity Set (ASk), a Relation of Activity (RAk), a Dumb Resource Set (MRSk) and the relation between activities and resources (AMRk). Thus the Template Process MPk is defined as the following tuple.
MP_k = <AS_k, RA_k, MRS_k, AMR_k>
AS_k = {a_start^k, a_end^k, A_n^k}, where A_n^k = {a_1^k, a_2^k, a_3^k, ...}, |A_n^k| > 0, a_start^k ∉ A_n^k and a_end^k ∉ A_n^k. The mapping function from an activity to dumb resources, AMR_i^k, is defined as the set of resources used in activity a_i^k of the template process MP_k: AMR_i^k = AMR_k(a_i^k) = {mr | mr ∈ MRS_k, mr is used in a_i^k, a_i^k ∈ AS_k}.

Definition 3. Dumb Resource (MR). A dumb resource is defined by an operation set (OS_i) and a resource state (RS). Thus a dumb resource is defined as the tuple MR_i^k = <OS_i, RS>, with MR_i^k ∈ MRS_k and OS_i ⊆ OS_REST. Here the resource state (RS) denotes the name of the state. The operation set (OS) is a combination of the four most important HTTP operations, POST, GET, PUT and DELETE, collected in OS_REST = {op_post, op_get, op_put, op_delete}.

Definition 4. Relation of Activities (RA_k). The relation of activities RA_k in the template process MP_k is defined as the set RA_k = {r | r = <a_i, a_j>, (a_i ∈ {∅, a_start} ∪ A_n^k) ∧ (a_j ∈ {a_end} ∪ A_n^k)}. A template process can thus be regarded as a set of activities together with their execution sequences.

Definition 5. Process (P_t^k). A business process P_t^k that conforms to the template process MP_k is defined by a resource entity set (ResS_t) and the template process MP_k it references. Thus the business process is defined as the tuple P_t^k = <ResS_t, MP_k>. A business process P_t^k can be generated by injecting a set of resources into MP_k; the dumb resource set MRS_k in MP_k is then replaced with ResS_t.

Definition 6. Resource Entity Set (ResS_t). The resource entity set ResS_t used by the process P_t^k is defined as a set of resource entities: ResS_t = {R_1^t, R_2^t, R_3^t, ...}, with |ResS_t| > 0.

Definition 7. Resource Entity (R_i^t). The resource entity R_i^t used by the process P_t^k is defined by a name, a URL, an XML Schema and the identifier ID_i of the corresponding MR_i^k defined in MP_k. Thus the resource entity is defined as the tuple R_i^t = <Name, URL, XMLSchema, ID_i>.

4.2 BPEL Extension

BPEL is the standard SOA composition language for describing business processes. At runtime, BPEL coordinates the partners through web service interfaces to achieve the business goal. BPEL can describe both executable and abstract processes, which share the same expressive power; the latter is obtained through the use of explicit opaque tokens and omission [2]. However, BPEL cannot be directly used to compose
RESTful services, which do not depend on WSDL. In [9] a new declaration element is introduced to enable the dynamic publishing of resources to clients, depending on whether or not the declarations are reached during the execution of the BPEL process. When the process execution reaches such a declaration, the URI of the resource is published and the client is allowed to operate on it. The four HTTP standard methods (GET, PUT, DELETE and POST) are defined as handlers of the declared resource and are invoked correspondingly. With these extensions RESTful services can be composed well. These extensions are nevertheless not able to describe a REST-based template process, because they lack expressive power at the abstract level: for example, there are no concrete resources in a template process, let alone their URIs. Thus, BPEL must be further extended to support the description of REST-based template process abstraction, and a mechanism to describe resources at the abstract level has to be introduced. As Fig. 2 shows, the following extensions are introduced, in keeping with the original means of expressing abstract processes in BPEL.
Fig. 2. BPEL extensions for REST-based Template Process Abstraction
As shown in Fig. 2, a new declaration element is introduced for dumb resources, consistent with the way standard BPEL handles abstract processes: just as standard BPEL uses an opaque activity as a placeholder for one executable activity, the extension presented in this paper follows the same principle, so that users familiar with standard BPEL have no problem understanding it. The new element defines a placeholder for a resource and uses a ref attribute to specify the resource abstractly. In the later generation of the concrete process, concrete resources are injected, which replaces the dumb-resource declaration with a concrete resource declaration. To handle a dumb resource, only the four HTTP standard methods are allowed; they are defined as the handlers of the dumb resource, one handler element per method. With these definitions the basic control flow of a template process can be expressed. In [9] four activities <get>, <put>, <delete> and <post> are introduced to enable native invocation of RESTful Web services from BPEL. For describing a template process these activities are not appropriate, because at the abstract level the URIs of resources are not specified. Again following the principle used by standard BPEL to describe abstract processes, four abstract counterparts of these activities are introduced; compared with the original four activities, they use the ref attribute to reference a dumb resource, which is replaced by a concrete URI during the generation of the concrete process.
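For illustration only, the fragment below sketches what such a template-BPEL description might look like. The element names used here (dumbResource, onGet, mget) are hypothetical stand-ins, not the actual vocabulary of the paper or of [9]; the snippet is wrapped in Python simply to keep all examples in one language and to show how the unresolved placeholders could be collected.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical template-BPEL fragment: one dumb-resource declaration
# with a GET handler, one abstract RESTful invocation and one opaque condition.
TEMPLATE_FRAGMENT = """
<process name="GeneralApplication">
  <dumbResource ref="R1">
    <onGet>
      <!-- respond with an XML representation of R1 -->
    </onGet>
  </dumbResource>
  <mget ref="SR" outputVariable="planData"/>
  <condition opaque="yes"/>
</process>
"""

root = ET.fromstring(TEMPLATE_FRAGMENT)
# Every ref attribute names a placeholder that must be bound before execution.
placeholders = [el.get("ref") for el in root.iter() if el.get("ref")]
print(placeholders)  # ['R1', 'SR']
```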
With the extensions mentioned above, BPEL is able to describe a template process in which all resources and operations are specified abstractly. This kind of BPEL acts as a template that guides the generation of concrete processes.

4.3 Transformation from Template Process to Concrete Process

A template process captures the essential process logic while excluding the execution details. For REST-based template processes, certain parts are published as dumb resources through the dumb-resource declaration element, and the corresponding handlers are defined in the template process to control its execution; the abstract invocation of other RESTful services is also provided by the BPEL extensions. All of these represent the essential process logic and are therefore described in the template process. What the template process lacks is the concrete information about a resource, such as its URI and representation form; this belongs to the execution details and is specified during the transformation from template process to process. The transformation from template process to process is also a transformation of BPEL and WADL. We therefore define the following transformation rules for BPEL:

1. A set of resources is given. Each resource must have a unique URI and at least one representation form (e.g., an XML Schema or JSON).
2. The given resources are mapped to the dumb resources defined in the template-BPEL. This mapping need not be one-to-one, because one resource may in general be mapped to several dumb resources. During the mapping, each dumb-resource declaration is replaced with a concrete resource declaration whose uri attribute is filled with the URI of the mapped resource, as Fig. 3 shows; the handler parts are adapted correspondingly.
3. The abstract invocations of RESTful services are replaced with concrete ones. Specifically, the four abstract invocation elements are replaced with the corresponding elements <get>, <put>, <delete> and <post>, and the uri attributes are filled with the concrete resource URIs.
Fig. 3. Mapping Dumb Resource to Resource
4. Any other opaque elements, such as opaque activities or opaque conditions, must be concretized. In particular, the opaque conditions defined in the template process are only placeholders and must be replaced with meaningful business logic; after the transformation these conditions typically refer to some of the resources.
5. Additional BPEL elements may be added at any suitable place in the process to reflect the specifics of the concrete process.
A template process represents generic business logic and therefore gives users a good starting point for designing their own processes. Designing such a process then has two phases, namely filling in the resources to generate the process and adapting the generated process.
After the five steps are complete, the concrete process has been generated from the template process. In the concrete process only concrete elements remain; all opaque definitions of resources, activities and conditions have been eliminated. The concrete process is then executable and ready for the BPEL engine.
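A minimal Python sketch of such a transformation is given below. It is not from the paper: the element names (dumbResource/resource, mget/get, and so on) and the helper concretize are hypothetical, and the opaque condition is filled with the example expression used later in the case study.

```python
import xml.etree.ElementTree as ET

def concretize(template_xml: str, binding: dict) -> str:
    """Rewrite a toy template-BPEL document into a concrete one.
    `binding` maps dumb-resource identifiers to concrete URIs."""
    root = ET.fromstring(template_xml)
    rename = {"dumbResource": "resource", "mget": "get", "mput": "put",
              "mdelete": "delete", "mpost": "post"}
    for el in root.iter():
        if el.tag in rename:                       # rules 2 and 3: abstract -> concrete
            ref = el.attrib.pop("ref", None)
            el.tag = rename[el.tag]
            if ref is not None:
                el.set("uri", binding[ref])        # fill in the concrete URI
        if el.get("opaque") == "yes":              # rule 4: concretize opaque parts
            el.attrib.pop("opaque")
            el.text = 'approveResult == "submit to Boss"'
    return ET.tostring(root, encoding="unicode")

template = ('<process><dumbResource ref="R1"/><mget ref="SR"/>'
            '<condition opaque="yes"/></process>')
print(concretize(template, {"R1": "http://example.org/hr/appform",
                            "SR": "http://example.org/hr/businessplan"}))
```

Rule 5, the addition of process-specific elements such as the extra operation on the vacation record, would happen after this mechanical rewriting step.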
5 A Case Study

5.1 Business Process and Structural Information Analysis

Two common business processes are used as illustrative examples to explain the approach; they are modeled with UML activity diagrams (Fig. 4(a) and Fig. 4(b)).
Fig. 4. Example Scenarios of Two Similar Application Processes
Fig. 4(a) presents a common vacation application process normally used by an HR department. When a clerk wants to apply for vacation, s/he must first retrieve an application form from the HR department and then submit the completed application directly to his chief. The chief checks the details filled in the application and also checks the vacation record of the clerk; after making a decision, the chief writes an application result. If the chief cannot make a decision, he forwards the application to his boss. The boss checks not only the application and the vacation record but also the business plan, then makes the decision and writes an application result. Finally, the vacation record is updated based on the result of the vacation application, and the application result is sent to the clerk. Fig. 4(b) presents the process of a procurement application. A clerk starts the process by retrieving a purchase
requisition, based on which he writes a purchase order with detailed purchase information. As in the vacation application process, the purchase order is sent directly to his chief. If the chief cannot make the decision, the purchase order is forwarded to his boss. Based on the purchase order and the budget, the boss makes the decision and writes the application result, which is finally sent to the clerk. It can be seen that these two processes have similar structures and that their differences are confined to the resources they use. To improve reusability and maintainability, the common structural information and the related resources of the two similar processes should be abstracted into a template process. The resources involved in the two processes are analyzed first. Some resources are changed after the correlated activity is successfully executed; these changed resources are defined as main resources. Other resources serve only as references in some activities, and their values and states are never changed by any activity in the process; these are defined as static resources. Referring to Fig. 4(a), the vacation application process involves five resources and ten activities. Based on the activities and the state diagrams of each resource, the state space of this process can be defined as in Table 1. According to Table 1, one static resource, BusinessPlan, is identified; based on the other four main resources, the activity diagram in Fig. 4(a) can be represented by the state transformations of the resource array ReS_appvacation = <AppForm, Application, VacRecord, AppResult>.

Table 1. State Space of Resource Array in Vacation Application Process

  Resource      0     1        2        3             4       5
  AppForm       Init  Created  -        -             -       -
  Application   Init  Written  ToChief  ChiefChecked  ToBoss  BossChecked
  VacRecord     Init  Updated  -        -             -       -
  BusinessPlan  Init  -        -        -             -       -
  AppResult     Init  Filled   Sent     -             -       -
In the same way, the state space of the resource array in the procurement application process is defined as in Table 2. In the procurement application process, the only static resource is Budget. The resource array of the procurement application process is defined as ReS_appprocurement = <PurRequest, PurOrder, Result>.

Table 2. State Space of Resource Array in Procurement Application Process

  Resource    0     1          2        3             4       5
  PurRequest  Init  Created    -        -             -       -
  PurOrder    Init  Generated  ToChief  ChiefChecked  ToBoss  BossChecked
  Budget      Init  -          -        -             -       -
  Result      Init  Filled     Sent     -             -       -
According to the state spaces of the two processes, the common resource structures can be identified using the following principles. • All static resources in both state spaces are excluded from consideration. • The most frequently changed resources should be analyzed and compared, e.g., Application in Table 1 and PurOrder in Table 2.
• Resources that have similar states and are used by similar activities in the different processes should also be analyzed.
Based on these principles, we obtain the common resource structures and their state space as defined in Table 3.

Table 3. State Space of Dumb Resource Array

  Resource  0     1        2        3             4       5
  R1        Init  Created  -        -             -       -
  R2        Init  Filled   ToChief  ChiefChecked  ToBoss  BossChecked
  R3        Init  Filled   Sent     -             -       -
The template process is designed on the basis of these common resources, which compose the dumb resource set. Fig. 4(c) presents a possible template process of the general application, which abstracts the similarities of the two processes and their common resource structures. The abstraction of the template process in the example scenario is based on the following principles. • Activities of the original processes that do not use any dumb resource entity do not appear in the template process. • A meaningful name is chosen for each activity in the template process, corresponding to the activity names in the original processes. • The static resources used by the original processes are replaced by <SR> in the corresponding activities of the template process. • The intersection of the resources used by the corresponding activities is selected as the resource list of the activity in the template process; for example, the activity “Boss Check <Application, VacationRecord, BusinessPlan>” is mapped to the template activity “BossCheck <R2, SR>”.
In the vacation application process, AppForm, Application and AppResult are defined as the resource entities corresponding to R1, R2 and R3, respectively. Similarly, in the procurement application process, PurRequest, PurOrder and Result are defined as the corresponding resource entities. At compile time, different resource entity sets are injected into the template process to generate the concrete processes.

5.2 Template Process Description with Template-BPEL

The template process description is based on the BPEL extensions introduced in Section 4.2 and refers to the dumb resources R1, R2, and so on. The next step is to describe the template process with template-BPEL, which acts as a template to simplify the generation of the concrete BPEL. First, the dumb resource R1 is declared and waits for a GET request from the client; the response is represented as XML. The dumb resource R2 is then declared and waits for the client to create it with the PUT method. The dumb resource SR is obtained by invoking the corresponding RESTful service, and the response is stored in a variable; the concrete address of SR is filled in later.
Based on R2 and SR, the dumb resource R3 is created. Next comes a condition declaration whose concrete content is unknown, so its opaque attribute is set to yes. The rest of the template-BPEL follows the same pattern as the parts above.
5.3 Generation of Concrete BPEL

By injecting the concrete resources, the concrete BPEL documents are generated from the template. We use the generation of the vacation application process from the template process to explain the procedure introduced above. All five resources are assigned a unique URI and their XML Schemas are given. Each dumb-resource declaration is then replaced with a concrete resource declaration.
Each abstract invocation element is replaced by its concrete counterpart, with the URI specified.
In the transformation from template process to process, the conditions must be concretized. A condition can be, for example, the evaluation of certain attributes of a resource. In this vacation application the condition is the chief's decision on whether the vacation application should be submitted to the boss: approveResult == "submit to Boss".
Finally, the generated process is complemented with an extra operation on the resource VacationRecord, which is not captured in the template process because it has no counterpart in the procurement application.
6 Conclusion

In this paper we proposed a template-based mechanism to abstract the similarities between REST-based processes in inter-organizational applications. BPEL is extended so that it can describe the template process abstraction in a REST-based environment. With our
mapping mechanism, the concrete business processes can be generated by injecting different resource entity sets. The strength of our approach lies in the reusability and maintainability it brings to business process design and management in REST-based inter-organizational applications. Our future work focuses on building relationships between resources and processes using domain ontologies so as to better manage large numbers of processes. Another possible extension of the template framework is to allow run-time modification of a process by injecting a different resource set.
Acknowledgments. This paper is supported by the National High Technology Research and Development Program of China ("863" Program) under No. 2008AA04Z126, the National Natural Science Foundation of China under Grants No. 60603080 and No. 70871078, and the Aviation Science Fund of China under Grant No. 2007ZG57012. Furthermore, we want to express our gratitude to the Alfried Krupp von Bohlen und Halbach Foundation for its support of this joint research.
References
1. Cherbakov, L., Galambos, G., Harishankar, R., Kalyana, S., Rackham, G.: Impact of service orientation at the business level. IBM Systems Journal 44(4), 653–668 (2005)
2. Web Services Business Process Execution Language Version 2.0, OASIS, http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.pdf
3. Fielding, R.T.: Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine (2000)
4. Muehlen, M., Nickerson, J.V., Swenson, K.D.: Developing web services choreography standards: the case of REST vs. SOAP. Decision Support Systems 40(1), 9–29 (2005)
5. Kumaran, S., Liu, R., Dhoolia, P., Heath, T., Nandi, P., Pinel, F.: A RESTful Architecture for Service-Oriented Business Process Execution. In: Proceedings of the IEEE International Conference on e-Business Engineering, pp. 197–204 (2008)
6. Xu, X., Zhu, L., Liu, Y., Staples, M.: Resource-Oriented Architecture for Business Processes. In: 15th Asia-Pacific Software Engineering Conference, pp. 395–402 (2008)
7. Hadley, M.J.: Web Application Description Language (WADL). Sun Microsystems Inc. (2006), https://wadl.dev.java.net/wadl20061109.pdf
8. Overdick, H.: Towards resource-oriented BPEL. In: 2nd European Conference on Web Services (ECOWS), Workshop on Emerging Web Services Technology (2007)
9. Pautasso, C.: BPEL for REST. In: Dumas, M., Reichert, M., Shan, M.-C. (eds.) BPM 2008. LNCS, vol. 5240, pp. 278–293. Springer, Heidelberg (2008)
10. Geebelen, K., Michiels, S., Joosen, W.: Dynamic Reconfiguration Using Template Based Web Service Composition. In: Workshop on Middleware for Service Computing, MW4SOC 2008 (2008)
11. Van der Aalst, W.M.P., Weske, M., Wirtz, G.: Advanced Topics in Workflow Management: Issues, Requirements, and Solutions. Journal of Integrated Design and Process Science 7(3) (2003)
12. Weske, M.: Formal Foundation and Conceptual Design of Dynamic Adaptations in a Workflow Management System. In: 34th Annual Hawaii Int. Conf. on System Sciences (HICSS-34), January 3–6, vol. 7 (2001)
Enhancing the Hierarchical Clustering Mechanism of Storing Resources' Security Policies in a Grid Authorization System Mustafa Kaiiali, Rajeev Wankar, C.R. Rao, and Arun Agarwal Department of Computer and Information Sciences, University of Hyderabad, Hyderabad, India [email protected], {wankarcs,crrcs,aruncs}@uohyd.ernet.in
Abstract. Many existing grid authorization systems adopt an inefficient structure of storing security policies for the available resources, which reduces the scalability and leads to huge repetitions in checking security rules. One of the efficient mechanisms that handles these repetitions and increases the scalability is the Hierarchical Clustering Mechanism (HCM) [1]. HCM outperforms the Brute Force Approach as well as the Primitive Clustering Mechanism (PCM). This paper enhances HCM to accommodate the dynamism of the grid and the same is demonstrated using new algorithms. Keywords: Grid Authorization, Hierarchical Clustering Mechanism, HCM, Access Control.
1 Introduction

Grid computing is concerned with the shared and coordinated use of heterogeneous resources, belonging to distributed virtual organizations, to deliver a nontrivial quality of service [2]. In grids, security is a major concern [3]. Independent access policies across different domains incur redundancy in checking resources' security policies: every resource has its own security policy stored independently (e.g., a Grid Map File [4]), and this security policy may be identical or quite similar to the security policies of other resources. This fact motivates the idea of clustering resources that have similar security policies, so that the system can build a hierarchical decision tree that helps to find a user's authorized resources across different domains more quickly [1]. Many grid authorization systems concentrate on how to write a resource's security policy, either using standard specification languages like SAML or XACML (e.g., VOMS [4]) or using a language specific to a particular system, like Akenti [5]. They also concentrate on whether the authorization process should be centralized or decentralized [6]. Some systems adopt transport-level security rather than message-level security, as the latter involves slow XML manipulations that make adding security to grid services a big performance penalty [7]. However, these systems have no well-defined data structure for storing and managing the security policies effectively. One efficient representation of security policies is the hierarchical representation HCM [1]. The idea presented in that paper is efficient but, given the
dynamic nature of the grid, several tools can be embedded for a more efficient representation. One of these tools, a temporal caching mechanism that helps to avoid re-parsing the decision tree for every user request, is covered in Section 2. In some grid environments the users' roles change frequently, so it is important for the caching mechanism to handle this case; it is addressed in Section 2.1. In Section 3, another caching mechanism, which depends on the Hamming distance, is proposed. In Section 4 the adaptability of HCM to dynamic changes in the grid is studied. The illustration proposed in Section 2.1 of paper [1] is extended to demonstrate the tools and algorithms.
2 The Temporal Caching Mechanism (TCM)

Each parent node in the decision tree of HCM can be represented as shown in Fig. 1, where N_id is a unique number and the Timestamp refers to the last time at which the Security Rules List of that particular node (or of one of its ancestors) was modified. Each user also has a special record (Fig. 2), where the Timestamp refers to the last time at which the system parsed the decision tree for the user and the Parent Nodes List field is a linked list of the user's allocated parent nodes. When a user requests a grid service, the system provides his/her authorized resources by applying the following algorithm:
Fig. 1. Parent Node
Fig. 2. User's Record
HCM-TCM Algorithm (steps 1–3) 1. If the timestamp field of the user's record is NULL, then according to the associated user's roles, parse the decision tree starting from the root 1.1. For each parent node for which the user satisfies its security policy; add the parent node’s unique id to the Parent Nodes List, along with a timestamp as shown in Fig 3. 1.2. Continue parsing the tree until there are no more authorized parent nodes. Then exit. 2. Directly point to the first parent node of the Parent Nodes List of the user's record. 2.1. Check if the timestamp field of the user's record is greater than the timestamp field of the corresponding parent node. If no, update the user’s timestamp to NULL and go to step 1. 2.2. Move up starting from that parent node up to the root of the decision tree; add all the single resources to the user's authorized resource group. 3. Repeat step 2 for all other parent nodes in the Parent Nodes List field. Then exit.
Fig. 3. Simple Example on the Temporal Caching Mechanism
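A minimal Python sketch of this caching scheme follows. It is not from the paper: the class layout and the helper parse_tree (which stands for the application-specific full parse of step 1) are assumptions made for illustration.

```python
import time

class ParentNode:
    """Decision-tree parent node: unique id, attached resources, last-modified time."""
    def __init__(self, node_id, parent=None):
        self.node_id = node_id
        self.parent = parent            # link towards the root of the decision tree
        self.resources = []             # single resources attached to this node
        self.timestamp = time.time()    # updated whenever its rules (or an ancestor's) change

class UserRecord:
    """Cached parse result for one user (Fig. 2)."""
    def __init__(self):
        self.timestamp = None           # last time the tree was parsed for this user
        self.parent_nodes = []          # the Parent Nodes List

def authorized_resources(user, tree_root, parse_tree):
    """Return the user's authorized resources, re-parsing only when the cache is stale."""
    if user.timestamp is None:
        parse_tree(user, tree_root)                 # step 1: full parse for this user's roles
    resources = []
    for node in user.parent_nodes:                  # steps 2 and 3
        if user.timestamp <= node.timestamp:        # a rule changed after the last parse
            user.timestamp = None
            return authorized_resources(user, tree_root, parse_tree)
        n = node
        while n is not None:                        # step 2.2: walk from the node up to the root
            resources.extend(n.resources)
            n = n.parent
    return resources
```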
2.1 User's Frequently Changeable Roles

In some grid environments, users' roles change frequently. The temporal caching mechanism is efficient only if it distinguishes between the user's frequently changeable roles and the user's static roles. As an example, being a teacher at university XYZ can be considered a static role, whereas being the administrative in-charge is a changeable role. For these two types of roles the system parses the decision tree in two steps. First, it parses the tree for the static roles, producing the pair (Timestamp1, Parent Nodes List1). In the second step, it continues parsing the tree for the remaining changeable roles, producing the pair (Timestamp2, Parent Nodes List2). Thus the user's record has to store two Parent Nodes Lists along with two timestamps, as shown in Fig. 4.
Fig. 4. User's Record Structure
If the timestamp for the entire set of the user's roles does not match the timestamps of the corresponding parent nodes, the system points back to the parent nodes that correspond to the user's static roles and checks the timestamp again. If it matches, the system continues parsing the tree only from those parent nodes. Fig. 5 shows an example of this process.
Fig. 5. Example on User's frequently changeable roles Vs User almost static roles
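The split user record can be sketched as follows; this is again an illustration rather than the paper's implementation, and parse_from stands for the application-specific continuation of the tree parse with the remaining (changeable) roles.

```python
class SplitUserRecord:
    """User record split into static and frequently changeable roles (Fig. 4)."""
    def __init__(self):
        self.static = (None, [])       # (Timestamp1, Parent Nodes List1) for static roles
        self.changeable = (None, [])   # (Timestamp2, Parent Nodes List2) for changeable roles

def resolve(user, tree_root, parse_from):
    """Re-parse as little of the decision tree as possible."""
    ts2, nodes2 = user.changeable
    if ts2 is not None and all(ts2 > n.timestamp for n in nodes2):
        return nodes2                               # full cache hit
    ts1, nodes1 = user.static
    if ts1 is not None and all(ts1 > n.timestamp for n in nodes1):
        return parse_from(nodes1)                   # continue only from the static-role nodes
    return parse_from([tree_root])                  # fall back to a full parse
```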
3 Hamming Distance Caching Mechanism (HDCM)

In real scenarios it is observed that a user's roles do not change completely; the change usually adds or deletes only a few roles, so the newly allocated parent nodes are very close in the tree to the old ones (one level higher if an old role has been revoked from the user, or one level lower if a new role has been granted). Considering this fact, the current group of authorized parent nodes can be derived from the previous one as follows:
1. Consider each parent node's security policy as a vector SPV of zeros and ones (Table 1).

Table 1. One Parent Node's Security Policy Vector

  N_id  Role 1  Role 2  ...  Role n
  150   1       0       ...  1
2. Let URV(t) be the vector of the user's roles at time t. At time (t + t1) the user's roles vector is URV(t + t1). The system computes the Hamming distance between the user's roles vector URV(t + t1) and the SPV of each parent node in the Parent Nodes List.
3. If the Hamming distance is zero, there is no change and the new Parent Nodes List is the same as the old one. Otherwise, we have two cases:
   a. If (SPV dom URV) for a parent node in the Parent Nodes List, then move one level up in the decision tree, starting from that parent node, until a parent node is found for which (URV dom SPV); this node is considered the new parent node.
   b. If (URV dom SPV), then do the same in the reverse direction (move one level down).
Here the dom relationship is defined as follows: SPV dom URV ⇔ SPV ∧ URV = URV.
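A loose Python sketch of the dom test and the Hamming-distance check is given below, with role vectors packed into integers. It is an illustration only: node.spv and node.parent are assumed attributes, and the downward search of case (b) is left as a comment because choosing among children depends on the tree layout.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two role bit-vectors packed into integers."""
    return bin(a ^ b).count("1")

def dom(x: int, y: int) -> bool:
    """x dom y  <=>  x AND y == y (every role set in y is also set in x)."""
    return (x & y) == y

def new_parent_node(node, urv_new):
    """Per-node update of the Parent Nodes List after a role change."""
    if hamming(urv_new, node.spv) == 0:
        return node                         # no change for this node
    if dom(node.spv, urv_new):              # a role was revoked: case (a), move up
        n = node
        while n is not None and not dom(urv_new, n.spv):
            n = n.parent
        return n
    # Otherwise urv_new dom node.spv: a role was granted; case (b) would move
    # one level down to a more specific child, which requires searching node's children.
    return node
```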
4 The Decision Tree Manipulation Processes

In a grid environment, deletion, update and addition of resources take place quite often. If HCM is used in its present form, these processes require rebuilding the decision tree every time. We can avoid this by adding simple constructs to HCM that make it more amenable to implementation. • Resource Deletion: This is the simplest manipulation process; it is done by simply removing the resource from the decision tree.
• Resource Adding: Any new resource (rk) can be added to the decision tree without the need of rebuilding the tree entirely by applying the following algorithm:
1. Parse the decision tree according to the security policy of rk until a parent node with an identical security policy is found. Add rk as a child resource to it and exit. 2. If this parent node is not found. Choose the parent node (CP) with the closest security policy to rk. Make a new parent node (NP) having the same security policy as rk. Add rk as a child resource to NP. Add NP as a child node to CP. 3. After X number of additions, the decision tree will not remain as efficient as it was earlier. So at this stage, we need to rebuild the decision tree from the original Security Table (ST). • Resource Updating: For any resource updating process, the decision tree can be synchronized by combining the previous (deleting/adding) algorithms: Delete the old version of the resource from the tree. Add the modified version of the resource as a new resource to the tree.
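A self-contained Python sketch of the adding algorithm follows; it is illustrative rather than the paper's implementation, and it takes "closest security policy" to mean the largest overlap of security rules.

```python
class Node:
    """Decision-tree node used only for this sketch."""
    def __init__(self, policy):
        self.policy = frozenset(policy)   # the node's set of security rules
        self.children = []
        self.resources = []

def add_resource(root, resource, policy):
    """Incrementally insert a resource without rebuilding the whole tree."""
    policy = frozenset(policy)
    stack, closest = [root], root
    while stack:
        node = stack.pop()
        if node.policy == policy:              # step 1: identical policy found
            node.resources.append(resource)
            return node
        if len(node.policy & policy) > len(closest.policy & policy):
            closest = node                     # remember the closest policy seen so far
        stack.extend(node.children)
    new_node = Node(policy)                    # step 2: hang a new parent node under CP
    new_node.resources.append(resource)
    closest.children.append(new_node)
    return new_node

# Step 3: after an agreed number X of such insertions the tree should be
# rebuilt from the original Security Table (ST).
```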
By following the previous algorithms, one observes that the decision tree remains usable for a large number of modifications before the system is advised to rebuild it; they can therefore be called incremental algorithms. To study their efficiency, a Java program was written to simulate 100 instances of the HCM decision tree with 700 resources and 30 different security rules. For each instance, the time required to add 50 resources was measured and compared with the time required to rebuild the decision tree entirely. The results are depicted in Fig. 6: the average time required by the rebuilding approach was 79.5 ms with a standard deviation of 27.79, while it was only 17.7 ms for the proposed algorithm with a standard deviation of 5.70. Determining the value of X (the number of additions before the system is advised to rebuild the decision tree entirely) is also important for studying the efficiency of HCM in a dynamic environment. Several experiments were performed to estimate the value of X at which the system is advised (with a probability of 50%) to rebuild the decision tree entirely. The results are depicted in Table 2, which shows that the ratio of X to the total number of resources is large enough to consider HCM an efficient mechanism in a dynamic environment.
Fig. 6. Incremental Algorithm Experiments and Results
Table 2. Experiment to Determine the Value of X

  Resource Number  Security Rules Number  X     Modification Percentage
  550              7                      474   46%
  572              7                      1476  72%
  1210             8                      838   41%
  1300             8                      2796  68%
  456              9                      568   55%
  1310             9                      738   36%
  6000             10                     2192  27%
  6912             10                     9472  58%
5 Conclusion

This paper proposes novel algorithms, the Temporal Caching Mechanism and the Hamming Distance Caching Mechanism, to enhance HCM. The simulation studies show that the decision tree of the enhanced HCM remains stable and usable under a reasonable rate of dynamic modifications. Efficient security policy storage strategies can be derived by adopting the enhanced HCM.
References
1. Kaiiali, M., Wankar, R., Rao, C.R., Agarwal, A.: Design of a Structured Fine-Grained Access Control Mechanism for Authorizing Grid Resources. In: IEEE 11th International Conference on Computational Science and Engineering, São Paulo, Brazil, July 16–18, pp. 399–404 (2008)
2. Foster, I.: What is the Grid? A Three Point Checklist. GRID Today, July 20 (2002)
3. Chakrabarti, A., Damodaran, A., Sengupta, S.: Grid Computing Security: A Taxonomy. IEEE Security & Privacy 6(1), 44–51 (2008)
4. Alfieri, R., Cecchini, R., Ciaschini, V., dell'Agnello, L., Frohner, A., Lorentey, K., Spataro, F.: From gridmap-file to VOMS: managing authorization in a Grid environment. Future Generation Computer Systems 21(4), 549–558 (2005)
5. Johnston, W., Mudumbai, S., Thompson, M.: Authorization and Attribute Certificates for Widely Distributed Access Control. In: Proceedings of the IEEE 7th International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises
6. Chakrabarti, A.: Grid Computing Security. Springer, Heidelberg (2007)
7. Shirasuna, S., Slominski, A., Fang, L., Gannon, D.: Performance comparison of security mechanisms for grid services. In: Proceedings of the Fifth IEEE/ACM International Workshop on Grid Computing, November 8, pp. 360–364 (2004)
A Framework for Web-Based Negotiation Hrushikesha Mohanty1 , Rajesh Kurra1 and R.K. Shyamasundar2 1
Department of Computer & Information Sciences University of Hyderabad, Hyderabad, India 2 School of Technology & Computer Science TIFR, Mumbai, India {mohanty.hcu,rajeshyadav.hcu}@gmail.com, [email protected]
Abstract. Negotiation is natural in business, and so it is when business is performed on the World Wide Web. There have been several investigations into performing negotiation on the web. In this paper, we view negotiation as a process and propose a framework for its implementation.
1 Introduction
A Web service is defined as an autonomous unit implementing application logic that provides either some business functionality or information to other applications over an Internet connection [4]. The study of Web services is attracting the attention of the research community as more and more services are being offered on the Web. Features like platform independence, interoperability, self-description, low cost of use and the flexibility provided by Web services are being appreciated. A Web service can be thought of as a conglomeration of a set of constituent services that execute in a coordinated way to give the desired result. The infrastructure required for implementing Web services includes Universal Description, Discovery and Integration (UDDI) registries to host service descriptions for service seekers; a service is specified in the Web Service Description Language (WSDL), and communication among services follows the Simple Object Access Protocol (SOAP). Typically, availing of a Web service involves three stages: discovery, negotiation and contracting. This paper mainly focuses on the negotiation process between a service provider and a service consumer. Web service negotiation mainly involves an exchange of offers and counter-offers between the two parties (service provider and service consumer) with the main objective of establishing agreement over the non-functional features of Web services. Non-functional service attributes generally include robustness, price, response time, security, availability, performance, throughput, etc. Some of these service attributes may depend on other attributes, so there is a need to balance these attribute values from the perspectives of both the service provider and the service consumer, and that can be achieved through negotiation. In this negotiation, consumers can continuously customize their needs and providers can tailor their offers [1]. This shows that negotiation can be thought of as a well-defined process run concurrently at the service provider and the consumer. The process at one end evaluates
the offer sent by the other and generates an offer to send, and the process continues until one party is unable to provide a new offer or accepts the other's offer. Some of the recent research includes [1,2,6,7,8,9]; these works mainly concentrate on offer generation and evaluation. We view negotiation as a distributed application that needs coordination among the participating parties. The process can be thought of as a composition of two basic blocks: the first is offer evaluation and generation, the second is coordination. While the former is application specific, the latter is generic, so one can think of a generic coordination framework into which evaluation and generation strategies can be plugged and played for different application-specific negotiations. This work stands out among other published work [2,9] in engineering such a generic framework for web-based negotiation. The rest of the paper is organized as follows. Section 2 reviews related work on Web services and negotiation. Section 3 describes the negotiation process. Section 4 analyzes the negotiation process and Section 5 presents an example of negotiation. Section 6 presents our proposed web-based negotiation framework and its higher-level implementation details, and Section 7 concludes with remarks.
2 Related Work
More than a decade ago, research issues in negotiation were studied in the multi-agent paradigm. Autonomous agents can negotiate among themselves before making a decision on a service: a requesting agent can negotiate the terms and conditions of a service with the providing agent. In [8,9], a formal model is proposed for negotiation among agents. The model specifies strategies for initial offers, proposal evaluation and counter-proposals; computational tractability is the main point the proposed model addresses, and the use of the model is demonstrated with a case study taken from a business domain. Agents are considered highly autonomous, with their own defined objectives, strategies to achieve them, and the resources they utilize for that purpose. However, to achieve an objective an agent may sometimes opt for services from other agents. As agents are individualistic, they cannot be directed to provide a service, but they can be persuaded by offers/incentives until an agreement among the agents is reached; thus negotiation is an iterative process of persuasion. Later, there have been efforts to make use of agent technology for collaborative service provisioning to achieve certain requirements [5]: agents are initiated with certain tasks and then negotiate for an on-demand service composition. Here, the composition is treated as a constraint satisfaction problem and a strategy for solving it is proposed as a multi-agent negotiation algorithm. With the wide availability of the Internet, practitioners regard this technology as a viable paradigm for electronic commerce (e-commerce). As can be seen, most e-commerce systems are of the business-to-consumer (B2C) type and are characteristically inflexible, since consumers are required to agree to the terms and conditions of the business in order to avail of its services. This has inspired researchers to bring a negotiation component to e-commerce, and a comprehensive classification of negotiation for e-commerce is given in [6].
Later, Web services, an upcoming paradigm, became popular for providing a new framework for e-commerce. This framework is useful for both B2C and B2B business applications. The infrastructure for Web services is composed of the service consumer, the service provider and the registry, a directory keeping details for interfacing services. A flexible approach to business interfacing is achieved with technologies such as XML and SOAP, on top of which BPEL and WSDL are defined for specifying business logic and communication. While the framework offers a novel paradigm, it still remains inflexible in that its business propositions are devoid of a negotiation mechanism. This has again raised interest in studying negotiation for Web services. A search in a registry environment such as JUDDI will result in several choices, and a requester has to choose one among them to award the service contract. The selection process is mainly based on QoS parameters and is reported in [4,7] as Web service negotiation; it identifies three main issues as essential for the negotiation process: the negotiation message, the negotiation protocol and negotiation decision making. A proposal on negotiation decision making is reported in [3]; it uses fuzzy truth propositions to represent service limitations, and preferences are calculated by a mathematical function that computes the utility of fuzzy attribute values. The sequence of messages passed at different states of a negotiation life cycle is called the negotiation protocol; the protocol is thus the vehicle that carries the negotiation process forward. Effort is therefore directed at devising a domain-independent protocol, generic enough that a negotiation decision-making strategy can simply be plugged in. This paper takes up this challenge and proposes a UML-based framework for Web service negotiation, in tune with the trend of UML-based specification for contract negotiation and SLAs. As in [1], the proposed framework provides an environment for offer and counter-offer generation during negotiation and for their evaluation leading to termination of the negotiation; that framework, however, uses time-based invocation for the progress of the protocol, whereas we propose asynchronous message passing here. In the next section we discuss the general idea of the negotiation protocol.
3 On Negotiation
Web service negotiation is carried out between a service provider and a service consumer. In our negotiation mechanism, it is an interplay of offers and counter-offers between the two over QoS parameter values. Negotiation mainly involves first finding the Web services that match the functionality needed by the service consumer, via registries like JUDDI. Then the negotiation mechanism, i.e., the exchange of offers, starts between consumer and provider with the aim of generating a Service Level Agreement (SLA). The SLA mainly contains the terms and conditions that the provider should follow, making sure that both parties function according to the agreement. Initially, either of the two (preferably the service provider) generates an offer message composed of QoS parameters and associated values. An offer is associated with a utility value that motivates either party to engage in the process of negotiation. Each party intends to improve its utility value at each round of
negotiation. For a negotiating party X and a service S, the utility value of an offer is designated U_X^S. During negotiation, each party X maintains three types of utility values, viz. lU_X^S, the lowest acceptable utility value for the service; pU_X^S, the utility obtained in the previous round of negotiation; and eU_X^S, the expected utility for the service S. Negotiation is a process of deliberation on an issue in which each party wishes to achieve a desired utility value, i.e., one that lies between its lowest and expected values. Offer evaluation is domain specific; it is a function of the service parameters and their contributions to the service's utility value. The negotiation interaction continues until either agent accepts or rejects the received offer or a stalemate condition is reached. A stalemate is a state where an agent keeps receiving offers that yield the same utility as in the previous n-1 iterations, where n is a value mutually agreed by both parties. In an iteration, a negotiating party may arrive at one of the following cases:
1. Accept: U_X^S >= eU_X^S (the offer utility is greater than or equal to the expected utility).
2. Reject: U_X^S < lU_X^S (the offer utility is less than the minimum acceptable utility).
3. Continue: lU_X^S <= U_X^S <= eU_X^S (the negotiator tries to achieve still more from the other party).
In this paper, we show the generic events and actions during the negotiation between the two parties in Algorithm 1, where receiving an offer is considered an event, and evaluating the received offer and making a decision are considered actions. The time-ordered sequence of these events and actions is represented in Fig. 1. The negotiation framework mainly involves two software components, offer generation and offer evaluation. A negotiator is a 4-tuple knowledge system (X, S, O, U) where
1. X = (SP/SR) is either a service provider or a service consumer.
2. S is a Web service offered by the provider (SP) to the service consumer (SR).
3. O = {(s_i, d_i, w_i, v_i)}_{i=1}^{n} is the offer-evaluation model over n service attributes, where for attribute i:
   - s_i denotes the service attribute;
   - d_i = [min_i, max_i] denotes the interval of values for service attribute s_i (in the case of qualitative values, a suitable range of numeric values can be worked out);
   - w_i is the weight given to service attribute s_i, such that Σ_{i=1}^{n} w_i = 1;
   - v_i is the normalized value of service attribute i, calculated as
       v_i = (max_i - offer_i) / (max_i - min_i)   if the attribute is decreasing,
       v_i = (offer_i - min_i) / (max_i - min_i)   if the attribute is increasing.
In this paper, We are showing the generic events and actions during the negotiation between both parties in Algorithm 1. We are considering offer receiving as an event and evaluating the received offer and decision making as an action. We are representing these time ordered sequence of events and actions in Fig 1. Negotiation Framework mainly involves two software components such as Offer generation and Offer evaluation. A negotiator is a 4-tuple knowledge system (X, S, O, U) where 1. X = (SP/SR) that can be either service provider or service consumer. 2. S is a Web service that provides service to the service consumer(SR) offered by provider(SP). 3. Let O={(si ,di ,wi ,vi )}ni=1 be the Offer-Evaluation model of ’n’ number of Web services that describes an offer as a function of the following entities of service ’i’ such as – si denotes service attributes. – di denotes an interval of values for service attribute (si ). In case of qualitative values, a suitable range of numeric values can be worked out. – w i nis weight given to service attribute si such that i=1 wi = 1. – vi is the value of service attribute ’i’ and can be calculated as follows. (max −of f er ) i i if decreasing (maxi −mini ) vi = (of f eri −mini ) if increasing (maxi −mini ) 4. U is an Utility value that shows the gain obtained by negotiating party(X) over service(S) that can be obtained as n S UX = i=1 wi ∗ vi
Fig. 1. Interaction Diagram of the Negotiation Mechanism
Based on the above concepts, negotiation between producer and consumer is carried out by iterative offer exchanges; the interaction between the two is shown in the interaction diagram of Fig. 1. The same negotiation logic drives both producer and consumer: it is coded in Algorithm 1, a copy of which runs at both ends to carry out the negotiation.
4 An Analysis
Generally, automated negotiation involves a minimum of three components: a high-level protocol, objectives and strategies [1]. The high-level protocol defines the negotiation process depending on the type of negotiation; for example, auctions and bargaining are two different ways of negotiating. In this paper we consider bargaining, where each party proposes an offer for the other to accept while keeping its own utility value increasing or at least safe. Algorithm 1 implements this concept of negotiation. The negotiation protocol is based on the following assumptions:
– Both the service provider and the service consumer agents are eager to finish their negotiation, each within its own weighted interests and domain values over the service attributes, and both use the same value and utility functions.
– Both agents can finish their offer evaluation and the generation of offers and counter-offers within the deadline.
Algorithm 1. Service Negotiation (a copy runs at SP and SR)

Inputs:
  X, X1 : the negotiating parties; each can be the service provider or the service requester.
  S : a Web service.
  lU_X^S : minimum utility value accepted by X for service S.
  eU_X^S : expected utility value of an offer by X for service S.
  O_X^S : offer over the service attributes made by X.
  pU_X^S : utility value of the previous offer.
  U_X^S : utility value obtained by X.
  stalematelimit : mutually agreed stalemate limit value.
Output: the negotiated offer if negotiation succeeds, or NULL in case of failure.

Steps:
  {Initialization}
  nego_Success = nego_Quit = false;
  pU_X^S = U_X^S = eU_X^S = lU_X^S = NULL; nego_Stalemate = 0;
  while !nego_Success and !nego_Quit do
    on receive message Offer(X1, X, S, O_X1^S)   {X receives an offer from X1 and executes}
    begin
      U_X^S = evaluate(O_X1^S)
      if U_X^S < lU_X^S then
        {Case 1: quit negotiation}
        nego_Quit = true;
      else if eU_X^S != NULL and U_X^S > eU_X^S then
        {Case 2: agreement reached}
        nego_Success = true;
      else if U_X^S >= lU_X^S and U_X^S <= eU_X^S then
        {Case 3: negotiation continues}
        if !Stalemate(X, X1) then
          pU_X^S = U_X^S;
          nego_Stalemate = 0;
          gen_send_offer(X, X1, S);
        end if
      end if
    end
  end while
Procedure 2. gen_send_offer(X, X1, S)
  O_X^S = gen_Offer();
  message offer(X, X1, S, O_X^S);
  eU_X^S = getExpectUtility(O_X^S);   {store the expected utility of the offer}
Procedure 3. Stalemate(X, X1)
  if pU_X^S == U_X^S then
    nego_Stalemate++;
    if nego_Stalemate == stalematelimit then
      if U_X^S >= lU_X^S then
        nego_Success = true;
      end if
    end if
  else
    nego_Stalemate = 0;
  end if
– Both agents have negotiation capability.
– Both agents come out of the negotiation after a certain number of rounds of offer exchange.
Algorithm 1 (Service Negotiation) essentially models the negotiation dynamics that take place simultaneously at a producer and a consumer. It is a distributed algorithm executed asynchronously at both ends. After initialization of the control variables, a negotiator (SP or SR) waits to receive an offer from the other; this is achieved by the transmission of a message called Offer. Either of the two can initiate the negotiation by executing the function gen_send_offer(X, X1, S). The function gen_send_offer at negotiator X generates an offer O_X^S for a service S; this offer is sent to the other party X1 for consideration via the message offer(X, X1, S, O_X^S), and the negotiator X then also fixes an expected utility value eU_X^S for the offer O_X^S. Each negotiator (SP or SR) fixes, for a service negotiation, a lower limit on the utility value below which no offer is acceptable, and it carries on the negotiation with its counterpart unless the latter's offer falls below this minimum value; this is checked as Case 1 of the algorithm. The offer message offer(X1, X, S, O_X1^S) is received from X1 for service S with the offer O_X1^S. On receiving this message, X evaluates the utility value of the offer, and Case 1 is tested to decide whether the negotiation is to continue. If the offer utility is greater than or equal to the lower limit, X checks whether the offer's utility is greater than or equal to the expected utility; the expected utility concept is introduced to put an upper reference on the utility value and also helps a negotiator bargain for more. In Case 2, if an offer meets the expected utility, the negotiation ends. The condition for continuation of the negotiation is incorporated in Case 3: if the utility value of a proposed offer lies between the lower limit and the expected value, the negotiation continues, which allows progress in the negotiation. However, the process cannot continue indefinitely. In order to recover the negotiation from livelock, the algorithm performs stalemate checking in Case 3: the function Stalemate(X, X1) checks whether the offers made by X1 in the previous consecutive iterations remained the same or within a defined range, in which case the negotiation has reached a stalemate. At the stalemate stage, if the obtained utility is greater than or equal to the limit lU_X^S, the algorithm terminates, designating success in negotiation for X. If both
the participating parties succeed in their negotiation, then the negotiation as a whole succeeds and both reach an agreement, i.e., they form a Service Level Agreement. The interactions between producer and consumer during negotiation are shown in Fig. 1; there, the producer has initiated the negotiation process. From the above descriptions and the algorithm we can state the following about the algorithm's behavior:
– Statement 1: The algorithm is decisive and terminating. This is due to Case 1, Case 2 and Case 3 of the algorithm.
– Statement 2: The algorithm entertains bargaining. This is due to Case 3 of the algorithm; livelock is avoided by the Stalemate function.
– Statement 3: The algorithm makes negotiation progressive. Essentially, the algorithm is driven by the receipt of an offer message in each iteration; as we assume eventual receipt and execution of the message, the progress of the negotiation process is guaranteed.
– Statement 4: Negotiation arrives at an agreement when both parties succeed in reaching an agreed utility value, i.e., lU_X^S, eU_X^S or a value in between.
In the next section, we explain the computational aspects of the algorithm with an example.
5 Example
The elements of the offer-evaluation model (O) are initialized as follows.
– s_i = {price, availability, response time} are the service attributes over which negotiation is done.
– D gives the domain values of the service attributes:

    X    D_Price   D_Availability  D_Resp.Time
    SP   [1, 4]    [0.5, 0.9]      [0.4, 0.9]
    SR   [0.5, 3]  [0.6, 0.9]      [0.6, 0.9]

– Let W be the weights over the service attributes:

    X    W_Price  W_Availability  W_Resp.Time
    SP   0.70     0.2             0.1
    SR   0.80     0.1             0.1

– Let lU_X^S and eU_X^S be the minimum and expected utility values of the provider and the consumer, as shown below:

    X    lU_X^S  eU_X^S
    SP   0.55    0.95
    SR   0.56    0.9
Let the offers stored at the service provider be [4, 0.8, 0.8], [3, 0.8, 0.8] and [3, 0.7, 0.7], which from his perspective yield utilities of 0.95, 0.93 and 0.69 respectively, in decreasing order. Similarly, the service consumer stores offers such as [3, 0.7, 0.7],
[3, 0.6, 0.6] and [2, 0.9, 0.9], which yield utility values of 0.93, 0.80 and 0.68 respectively, in decreasing order. For the negotiation-success case, the service provider initiates bargaining by sending an offer, say [4, 0.8, 0.8], which sets his expected utility value (eU_X^S) to 0.95. Once this offer reaches the consumer, he evaluates it to a utility value (U_X^S) of 0.93, so the consumer accepts the received offer, as he obtains more than his expected utility value (i.e., 0.9). For the negotiation-quit case, let the provider send an offer, say [2.5, 0.8, 0.9], which results in a utility value of 0.6 from the provider's perspective; once the consumer evaluates the offer to 0.55, he rejects it and thereby quits the negotiation. For the negotiation-continue case, suppose the consumer has received an offer that evaluates to a utility value between 0.56 and 0.9 (in the first round, or up to eU_X^S in later rounds); he then generates and sends a counter-offer from his stored offers to the provider and updates his expected utility value (eU_X^S). Each party stores the utility value of the most recently received offer to check for stalemate and liveness of the protocol. This exchange of offers and counter-offers continues until either or both of them accept or quit, which completes the negotiation.
6 Web-Based Negotiation Framework
In this section, we present a framework for implementing a distributed system that performs negotiation. There are mainly two use cases, viz. NegoIntf and NegoControl, for communication and decision making respectively.
Fig. 2. Detailed Architecture of Web Service Negotiation
Fig. 3. Component Deployment
An abstract realization of these two use cases is shown in the middle part of Fig. 2: the components OfferSendIntf and OfferRecvIntf realize the use case NegoIntf, while InitEndControl and IterationControl realize the use case NegoControl. The implementation of these components at the producer/consumer is shown in the last part of Fig. 2. The InitEndControl component initiates a negotiation by generating an offer and sending it to the counterpart through the OfferSendIntf component. The interactions among components are depicted by directed solid lines, labeled with the information that passes from the sending component to the receiving component. The OfferRecvIntf component receives an offer from the counterpart and passes it to the IterationControl component, which takes a decision based on the three cases shown in the algorithm. On termination of the negotiation process it sends an end message to the InitEndControl component; if it decides to continue the negotiation, it sends a new offer to the OfferSendIntf component. As stated earlier, copies of these components are placed at both negotiating ends. The components have two types of interfaces, viz. external and internal; an external interface, shown symbolically in the figure, represents the interfacing of a component with the rest of the system. In Fig. 3, a deployment of the components is shown: a consumer, on discovering a producer from the UDDI, may initiate a negotiation, and the negotiation process is then carried out between them. Each node has its own knowledge that guides the negotiation, and each contains two packages, viz. Boundary and Control, containing their corresponding components, as shown in Fig. 3. Another approach to component
deployment is to delegate it to the UDDI: both producer and consumer may authorize the UDDI to perform the negotiation on their behalf. The gain of this approach is the time saved by avoiding external message passing; in that case, the external interfaces shown in the last part of Fig. 2 turn into internal interfaces. This option of component deployment is shown in the dotted box in Fig. 3. The UDDI, besides the ServiceDataServer (containing the WSDL details of services), can hold the Nego-Knowledge packages of producer and consumer with respect to a service, and it also has a copy of the Boundary, Nego-Knowledge and Control packages; for a negotiation, these packages are instantiated differently for the producer and the consumer. Thus we have presented a framework and a higher-level implementation of the negotiation process.
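To illustrate one possible shape of these components, the following Python sketch wires them together as plain classes. It is not the paper's implementation: the Nego-Knowledge interface assumed here (evaluate, counter_offer, initial_offer, u_low, u_expected) and the in-memory channel are placeholders for the message transport and the plugged-in negotiation strategy.

```python
class OfferSendIntf:
    """Boundary component: sends an offer to the counterpart (transport abstracted away)."""
    def __init__(self, channel):
        self.channel = channel
    def send(self, offer):
        self.channel.append(offer)

class OfferRecvIntf:
    """Boundary component: receives an offer and hands it to the iteration control."""
    def __init__(self, iteration_control):
        self.iteration_control = iteration_control
    def on_offer(self, offer):
        self.iteration_control.handle(offer)

class InitEndControl:
    """Control component: starts the negotiation and records its outcome."""
    def __init__(self, knowledge, sender):
        self.knowledge, self.sender, self.result = knowledge, sender, None
    def start(self):
        self.sender.send(self.knowledge.initial_offer())
    def end(self, success):
        self.result = "agreement" if success else "quit"

class IterationControl:
    """Control component: evaluates an offer and applies the three cases of Algorithm 1."""
    def __init__(self, knowledge, sender, init_end):
        self.knowledge, self.sender, self.init_end = knowledge, sender, init_end
    def handle(self, offer):
        u = self.knowledge.evaluate(offer)
        if u < self.knowledge.u_low:                 # Case 1: quit
            self.init_end.end(success=False)
        elif u >= self.knowledge.u_expected:         # Case 2: agreement
            self.init_end.end(success=True)
        else:                                        # Case 3: keep bargaining
            self.sender.send(self.knowledge.counter_offer())
```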
7
Conclusions
In this paper we have concentrated mostly on the process of negotiation and the details of a framework for automating it. We have proposed a UML-based architecture for the design of the negotiation process. We plan to make the framework independent of the offer/counter-offer generation algorithms and of the application domain, so that one should be able to plug and play these algorithms in a working implementation of negotiation. Further, we will extend this architecture to automate multi-party negotiation as well as the monitoring and tracing of a negotiation. Our future work also includes a performance evaluation of the proposed negotiation framework and a study of offer generation while selecting a web service.
Acknowledgments. This work is supported by the Department of Science and Technology (DST) and the Indo-Trento Promotion for Advanced Research (ITPAR), EFL/I/M 38/2290.
References
1. Patankar, V., Hewett, R.: Automated Negotiations in Web Service Procurement. In: The Third International Conference on Internet and Web Applications and Services (ICIW 2008), pp. 620–625 (2008)
2. Parkin, M., Kuo, D., Brooke, J.: A Framework & Negotiation Protocol for Service Contracts. In: Proceedings of the IEEE International Conference on Services Computing (SCC 2006), pp. 253–256 (2006)
3. Yao, Y., Yang, F., Su, S.: Evaluating Proposals in Web Services Negotiation. In: Levi, A., Savaş, E., Yenigün, H., Balcısoy, S., Saygın, Y. (eds.) ISCIS 2006. LNCS, vol. 4263, pp. 613–621. Springer, Heidelberg (2006)
4. Hung, P.C.K., Li, H., Jeng, J.-J.: WS-Negotiation: An Overview of Research Issues. In: Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS 2004), pp. 10033.2 (2004)
5. Abdoessalam, A.M., Nehandjiev, N.: Collaborative Negotiation in Web Service Procurement. In: Proceedings of the 13th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE 2004), pp. 47–52 (2004)
6. Lomuscio, A.R., Wooldridge, M., Jennings, N.R.: A Classification Scheme for Negotiation in Electronic Commerce. Journal of Group Decision and Negotiation 12(1), 31–56 (2003)
7. Faratin, P., Sierra, C., Jennings, N.R.: Using Similarity Criteria to Make Issue Trade-offs in Automated Negotiations. Artificial Intelligence 142, 205–237 (2002)
8. Faratin, P., Sierra, C., Jennings, N.R.: Negotiation Decision Functions for Autonomous Agents. International Journal of Robotics and Autonomous Systems 24(3-4) (1998)
9. Sierra, C., Faratin, P., Jennings, N.R.: A Service-Oriented Negotiation Model between Autonomous Agents. In: Boman, M., Van de Velde, W. (eds.) MAAMAW 1997. LNCS, vol. 1237, pp. 17–35. Springer, Heidelberg (1997)
Performance Analysis of a Renewal Input Bulk Service Queue with Accessible and Non-accessible Batches

Yesuf Obsie Mussa (1) and P. Vijaya Laxmi (2)

(1) Department of Mathematics, Andhra University, Visakhapatnam - 530 003, India
(2) Department of Applied Mathematics, Andhra University, Visakhapatnam - 530 003, India
[email protected], [email protected]

Keywords: Accessible Batch, Finite Buffer, Supplementary Variable, Batch Service, Embedded Markov Chain.
1
Introduction
Batch service queues have been discussed extensively over the last several years and applied in areas such as transportation systems, telecommunication systems and computer operating systems; see Neuts [1], Chaudhry and Templeton [2], Medhi [3], etc. In accessible batch service queues, arriving customers may be taken into the batch currently in service, subject to some limitation. The concept of accessibility into batches during service has been considered by Gross and Harris [4] and Kleinrock [5]. The infinite buffer queue with the accessible and non-accessible batch service rule has been studied by Sivasamy [6], wherein the arrivals and services are exponentially distributed. A similar model with finite and infinite buffers in the discrete-time case has been studied by Goswami et al. [7]. The present paper focuses on the study of a finite buffer queue with accessible and non-accessible batches, wherein the inter-arrival times of customers and the service times of batches are, respectively, arbitrarily and exponentially distributed. The supplementary variable and the embedded Markov chain techniques have been used to obtain the steady state distributions of the number in the system (queue) at pre-arrival and arbitrary epochs. Numerical examples of performance measures have been presented in the form of graphs. As a special case, when a = d, the present model reduces to the general batch service GI/M^(a,b)/1/N queue.
2
Description and Analysis of the Model
Let us consider a finite buffer queue where customers' inter-arrival times are independent, identically distributed (i.i.d.) random variables with probability distribution function A(u), probability density function a(u), u ≥ 0, Laplace transform A*(θ), Re(θ) ≥ 0, and mean inter-arrival time 1/λ. Service times
are assumed to be exponentially distributed random variables with rate μ. The customers are served by a single server in batches of maximum size 'b' with a minimum threshold value 'a'. If the number of customers in the queue is less than the minimum threshold value 'a', the server remains idle until the number of customers in the queue reaches 'a'. If 'b' or more customers are present in the queue at a service initiation epoch, then only 'b' of them are taken into service. It is further assumed that late entries can join a batch in the course of ongoing service as long as the number of customers in that batch is less than d < b (called the maximum accessible limit). The system has a queue capacity of size N (≥ b), so that the maximum number of customers allowed in the system at any time is (N + b). The traffic intensity is given by ρ = λ/(bμ). The model is denoted by GI/M^(a,d,b)/1/N. The state of the system at time t is defined by the number of customers in the system (queue) Ns(t) (Nq(t)), U(t), the remaining inter-arrival time for the next arrival, and ζ(t) = 0 or 1, indicating whether the server is idle/busy with an accessible batch or busy with a non-accessible batch, respectively. We define the joint probabilities P_{n,j}(u, t)du as the probability that there are n customers in the queue (system), the server is idle/busy with an accessible batch (j = 0) or busy with a non-accessible batch (j = 1), and the remaining inter-arrival time is u at time t. At steady state, these probabilities are denoted by P_{n,j}(u).
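To make the service discipline concrete, the fragment below simulates one service-initiation decision under the GI/M^(a,d,b)/1/N rule: the server waits for at least a customers, serves at most b, and admits late arrivals to an ongoing batch only while the batch size is below d. This is our own illustrative sketch, not code from the paper.

```python
def start_batch(queue_len, a, b):
    """Number of customers taken into service when the server becomes free."""
    if queue_len < a:
        return 0                  # server stays idle until 'a' customers accumulate
    return min(queue_len, b)      # otherwise serve at most 'b'

def admit_to_ongoing_batch(batch_size, d):
    """A late arrival may join the batch in service only while its size is below d."""
    return batch_size < d

# Example with a = 2, d = 4, b = 5: three waiting customers start a batch of 3,
# and one more arrival may still join (3 < d), after which the batch is closed.
size = start_batch(3, a=2, b=5)
print(size, admit_to_ongoing_batch(size, d=4))   # 3 True
```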
2.1 Steady State Distribution at Pre-arrival Epochs
The pre-arrival epoch probabilities, where an arrival sees n customers in the system (queue) and the server is in state j at an arrival epoch, are given by

P^-_{n,j} = (1/λ) P_{n,j}(0),   0 ≤ n ≤ d−1, j = 0;  0 ≤ n ≤ N, j = 1.   (1)
Let f_k be the conditional probability that exactly k batches have been served during an inter-arrival time of a customer. Hence, for all k ≥ 0, we have

f_k = ∫_0^∞ ((μu)^k / k!) e^{−μu} dA(u).
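For a concrete inter-arrival law A(·), the probabilities f_k can be evaluated by numerical integration. The sketch below (ours, assuming SciPy is available) uses Erlang-distributed inter-arrival times purely as an example; any density could be substituted.

```python
import math
from scipy import integrate, stats

def f_k(k, mu, interarrival_pdf, upper=200.0):
    """P{exactly k exponential(mu) services complete during one inter-arrival time}."""
    integrand = lambda u: (mu * u) ** k / math.factorial(k) * math.exp(-mu * u) \
                          * interarrival_pdf(u)
    value, _ = integrate.quad(integrand, 0.0, upper)
    return value

# Example: Erlang-2 inter-arrival times with mean 1/lambda = 0.5 and service rate mu = 1.5.
lam, mu = 2.0, 1.5
erlang_pdf = stats.erlang(2, scale=1.0 / (2 * lam)).pdf
probs = [f_k(k, mu, erlang_pdf) for k in range(6)]
print(probs, sum(probs))   # the f_k sum to (nearly) 1 as k grows
```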
Observing the state of the system at two consecutive embedded points (arrival epochs), the one-step transition probability matrix (TPM) P has four block matrices of the form

P = [ Γ_{d×d}         Δ_{d×(N+1)}
      Θ_{(N+1)×d}     Λ_{(N+1)×(N+1)} ]_{(N+d+1)×(N+d+1)}
and their elements can be obtained from the following expressions:

Γ_{i,j} =
  1        if 0 ≤ i ≤ a−2, 0 ≤ j ≤ a−1, i+1 = j,
  f_0      if a−1 ≤ i ≤ d−1, a ≤ j ≤ d−1, i+1 = j,
  ψ_1(i)   if a−1 ≤ i ≤ d−1, j = 0,
  0        otherwise.

Δ_{i,j} =
  f_0      if i = d−1, j = 0,
  0        otherwise.
Θ_{i,j} =
  f_{[i/b]+1}    if a−1 ≤ i ≤ N−1, a ≤ j ≤ d−1, i/b + 1 = j,
  ψ_2(i)         if a−1 ≤ i ≤ N−1, j = 0, i/b = a−1 or a or ... or b−1,
  ψ_3(i)         if 0 ≤ i ≤ N−1, 1 ≤ j ≤ a−1, i+1 ≥ j, (i+1−j)/b is an integer,
  Θ_{i−1,j}      if i = N,
  0              otherwise.

Λ_{i,j} =
  f_0              if 0 ≤ i ≤ N−1, 1 ≤ j ≤ N, i+1 = j,
  f_{[i/b]+1}      if d−1 ≤ i ≤ N−1, j = 0, i/b = d−1 or d or ... or b−1,
  f_{(i+1−j)/b}    if b ≤ i ≤ N−1, 1 ≤ j ≤ N−b, i+1 ≥ j, (i+1−j)/b is an integer,
  Λ_{i−1,j}        if i = N, 0 ≤ j ≤ N,
  0                otherwise.
Here, the quantities ψ_j(i), j = 1, 2, 3, are defined as:

ψ_1(i) = 1 − ( Σ_{j=1}^{d−1} Γ_{i,j} + Σ_{j=0}^{N} Δ_{i,j} ),   a−1 ≤ i ≤ d−1,
ψ_2(i) = 1 − ( Σ_{j=1}^{d−1} Θ_{i,j} + Σ_{j=0}^{N} Λ_{i,j} ),   0 ≤ i ≤ N−1, i/b = a−1 or a or ... or b−1,
ψ_3(i) = 1 − ( Σ_{j=[i/b]+2}^{d−1} Θ_{i,j} + Σ_{j=0}^{N} Λ_{i,j} ),   0 ≤ i ≤ N−1.
Note that [x] and x/y represent the greatest integer contained in x and the remainder obtained after dividing integer x by integer y, respectively. The pre-arrival epoch probabilities P^-_{n,0}, 0 ≤ n ≤ d−1, and P^-_{n,1}, 0 ≤ n ≤ N, can be obtained by solving the system of equations Π = ΠP, where Π = [P^-_{0,0}, ..., P^-_{a−1,0}, P^-_{a,0}, ..., P^-_{d−1,0}, P^-_{0,1}, ..., P^-_{N,1}].
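Solving Π = ΠP together with the normalization Σ π_i = 1 is a standard linear-algebra step. The small NumPy sketch below is ours and uses an arbitrary toy stochastic matrix rather than the TPM of the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P, sum(pi) = 1, for a finite stochastic matrix P."""
    n = P.shape[0]
    # Stack (P^T - I) with the normalization row and solve in the least-squares sense.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state example (not the TPM of the paper):
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
print(stationary_distribution(P))   # components are non-negative and sum to 1
```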
2.2 Steady State Distribution at Arbitrary Epochs
Using the supplementary variable technique, the steady state equations can be written as

−P^{(1)}_{0,0}(u) = μ Σ_{k=a}^{d−1} P_{k,0}(u) + μ P_{0,1}(u),                                   (2)
−P^{(1)}_{n,0}(u) = μ P_{n,1}(u) + a(u) P_{n−1,0}(0),   1 ≤ n ≤ a−1,                            (3)
−P^{(1)}_{n,0}(u) = −μ P_{n,0}(u) + μ P_{n,1}(u) + a(u) P_{n−1,0}(0),   a ≤ n ≤ d−1,            (4)
−P^{(1)}_{0,1}(u) = −μ P_{0,1}(u) + μ Σ_{k=d}^{b} P_{k,1}(u) + a(u) P_{d−1,0}(0),               (5)
−P^{(1)}_{n,1}(u) = −μ P_{n,1}(u) + μ P_{n+b,1}(u) + a(u) P_{n−1,1}(0),   1 ≤ n ≤ N−b,          (6)
−P^{(1)}_{n,1}(u) = −μ P_{n,1}(u) + a(u) P_{n−1,1}(0),   N−b+1 ≤ n ≤ N−1,                       (7)
−P^{(1)}_{N,1}(u) = −μ P_{N,1}(u) + a(u)[P_{N−1,1}(0) + P_{N,1}(0)].                            (8)
Using the definitions of Laplace transforms on the steady-state differential difference equations given above and the pre-arrival epoch probabilities (1), the relations between distributions of number of customers in the system(queue) at pre-arrival and arbitrary epochs are given by
P_{N,1} = ρb P^-_{N−1,1},                                                            (9)
P_{n,1} = ρb (P^-_{n−1,1} − P^-_{n,1}),   N−b+1 ≤ n ≤ N−1,                          (10)
P_{n,1} = P_{n+b,1} + ρb (P^-_{n−1,1} − P^-_{n,1}),   n = N−b, N−b−1, ..., 1,       (11)
P_{0,1} = Σ_{k=d}^{b} P_{k,1} + ρb (P^-_{d−1,0} − P^-_{0,1}),                       (12)
P_{n,0} = P_{n,1} + ρb (P^-_{n−1,0} − P^-_{n,0}),   a ≤ n ≤ d−1,                    (13)
P_{n,0} = P^-_{n−1,0} − μ P^{*(1)}_{n,1}(0),   1 ≤ n ≤ a−1,                         (14)
where P^{*(1)}_{n,1}(0), 1 ≤ n ≤ a−1, can be obtained by differentiating the Laplace transform equations with respect to θ, setting θ = 0 and using (1). Finally, the only unknown quantity P_{0,0} is obtained by using the normalization condition: P_{0,0} = 1 − ( Σ_{n=1}^{d−1} P_{n,0} + Σ_{n=0}^{N} P_{n,1} ). Once the state probabilities at various epochs are known from the above, the performance measures can be easily obtained. The average number of customers in the queue at an arbitrary epoch is L_q = Σ_{n=0}^{a−1} n P_{n,0} + Σ_{n=0}^{N} n P_{n,1}, the probability of blocking or loss is P_loss = P^-_{N,1}, and the average waiting time in the queue of a customer, using Little's rule, is W_q = L_q/λ', where λ' = λ(1 − P_loss) is the effective arrival rate.
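Once the arbitrary-epoch probabilities and the pre-arrival blocking probability are available, the performance measures follow directly. The helper below is an illustrative sketch of that final step, with hypothetical input lists.

```python
def performance_measures(P0, P1, p_loss, lam):
    """L_q, W_q and the effective arrival rate from the steady-state probabilities.

    P0     -- list of P_{n,0} (server idle / busy with an accessible batch)
    P1     -- list of P_{n,1} (server busy with a non-accessible batch)
    p_loss -- pre-arrival probability that an arrival is blocked
    lam    -- arrival rate lambda
    """
    Lq = sum(n * p for n, p in enumerate(P0)) + sum(n * p for n, p in enumerate(P1))
    lam_eff = lam * (1.0 - p_loss)           # effective arrival rate
    Wq = Lq / lam_eff                        # Little's rule
    return Lq, Wq, lam_eff
```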
3
Numerical Examples and Conclusion
Extensive numerical computations have been done in order to study the behavior of the system, but only a few graphs are presented here for reasons of brevity. A detailed numerical study of the performance measures of this model for arrival distributions such as exponential (M), Erlang (E_k), deterministic (D) and hyperexponential (HE_2) shows the following facts:
– Among all inter-arrival time distributions, the deterministic distribution gives the best performance.
[Fig. 1. Effect of N on P_loss: blocking probability (P_loss) versus buffer size (N) for exponential, Erlang-7, deterministic and hyperexponential inter-arrival distributions; parameters λ = 3, ρ = 0.3, a = 2, d = 4, b = 5.]

[Fig. 2. Effect of a on L_q: average queue length (L_q) versus minimum threshold value (a) for ρ = 0.1, 0.5, 0.9; parameters λ = 2, d = 16, b = 20, N = 25 for the HE_2 distribution.]
– If the minimum threshold value a decreases, the performance of the system increases.
– If the accessibility limit d or the maximum batch size b increases, the performance of the system increases.
– If the buffer size is taken sufficiently large, the loss probabilities tend to zero for all the arrival distributions, as the model behaves as an infinite buffer queue.
The results obtained in this paper may be useful in system design, in telecommunication and in many other related applications. Further, the cost analysis for the optimal choice of the parameters a, d, b and N is left for future investigation.
References
1. Neuts, M.F.: A general class of bulk queues with Poisson input. Annals of Mathematical Statistics 38, 759–770 (1967)
2. Chaudhry, M.L., Templeton, J.G.C.: A First Course in Bulk Queues. Wiley, New York (1983)
3. Medhi, J.: Stochastic Models in Queueing Theory. Academic Press Professional, Inc., San Diego (1991)
4. Gross, D., Harris, C.M.: Fundamentals of Queueing Theory, 3rd edn. John Wiley & Sons, Inc., New York (1998)
5. Kleinrock, L.: Queueing Systems: Theory, vol. I. John Wiley & Sons, Inc., New York (1975)
6. Sivasamy, R.: A bulk service queue with accessible and non-accessible batches. Opsearch: Journal of the Operational Research Society of India 27(1), 46–54 (1990)
7. Goswami, V., Mohanty, J.R., Samanta, S.K.: Discrete-time bulk-service queues with accessible and non-accessible batches. Applied Mathematics and Computation 182(1), 898–906 (2006)
A Distributed Algorithm for Pattern Formation by Autonomous Robots, with No Agreement on Coordinate Compass

Swapnil Ghike (1) and Krishnendu Mukhopadhyaya (2)

(1) Computer Science and Information Systems Group, Birla Institute of Technology and Science, Pilani 333031, India
[email protected]
(2) Advanced Computing and Microelectronics Unit, Indian Statistical Institute, 203 B T Road, Kolkata 700108, India
[email protected]
Abstract. The problem of coordinating a set of autonomous, mobile robots for cooperatively performing a task has been studied extensively over the past decade. These studies assume a set of autonomous, anonymous, oblivious robots. A task for such robots is to form an arbitrary pattern in the two dimensional plane. This task is fundamental in the sense that if the robots can form any pattern, they can agree on their respective roles in a subsequent, coordinated action. Such tasks that a system of robots can perform depend strongly on their common agreement about their environment. In this paper, we attempt to provide a distributed algorithm for pattern formation in the case of no agreement on coordinate axes. We also discuss the limitations of our algorithm. Keywords: Autonomous robots, Pattern Formation, No Compass, Mobile Robots.
1
Introduction
Systems of multiple autonomous mobile robots engaged in collective behaviour (also known as robot swarms) have been extensively studied throughout the past two decades. These robots have motorial abilities allowing them to move; they also have sensorial abilities implying that they can sense their environment. They are anonymous - no robot can be distinguished from the others; they cannot communicate explicitly - they have to observe the positions of other robots and decide accordingly; and they are oblivious - they cannot remember past computations. Moreover, they perform in a completely asynchronous fashion with respect to the time lengths of computation cycles, which brings them closest to practical situations. Most studies assume these robots to be simple points for theoretical analysis. This subject is of interest for a variety of reasons. The main advantage of using multiple robot systems is the ability to accomplish tasks that are infeasible for a single robot, powerful as it may be. The use of simpler, expendable individual robots results in decreased costs, increased
reusability, increased expandability and easy replacement. These systems have immediate applicability in a wide variety of tasks, such as military operations, search and rescue, mining operations and space missions. There are algorithms for the arbitrary pattern formation problem in the cases of partial or full agreement on a coordinate compass. For the no-agreement case, the known research goes only as far as saying that arbitrary pattern formation is not possible. In this paper we address this particular case by proposing a distributed algorithm which can form a large range of patterns even when there is no agreement on the compass.
2
Related Work
A lot of interest has been generated in the engineering area by pioneering research activities, which include the Swarm Intelligence of Beni et al. [5], Parker [6], and Balch and Arkin [7], to cite just a few. There is also research on various related problems, such as the gathering problem by Peleg et al. [9] and flocking by Prencipe [2]. Two main approaches are common for solving the problem of arbitrary pattern formation. The first approach was employed in the investigations of Suzuki and Yamashita [3,4]; the second approach, by Prencipe et al. [2], assumes complete asynchronicity within a single computation cycle. In both approaches, a computational cycle is defined to consist of four phases: 1. Wait, 2. Look, 3. Compute, 4. Move. The difference between the two approaches is that the first assumes the computation cycle to be a single step, i.e., it consists of instantaneous actions, whereas in the second, complete asynchronicity allows the different phases of the cycle to be of arbitrarily different, but finite, time lengths. We believe that complete asynchronicity in the cycle renders the model closer to practical situations. We assume that the robots have infinite visibility in the plane, i.e., their view of the environment is not restricted by distance, and that they are dimensionless points. Thus we follow the CORDA (Coordination and control of a set of robots in a totally distributed and Asynchronous Environment) model of Prencipe, Flocchini et al. [8], since we believe that the complete asynchronicity allowed by this model is closer to practical situations. We also follow the definition of the Arbitrary Pattern Formation Problem given by Prencipe et al. [8]. They have proved that arbitrary pattern formation is not possible in the case of no agreement on a coordinate compass, but there is scope for forming patterns with the exception of only a few types, and this is exactly what we have tried to solve.
3
The Algorithm
Input: An arbitrary pattern P is described as ratios of sides and angles between the sides. There is no agreement on the coordinate system. An initial configuration of point robots in two dimensional plane is given. Assumptions: Unit distance is denoted by u, origin by O. EndC is an expression which means:
If (destination Is Equal to the point where I currently stand)
Then destination := null; END OF COMPUTATION.

Algorithm NoCompass
C1, P1 := MinimumEnclosingCircle(InitialConfigurationRobots), Its centre;
G := Complete Graph obtained by connecting every point of input pattern with every other point of input pattern;
S := Smallest Segment in G out of all possible segments; |S| := u;    (5)
Pattern := PlotLocalPattern();
    /* Plots the pattern in the local coordinate system of the robot. Output is a pattern with
       segment S on the local positive X-axis, with the smallest of the segments in G adjoining S
       making an acute angle with the local positive Y-axis. Their intersection is at the local origin. */
C2, P2 := MinimumEnclosingCircle(Pattern), Its centre;    (10)
FinalPositions := FindFinalPositions(C1, C2, P1, P2, Pattern);
RO := Robot at origin;
NR1 := Robot nearest to O on Positive X-axis at distance > 0;
NF1 := Final position nearest to O on Positive X-axis at distance > 0;
RPeriphery := Robots on periphery of C1;    (15)
FPeriphery := Final Positions on periphery of C1;
Case I Am
* RO:
    If P1 is in FinalPositions
    Then destination := null; EndC.    (20)
    Else destination := Point on Positive X-axis at distance > 0; EndC.
* NR1:
    If (P1 is Not in FinalPositions AND There Is a Robot at P1)
    Then destination := null; EndC.
    Else    (25)
        If (P1 is in FinalPositions AND There Is No Robot at P1)
        Then destination := P1; EndC.
        If (P1 is in FinalPositions AND There Is a Robot at P1)
        Then
            If ((I am Not at NF1) AND    (30)
                (∃ Only One Robot ri such that 0 < |O − ri| < max(|O − NR1|, |O − NF1|) + ε))
            Then destination := NF1; EndC.
            Else destination := null; EndC.
* One of RPeriphery:
    If ((P1 is Not in FinalPositions AND There Is a Robot at P1) OR    (35)
        (P1 is in FinalPositions AND There Is No Robot at P1) OR
        (∃ More than One Robot ri such that 0 < |O − ri| < max(|O − NR1|, |O − NF1|) + ε) OR
        (NR1 Is Not at NF1) OR
        (∃ Pos In FPeriphery such that No Robot Is at Pos))
    Then destination := null; EndC.    (40)
    Else
        FR := Robots Not On One Of The FinalPositions;
        FPT := FinalPositions With No Robots On Them;
        OBS := Robots On One Of The FinalPositions;
        MoveToDestination(FR, FPT, OBS, NF1, C1);    (45)
* Default:
    If ((P1 is Not in FinalPositions AND There Is a Robot at P1) OR
        (P1 is in FinalPositions AND There Is No Robot at P1))
    Then destination := null; EndC.
    Else    (50)
        If (∃ More than One Robot ri such that 0 < |O − ri| < max(|O − NR1|, |O − NF1|) + ε)
        Then MoveRadiallyOut(O, max(|O − NR1|, |O − NF1|), ε);
            /* Move radially outward from O to reach a distance of ε > 0    (55)
               from the circle of radius max(|O − NR1|, |O − NF1|) centred at O. */
        Else
            If NR1 Is Not at NF1
            Then destination := null; EndC.
            Else    (60)
                If ∃ Pos In FPeriphery such that No Robot at Pos
                Then
                    FR := Robots Not On One Of The FinalPositions;
                    FPY := Free FinalPositions On Periphery of C1;
                    OBS := Robots On One Of The FinalPositions;    (65)
                    MoveToDestination(FR, FPY, OBS, NF1, C1);
                Else
                    FR := Robots Not On One Of The FinalPositions;
                    FPT := FinalPositions With No Robots On Them;
                    OBS := Robots On One Of The FinalPositions;    (70)
                    MoveToDestination(FR, FPT, OBS, NF1, C1);
End NoCompass.

The algorithms used in NoCompass can be found in Appendix A.
4
Steps and Correctness
The minimal enclosing circle of the configuration of robots is denoted by C1 and its centre is the origin O. We define the robot nearest to the origin to be the robot which is at the shortest distance greater than zero from the origin; similarly we define the final position nearest to the origin. We denote these two points by NR1 and NF1, respectively. The circle C' is centred at O and has radius equal to the maximum of the distances of NR1 and NF1 from the origin. When we mention ε, we assume it to be a very small positive number. We shall use this terminology in proving the correctness of the algorithm.
4.1 Steps
The order in which the robots form the pattern is:
1. If the origin is not a final position and there is a robot at the origin, the robot moves away on the positive X-axis.
2. If the origin is a final position and there is no robot at the origin, a robot moves to it.
3. Let d be the greater of the distances of NR1 and NF1 from the origin. Robots at distance less than or equal to d move outwards towards the boundary of the circle, until they are at a slightly greater distance than d from the centre.
4. NR1 moves to NF1.
5. If there are unoccupied final positions on the boundary of the circle, robots other than those which lie at final positions on the boundary of the circle move to fill the final positions on the boundary.
6. Robots which are not at final positions move to occupy the free final positions.

Algorithm NoCompass ensures the following while forming the pattern:
– At a time only one robot moves.
– The circle C1 remains intact as an invariant until the pattern is formed.
– If the origin is a final position, then the agreement by the robots on the XY axes and origin remains invariant after a robot moves to the centre of the circle, until the pattern is formed.
– If the origin is not a final position, then the agreement by the robots on the XY axes and origin always remains an invariant until the pattern is formed.

Figure 1 illustrates the formation of a pattern where steps 2, 4, 5 and 6 are followed. In diagram A, NR1 moves towards the origin since there is a final position at the origin. In diagram B, the new NR1 moves to NF1; thus the agreement on the XY axes changes and then remains invariant until the pattern is formed. Diagram C shows that robots fill the peripheral final positions without breaking C1, and diagram D shows how the rest of the final positions are occupied. Arrows indicate the direction of movement of the robots; increasing numbers over the arrows indicate the order in which the robots reach their final positions.
Fig. 1. Formation of pattern for a given configuration of robots. Dark circles are free robots, hollow circles are unoccupied final positions, dark circles within hollow circles are final positions occupied by robots.
4.2
Correctness of Algorithm NoCompass
Theorem 1. All the robots agree on orientation of pattern to be formed in their local coordinate systems. Proof. According to Algorithm NoCompass (lines 1 to 10), every robot r first forms a minimal enclosing circle C1 of initial configuration of robots. It also plots the pattern to be formed in its own local coordinate system. First r forms a Complete Graph G by connecting all points of the input pattern with each other. If there are n points in G, there will be a total of n*(n-1) segments possible by joining these points. Every robot r identifies the smallest segment in this graph G, we call this segment S. Then r plots S on its own local positive X-axis from 0 to 1. Both end points of S form n-2 segments each (which do not include S). Out of these 2*(n-2) segments, r selects smallest segments, let it be called S’. Then r plots S’ in its local coordinate plane such that S’ makes acute angle with local positive Y-axis and arranges the pattern in its local view such that the local origin lies at intersection of S and S’. Ties are broken by (length, angle with local positive Y-axis) pair. Thus, every robot has now the same orientation of local pattern about Y-axis. Also the requirement that S’ makes an acute angle with local positive Y-axis means that all robots agree on orientation of local pattern about X-axis. Finally, agreement on origin to be the Intersection of S and S’ ensures that all robots share a same view of pattern in their local coordinate systems. Then r plots a minimum enclosing circle C2 of the local pattern.
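The first step of this construction, scanning all segments of the complete graph on the pattern points, picking the smallest segment S, and then the smallest segment adjoining it, can be illustrated with plain Euclidean distances. The sketch below is ours; tie-breaking on equal lengths is omitted for brevity.

```python
from itertools import combinations
from math import dist

def smallest_segment(points):
    """Return the endpoints of the shortest segment of the complete graph on `points`."""
    return min(combinations(points, 2), key=lambda seg: dist(*seg))

def adjoining_smallest(points, segment):
    """Among segments sharing an endpoint with `segment` (excluding it), return the shortest."""
    p, q = segment
    candidates = [(a, b) for a, b in combinations(points, 2)
                  if {a, b} != {p, q} and ({a, b} & {p, q})]
    return min(candidates, key=lambda seg: dist(*seg))

pattern = [(0.0, 0.0), (1.0, 0.0), (1.5, 1.2), (3.0, 0.5)]
S = smallest_segment(pattern)
S_prime = adjoining_smallest(pattern, S)
print(S, S_prime)
```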
Theorem 2. Before beginning movement of robots, all the robots agree on the orientation and scale of pattern to be formed. Proof. In Algorithm FindFinalPositions, centre of C2 is shifted to centre of C1 and accordingly pattern points are shifted. Then C2 is compressed or expanded to overlap C1. Depending on the extent of change of diameter of C2, the points of local pattern are also relocated to their new locations. A minimum enclosing circle of a set of points has two or more of those points on its boundary. Thus there are now two or more robots as well as two or more pattern points on boundary of C1. Now every robot agrees on the centre of these circles as the origin. Thus now with an agreement on origin in place, we need to take care that all robots agree on orientation of the pattern about origin. All robots agree on direction of positive X-axis to be the ray from O to NR1. Every robot rotates its local pattern such that pattern point closest to origin lies on this agreed positive X-axis. Now there are two possible directions of positive Y-axis out of which robots must agree on one direction. In both cases, they find out smallest angle M and M’ made by a pattern point on boundary of C1 with the agreed positive X-axis. The direction corresponding to smaller angle out of these two is agreed as positive Y-axis by all robots. Thus all robots agree on XY axes and origin, thus they have same view of pattern to be formed in physical 2-dimensional plane. The two angles M and M’ can turn out to be equal. This tie can be resolved as follows:- NR1 moves slightly towards its local positive X-axis such that it stays nearest to the centre. Thus the tie resulting out of symmetry can be broken. Until this symmetry is broken, no other robot should be allowed to move. However we have not included this mechanism in our algorithm in this paper. Possibility of tie among robots while selecting the nearest robot has not been resolved. Theorem 3. Starting from the initial configuration I0, the robots form a configuration I1 in finite time (i.e. in finite number of cycles) such that NR1 is at NF1. Proof. There can be four independent cases: 1. If the origin is not a final position and there is a robot at origin, the robot moves away on the positive X-axis. (lines 18 to 21) 2. If the origin is a final position and has a robot, robot stays there. (lines 18 to 21) 3. If origin is a final position and has no robot, NR1 will move there. (lines 22 to 27) 4. Origin is not a final position and there is no robot at the centre. In this case, let d be maximum of distance final position nearest to centre of C1 and distance of the robot nearest to centre. Robots at distance less than or equal to d move outwards towards boundary of circle, until they are at a slightly greater distance than d from centre. (lines 46 to 56) Refer proof of Theorem 4. Then NR1 moves to NF1 (line 28 to 33).Until these robots reach their destinations one by one, no other robot moves (lines 35 to 40, 45 to 50).
Thus the configuration I1 is formed. Since, for forming I1, finite distances are covered by the robots in a straight-line fashion, I1 is formed from I0 in finite time.
Theorem 4. Algorithm MoveRadiallyOut moves robots at distance shorter than |O − NR1| and |O − NF1| away towards the boundary of C1, while avoiding collisions.
Proof. Algorithm MoveRadiallyOut selects a single robot based on its polar coordinates and moves it away toward the boundary of C1. While doing so, it avoids collisions by not selecting as destinations those points on the circle centred at O of radius rad + ε which already have a robot (lines 5 to 11). The algorithm also ensures that robots further from O are moved first (lines 1 to 5); hence a robot has no need to take care of collisions before reaching distance rad + ε from O.
Theorem 5. Algorithm MoveToDestination always lets the robots reach their destinations in finite time without disturbing the notion of the positive X-axis, while avoiding collisions.
Proof. MoveToDestination ensures that robots in FR move to final positions in FT one by one without getting closer to O than NR1 standing at NF1. The algorithm selects a pair of a free robot R and an unoccupied final position T such that the angle made by them at O is least (line 1). If the line R-T does not intersect the circle C' centred at O of radius |O − NF1| + ε, then R reaches T in finite time. If the line R-T intersects C', then R travels on the tangents to C' such that the path to T is shortest (lines 17 to 22). In Figure 2, the outer circle is C1 and the inner circle is C'. R should ideally reach point X and then point T, as shown in Figure 2.A. But as seen in Figure 2.B, if R stops at some point M on segment AX because it has already travelled the maximum permissible distance in a single cycle, then in the next computation cycle R will compute the destination to be X' instead of X. This may happen because of the oblivious nature of the robots in the CORDA model. Though this may happen multiple times, and R may keep on computing new intersection points X of new tangents, still at some definite point of time R is bound to reach one of the
Fig. 2. In spite of their oblivious nature, MoveToDestination ensures that robots reach their target points in finite number of cycles
points X within a finite number of cycles, since it must travel a finite positive amount of distance in every cycle. After reaching an intersection point X, R will reach T in finite time in a straight-line fashion (lines 25 to 30). The robots which are between C1 and C' do not move if they are on final positions, so the set OBS does not change as R tries to reach T. Thus R avoids OBS by travelling to a point P which lies just by the side of the obstacle, and not on or within circle C', such that there is no obstacle on segment P-T. Thus, in any case, R will reach T in finite time while avoiding collisions. Also, no robot in FR ever gets closer to O than NR1. Hence the notion of the positive X-axis is not disturbed.
Theorem 6. From configuration I1, the robots form a configuration I2 in finite time such that NR1 is at NF1 and the final positions on the boundary of C1 are occupied.
Proof. Algorithm NoCompass (lines 45 to 50) ensures that robots on the boundary of C1 do not move unless all the final positions on the boundary are occupied. This is important to maintain C1 as an invariant. So only the robots strictly within the boundary of C1, and not the ones nearest to O, can move. One of these robots moves to a final position on the boundary according to Algorithm MoveToDestination. After this, NoCompass (lines 60 to 66) forces another robot to reach a point on the boundary of C1. Thus I2 is formed in finite time.
Theorem 7. From configuration I2, the robots form a configuration I3 which is the same as the pattern to be formed.
Proof. Once the final positions on the boundary of C1 are occupied, free robots use Algorithm MoveToDestination to reach the unoccupied final positions. At the same time, robots already at their final positions do not move. Thus, from the proof of Theorem 5, we can say that I3 is formed in finite time.
Theorem 8. Starting from configuration I0 until the pattern is formed, the agreement on the origin remains an invariant.
Proof. From Theorem 6, we see that unless all the final positions on the boundary of C1 are occupied, no robot on the boundary of C1 moves. Hence C1 remains an invariant, which implies that its centre, the origin, remains an invariant.
Theorem 9. If the origin is not a final position, then starting from configuration I0 the agreement by the robots on the XY axes always remains an invariant until the pattern is formed.
Proof. The agreement on the X-axis can get disturbed only if the notion of the robot nearest to the origin gets disturbed, for one of these reasons: 1. NR1 travels away from the origin so that it no longer remains nearest to the origin; 2. Another robot comes closer to the origin than NR1. As seen in Theorem 3, NR1 does not move until the other robots have moved sufficiently away from the origin, and Theorems 5, 6 and 7 indicate that this robot does not move again after reaching its destination. Thus the first case does not occur. The second case has also been taken care of, as seen in the proofs of Theorems 5 and 6. Hence the X-axis is agreed upon by all
robots and maintained as an invariant. From proof of Theorem 2, we can see that direction and orientation of Y-axis remains invariant since the locations of final positions are dependent on origin and X-axis. Since origin and X-axis are an invariant, hence agreement on Y-axis is also invariant. Theorem 10. If origin is a final position, then the agreement by robots on XY axes remains invariant after a robot moves to centre of circle, until pattern is formed. Proof. The agreement on X-axis can get disturbed if origin is one of the final positions, since NR1 should move to it. But once the final position at origin is occupied, robots agree on a new X-axis and there is no way to disturb this new agreement (Theorem 8). Thus there is a new agreement on Y-axis as well and it remains an invariant until pattern is formed (theorem 9). Note that agreement on origin is never disturbed whether origin is a final position or not. Theorem 11. NoCompass ensures that only one robot moves at a given time. Proof. Referring to steps of NoCompass and said proofs, we see that at a time only one robot moves: Step 1 and step 2 (Theorem 3), Step 3 (Theorems 3 and 4), Step 4(Theorem 3), Step 5(Theorems 5 and 6), Step 6(Theorems 5 and 7).
5
Conclusion
The arbitrary pattern formation problem has been successfully solved by researchers for the cases of partial and full agreement on a coordinate compass. But not much has been stated algorithmically about the no-agreement case, beyond the theoretical proof that arbitrary pattern formation is not possible without agreement. We have proposed an algorithm that can form a large range of patterns; hence it can be used in the most practical scenario, where robots may not share any sense of orientation at all. The limitations of our algorithm are:
– A possible tie in selecting the robot nearest to the origin is not resolved.
– If there are not sufficient robots to occupy the free final positions on the boundary of C1, then the pattern cannot be formed. Maintaining C1 as an invariant is the crucial premise for successful formation of the pattern.
We are working toward removing these limitations. The first case may not occur in most practical situations, since exact equality of distances in a practical pattern formation is rare, or can be avoided with a small deviation from the ideal measures. Our main task is to improve the algorithm to remove the second limitation, by being able to identify the free robots on the boundary of C1 which can be moved without breaking C1 itself. If this is achieved, Algorithm NoCompass will be able to form practically every pattern of robots.
References
1. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Hard tasks for weak robots: The role of common knowledge in pattern formation by autonomous mobile robots. In: Aggarwal, A.K., Pandu Rangan, C. (eds.) ISAAC 1999. LNCS, vol. 1741, pp. 93–102. Springer, Heidelberg (1999)
2. Prencipe, G.: Distributed coordination of a set of autonomous mobile robots. Ph.D. Thesis, University of Pisa (2002)
3. Suzuki, I., Yamashita, M.: Distributed anonymous mobile robots - formation and agreement problems. In: Proc. 3rd Colloquium on Structural Information and Communication Complexity, SIROCCO 1996, pp. 313–330 (1996)
4. Suzuki, I., Yamashita, M.: Distributed anonymous mobile robots: Formation of geometric patterns. SIAM Journal on Computing 28(4), 1347–1363 (1999)
5. Beni, G., Hackwood, S.: Coherent swarm motion under distributed control. In: Proc. 1st Int. Symp. on Distributed Robotic Systems, DARS 1992, pp. 39–52 (1992)
6. Parker, L.E.: Adaptive action selection for cooperative agent teams. In: Proc. 2nd Int. Conf. on Simulation of Adaptive Behavior, pp. 442–450. MIT Press, Cambridge (1992)
7. Balch, T., Arkin, R.C.: Motor schema-based formation control for multiagent robot teams. In: Proc. 1st International Conference on Multiagent Systems, ICMAS 1995, pp. 10–16 (1995)
8. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theoretical Computer Science 407, 412–447 (2008)
9. Agmon, N., Peleg, D.: Fault-tolerant gathering algorithms for autonomous mobile robots. SIAM Journal on Computing 36, 56–82 (2006)
10. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Gathering of asynchronous robots with limited visibility. Theoretical Computer Science 337(1-3), 147–168 (2005)
Appendix A

Algorithm MoveRadiallyOut(O, rad, ε)
C := Circle of radius rad centred at O;
C' := Circle of radius rad + ε centred at O;
If There Is a robot ri other than Me within C' such that
   (|O − ri|, Angle(Positive X-axis, O − ri)) > (|O − r|, Angle(Positive X-axis, O − r))
Then destination := null; EndC.    (5)
Else
    p := Point on C' such that there is no robot at p;
    If There Is an obstacle at p
    Then destination := Point on C' just by the side of p;
    Else destination := p; EndC.    (10)
End MoveRadiallyOut.
Algorithm MoveToDestination(FR, FT, OB, NF1, C1)
(r, p) := MinimumAngle(FR, FT);
    /* It chooses the pair (r, p) such that Angle(Or, Op) = min Angle(Or', Op') over r' ∈ FR, p' ∈ FT.
       Ties are broken by the (radial distance, anticlockwise angle) pair. */
C := Circle of radius |O − NF1| + ε centred at O;
If I Am Not r    (5)
Then destination := null; EndC.
Else /* I Am r */
    If Line passing through r and p Does Not Intersect circle C
    Then
        If No Obstacle Is on the Line passing through r and p    (10)
        Then destination := p; EndC.
        Else
            Safe := (Circular area of C1 - Circular area of C);
            d := Point strictly inside Safe Not occupied by any Obstacle,
                 and such that No Obstacle Is on segment [d-p];
            destination := d; EndC.    (16)
    Else
        tr1, tr2 := Tangents from r to C;
        tp1, tp2 := Tangents from p to C;
        /* If there is only one tangent from r to C, then tr1 = tr2. Similarly for p. */
        xij := Points of intersection of tri, tpj;    (21)
        D := the xij corresponding to the minimum over i = 1, 2 and j = 1, 2;
        If No Obstacle Is on the Line passing through r and D
        Then destination := D; EndC.
        Else    (25)
            Safe := (Circular area of C1 - Circular area of C);
            d := Point strictly inside Safe Not occupied by any Obstacle,
                 and such that No Obstacle Is on segment [d-D];
            destination := d; EndC.
End MoveToDestination.    (30)

Algorithm FindFinalPositions(C1, C2, P1, P2, Pattern)
ShiftCentre(); /* Shifts P2 to P1. Accordingly shifts all Pattern points. */
Pattern1 := SuperImpose(); /* Compresses or expands C2 to overlap C1. */
O := P1;
R1 := Robot nearest to O at distance > 0;    (5)
Positive X-axis := Ray OR1;
RotatePattern(Pattern1); /* Rotates the given pattern such that the Pattern point nearest to O lies on the positive X-axis. */
D1, D2 := Two possible directions of Positive Y-axis;
F1, F2 := Two possible choices of Pattern points;    (10)
Periphery1 := Pattern points on periphery of C1 in case of D1;
Periphery2 := Pattern points on periphery of C1 in case of D2;
m1 := MinAngle(Positive X-axis, Periphery1);
m2 := MinAngle(Positive X-axis, Periphery2);
    /* It chooses p from Periphery1 / Periphery2 such that    (15)
       Angle(Positive X-axis, Op) := min Angle(Positive X-axis, Op') < 180°,
       where p' ∈ Periphery1 / Periphery2, and returns Angle(Positive X-axis, Op). */
If m1 < m2
Then Positive Y-axis := Ray OD1; FinalPositions := F1;
Else Positive Y-axis := Ray OD2; FinalPositions := F2;    (20)
AntiClockWise := Direction from Positive X-axis to Positive Y-axis without crossing Negative Y-axis;
End FindFinalPositions.
Gathering Asynchronous Transparent Fat Robots

Sruti Gan Chaudhuri and Krishnendu Mukhopadhyaya

ACM Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata - 700108, India
[email protected], [email protected]
Abstract. Gathering of multiple robots is a well known and challenging research problem. Most of the existing works consider the robots to be dimensionless (points). Algorithms for Gathering fat robots (unit disc robots) have been reported for 3 and 4 robots. This paper proposes a distributed algorithm for Gathering n (n ≥ 5) autonomous, oblivious, homogeneous, asynchronous, fat robots. The robots are assumed to be transparent and they have full visibility. Keywords: Asynchronous, Fat Robots, Gathering.
1
Introduction
The geometric robot Gathering problem [1] is defined as the gathering of multiple autonomous mobile robots at a point or in a small region not fixed in advance. The world of the robots usually consists of the infinite 2D plane and multiple autonomous, memoryless, homogeneous point robots living in it [4,5]. The robots perform the same algorithm (the Look-Compute-Move cycle) asynchronously. They cannot communicate explicitly; they communicate only by observing the robot positions in the plane. A robot is able to see another robot within its visibility range (which may be infinite). The movement of the robots is not instantaneous (CORDA model) [6]. The robots may have a common coordinate system or individual coordinate systems. Gathering is not possible for a large number of asynchronous robots [7]. However, Convergence of multiple robots has been solved in the fully asynchronous model for an arbitrary number of robots [2]. Czyzowicz et al. [3] worked on Gathering considering the robots as unit discs, called fat robots, and described methods for Gathering 3 and 4 fat robots. We propose a Gathering algorithm for n (n ≥ 5) fat robots, considering the robots to be transparent in order to achieve full visibility.
2
Gathering Multiple Transparent Fat Robots
The desired gathering pattern for transparent fat robots is a circular layered structure of robots with a unique robot at the center C (Fig. 1(a)). Finding a unique robot to put in the center is not possible for 2, 3 and 4 robots due
Fig. 1. Gathering pattern built by multiple robots
to symmetry (Fig. 1(c)). Therefore, at least 5 robots are required to build the gathering pattern (Fig. 1(b)). For Gathering multiple robots, the approach is to select one robot and move it to its computed destination. All other robots remain stationary until the designated robot reaches its destination. At each turn, the robot selected for motion is chosen in such a way that, during the motion, if any other robot looks and computes, this particular robot would be the only one eligible for movement. The robot which is nearest to the destination is the usual choice. If multiple robots are at the same distance from the destination, then a tie-breaking procedure is required to elect one robot for moving.
2.1 Tie Breaking Algorithm
The robots equidistant from the destination are on the circumference of a circle with the destination as its center. They also form a convex polygon.
Definition 1. A polygon is called symmetric if the perpendicular bisector of some side of the polygon divides the polygon into two parts such that one part is the mirror image of the other part. A polygon which is not symmetric is called asymmetric.
Let PP' be a side of a polygon. Let the vertices of the polygon in the clockwise direction, starting from the vertex next to P up to the vertex previous to P', be P1, P2, ..., Pm. Let L be the line on which the line segment PP' lies. The points Q1, Q2, ..., Qm are the feet of the perpendiculars drawn from the vertices P1, P2, ..., Pm onto L, respectively. Let S_P be the ordered list of distances of the feet of the perpendiculars from P (i.e., |PQ1|, |PQ2|, ..., |PQm|). Similarly, S_P' is the ordered list of distances of the feet of the perpendiculars from P' (i.e., |P'Qm|, |P'Qm−1|, ..., |P'Q1|).
Lemma 1. In a symmetric polygon, there exists at least one side PP' for which S_P = S_P'.
Proof. According to Definition 1, there exists a side PP' in a symmetric polygon for which |PPi| = |P'Pm−i+1| for 1 ≤ i ≤ m. This implies that |PQi| = |P'Qm−i+1| for 1 ≤ i ≤ m. Therefore, S_P = S_P'.
Corollary 1. S_P ≠ S_P' for all sides of an asymmetric polygon.
Now we present an algorithm for breaking the tie among m (m ≥ 2) robots. Let Z be a set of m (m > 2) elements, each associated with a side of the polygon and constructed as the 3-tuple (|PP'|, S_P, S_P'). We compare S_P and S_P' lexicographically; whichever is smaller, we keep first.

Algorithm 1. Tie-Breaking algorithm for m robots
Input: m robots on the circumference of a circle.
Output: One elected robot among the m robots, or a report that the polygon is symmetric.
if m = 2 then report the configuration is symmetric;
else
    Sort the set Z lexicographically.
    Find the minimum of Z with S_P ≠ S_P'.
    if Z has an element with S_P = S_P' then report the configuration is symmetric;
    else
        if S_P < S_P' then select P;
        else select P';
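Comparing S_P and S_P' amounts to projecting every other vertex onto the line through PP' and comparing the two distance sequences lexicographically. The sketch below is our own illustration of that check; the ordering convention for `others` follows the construction above and is an assumption of this snippet.

```python
from math import hypot

def perpendicular_foot_lists(P, Pp, others):
    """S_P and S_P' for side P-P'.

    `others` must hold the remaining vertices in clockwise order, starting at the
    vertex next to P and ending at the vertex previous to P' (as in the construction above).
    """
    ux, uy = Pp[0] - P[0], Pp[1] - P[1]
    n = hypot(ux, uy)
    ux, uy = ux / n, uy / n
    s_p = [abs((q[0] - P[0]) * ux + (q[1] - P[1]) * uy) for q in others]
    s_pp = [abs((q[0] - Pp[0]) * ux + (q[1] - Pp[1]) * uy) for q in reversed(others)]
    return s_p, s_pp

def pick_robot(P, Pp, others, eps=1e-9):
    """Return P, P', or None (None means this side looks symmetric, try another side)."""
    s_p, s_pp = perpendicular_foot_lists(P, Pp, others)
    if all(abs(a - b) < eps for a, b in zip(s_p, s_pp)):
        return None
    return P if s_p < s_pp else Pp
```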
Lemma 1 is used to detect any symmetric polygon formed by a set of robots equidistant from the destination. There can be multiple such sets, i.e., the robots within a set are equidistant from the destination but the robots in different sets are not equidistant from the destination. Hence, we get multiple concentric convex polygons whose vertices are on the circumference of a circle with the destination at the center. Let K such concentric polygons be on the plane forming multiple levels. The polygon at level 1 is smallest and the polygon at level K is biggest. Let, the polygon at level 1 is symmetric. L is the line on which the line segment P P (a side of the symmetric polygon at level 1) lies. The perpendiculars are drawn from all vertices of the polygon at level k (2 ≤ k ≤ K) on L. SPk be the ordered list of distances (constructed as before) of the foots of the perpendiculars from P and SPk be the ordered list of distances (constructed as before) of the foots of the perpendiculars from P . Lemma 2. If the polygon at level k (2 ≤ k ≤ K) is symmetric, then there exists at least one side P P at level 1, for which SPk = SPk . Proof. Similar to the proof of lemma 1.
Corollary 2. If the polygon at level k (2 ≤ k ≤ K) is asymmetric, then SP = SP for all sides of the polygon at level 1. Definition 2. A set of K concentric convex polygons is said to be symmetric, if the polygon at level 1 is symmetric as well as the polygons of all levels k (2 ≤ k ≤ K) are symmetric with respect to the polygon at level 1. If symmetric configuration is detected at the level 1 by Algorithm 1, then Algorithm 2 is used for breaking tie with the help of the other levels. Consider the polygon at level k (2 ≤ k ≤ K). For each side P P of the polygon at level 1
construct the 3-tuple (|PP'|, S^k_P, S^k_P'). If level 1 consists of 2 robots, say P and P', then we consider the line PP'.

Algorithm 2. Tie-Breaking algorithm for multiple levels
Input: K concentric convex polygons formed by robots. The polygon at level 1 is symmetric.
Output: One elected robot from level 1, or a report that the configuration is symmetric.
k = 2.
while k ≤ K do
    Sort the set Z^k lexicographically.
    Find the minimum of Z^k with S^k_P ≠ S^k_P'.
    if Z^k has an element with S^k_P = S^k_P' then
        report the configuration is symmetric at level k; k++;
    else
        if S^k_P < S^k_P' then select P;
        else select P';
if k = K + 1 then report the configuration of K concentric convex polygons is symmetric;
2.2
Gathering Algorithm
Now we present the algorithm for Gathering n (n ≥ 5) transparent fat robots. Initially all robots are assumed to be stationary and separated (no two robots touch each other). All of them are placed in set B. The algorithm finds the center C of the Smallest Enclosing Circle (SEC) obtained by set B. A robot (R) which is nearest to C is selected for movement. If there are multiple nearest robots, then Algorithm 1 and/or Algorithm 2 are called to elect a robot. If the configuration formed by the robots is symmetric, then the Gathering algorithm reports that gathering is not possible. The elected robot (R) moves to C to build the desired gathering pattern. If C is not occupied by any other robot, then R becomes the central robot of the gathering pattern. If C is occupied by another robot, then the algorithm finds the last layer ℓ of the gathering pattern. If ℓ is full, then R starts the new layer ℓ+1 (Fig. 2(a)). If ℓ is not full, then R may slide around C (if required) (Fig. 2(b, c)) and is placed in layer ℓ. When R reaches
Fig. 2. Formation of the Gathering pattern around the center (C) of the SEC
its destination, it is removed from set B. The Gathering algorithm terminates when gathering pattern is formed involving all robots in set B (set B becomes null) or symmetric configuration is formed by the robots in set B. Algorithm 3. Gathering of n (n ≥ 5) transparent fat robots Input: A set of separated robots (B). Output: Gathering pattern or report if symmetric configuration encountered. Calculate SEC by the robots in set B. Let C be the center of SEC. while set B is not null or symmetric configuration is not encountered do Sort and level the robots in set B based on their distances from C. The robots equidistant from C, make a level. Let there be K levels of distances. The robots nearest to C are at level 1 and the robots farthest from C are at level K. if one robot is at level 1 then select that robot (R) for moving; else call Algorithm 1 ; if Algorithm 1 elects a robot (R) then select R for moving; else if K > 1 then call Algorithm 2 ; if Algorithm 2 elects a robot (R) from level 1 then select R for moving; else report the configuration is symmetric and Gathering is not possible; exit; else
report the configuration is symmetric and Gathering is not possible; exit;
if C is not occupied by other robot then move R until it reaches C ; else Find the last layer () of the gathering pattern formed around C. if last layer is full then move R and form a new layer( + 1) (Fig. 2(a)); else move and slide R around C (if required) and gets the final position at layer (Fig. 2(b, c)); When R reaches to its destination and stops, it is removed from set B.
Lemma 3. When R is moving towards C and/or sliding around C, its distance from C is still minimum among all the robots in set B. Proof. Let d be the distance between R and C (Fig. 2). However, R is nearest to C and no other robot is inside the circle of radius d. During the motion of R, it remains inside the circle of radius d. The robots other than R remain stationary until R reaches its final position. Hence, R remains nearest to C during its motion. Lemma 4. No obstacle appears in the path of R in motion. Proof. According to lemma 3 no robots can appear inside the circle of radius d. As we consider the robots in set B is separated, no robots will be on the circumference of the circle of radius d touching R. Hence, no obstacle appears in the path of a mobile robot to its destination.
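The first step of Algorithm 3 computes the smallest enclosing circle (SEC) of the robot positions. The paper does not prescribe a particular construction; the sketch below is ours and uses a simple brute-force search over all pairs and triples of points, which is enough for illustration.

```python
from itertools import combinations
from math import dist

def circle_from_two(a, b):
    centre = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return centre, dist(a, b) / 2

def circle_from_three(a, b, c):
    # Circumcircle via perpendicular-bisector intersection; None if the points are collinear.
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    centre = (ux, uy)
    return centre, dist(centre, a)

def smallest_enclosing_circle(points):
    """Brute-force SEC: the smallest candidate circle that covers every point."""
    candidates = [circle_from_two(*pair) for pair in combinations(points, 2)]
    candidates += [c for tri in combinations(points, 3)
                   if (c := circle_from_three(*tri)) is not None]
    best = None
    for centre, r in candidates:
        if all(dist(centre, p) <= r + 1e-9 for p in points):
            if best is None or r < best[1]:
                best = (centre, r)
    return best   # (centre C, radius)
```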
3
Conclusion
In this paper, a distributed algorithm is presented for Gathering n (n ≥ 5) autonomous, oblivious, homogeneous, non-communicative, asynchronous, transparent fat robots having individual coordinate systems. The algorithm ensures that multiple mobile robots will gather if no symmetric configuration is encountered at run time. The continuation of this work is to ensure that no symmetric configuration is formed during the execution of the algorithm. In order to achieve full visibility, this paper assumes that the robots are transparent unit discs. The work on the Gathering of multiple transparent fat robots will be extended to multiple solid fat robots, which obstruct the visibility of other robots.
References
1. Cieliebak, M., Flocchini, P., Prencipe, G., Santoro, N.: Solving the Robots Gathering Problem. In: Baeten, J.C.M., Lenstra, J.K., Parrow, J., Woeginger, G.J. (eds.) ICALP 2003. LNCS, vol. 2719, pp. 1181–1196. Springer, Heidelberg (2003)
2. Cohen, R., Peleg, D.: Convergence Properties of Gravitational Algorithm in Asynchronous Robots Systems. SIAM J. Comput. 34(6), 1516–1528 (2005)
3. Czyzowicz, J., Gasieniec, L., Pelc, A.: Gathering Few Fat Mobile Robots in the Plane. In: Shvartsman, M.M.A.A. (ed.) OPODIS 2006. LNCS, vol. 4305, pp. 350–364. Springer, Heidelberg (2006)
4. Efrima, A., Peleg, D.: Distributed Models and Algorithms for Mobile Robot Systems. In: van Leeuwen, J., Italiano, G.F., van der Hoek, W., Meinel, C., Sack, H., Plášil, F. (eds.) SOFSEM 2007. LNCS, vol. 4362, pp. 70–87. Springer, Heidelberg (2007)
5. Peleg, D.: Distributed Coordination Algorithms for Mobile Robot Swarms: New Directions and Challenges. In: Pal, A., Kshemkalyani, A.D., Kumar, R., Gupta, A. (eds.) IWDC 2005. LNCS, vol. 3741, pp. 1–12. Springer, Heidelberg (2005)
6. Prencipe, G.: Instantaneous Action vs. Full Asynchronicity: Controlling and Coordinating a Set of Autonomous Mobile Robots. In: Restivo, A., Ronchi Della Rocca, S., Roversi, L. (eds.) ICTCS 2001. LNCS, vol. 2202, p. 154. Springer, Heidelberg (2001)
7. Prencipe, G.: Impossibility of Gathering by a Set of Autonomous Mobile Robots. TCS 384(2-3), 222–231 (2007)
Development of Generalized HPC Simulator

W. Hurst (1), S. Ramaswamy (2), R. Lenin (3), and D. Hoffman (4)

(1) Philander Smith College
(2) University of Arkansas at Little Rock
(3) University of Central Arkansas
(4) Acxiom Corporation
[email protected], [email protected], [email protected], [email protected]
Abstract. With ever-growing interest in High Performance Computer (HPC) systems for their inherent capabilities for data consolidation, data modeling, systems simulation, data storage, or intensive computation solutions, there will obviously be ever-growing needs from researchers within governments, businesses, and academia to determine the right HPC framework that meets their computational needs at the least cost. While HPC simulators are available from commercial sources, they are not free and often commit a group to design choices that are not always optimal. In this short paper, we present our preliminary effort to design an open-source, adaptable, free-standing and extendible HPC simulator.
1 Problem Statement and Its Significance
Currently, the only way to begin addressing the questions of capability versus performance is to perform an in-depth analysis of an organization's current needs, the capabilities of the HPC systems currently available, and the development of expected growth in the requirements for all potential system users. This type of evaluation can be effectively performed for varying degrees of growth, for various user group combinations, occurring over ranges of future time-frames, using various potential hardware combinations. The current approach is to just buy an HPC system with an embedded HPC simulator, which may or may not completely meet the needs.
Fig. 1. HPC Overview
Such an option often precludes most of the in-depth analysis specified above and does not satisfactorily resolve the evaluation process. The obstacles met by our group in evaluating and testing various general-purpose HPC job-scheduling algorithms resulted in this pursuit of an open-source, adaptable, free-standing and extendible HPC simulator.
2 Commercial Suppliers
2.1 Software/Services Vendors
As a point of reference, a list of some excellent commercial software/services vendors would include:
http://www.clusterresources.com/
http://www.criticalsoftware.com/
http://www.penguincomputing.com/
http://www.scali.com/
http://www.scyld.com/
http://www.terrasoftsolutions.com/products
2.2 Job Schedulers
Along the same lines, here are a couple links to commercial cluster job schedulers:
http://www.buyya.com/libra/
http://supercluster.org/maui

3 HPC Simulator Implementation Overview
The HPC simulator of this paper was developed using Omnet++ version 4.0 (downloadable from http://www.omnetpp.org/index.php). Figure 2 presents a screen shot of the HPC simulator during a sample run. A close look at the elements of the figure reveals the following sub-module characteristics:
(i) Client[0-N]: each "Client" module is a simulation model of a user creating jobs.
(ii) Switch: acts as a simple router directing all network traffic from source to destination.
(iii) HNode: the Head Node performs the duties of the job scheduler. As jobs come in, they are placed into a "job request" queue and prioritized according to the job's requested number of CPUs. At present the highest priorities are placed on bigger jobs, then "backfilling" is implemented to allocate the remaining CPUs.
(iv) CNode[0-N]: each compute node runs only one job at a time. The CPU processing time used for the run is the processing time requested by the originating "Client" module.
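The head node policy described in (iii) can be illustrated with a small sketch. This is not the Omnet++ implementation; the Job type, the field names and the example job mix are hypothetical, and the policy is simply "largest request first, then fit whatever smaller jobs still have room", as described above.

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    cpus_requested: int
    runtime: float

def schedule(job_queue, free_cpus):
    """Return the jobs to start now: biggest requests first, then backfill."""
    started = []
    # priority to bigger jobs: examine the request queue by CPUs requested, descending
    for job in sorted(job_queue, key=lambda j: j.cpus_requested, reverse=True):
        if job.cpus_requested <= free_cpus:      # backfill whatever still fits
            started.append(job)
            free_cpus -= job.cpus_requested
    for job in started:                          # started jobs leave the request queue
        job_queue.remove(job)
    return started, free_cpus

# hypothetical job mix on a 128-CPU cluster
queue = [Job(1, 64, 3600.0), Job(2, 8, 600.0), Job(3, 96, 7200.0), Job(4, 16, 120.0)]
running, left = schedule(queue, free_cpus=128)
print([j.job_id for j in running], "CPUs left:", left)
```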
Fig. 2. HPC Simulator Framework Developed
These simulated hardware pieces are significant because they are the core components of any real HPC system. Using these core hardware components, an HPC system of any size and capacity can be configured; it is merely a matter of hardware capabilities, or, in terms of software modifications, of varying the component capability constants.
4 Simulation Results
The preliminary results shown below were developed using the HPC cluster simulator software configured with 128 compute nodes, operating on the empirical distribution curves obtained from the real HPC trace log files (from our corporate partner), and running for 12,960,000 seconds of simulated time. The data collected during the simulation run was subsequently compared on the following HPC characteristics: (1) process times; (2) queuing times; (3) service times; and (4) end-to-end delays. These characteristics are significant because they are statistically collectable data fragments that occur naturally during the operation of an HPC system. When a job is submitted to an HPC, it is time-stamped, initializing the collection of the job's end-to-end delay. At this point, the job scheduler evaluates the job requirements, identifies the job type and places it in the job queue; this initializes two new statistical values: service time and queuing time. When the job is pulled from the queue and placed into the run state, the queuing time ends and the processing time begins. When the job has completed processing, the processing time and the service time end. When the job results have been written back to the head node, the end-to-end delay for the job is over. Due to page length restrictions, the only fragment of the information presented here is a comparison between an actual HPC system and the simulated
HPC system. The point of focus here is a comparison between actual HPC processing times and simulated processing times for all five client types (0-4) found on the actual HPC system, titled "Process Time Comparison (Dist vs HPC)" in the chart below.

[Chart: Process Time Comparison (Clients vs Dist [0-4]); time expended versus values, with legend entries Dist 0-4 and Client 0-4]
The processing time comparison graph indicates a clear match between the processing times from the real HPC system and the simulated HPC system for all five client types.
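The four statistics described at the beginning of this section can be computed directly from a job's timestamps. The sketch below only illustrates those definitions; the field names are hypothetical and do not come from the simulator's source.

```python
from dataclasses import dataclass

@dataclass
class JobRecord:
    submitted: float        # job arrives at the head node and is time-stamped
    enqueued: float         # scheduler has evaluated the job and placed it in the queue
    started: float          # job pulled from the queue and placed into the run state
    finished: float         # computation completes on the compute node
    results_written: float  # results written back to the head node

def job_statistics(j: JobRecord):
    """Queuing and service times start when the job enters the queue;
    the end-to-end delay spans submission to result write-back."""
    return {
        "queuing_time":     j.started - j.enqueued,
        "processing_time":  j.finished - j.started,
        "service_time":     j.finished - j.enqueued,
        "end_to_end_delay": j.results_written - j.submitted,
    }

print(job_statistics(JobRecord(0.0, 0.2, 12.5, 112.5, 113.0)))
```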
5 Conclusion
In this paper, we present an overview of an adaptable, extendible, free-standing HPC simulator developed using the Omnet++ 4.0 software development package. Results of the job-scheduling analysis indicate that the simulation results match actual HPC system performance characteristics. For those who are interested, copies of the software, distribution curves, original data, etc. can be found at http://cpsc.ualr.edu/wbhurst/hpc
Performance Analysis of Finite Buffer Queueing System with Multiple Heterogeneous Servers

C. Misra and P.K. Swain
School of Computer Application, KIIT University, Bhubaneswar-751024, India
[email protected], [email protected]
Abstract. This paper analyzes a finite buffer steady-state queueing system with three heterogeneous servers. Jobs arrive in the system according to a Poisson distribution with mean rate λ, and service times are exponentially distributed with different mean values. It is assumed that if the servers are busy, incoming jobs wait in the queue, and the maximum queue size is finite. One example of this situation is routers of different speeds serving the incoming packets in a network.
1 Introduction
Queueing systems have attracted great interest over the last decade for applications such as network traffic congestion control, nodes in telecommunication networks with links of different capacities, and various processes in manufacturing plants; see the references in [1], [2], [3]. Though a vast amount of literature is available on two heterogeneous servers with finite buffer [4], [5], [6], [7], [8], [9], very little attention has been paid to the corresponding finite buffer queue with three heterogeneous servers. Arora [4] presents a two server infinite buffer queue where arrivals follow a Poisson process and service times are exponentially distributed under a bulk service discipline, and obtains the queue length and the distributions of the length of busy periods for the cases where at least one channel is busy or both channels are busy. Aloisio et al. [10] have presented a queueing system consisting of three heterogeneous servers with infinite buffer, where packets arrive in the system according to a Poisson distribution with mean rate λ and service times are exponentially distributed with different mean values; it was assumed that if the servers were busy, incoming packets would wait in the queue, and the queueing and re-sequencing delays were determined. Queues with finite buffer space are more realistic in real life situations than queues with infinite buffer space. Hence we have designed a finite buffer queueing system having multiple heterogeneous servers. However, if the buffer space is full, the arriving customers are considered to be lost. The rest of the paper is organized as follows. Section 2 provides the model description and analysis. The performance measures are given in Section 3. Numerical results in the form of graphs are presented in Section 4. Section 5 concludes the paper.
2 Model Description and Analysis
We consider an M/M/3/N queueing system, a single queue served by three heterogeneous servers, with maximum buffer size N. Jobs arrive in the system according to a Poisson distribution with mean rate λ. It is assumed that the service time of server i follows an exponential distribution with service rate μi, 1 ≤ i ≤ 3, where μ1 > μ2 > μ3. In this model, let π_{i,j,k} be the steady-state probability, where i, j and k represent the first, second and third server and each of i, j, k can be 0 or 1, indicating the idle and busy states respectively. π_{n,3} represents the probability that all three servers are busy and n units are waiting in the queue, where 0 ≤ n ≤ N. Using probabilistic arguments we obtain

λπ_{0,0,0} = μ1 π_{1,0,0} + μ2 π_{0,1,0} + μ3 π_{0,0,1},   (1)
(λ + μ1)π_{1,0,0} = λπ_{0,0,0} + μ2 π_{1,1,0} + μ3 π_{1,0,1},   (2)
(λ + μ2)π_{0,1,0} = μ1 π_{1,1,0} + μ3 π_{0,1,1},   (3)
(λ + μ3)π_{0,0,1} = μ2 π_{0,1,1} + μ1 π_{1,0,1},   (4)
(λ + μ1 + μ2)π_{1,1,0} = λ(π_{1,0,0} + π_{0,1,0}) + μ3 π_{0,3},   (5)
(λ + μ2 + μ3)π_{0,1,1} = μ1 π_{0,3},   (6)
(λ + μ1 + μ3)π_{1,0,1} = μ2 π_{0,3},   (7)
(λ + μ1 + μ2 + μ3)π_{0,3} = λ(π_{0,1,1} + π_{1,1,0} + π_{1,0,1}) + (μ1 + μ2 + μ3)ρπ_{0,3},   (8)
(λ + μ1 + μ2 + μ3)π_{n,3} = λπ_{n−1,3} + (μ1 + μ2 + μ3)π_{n+1,3},  1 ≤ n ≤ N − 1,   (9)
(μ1 + μ2 + μ3)π_{N,3} = λπ_{N−1,3}.   (10)

From equation (10), π_{N−1,3} = π_{N,3}/ρ, where ρ = λ/(μ1 + μ2 + μ3). Using this in (9), we obtain recursively

π_{n,3} = ρ^{n−N} π_{N,3},  0 ≤ n ≤ N − 1.   (11)

Using equations (6), (7), (4) and (8) respectively, we get

π_{0,1,1} = μ1 K1 ρ^{−N} π_{N,3},   π_{1,0,1} = μ2 K2 ρ^{−N} π_{N,3},   (12)
π_{0,0,1} = [μ1 μ2/(λ + μ3)] (K1 + K2) ρ^{−N} π_{N,3},
π_{1,1,0} = (1/ρ − μ1 K1 − μ2 K2) ρ^{−N} π_{N,3},
π_{0,1,0} = [μ1/(ρ(λ + μ2))] (1 − ρμ2 K2 − ρK1(μ1 + μ3)) ρ^{−N} π_{N,3},
π_{1,0,0} = K3 ρ^{−N} π_{N,3},   π_{0,0,0} = K4 ρ^{−N} π_{N,3},

where K1 = 1/(λ + μ2 + μ3), K2 = 1/(λ + μ1 + μ3),
K3 = (1/λ)[(λ + μ1 + μ2)(1/ρ − μ1 K1 + μ2 K2) − (λμ1/(ρ(λ + μ2)))(1 − ρμ2 K2 − ρK1(μ1 + μ3)) − μ3],
K4 = (1/λ)[K3(λ + μ1) − μ2(1/ρ − μ1 K1 + μ2 K2) − μ3 μ2 K2].

Using the normalizing condition we get

π_{N,3} = ρ^N [ (1/ρ)(K1 + K2) + K3 + K4 + μ1 μ2/(λ + μ3) + (μ1/(ρ(λ + μ2)))(1 − ρμ1 K2 − ρK1(μ1 + μ3)) ]^{−1}.

3 Performance Measure
Once the state probabilities at arrival are known, we can evaluate various performance measures such as the average number of customers in the queue (Lq), the average waiting time in the queue (Wq) and the probability of blocking (PBL). The average queue length is given by Lq = Σ_{n=1}^{N} n π_{n,3}. The average waiting time in the queue, using Little's rule, is Wq = Lq/λ', where λ' = λ(1 − PBL) is the effective arrival rate, and the probability of blocking is given by PBL = π_{N,3}.
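As a sanity check on the closed-form expressions, the same performance measures can be obtained numerically by building the generator matrix of the underlying Markov chain and solving the balance equations directly. The sketch below is an illustration, not the authors' code: it assumes that an arriving job is taken by the fastest idle server (the paper does not state the assignment rule), and the values of λ, μ1, μ2, μ3 and N are merely example inputs.

```python
import itertools
import numpy as np

def mm3n_steady_state(lam, mu, N):
    """Steady-state probabilities of an M/M/3/N queue with heterogeneous servers
    (rates mu[0] > mu[1] > mu[2]) and at most N waiting jobs.  A state is (busy, n):
    busy is a tuple of 0/1 flags per server, n is the queue length (n > 0 only
    when all three servers are busy)."""
    states = [(b, 0) for b in itertools.product((0, 1), repeat=3)]
    states += [((1, 1, 1), n) for n in range(1, N + 1)]
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (busy, n), i in idx.items():
        if 0 in busy:                          # arrival: assumed to go to the fastest idle server
            k = busy.index(0)
            Q[i, idx[(busy[:k] + (1,) + busy[k + 1:], 0)]] += lam
        elif n < N:                            # all busy: join the queue if there is room
            Q[i, idx[(busy, n + 1)]] += lam
        for k in range(3):                     # service completions
            if busy[k]:
                if n > 0:                      # a waiting job immediately takes the freed server
                    Q[i, idx[(busy, n - 1)]] += mu[k]
                else:
                    Q[i, idx[(busy[:k] + (0,) + busy[k + 1:], 0)]] += mu[k]
        Q[i, i] = -Q[i].sum()
    A = np.vstack([Q.T, np.ones(len(states))])  # solve pi Q = 0 with sum(pi) = 1
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return states, pi

lam, mu, N = 2.0, (1.5, 0.5, 0.25), 30          # illustrative values (lambda = 2, mu3 = 0.25 as in Fig. 1)
states, pi = mm3n_steady_state(lam, mu, N)
Lq = sum(n * p for (busy, n), p in zip(states, pi))
PBL = pi[states.index(((1, 1, 1), N))]
Wq = Lq / (lam * (1 - PBL))                     # Little's rule with the effective arrival rate
print(Lq, Wq, PBL)
```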
4 Numerical Discussion
Fig. 1 shows the impact of the service rate μ1 on the average queue length Lq, where λ = 2 and μ3 = 0.25. It is observed that Lq decreases as μ1 increases for every value of μ2, and the average queue length Lq is smaller when the service rate μ2 is large. Fig. 2 presents the effect of the service rate μ1 on the mean waiting time Wq, where λ = 2 and μ3 = 0.25. The values of Wq decrease monotonically as the service rate μ1 increases; Wq converges for large values of μ1 and improves further as the service rate μ2 increases. The effect of the buffer size N on the probability of blocking (PBL) is shown in Fig. 3: PBL decreases as the buffer size N increases, and it tends to zero when N is sufficiently large. Fig. 4 shows the impact of the service utilization factor ρ on the average queue length Lq, where λ = 2 and μ2 = 0.5, for various values of μ1. Here, we observe that the average queue length increases exponentially as ρ increases.

Fig. 1. Effect of μ1 on Lq with varying μ2
Fig. 2. Effect of μ1 on Wq with varying μ2
Fig. 3. Effect of buffer size N on PBL
Fig. 4. Effect of ρ on Lq
5 Conclusion
In this paper, we have carried out an analysis of an M/M/3/N heterogeneous server queueing system. We have developed expressions for the steady-state probabilities using a recursive method. Computational experience with a variety of numerical results, in the form of graphs, is discussed to display the effect of the system parameters on the performance measures. This model can be extended to an M/M^b/3/N queue with heterogeneous servers.
References
1. Gross, D., Shortle, J.F., Thompson, J.M., Harris, C.M.: Fundamentals of Queuing Theory. John Wiley and Sons, New York (2008)
2. Kleinrock, L.: Queueing Systems. Theory, vol. 1. Wiley, New York (1975)
3. Trivedi, K.S.: Probability and Statistics with Reliability, Queueing and Computer Science Applications. PHI, New Delhi (2000)
4. Arora, K.L.: Two Server Bulk Service Queueing Process. Operations Research 12, 286–294 (1964)
5. Arora, K.L.: Time Dependent Solution of Two Server Queue Fed by General Arrival and Exponential Server Time Distribution. Operations Research 8, 327–334 (1962)
6. Krishna Kumar, B., Pavai Madheswari, S., Venkatakrishnan, K.S.: Transient Solution of an M/M/2 Queue with Heterogeneous Servers Subject to Catastrophes. Information and Management Sciences 18(1), 63–80 (2007)
7. Larsen, R.L., Agrawala, A.K.: Control of a Heterogeneous Two Server Exponential Queueing System. IEEE Transactions on Software Engineering 9(4), 522–526 (1983)
8. Lin, W., Kumar, P.R.: Optimal Control of a Queueing System with Two Heterogeneous Servers. IEEE Transactions on Automatic Control 29(8), 696–703 (1984)
9. Satty, T.Y.: Time Dependent Solution of Many Servers Poisson Queue. Operations Research 8, 755–772 (1960)
10. Aloisio, C., Junior, N., Anzaloni, A.: Analysis of a Queueing System with Three Heterogeneous Servers, pp. 1484–1488. IEEE, Los Alamitos (1988)
Finite Buffer Controllable Single and Batch Service Queues

J.R. Mohanty
School of Computer Application, KIIT University, Bhubaneswar, 751024, India
[email protected]
Abstract. This paper analyzes a finite buffer discrete-time single and batch service queueing system. The inter-arrival times of successive arrivals are assumed to be independent and geometrically distributed. The customers are served by a single server either one at a time or in batches, depending upon the number of customers present in the system. The service times are assumed to be independent and geometrically distributed. The analysis of this queueing model is based on the use of the recursive method. The steady-state probabilities for the late arrival system with delayed access and some performance measures, such as the average number of customers in the queue and the expected busy period, have been presented along with computational experience.
1 Introduction
In networking and telecommunication systems, the way buffers behave is an important research topic because the performance of the network and the Quality of Service (QoS) experienced by the users are closely related to the buffers' behavior. Information can be lost because of buffer overflow, or information units can suffer excessive delays. The consequences to the users vary depending on the application. For instance, large delays are very bad for real-time applications such as voice, video, etc., while they are more acceptable for non-real-time applications. On the other hand, loss of information is devastating for data, while it is to some extent acceptable for voice. In computer and telecommunication systems, the basic time unit is a fixed time interval called a packet or Asynchronous Transfer Mode (ATM) cell transmission time. The discrete-time queue is an ideal model for analyzing this type of system; see [1]. These queueing models are more accurate and efficient than their continuous-time counterparts for analyzing and designing digital transmitting systems. For an extensive analysis of a wide variety of discrete-time queueing models and related references, see [2], [3], [4], and [5]. Over the last several years, general batch service queues with a number of variants have been studied by many authors; see [6], [7], [8], and [9]. The analysis of a single and batch service M/M/1 queueing system has been carried out by Baburaj and Manoharan [10].
This paper focuses on a discrete-time single and batch service finite buffer queueing system. The interarrival times of successive arrivals are assumed to be independent and geometrically distributed. The customers are served by a single-server either one at a time or in batches depending upon the number of customers present in the system. The service times are also assumed to be independent and geometrically distributed. By using the recursive method the steady-state probabilities and some performance measures such as the average number of customers in the queue and expected busy period with computational experiences have been presented. The results obtained in this paper are simple enough to be suitable for implementation in software packages developed for different management control systems where an automatic threshold control limits alert can be used as a warning mechanism.
2 The Model Description
We consider a single and batch service queueing system. The system is assumed to have a finite capacity of size N, excluding those in service. We assume that the inter-arrival times A of customers are independent and geometrically distributed with probability mass function (p.m.f.) a_n = P(A = n) = (1 − λ)^{n−1} λ, n ≥ 1. The customers are served by a single server one at a time if the system size is less than the control limit c at a service initiation epoch. If the queue length is equal to or more than c, then the server serves them altogether in a batch. At every departure epoch, that is, before initiating service, the server may find the system in one of the following two cases: (i) 0 ≤ n ≤ c − 1 and (ii) c ≤ n ≤ N. In case (i), the server serves one customer at a time according to the first-come-first-served queueing discipline. In case (ii), the server takes all the customers for batch service. The service times S1 in case (i) are independent and geometrically distributed with p.m.f. P(S1 = n) = (1 − μ1)^{n−1} μ1, n ≥ 1. The service times S2 of batches in case (ii) are also independent and geometrically distributed with p.m.f. P(S2 = n) = (1 − μ2)^{n−1} μ2, n ≥ 1. Here we discuss the model for the late arrival system with delayed access; therefore, a potential arrival takes place in (t−, t) and a potential departure occurs in (t, t+), for t = 0, 1, 2, . . . Let P_{0,0}(t−) denote the probability that the server is idle at time t−, and P_{1,n}(t−), n = 0, 1, . . . , c − 1, denote the probability that the server is busy with a single service and there are n customers waiting in the queue at time t−. Further, let P_{2,n}(t−), 0 ≤ n ≤ N, be the probability that the server is busy with a batch service and there are n customers waiting in the queue at time t−. Using probabilistic arguments and observing the state of the system at two consecutive prior-to-arrival epochs t− and (t + 1)−, in the steady state we obtain

0 = −λP_{0,0} + μ1(1 − λ)P_{1,0} + μ2(1 − λ)P_{2,0},   (1)
0 = −(λ + μ1 − 2λμ1)P_{1,0} + λP_{0,0} + μ1(1 − λ)P_{1,1} + μ2(1 − λ)P_{2,1} + λμ2 P_{2,0},   (2)
0 = −(λ + μ1 − 2λμ1)P_{1,n} + λ(1 − μ1)P_{1,n−1} + μ1(1 − λ)P_{1,n+1} + μ2(1 − λ)P_{2,n+1} + λμ2 P_{2,n},  1 ≤ n ≤ c − 2,   (3)
0 = −(λ + μ1 − 2λμ1)P_{1,c−1} + λ(1 − μ1)P_{1,c−2} + μ2(1 − λ)P_{2,c} + λμ2 P_{2,c−1},   (4)
0 = −(λ + μ2 − λμ2)P_{2,0} + λ(1 − μ1)P_{1,c−1} + μ2 Σ_{n=c+1}^{N} P_{2,n} + λμ2 P_{2,c},   (5)
0 = −(λ + μ2 − λμ2)P_{2,n} + λ(1 − μ2)P_{2,n−1},  1 ≤ n ≤ N − 1,   (6)
0 = −μ2 P_{2,N} + λ(1 − μ2)P_{2,N−1}.   (7)

The steady-state probabilities can be obtained from the above equations (7) to (2) recursively. They are, respectively, given as

P_{2,N−1} = [μ2/(λ(1 − μ2))] P_{2,N},   (8)
P_{2,n} = [(λ + μ2 − λμ2)/(λ(1 − μ2))] P_{2,n+1},  n = N − 2, . . . , 0,   (9)
P_{1,c−1} = [1/(λ(1 − μ1))] [(λ + μ2 − λμ2)P_{2,0} − μ2 Σ_{n=c+1}^{N} P_{2,n} − λμ2 P_{2,c}],   (10)
P_{1,c−2} = [1/(λ(1 − μ1))] [(λ + μ1 − 2λμ1)P_{1,c−1} − μ2(1 − λ)P_{2,c} − λμ2 P_{2,c−1}],   (11)
P_{1,n} = [1/(λ(1 − μ1))] [(λ + μ1 − 2λμ1)P_{1,n+1} − μ1(1 − λ)P_{1,n+2} − μ2(1 − λ)P_{2,n+2} − λμ2 P_{2,n+1}],  n = c − 3, . . . , 1, 0,   (12)
P_{0,0} = (1/λ)[μ1(1 − λ)P_{1,0} + μ2(1 − λ)P_{2,0}].   (13)

3 Performance Measures
In this section, we discuss some important operating characteristics of the queueing system. The average number of customers in the queue (Lq) is given by Lq = Σ_{n=1}^{c−1} n P_{1,n} + Σ_{n=1}^{N} n P_{2,n}. The average waiting time in the queue (Wq) can be obtained using Little's rule and is given as Wq = Lq/λ', where λ' = λ(1 − PBL) is the effective arrival rate, and PBL = P_{2,N} represents the probability of blocking.
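For readers who want to reproduce the numerical results of the next section, the recursion (8)-(13) and the measures above translate almost line by line into code. The sketch below is only an illustration of that transcription; the parameter values follow Fig. 1 (λ = 0.9, μ1 = 0.2, μ2 = 0.1, c = 3), while N = 30 is an arbitrary buffer size.

```python
def single_batch_queue(lam, mu1, mu2, c, N):
    """Steady-state probabilities via the recursive relations (8)-(13),
    followed by normalisation; then Lq, Wq and PBL as defined above."""
    P2 = [0.0] * (N + 1)
    P2[N] = 1.0                                    # free constant, fixed by normalisation
    P2[N - 1] = mu2 / (lam * (1 - mu2)) * P2[N]    # eq. (8)
    for n in range(N - 2, -1, -1):                 # eq. (9)
        P2[n] = (lam + mu2 - lam * mu2) / (lam * (1 - mu2)) * P2[n + 1]
    P1 = [0.0] * c
    P1[c - 1] = ((lam + mu2 - lam * mu2) * P2[0]
                 - mu2 * sum(P2[c + 1:])
                 - lam * mu2 * P2[c]) / (lam * (1 - mu1))            # eq. (10)
    if c >= 2:
        P1[c - 2] = ((lam + mu1 - 2 * lam * mu1) * P1[c - 1]
                     - mu2 * (1 - lam) * P2[c]
                     - lam * mu2 * P2[c - 1]) / (lam * (1 - mu1))    # eq. (11)
    for n in range(c - 3, -1, -1):                                   # eq. (12)
        P1[n] = ((lam + mu1 - 2 * lam * mu1) * P1[n + 1]
                 - mu1 * (1 - lam) * P1[n + 2]
                 - mu2 * (1 - lam) * P2[n + 2]
                 - lam * mu2 * P2[n + 1]) / (lam * (1 - mu1))
    P00 = (mu1 * (1 - lam) * P1[0] + mu2 * (1 - lam) * P2[0]) / lam  # eq. (13)
    total = P00 + sum(P1) + sum(P2)                                  # normalisation
    P1 = [p / total for p in P1]
    P2 = [p / total for p in P2]
    Lq = sum(n * P1[n] for n in range(1, c)) + sum(n * P2[n] for n in range(1, N + 1))
    PBL = P2[N]
    Wq = Lq / (lam * (1 - PBL))                                      # Little's rule
    return Lq, Wq, PBL

print(single_batch_queue(lam=0.9, mu1=0.2, mu2=0.1, c=3, N=30))
```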
4 Numerical Results
In this section, we present numerical results in the form of graphs for the steady-state probabilities. We have plotted two graphs. In Figure 1 we see that, for fixed N, the loss probability decreases as c increases; the blocking probability is notably lower for higher values of c than for lower ones. We further observe that, for all values of c considered here, the blocking probability decreases as N increases. In Figure 2, we observe that for a fixed value of the control limit c, Wq decreases monotonically as μ1 increases; further, keeping μ1 fixed, Wq decreases as c increases.

Fig. 1. Effect of N on PBL for different values of c (parameters: λ = 0.9, μ1 = 0.2, μ2 = 0.1)
Fig. 2. Effect of μ1 on Wq for different values of c
References
1. Kobayashi, H., Konheim, A.G.: Queueing models for computer communications system analysis. IEEE Transactions on Communications 25, 2–28 (1977)
2. Bruneel, H., Kim, B.G.: Discrete-Time Models for Communication Systems Including ATM. Kluwer Academic Publishers, Boston (1993)
3. Gravey, A., Hébuterne, G.: Simultaneity in discrete time single server queues with Bernoulli inputs. Performance Evaluation 14, 123–131 (1992)
4. Hunter, J.J.: Mathematical Techniques of Applied Probability. Discrete Time Models: Techniques and Applications, vol. II. Academic Press, New York (1983)
5. Woodward, M.E.: Communication and Computer Networks: Modelling with Discrete-Time Queues. IEEE Computer Society Press, Los Alamitos, California (1994)
6. Medhi, J.: Stochastic Models in Queueing Theory, 2nd edn. Academic Press, New York (2001)
7. Goswami, V., Mohanty, J.R., Samanta, S.K.: Discrete-time bulk-service queue with accessible and non-accessible batches. Applied Mathematics & Computation 182(1), 898–906 (2006)
8. Yi, X.W., Kim, N.K., Yoon, B.K.: Analysis of the queue-length distribution for the discrete-time batch service Geo/G(a,Y)/1/K. European Journal of Operational Research 181(2), 787–792 (2007)
9. Claeys, D., Laevens, K., Walraevens, J., Bruneel, H.: Delay in discrete-time queueing model with batch arrival & batch services. In: Proceedings of the Information Technology: New Generations, pp. 1040–1045 (2008)
10. Baburaj, C., Manoharan, M.: On the Waiting Time Distribution of A Single and Batch Service M/M/1 Queue. International Journal of Information and Management Sciences 9(1), 59–65 (1998)
Enhanced Search in Peer-to-Peer Networks Using Fuzzy Logic

Sirish Kumar Balaga1, K. Haribabu2, and Chittaranjan Hota3
1,2 Computer Sc. & Information Systems Group, Birla Institute of Technology and Science, Pilani, Rajasthan, India
3 Computer Sc. & Information Syst. Group, Birla Institute of Technology and Science, Pilani, Hyderabad Campus, Hyderabad, Andhra Pradesh, India
[email protected], [email protected], [email protected]
Abstract. The efficiency of a Peer-to-Peer file sharing overlay depends on the lookup procedure. The huge size of peer-to-peer networks demands a scalable and efficient lookup algorithm. In this paper we look at a fuzzy logic approach in which we assign probabilities to each node based on the content it has. Lookup is guided by these probabilities. The results show that this algorithm performs much better than standard lookup algorithms. Keywords: Peer-to-Peer, Overlay Networks, Fuzzy logic, Lookup.
1 Introduction

P2P networks are popularly used in file sharing. Overlay networks are application-level logical networks on top of the Internet with their own topology and routing [1]. The major characteristics of P2P networks are shared provision of distributed resources and services, decentralization, and autonomy. P2P overlay networks are categorized as unstructured and structured. An unstructured P2P system is composed of peers joining the network with some loose rules, without any prior knowledge of the topology. In structured P2P overlay networks, the network topology is tightly controlled and content is placed not at random peers but at specified locations that will make subsequent queries more efficient. The lookup problem in a P2P overlay refers to finding any given data item in a scalable manner. More specifically, given a data item stored at some dynamic set of nodes in the overlay, we need to locate it [2]. In unstructured overlays, the peers are organized in a random graph in a flat or hierarchical manner, and the overlay uses flooding to look up content stored on other overlay peers [3]. Flooding is an inefficient lookup algorithm: the messages produced per query increase exponentially with the hop count. In this paper, we propose a solution to the lookup problem using fuzzy logic.
2 Proposed Method

Our method is different from Freenet [4], which uses a symmetric lookup where queries are forwarded from node to node based on the routing table entries, and from iterative
deepening [5], where a query is forwarded first for a few hops and, if no result is found, is forwarded again for an increased number of hops. In our approach we propose a search algorithm that routes based on probabilities. A peer assigns probabilities to its neighbors, and these probabilities are based on several factors. When a peer has to search for a file, it sends the search query to the node that has the highest probability. When a peer receives a search query and the search is unsuccessful on that peer, it selects the node that is a neighbor of any of the already visited nodes and has the highest probability, and forwards the query to that node. This process goes on till the search is successful, all the nodes in the network have been queried, or the number of hops becomes equal to the TTL. The idea is based on the observation that when searching for something, we would first search the place where the probability of finding it is highest. We adapt this idea to peer-to-peer networks: each node is a possible place where the file being searched for may be found, and the node with the higher probability should be searched first. We calculate this probability using fuzzy logic.

2.1 Fuzzy Based Probabilities

All files that a node shares are classified based on the type of the file, i.e. VIDEO, AUDIO, IMAGE, WORD DOC, or PDF. We consider another classification based on the popularity of the file: RARE, NORMAL, and POPULAR. Hence any file has three properties: name, type (video, audio, ...) and popularity (rare, popular or normal), which gives 15 categories of files: popular video files, normal video files, rare video files, popular audio files, etc. A node calculates 15 probabilities for each neighbor, one for each category. To calculate the probabilities, we take as input the number of files of each of the 5 types (video, audio, ...) and the number of popular, normal and rare files. For example, suppose that one peer is sharing 85 files of which 20 are video files, 15 are audio files, 25 are image files, 5 are DOC files and 20 are PDF files. Among the same type of files, the user provides information on which files are popular, which are rare and which are normal. For example, suppose that among the 20 video files that the user is sharing, 7 are popular files (an example of a popular file can be the movie 'Titanic' or Michael Jackson's video songs), 5 are rare files (an example can be a movie from the 1960s or the first video song of the Beatles) and 8 are normal files (neither popular nor rare). We then calculate the percentages of the popular, normal and rare files. The number of files of a particular category, the percentages of the files and the bandwidth of the link between the two peers are the three inputs used to calculate our probabilities. We shall now justify the choice of inputs. The first parameter is the number of files of a particular type. This can be understood from a real life situation in which you want to buy a pair of shoes of a particular brand and you have many shops to choose from. You would first go to the shop that has the larger number of shoes, as it is more probable that you will find the shoe you are looking for in that shop. The other parameter is the popularity of a file. This can be explained using another real world example. Suppose you want a very old book that is out of print. It is more probable that you will find it in a library which keeps old books
rather than in a book shop, though both of them hold almost the same number of books: the probability of finding a rare book is higher in the library, since the library has a larger number of such rare books. So if a peer is searching for a rare video file, it would go to the peer that shares more video files and more rare video files. We take the percentage of rare, normal or popular files as the second parameter rather than just the number itself. This can be easily understood by the following example. Consider two peers A and B, A sharing 50 video files of which 25 are rare and B sharing 100 video files of which 30 are rare. Though A shares fewer rare files than B, the percentage is much higher; hence A has a greater tendency to share rare files than B, and the probability of finding a rare file at A is higher than at B. We could have considered only the second parameter to evaluate the probability, but we also need the first parameter, because the popularity of a file depends on a person's perception: it is entirely up to the individual to decide whether a file is popular, rare or normal, and one user may consider a file popular while another considers it normal. Hence we should also consider the number of files shared as a parameter. The third parameter is the bandwidth of the link between the peers. With these three inputs it is difficult to derive a mathematical formula to calculate the probability, so we have made use of fuzzy logic techniques to calculate it. From the membership function (Fig. 1) we calculate the membership values; the membership function for the bandwidth is shown in Fig. 2.
Fig. 1. Membership function
Using this membership function, we calculate the membership values for bandwidth. Once the inputs are fuzzified, we apply the fuzzy rule base to arrive at the fuzzy output; Fig. 3 shows the fuzzy rule base. From the fuzzy outputs, we use center of gravity defuzzification [6] to arrive at the crisp output, which is our required probability. The membership function for calculating the probability is shown in Fig. 4.

Fig. 2. Membership function for Bandwidth
Whenever a peer joins a network, it sends this probability table to its neighbors and receives the probability tables from its neighbors. Hence every peer in the network holds the probability table of its neighbors. This information is sufficient for our search algorithm, which makes use of these probability values to determine to which peer the search query should be forwarded next.
Fig. 3. Fuzzy rule base
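The inference pipeline just described (fuzzify the three inputs, apply the rule base, defuzzify by centre of gravity) can be sketched as follows. The actual membership functions and rule base of the paper are those of Figs. 1-4 and are not reproduced here, so every shape, scale and rule in this sketch is an illustrative assumption.

```python
def tri(x, a, b, c):
    """Triangular membership value of x for the fuzzy set (a, b, c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def neighbour_probability(num_files, pct_popularity, bandwidth):
    """All three inputs are assumed normalised to [0, 1]; returns a crisp probability."""
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    med  = lambda x: tri(x,  0.0, 0.5, 1.0)
    high = lambda x: tri(x,  0.5, 1.0, 1.5)
    # toy rule base with singleton outputs (LOW = 0.1, MEDIUM = 0.5, HIGH = 0.9)
    rules = [
        (min(high(num_files), high(pct_popularity), high(bandwidth)), 0.9),
        (min(med(num_files),  med(pct_popularity)),                   0.5),
        (max(low(num_files),  low(bandwidth)),                        0.1),
    ]
    # simplified centre-of-gravity defuzzification over the singleton outputs
    num = sum(weight * centre for weight, centre in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0

print(neighbour_probability(0.8, 0.6, 0.7))
```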
We shall now present the search algorithm, assuming that all the probability calculations are done.

2.2 Search Algorithm

The search algorithm is inspired by Bayesian search theory, in which the probabilities of finding an object in an area are determined, the area with the highest probability is searched first and, if the object is not found, the probabilities of the surrounding areas increase. Inspired by this method, we calculate the probability of finding a file at a peer, and we start searching for the file at the peer which has the highest probability of holding it. Every peer stores the probability table of its neighbors. Whenever a peer initiates a search query, the query is forwarded to the neighbor which has the highest probability. Our aim is to search all the higher probability peers first and then the lower probability peers.

Fig. 4. Membership function for calculating probability
The peer which initiates the search for a file sends the search query to the neighbor which has the highest probability. Along with the search query, the information available with it, i.e. the probability tables of its neighbors, is also forwarded. The next peer then uses these probability tables to decide which peer to forward the search query to next, choosing the peer with the highest probability. Hence the nth peer in the search path has the probability tables of all the n−1 peers in the path and uses these tables to decide where to forward the search query. This is very similar to best-first search. Pseudo code for this algorithm is shown in Fig. 5.

ReceiveSearchMessage(Message m) {
  if searchNode(m.filename) == true then
    path = findPath(m)
    sendMessage(path)
  else
    nextNode = findNextBest(m)
    if nextNode.isNeighborOf(this) then
      sendTo(nextNode)
    else
      path = findPath(m, this, nextNode)
      sendMessage(path)
}
ReceiveForwardSearchMessage(Message m) {
  if m.nextNode.isNeighborOf(this) then
    sendTo(m.nextNode)
  else if m.nextNode == this then
    receiveSearchMessage(m)
  else
    path = m.path
    path.removeFirst()
    sendMessage(path)
}
Fig. 5. Pseudo code
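A compact way to read the pseudo code of Fig. 5 is as a best-first search over the overlay, keyed on the fuzzy-derived probabilities. The sketch below captures only that idea; the function names, the toy topology and the probability values are illustrative and not part of the paper.

```python
import heapq

def probabilistic_search(graph, prob, start, has_file, ttl=1024):
    """Of all neighbours of the nodes visited so far, always forward the query to
    the one with the highest probability, until the file is found, the TTL is
    reached or every node has been queried.  graph[n] lists n's neighbours,
    prob[n] is the probability assigned to n, has_file(n) tests whether n shares
    the searched file."""
    visited = set()
    frontier = [(-prob[start], start)]      # max-heap via negated probabilities
    messages = 0
    while frontier and messages < ttl:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        messages += 1
        if has_file(node):
            return node, messages           # search succeeded
        for nb in graph[node]:              # neighbours of visited nodes become candidates
            if nb not in visited:
                heapq.heappush(frontier, (-prob[nb], nb))
    return None, messages                   # not found within the TTL

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
prob = {0: 0.10, 1: 0.70, 2: 0.40, 3: 0.90}
print(probabilistic_search(graph, prob, start=0, has_file=lambda n: n == 3))
```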
3 Simulation Results

We used a 1000-node random graph with an average degree of 10. We used a pure P2P model where there is no distinction among the peers; all peers have equal functionality. There are 500 distinct objects in the network, with the 1st object being the most popular and the 500th object being the least popular. The replica distribution of these objects follows a Zipfian distribution with parameter α = 0.82. The replicas are placed at randomly selected nodes. Queries for these objects also follow the Zipfian distribution (α = 0.82), i.e. the popular objects receive more queries than the less popular objects. The simulation was run for 55 queries. For simulating the flooding search method, the TTL is set to 7, and for the k-random walk 5 walkers are used with the TTL of each set to 1024. We simulate Flooding (FL), Random Walk (RW) and our proposed method. The comparison is done in terms of the number of query messages per query and the query response time. Fig. 6 shows the comparison of the average number of messages per query; we observe a huge difference in the number of messages between our method and the other methods. Fig. 7 shows the average response time for one query, which is lowest for our method. Fig. 8 shows the traffic distribution with respect to time.
Fig. 6. Average Messages per query
Fig. 7. Average Response Time
Fig. 8. Traffic Distribution with respect to time
References
1. El-Ansary, S., Haridi, S.: An Overview of Structured Overlay Networks. In: Wu, J. (ed.) Theoretical and Algorithmic Aspects of Sensor, Ad Hoc Wireless, and Peer-to-Peer Networks, pp. 665–684. CRC Press, London (2005)
2. Balakrishnan, H., Kaashoek, M.F., Karger, D., Morris, R., Stoica, I.: Looking Up Data in P2P Systems. Communications of the ACM 46, 43–48 (2003)
3. Gnutella Protocol Specification Version 0.4, http://www9.limewire.com/developer/gnutella_protocol_0.4.pdf
4. Clarke, I., Sandbert, O., Wiley, B., Hong, T.: A distributed anonymous information storage and retrieval system. In: Federrath, H. (ed.) Designing Privacy Enhancing Technologies: Design Issues in Anonymity and Unobservability, pp. 46–66. Springer, New York (2001)
5. Yang, B., Garcia-Molina, H.: Improving Search in Peer-to-Peer Networks. In: Proceedings of the 22nd International Conference on Distributed Computing Systems (ICDCS 2002), Vienna, Austria, July 2-5. IEEE Computer Society, Washington (2002)
6. Ganesh, M.: Introduction to fuzzy sets and fuzzy logic. Prentice Hall of India Private Limited, New Delhi (2006)
UML-Compiler: A Framework for Syntactic and Semantic Verification of UML Diagrams

Jayeeta Chanda, Ananya Kanjilal, and Sabnam Sengupta
B.P. Poddar Institute of Management & Technology, Kolkata-52, India
[email protected], [email protected], [email protected]
Abstract. UML being semi-formal in nature, it lacks formal syntax, and hence automated verification of design specifications cannot be done. To address this, we propose a UML Compiler that takes context free grammars for UML diagrams and verifies the syntactic correctness of the individual diagrams and their semantic correctness in terms of consistency with other diagrams. This UML-Compiler is part of a framework that consists of two modules. The first module converts the XMI format of UML diagrams, as generated by any standard tool, into string format, and the second module (UML-Compiler) verifies the diagrams. This paper focuses on the second module and proposes a formal context free grammar for two of the commonly used UML diagrams, the class diagram (depicting the static design) and the sequence diagram (depicting the behavioral design), validated by using Lex and YACC. Keywords: UML Formalization, Context Free Grammar, Parse Tree, Formal Verification of UML.
1 Introduction

UML is a widely accepted industry standard for modeling analysis and design specifications of object-oriented systems. But UML lacks the rigor of formal modeling languages, and hence verification of a design specified in UML becomes difficult. Formalization of UML diagrams is now a dominant area of research, and this paper is also a work in that direction. We propose a framework for automatic transformation of a UML diagram from its graphical form into a string that conforms to the grammar we define in Section 4 of this paper, together with a compiler for the string form of the diagrams. For compiling the string form of the diagrams we propose a grammar and implement it in Lex/YACC for syntax verification. The production rules, terminals and non-terminals are chosen such that they adhere to the notation of the UML 2.0 standard. We have also proposed a set of verification criteria that comprises syntactic correctness rules and consistency rules. The consistency rules have been proposed by analyzing the interrelationships among the diagrams so that they together represent a coherent design. Existing UML editors and standard drawing tools do enforce syntactic correctness to some extent, but much needs to be done regarding semantic
verification. Our proposed framework is a step in that direction. The consistency rules will help in verifying the semantic correctness of the design, as they correlate the design models, which is not explicit in the UML standard. Verification of all the rules has been presented based on the proposed grammar. We derive the parse tree representation of the class and sequence diagrams, which is an alternate form of the same grammar; these different formal representations are deliberately used to bring completeness to our formal model.
2 Review of Related Work

Formalization of UML has become a prominent domain of research over the last few years. The desire for automated consistency checking and execution has led software engineers and researchers to focus on this domain. Our work, too, proposes formalization of UML class and sequence diagrams based on a proposed grammar. In this section we discuss a few works done in this domain related to formalization of UML static and dynamic models. In [1], the UML class diagram is formalized in terms of a logic belonging to Description Logics, which are subsets of First-Order Logic, to make it possible to provide computer-aided support during the application design phase for automatic detection of relevant properties, such as inconsistencies and redundancies. An algebraic approach is chosen in [2] because it is more abstract than state-based style languages. Transformation rules for formalizing UML statechart diagrams have been proposed in [3]; the target language for the transformation is Concurrent Regular Expressions (CREs), which are extensions of regular expressions. RSL (the RAISE (Rigorous Approach to Industrial Software) Specification Language) has been used in [4] as a syntactic and semantic reference for UML; an automated tool that implements the translation, and the abstract syntax in RSL for the RSL-translatable class diagrams, are also presented. The integration of the domain modeling method for analyzing and modeling families of software systems with the SOFL formal specification language is discussed in [5]. A UML 1.5 profile named TURTLE (Timed UML and RT-LOTOS Environment) is proposed in [9], which is endowed with a formal semantics given in terms of RT-LOTOS. An approach to formally define UML class diagrams using hierarchical predicate transition nets (HPrTNs) has been presented in [6] with preliminary results; the authors show how to define the main concepts related to class diagrams using HPrTN elements. The semantics presented in [7] captures the consistency of the sequence diagram with the class diagram and state diagram. In [12], Hoare's CSP (communicating sequential processes) has been used to formalize the behaviors of UML Activity Diagrams. In [10], π-calculus has been applied to formalize UML activity diagrams to obtain rich process semantics for activity diagrams; this process model can be automatically verified with the help of π-calculus analytical tools. Hoare's CSP has also been used in [13] to formalize the behaviors of UML activity diagrams, providing an approach to enable model checking during the software analysis or design phase. UML's class diagram and OCL constraints are formalized in [14] using the theorem prover Isabelle with one of its built-in logics, HOL.
The operational semantics of UML sequence diagrams is specified, and this specification is extended to include features for modeling multimedia applications as a case study, in [8]. Dynamic meta-modeling has been proposed for specifying the operational semantics of UML behavioral diagrams based on UML collaboration diagrams that are interpreted as graph transformation rules. The authors in [11] have defined a template to formalize the structured control constructs of the sequence diagram introduced in UML 2.0. Z specifications have been proposed in [13] to reduce risks associated with software development and increase safety and reliability by formalizing the syntax of a subset of the popular UML diagrams (Use Case diagram, Class diagram, and State Machine diagram). Our earlier work [15] also used Z to propose a formal model for six UML diagrams. However, Z is a non-executable language, and hence automated verification is not possible unless it is translated or mapped to executable models like XML using ZML. In all these research works, UML diagrams have been formalized using other formal languages. In this paper, we define a context free grammar for the very widely used UML diagrams, the Class and Sequence diagrams. The CFG is executable, unlike Z notation, and validation of the grammar is done using Lex and YACC, which are open source software.
3 Scope of Work

In this paper we propose a framework for automatic transformation of UML diagrams into a string and then compilation of that string for syntactic correctness and inter-diagram consistency verification, which is a kind of semantic verification. The framework is divided into the following two modules (as shown in Fig. 1):
1) XMI to String Converter: This module converts the XMI form of the UML diagrams into a string that follows the context free grammar proposed in Section 4 of this paper.
2) UML Compiler: This module checks the syntactic correctness of the diagrams and verifies consistency among the diagrams.
In this paper we have defined and implemented the second module, i.e. the UML compiler. Here we have proposed the formalization technique using a context free grammar for Class and Sequence diagrams, the two most commonly used UML models capturing the static and dynamic aspects of an object-oriented system. Based on the grammar and the UML 2.0 standard, we have defined several rules to highlight syntactic correctness of the diagrams and inter-diagram consistency based on the common elements present. We have used Lex and YACC to validate the grammar.
Fig. 1. The Framework for Graphical UML compiler (UML Tools → UML Diagram → XMI format → XMI to String Converter → String Format → UML Compiler → Verified UML)
4 Formal Model

Use cases are realized through Sequence diagrams, which are a set of messages, each having a number ordered with respect to time. Each message is sent between either two Actors, two Objects, or an actor and an object. The messages may be text strings or methods of classes. Each sequence diagram corresponds to one use case. A Class diagram is defined as a set of zero or more relationships and one or more classes. We will be considering only two of the commonly used UML diagrams for formalization: the Class diagram, which depicts the static characteristics, and the Sequence diagram, which depicts the behavioral characteristics. The formal grammar has been defined for each of the diagrams, followed by the parse tree representation. We have used regular expression features (e.g. +, *) in the production rules for simplicity and easy understanding. However, for the Lex/YACC implementation, we have used a recursive definition of the grammar.

4.1 Grammar for Class Diagram

Let the grammar be (S, N, T, P), where S, N, T, P represent the start symbol, non-terminals, terminals and production rules.
T = {char, void, integer, long, short, date, String, class, double, digit, +, -, #, association, generalization, aggregation, (, ), .., :}
All other symbols used in the production rules P are non-terminals (N).
P:
S → class_diagram
class_diagram → classes+ relation*
classes → cname attribute* method_class*
cname → char
attribute → access_specifier data_type attribute_name | data_type attribute_name
access_specifier → + | - | #
data_type → void | integer | long | short | date | String | class | double
attribute_name → char
method_class → access_specifier data_type method_name ( parameter_list )
parameter_list → parameter*
parameter → data_type parameter_name
parameter_name → char
relation → cname multiplicity* cname relationship
relationship → identifier description type | identifier type
type → aggregation | association | generalization
identifier → char
description → char
multiplicity → digit .. digit
char → [a-z A-Z][a-z A-Z 0-9]+
digit → [0-9 *]
4.2 Grammar for Sequence Diagram

P:
S → sequence_diagram
sequence_diagram → lifeline+
lifeline → object_name focus_of_control+ | object_name message+
focus_of_control → focus_ID message+
message → time_order message_description source destination
time_order → digit+
focus_ID → char
message_description → char | method_sequence
source → actor_from | object_from
destination → actor_to | object_to
(actor_from and object_from are the destination actor and destination object)
actor_to → char
actor_from → char
object_to → object_name
object_from → object_name
object_name → char : classname
classname → char
(classname is the name of the class and object_name is the name of the object)
method_sequence → char ( )
char → [a-z A-Z 0-9]+
digit → [0-9]

4.3 Notations Used in the above Grammar and Their Meanings

The notations used in this paper and their meanings are given in Table 1.

Table 1. Notations and their Meanings
1. class_diagram: Class diagram
2. classes: Classes defined in class diagram
3. relation: Relation between classes
4. cname: Name of class in Class diagram
5. attribute: Attribute of a class
6. method_class: Method defined in class diagram
7. life_line: Object life line in sequence diagram
8. object_name: Name of object
9. focus_of_control: Focus of control of a message
10. message: Message of sequence diagram
11. message_description: Description of the message
12. access_specifier: Access specifier for attributes
13. data_type: Data type
14. attribute_name: Name of attribute
15. method_name: Name of method
16. parameter_list: List of parameters for a method
17. parameter_name: Name of parameter
18. method_sequence: Method defined in sequence diagram
19. actor_to: Source actor
20. object_to: Source object
21. time_order: Time order
22. classname and objectname: Name of class and object
The parse tree representation of the above grammar for the class diagram is shown in Fig. 2. The same is given in Fig. 3 for the grammar (in Section 4.2) defined for the sequence diagram.
Fig. 2. Parse Tree for the proposed grammar of class diagram
Fig. 3. Parse tree for the proposed grammar of sequence diagram
5 Defining Verification Criteria

5.1 Correctness Rules

The syntactical rules for a Class Diagram to be a correct one are:
1. A class diagram must have at least one class.
2. A class diagram may or may not have a relationship (association, generalization, aggregation etc.) between two classes.
3. A class must have one and only one name.
4. A class may or may not have one or many attributes and methods.
5. A relation may or may not have multiplicity but it should have relationship.
6. A relationship should have unique id and type and it may or may not have description.
The syntactical rules for a Sequence Diagram to be a correct one are:
1. A sequence diagram must have at least one lifeline and henceforth at least one message.
2. A message must have one and only one time order.
3. A message must be composed of either one of the following combinations:
• Two objects
• One object and one actor
• Two actors
i.e. objects in the Sequence diagram belong to specific Classes.
4. Objects in the sequence diagram have one and only one lifeline.
5. Objects have one or many focus of control.
6. Multiple messages may belong to same focus of control.
7. A message may or may not have one and only one method (0 or 1 method).
Additional syntactic rules for a correct Sequence Diagram:
1. A message may have one object (when a method of that object itself is invoked), but this message cannot be the only message of the sequence diagram. In other words, if there is more than one message, a particular message may have one object:
if msg_i = {tm_o, m, ob_fr, ob_to} then (ob_fr = ob_to) iff (i > 1)
where msg ∈ message, m ∈ method_sequence, ob_to ∈ object_to, ob_fr ∈ object_from and tm_o ∈ time_order.
2. The object name of the lifeline to which a message belongs should be either the source or the destination of the message:
∀ message, if ob_nm ∈ object_name, ob_fr ∈ object_from and ob_to ∈ object_to, then either ob_nm = ob_to or ob_nm = ob_fr.

5.2 Consistency Rules

The class and sequence diagrams, though they represent different views of the system, are related. The sequence diagram uses the operations defined in classes. This is cross-checked to ensure consistency between the two diagrams.
Necessary conditions: the following two rules are necessary conditions for ensuring consistency between the class and sequence diagrams.
Rule 1: Each unique method of messages in the Sequence diagram should be present as a method of a class in the class diagram.
Rule 2: The classes having objects appearing in the Sequence diagram must be present in the Class diagram.
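Assuming the class and sequence diagrams have already been parsed (the paper does this with Lex/YACC), the two consistency rules reduce to simple set checks. The sketch below is only an illustration over hypothetical parsed structures; the class names, methods and messages are invented for the example.

```python
# hypothetical parsed class diagram: class name -> set of its methods
classes = {
    "Order":    {"addItem", "total"},
    "Customer": {"register"},
}
# hypothetical parsed sequence diagram messages:
# (time_order, method_sequence, class of source object, class of destination object)
messages = [
    (1, "register", "Customer", "Customer"),
    (2, "addItem",  "Customer", "Order"),
]

def check_consistency(classes, messages):
    errors = []
    for order, method, src_cls, dst_cls in messages:
        # Rule 2: every class whose objects appear in the sequence diagram
        # must be present in the class diagram
        for cls in (src_cls, dst_cls):
            if cls not in classes:
                errors.append(f"message {order}: class '{cls}' missing from class diagram")
        # Rule 1: every method used as a message must be a method of the
        # class to which the message is sent
        if dst_cls in classes and method not in classes[dst_cls]:
            errors.append(f"message {order}: method '{method}' not defined in class '{dst_cls}'")
    return errors

print(check_consistency(classes, messages) or "consistent")
```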
6 Verification of Rules

6.1 Verifying Syntactic Correctness – Using Grammar

Refer to the regular expressions of the UML models defined in Section 4. For a Class Diagram, the correctness rules can be verified using the grammar in the following way:
1. A class diagram must have at least one class.
2. A class diagram may or may not have a relationship (association, generalization, aggregation etc.) between classes.
Rules (1) and (2) can be verified from the following production rules of the grammar:
class_diagram → class+ relation*
class → name attribute* method_class*
From the above production rules we can derive the regular expressions
class_diagram → class1
class_diagram → class1 class2 relation
3. A class must have one and only one name.
class → name attribute* method_class*
From the above production rule we have the regular expression
class → name
4. A class may or may not have one or many attributes and methods.
class → name attribute* method_class*
class → name
class → name attribute
class → name attribute1 ... attributeN
class → name attribute1 ... attributeN method_class
class → name attribute1 ... attributeN method_class1 ... method_classN
5. A relation may or may not have multiplicity but it should have relationship.
relation → multiplicity* relationship multiplicity*
relation → relationship
relation → multiplicity relationship
relation → multiplicity relationship multiplicity
6. A relationship should have unique id and type and it may or may not have description.
relationship → identifier description type | identifier type
type → aggregation | association | generalization
identifier → char
description → char
relationship → identifier type → id1 aggregation
relationship → identifier description type → id1 char generalization
Therefore, all the correctness rules for the class diagram can be verified using the proposed UML grammar. Notations used in this section represent the same meaning as defined in Section 4.1.
For a Sequence Diagram, the correctness rules can be verified using the grammar in the following way:
1. A sequence diagram must have at least one message.
sequence_diagram → lifeline+
lifeline → object_name focus_of_control+ | object_name message+
focus_of_control → focus_ID message+
2. A message must have one and only one time order.
message → time_order message_description source destination
3. A message is between one and only one source and one and only one destination, and must have a description. The message description can be a string or a method.
message → time_order message_description source destination
message_description → char | method_sequence
4. A message must be composed of either one of the following combinations:
• Two objects: source → object_to, destination → object_from
• One object and one actor: source → actor_to | object_to, destination → actor_from | object_from
• Two actors: source → actor_to, destination → actor_from
5. A sequence diagram has one and only one lifeline, and a lifeline is uniquely identified by its object name.
sequence_diagram → lifeline
lifeline → object_name focus_of_control+ | object_name message+
6. A lifeline has one or many focus of control, or at least one message.
lifeline → object_name focus_of_control+ | object_name message+
Therefore, all the correctness rules for the sequence diagram can be verified using the proposed UML grammar. Notations used in this section have the same meaning as defined in section 4.2.
6.2 Verifying Consistency – Using Grammar
Rule 1: Every method in the Sequence diagram should be present in the Class diagram
In the Sequence diagram, object_to and object_from represent objects of classes which should belong to the class diagram, i.e.
object_to ∈ class ⇒ object_to → name attribute* method_class*
The method in the Sequence diagram belongs to object_to (the object of the class to which the message is sent), i.e.
method_sequence ∈ object_to
⇒ method_sequence ∈ name attribute* method_class*
⇒ method_sequence ∈ method_class
This implies that the method method_sequence within a message of the sequence diagram is represented as a method_class of the class to which the message is sent. This satisfies consistency rule 1.
Rule 2: Every object in the Sequence diagram should belong to a Class in the Class diagram
Again, object_to ∈ class ⇒ object_to → name attribute* method_class*
Again, method_sequence ∈ object_to
⇒ method_sequence ∈ name attribute* method_class*
This implies that method_sequence within the message in the sequence diagram belongs to the object of the class (object_to), whose name should be the name of a class in the class diagram that contains the method as a method_class. This satisfies rule 2.
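The grammar-level checks of Section 6.1 can likewise be phrased operationally. A minimal sketch (again our own encoding, for illustration only) that tests three of the sequence-diagram correctness rules used above: at least one message, a complete time_order/description/source/destination for each message, and the lifeline-ownership rule of Section 5.1.

def check_sequence_diagram(lifelines):
    # lifelines: dict object_name -> list of messages owned by that lifeline,
    # a message being the 4-tuple (time_order, description, source, destination)
    all_msgs = [m for msgs in lifelines.values() for m in msgs]
    if not all_msgs:                                 # at least one message
        return False
    if any(len(m) != 4 for m in all_msgs):           # exactly one time order,
        return False                                 # description, source, destination
    for ob_nm, msgs in lifelines.items():            # owning lifeline must be
        if any(ob_nm not in (m[2], m[3]) for m in msgs):   # source or destination
            return False
    return True

print(check_sequence_diagram({"driver": [(1, "move", "driver", "car")],
                              "car": [(2, "stop", "car", "driver")]}))   # True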
7 Conclusion
UML, being a visual modeling language, is widely used for capturing and designing specifications of object-oriented systems. However, it is not a formal model, and hence ambiguities may arise in design specifications between models that represent overlapping but different aspects of the same system. In this paper we propose a UML compiler in which we define a formal model for UML class and sequence diagrams, the two widely used models, which represent static and behavioral aspects. A context-free grammar is proposed for both diagrams for the UML 2.0 standard. A set of verification criteria composed of correctness rules and consistency rules is defined. Using the proposed grammar, each of these rules has been verified and a parse tree for the grammar is generated. Our future work will focus on developing the first module of the tool, which will convert the XMI format of a UML diagram into a string that conforms to the grammar defined in this paper. Formal models can be defined for all other UML diagrams so that consistency among all the models can be verified, which is a kind of semantic verification.
Evolution of Hyperelliptic Curve Cryptosystems Kakali Chatterjee1 and Daya Gupta2 1
Research Scholar, Computer Engineering Deptt Delhi Technological University, India [email protected] 2 HOD, Computer Engineering Deptt Delhi Technological University, India [email protected]
Abstract. Due to short operand size, Hyperelliptic Curve Cryptosystem (HECC) of genus 3 is well suited for all kinds of embedded processor architectures, where resources such as storage, time or power are constrained. In the implementation of HECC, a significant step is the selection of secure hyperelliptic curves on which the Jacobian is constructed and speed up the scalar multiplications in the Jacobians of hyperelliptic curves. In this paper, we have explored various possible attacks to the discrete logarithm in the Jacobian of a Hyperelliptic Curve (HEC) that are to be considered to establish a secure HEC, analysed addition and doubling of divisor which are the prime steps of scalar multiplication and then proposed certain improvements in the existing explicit formula that will result in a performance gain for HECC of genus 3. Keywords: HECC, Scalar Multiplication, Explicit Formula.
1 Introduction HECC was proposed by Koblitz [2] in 1989 based on the discrete logarithm problem on the Jacobian of HEC over finite fields. In HECC, main operations such as key agreement and signing/verifying involve scalar multiplication using a large integer k. Fast scalar multiplication is crucial in some environments such as in hand-held devices with low computational power. Factors like choice of curve, representation of a scalar, multiplication algorithm etc. influence speed of multiplication. Koblitz’s idea to use HEC for cryptographic applications are analyzed in Menezes et al. 1997[1], Kuroki et al.2002[8], Gonda et al.2004[4], Fan et al. 2007[7], Smith 2008 [12]. Pelzl, Wollinger [5] propose a cost effective explicit formula for genus 2 and 3 curves for scalar multiplication which includes inversion round. Usually inversions are a few times costlier than multiplications. An Inversion-Free arithmetic on genus 3 HEC is proposed and implemented by Fan, Wollinger, Wang 2005 [6]. Our proposal is based on explicit formula proposed in [4], [6] for group operation on Genus 3 HEC. Current Research emphasize on finding efficient methods to select secure HEC and fast operations on the Jacobians. Our contributions in this paper are as follows: i)
We explored various possible attacks to the discrete logarithm in the Jacobian of a hyperelliptic curve that are to be considered to establish a secure HEC.
ii) Group operations on a Jacobian have been explored in detail so as to obtain the so-called explicit formula for performing addition and doubling.
iii) We have suggested improvements in the existing explicit formula proposed in [4], [6] for genus 3 curves for efficient scalar multiplication. This will reduce the number of multiplications for addition and inversion-free doubling of a divisor and, as these are the crucial steps for performing a divisor scalar multiplication, this improvement will result in a significant performance gain for HEC of genus 3 over fields of characteristic 2.
The rest of the paper is organized as follows: Section 2 discusses secure HEC; Section 3 presents group operations on a Jacobian for the proposed scheme; Section 4 presents the proposed improvement in the existing explicit formula for scalar multiplication; finally, we conclude the paper in Section 5.
2 Secure Hyperelliptic Curves
Security of HEC is based on the difficulty of solving the discrete logarithm problem in the Jacobian of a HEC. Let Fq^n denote the degree n extension of Fq (q = p^r and p is a prime). The Jacobian J(C; Fq^n) over Fq^n is a finite abelian group and (q^(n/2) - 1)^(2g) ≤ #J(C; Fq^n) ≤ (q^(n/2) + 1)^(2g). The HCDLP in J(C; Fq^n) is: given two divisors D1, D2 defined on J(C; Fq^n) over Fq^n, determine an integer m such that D2 = mD1, provided such an integer m exists. To establish a secure HEC, its Jacobian should satisfy the following conditions:
1) Adleman et al. [10] found a subexponential time algorithm to solve the DL in the Jacobian of a HEC of big genus over a finite field. Curves of higher genera are, therefore, not suitable for cryptographic use (preferably g < 4, with 2g + 1 < log q^n).
2) If the group order is large but is divisible by only small primes, the DLP can be broken by the Pohlig-Hellman attack. Since the time complexity of Pohlig-Hellman's method is proportional to the square root of the largest prime factor of #J(C; Fq^n), it is claimed that this largest prime factor should be at least 160 bits in length.
3) To prevent the attack of Frey [9], which uses the Tate pairing generalization of MOV attacks, the large prime factor of #J(C; Fq^n) should not divide (q^n)^k - 1, where k < (log q^n)^2.
4) To prevent the attack of Ruck [11], the Jacobian of a hyperelliptic curve over the large prime field GF(p) should not have a p-order subgroup.
(a) Addition: First step is to get a divisor Dcomp=D1+ D2; where weight(Dcomp) ≤ 3 and the inequality may be strict. Let Dcomp=(ucomp,vcomp), Let Z=support(D1) u support(D2). It follows that ucomp = u1u2. We want a polynomial vcomp (of smallest degree) such that
the curve y = vcomp passes through Z with proper multiplicity. Explicitly, this "passing through Z condition" translates into a pair of congruence conditions:
vcomp ≡ v1 mod u1 and vcomp ≡ v2 mod u2.
Let S be such that vcomp=v1+u1S. Chinese remainder theorem gives S=(v2–v1)(1/u1 mod u2)mod u2. This gives Dcomp which we reduce twice in succession to get a divisor of weight≤ 3.This further reduction gives reduced Mumford representation for D1+D2. (b) Reduction: The first reduction gives us a divisor DT = (uT , vT ) = reduced(Dcomp), where uT=(v2comp+ hvcomp –f)/(u1u2), vT = -h - vcomp mod uT. Since deg(uT )=4 and deg(vT) =3, DT needs to be reduced. Let DE=(uE,vE)=reduced(DT ), where uE =(v2T + hvT – f) /(uT), vE = -h - vT mod uE. Here weight(DE) ≤ 3. In practice, the first two steps are combined to compute DT directly. A final reduction of DT gives DE. Doubling of a generic divisor in characteristic 2 Let D1 = (u1,v1) be a divisor in a generic position for doubling. We want to compute DE = 2D1 such that weight(DE) ≤ 3. Doubling is done as follows: Let Z = 2 support(D1). We then have ucomp = u12. We now want a polynomial vcomp such that the curve y = vcomp passes through Z. This “passing through Z condition" translates into a congruence condition: vcomp ≡ v1 mod u1. The fact that (ucomp, vcomp) be a Mumford representation corresponds to another congruence condition: v2comp + hvcomp - f ≡ 0 mod ucomp. This gives us Dcomp. Further reduction gives us the reduced Mumford representation for 2D1. This is carried out in two main steps: (a) Doubling: Let Dcomp = (ucomp, vcomp), where ucomp = u21, vcomp ≡ v1 mod u1 v2comp + hvcomp - f ≡ 0 mod u21. Writing vcomp = v1 + Su1, for a polynomial S, it follows from the Chinese remainder theorem that: S=(((v21 + v1h – f)/u1)/(1/h mod u1))mod u1. From the above it follows that deg(S) ≤ 2. Let S = s2x2 + s1x + s0. This gives us Dcomp which needs to be reduced twice in succession to get a divisor of weight ≤ 3. (b) Reduction: Reduction is combined with computation of Dcomp to directly compute DT = (uT , vT ), given by uT = Monic(S2 + (Sh/u1) + (v21+ v1h – f) / u12), vT = -h - vcomp = h + v1 + Su1 mod uT ; Here, since deg(v12+ v1h - f) = 7 and deg(u12) = 6, (v12+v1h – f)/ u12 has degree 1. Since weight(DT) = 4, we reduce this to DE = (uE,vE) = reduced(DT ) of weight 3, where uE = (v2T + hvT – f) /uT, vE = -h - vT mod uT.
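The composition step above can be made concrete with a small amount of polynomial arithmetic. The sketch below works over a toy prime field GF(31) rather than the characteristic-2 fields used in the paper, and the input Mumford-style pairs are made up for illustration; it only demonstrates the Chinese-remainder construction of vcomp, not a full Cantor reduction.

p = 31                                    # toy prime field GF(p)

def trim(a):                              # drop leading zero coefficients
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def padd(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % p for i in range(n)])

def psub(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p for i in range(n)])

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return trim(out)

def pdivmod(a, b):                        # polynomial division with remainder
    a, b = a[:], trim(b[:])
    inv_lc = pow(b[-1], -1, p)            # inverse of the leading coefficient
    q = [0] * max(1, len(a) - len(b) + 1)
    while len(trim(a)) >= len(b) and a != [0]:
        d = len(a) - len(b)
        c = (a[-1] * inv_lc) % p
        q[d] = c
        for i, bi in enumerate(b):
            a[i + d] = (a[i + d] - c * bi) % p
        a = trim(a)
    return trim(q), trim(a)

def pmod(a, b):
    return pdivmod(a, b)[1]

def pinv(a, m):                           # a^{-1} mod m via extended Euclid (gcd assumed 1)
    r0, r1, s0, s1 = m[:], pmod(a, m), [0], [1]
    while r1 != [0]:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, psub(s0, pmul(q, s1))
    c = pow(r0[-1], -1, p)                # normalise the constant gcd to 1
    return trim([(x * c) % p for x in s0])

def compose(u1, v1, u2, v2):
    # Composition step of D1 + D2 (u1, u2 assumed coprime):
    # ucomp = u1*u2, S = (v2 - v1)(1/u1 mod u2) mod u2, vcomp = v1 + u1*S
    ucomp = pmul(u1, u2)
    s = pmod(pmul(psub(v2, v1), pinv(u1, u2)), u2)
    vcomp = padd(v1, pmul(u1, s))
    return ucomp, vcomp

# toy pairs, coefficients listed from lowest degree upwards
u1, v1 = [2, 5, 1], [3, 4]                # u1 = x^2 + 5x + 2, v1 = 4x + 3
u2, v2 = [7, 1, 1], [6, 2]                # u2 = x^2 + x + 7,  v2 = 2x + 6
uc, vc = compose(u1, v1, u2, v2)
assert pmod(psub(vc, v1), u1) == [0] and pmod(psub(vc, v2), u2) == [0]
print("ucomp =", uc, "vcomp =", vc)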
4 Proposed Improvement in Existing Explicit Formula We give the improvement of explicit formula for adding two reduced divisors in projective coordinate system. When inversions are much slower than multiplications (such as in a smart card), we will use addition and inversion-free doubling. Every element of Jacobian can be uniquely represented by a so-called reduced divisor. We will use the notation [u,v] for the divisor represented by u and v. For genus 3 curves, we have commonly [u,v ] = [x3+u2x2+u1x+u0, v2x2+v1x+v0] with u|(v2+hv-f). When using explicit formulae proposed in [4] and [6] to add two reduced divisors or double one reduced divisor, we will obtain another reduced divisor [u' , v'] = [x3+u'2x2+u'1x+u'0, v'2x2+v'1x+v'0]. For each addition and doubling above, we need
one inversion. In order to avoid inversion in doubling a divisor, we introduce a further coordinate Z to collect common denominator of usual 6 coordinates and let [U2,U1, U0,V2,V1,V0,Z] stand for[x3+ (U2/Z)x2+(U1/Z)x+(U0/Z), (V2/Z)x2+(V1/Z)x+(V0/Z)]. 4.1 Proposed Improvement in Existing Explicit Formula for Addition of Divisors of Genus-3 HEC Input:C: Y2 = F(x), F=x7+f5x5+f4x4+f3x3+f2x2+f1x+f0; Reduced divisors D1=(U1,V1) and D2=(U2,V2), U1=x3+u12x2+u11x+u10, V1=v12x2+v11x+v10, U2=x3+u22x2+u21x+u20, V2= v22x2+v21x+v20 Output:- A weight three reduced divisor D3=(U3,V3)=D1+D2 U3 = x3+u32x2+u31x+u30, V3= v32x2+ v31x+v30 Steps:- 1 To Compute Resultant r of U1 and U2 , we use Bezout’s theorem. n The resultant of two polynomials a= ∏ ni=1(x-∀i ) and b= ∏ j=1(x-∃j ) n n is defined by r(a,b) = ∏ i=1∏ j=1(∃j-∀i) . Calculating with the Bezout’s matrix, we get t1= u11u20-u10u21, t2 = u21u20-u10u22, t3=u20-u10,t4= u21-u11,t5= u22-u12, t6= (u21-u11)(u21+u11), t7= (u20-u10)(u21-u11), t8 = u12u21-u11u22, t9= u10u20-(u11- u20)(u22-u12), t10=(u12u20-u10u22)(u22-u12), r = t8t9+ t2(t10-t7) If r=0 then call the Cantor Algorithm. 2 Compute pseudo inverse inv=i2x2+i1x+i0 ≡ r/U1 mod U2 3 S'=s'2x2+s'1x+s0' = rs ≡ (v2-v1)inv mod U2 through Karatsuba Multiplication. Consider two degree-1 polynomials A(x) and B(x). A(x) = a1x + a0; B(x) = b1x + b0 Let D0,D1,D0;1 be auxiliary variables with D0= a0b0; D1= a1b1; D0;1 = (a0 + a1) (b0 + b1) Then polynomial C(x)=A(x)B(x) can be calculated as C(x)=D1x2+(D0;1-D0- D1)x +D0 We need four additions and three multiplications to compute C(x) s'1= t5+t6+(t10-t9)/2- (t8+(t2+t3) (t5t8-t6-u22t10-t9u22t5t8-u22t6-t10), s'2= t7-(t3+t6+t5+(t2+t4) (t5t8-t6+u21t5t8-u21t6-t10-t9) + (t9+t10)/2), s'0=-(t3+t6). If s'2 =0 then call Cantor Algorithm. 4 To compute S= (S'/r) and make a monic; S=x2+s1x+s0 use Montogomery trick. In Montgomery's trick, inverses are computed as follows: Let xl, ….. , xn be the elements to be inverted. Set a1 = x1 and for i = 2,..., n ; compute ai = ai-1 - xi. Then invert an, and compute xn-1 = an-1an-1. Now, for i = n-1, n-2,.,2; compute ai-1 = xi+1 ai+1-1 and xi1 = ai-1ai-1. Finally compute x1-l = a1-l = x2a2-l. This procedure provides xl-1,.. , xn-1 using a total of 3(n-1) multiplications and one inversion. 5 Z=x5+z4x4+z3x3+z2x2+z1x+z0= SU1 (using Toom multiplication) Where z0=u10s0, z1= (t2-t3)/2-t4, z2= (t2+t3)/2-z0, z3= u11+s0+t4, z4=u12+s1. 6 Computing Ut using Karatsuba multiplication. Ut = x4+ut3x3+ut2x2+ut1x+ut0 = (S(Z+2wiV1) – wi2((F-V12)/U1))/U2 7 Compute Vt = vt2x2+vt1x+vt0 ≡ wZ+ V1 mod Ut 8 Compute U3=x3+u32x2+u31x+u30=(F-Vt2)/Ut 9 Compute V3=v32x2+v31x+v30 ≡ Vt modU3 4.2 Proposed Improvement in Existing Explicit Formula for Doubling of Divisor of a Genus-3 HEC over GF(p) We discuss only the fastest doubling formula by using special curves with h(x)=1 in projective representation.
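Steps 3 and 4 of the addition formula above, and Step 6 of the doubling formula below, rely on two low-level subroutines: Karatsuba multiplication of degree-1 polynomials and Montgomery's simultaneous-inversion trick. The following is a minimal sketch of the two subroutines over a toy prime field (the modulus and the test values are ours, not real HECC parameters):

p = 2**13 - 1          # toy prime modulus for illustration

def karatsuba_deg1(a1, a0, b1, b0):
    # (a1*x + a0)*(b1*x + b0) with 3 multiplications instead of 4
    d0 = (a0 * b0) % p
    d1 = (a1 * b1) % p
    d01 = ((a0 + a1) * (b0 + b1)) % p
    return d1, (d01 - d0 - d1) % p, d0          # coefficients of x^2, x, 1

def montgomery_inverses(xs):
    # Simultaneous inversion: 3(n-1) multiplications and a single field inversion
    prefix = [xs[0] % p]
    for x in xs[1:]:
        prefix.append((prefix[-1] * x) % p)     # a_i = a_{i-1} * x_i
    inv_all = pow(prefix[-1], -1, p)            # the one inversion
    invs = [0] * len(xs)
    for i in range(len(xs) - 1, 0, -1):
        invs[i] = (prefix[i - 1] * inv_all) % p
        inv_all = (inv_all * xs[i]) % p
    invs[0] = inv_all
    return invs

print(karatsuba_deg1(3, 5, 7, 11))              # (21, 68, 55): (3x+5)(7x+11)
vals = [17, 123, 4567]
assert all((v * w) % p == 1 for v, w in zip(vals, montgomery_inverses(vals)))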
Input:- C: Y2 =F(x), F= x7+f5x5+f4x4+f3x3+f2x2+f1x+f0; Reduced Divisor D1= [U12,U11,U10,V12,V11,V10,Z] Output:- D'= [U'2,U'1,U'0,V'2,V'1,V'0,Z'] =2D1 Steps:- 1 Before computing resultant, we Compute-ź = z2,U*12= ZU12, U*11= ZU11 , U*10= ZU10 , V*12 = ZV12, V*11= ZV11, V*10 = ZV10 2 To compute the resultant r of U1 and V1. r =[t5t6+t2t7-zt2t4+zt3t1] ź where t1=U11V10-U10V11, t2=U12V10- U10V12, t3=V112, t4=U11V10, t5=U12V11-U11V12, t6=U10V10 - V12U11, t7=V12U12-U10V12-V get the output D=(u',v'). If r=0 then call the Cantor Algorithm. 3 Compute pseudo inverse I = i2x2+i1x+i0 ≡ r/V1modU1 where i2= Zt3-V12t5, i1= U12i2+Zt7, i0=U11i2+U12t7+t6 4 We compute Z=z2x2+z1x+z0=(F-v12)/U1modU1 using Toom’s multiplication like that. Input: R = r2X2 + r1X + r0, S = s1X + s0. Output: T = t3X3 + t2X2 + t1X + t0 = RS I: w1 = (r2 + r1 + r0)(s1 + s0), II: w2 = (r2 - r1 + r0)(-s1 + s0), III: t0 = r0s0 IV: t3 = r2s1, V: t1 = (2t3 + w1 - w2)/2, VI: t2 = (2t0 + w1 - w2)/2 Using Toom’s algorithm, the multiplication of a degree 2 polynomial can be done within 4M instead of 5M by Karatsuba’s multiplication. 5 Compute S' = s2'x2+s1'x+s0'-2rS = Z I modU1 where s2'=Z[t1-t2- t3+(t1+t0)(z2+z0)+t6-(t7+t11)/2, s1' = Z[t4-t1- t 2+t0z0+t1z1]+(t11-t7)/2, s0'=Zt2-t6 6 Make a monic S=x2+(s1'/s2')x +s0'/s2' using Montogomery trick modified. Instead of computing s(x)=s1x+s0≡ v2-v1/u1mod u2 , one first computes the resultant r of u1 and u2 as: inv =r/u1mod u2 (no field inversion needed) and s'(x)=rs≡ (v2-v1).inv modu2. In the latter step, only one inversion is necessary to obtain both r-1 and s-1. Variable s-1 is required in algorithm to make u0 monic and r -1 is required to calculate s(x) = s'r-1. 7 If s2 '=0 then call the Cantor’s algorithm. 8 Compute G=x5+g4x4+g3x3+g2x2+g1x+g0-SU1 9 Compute Ut = x4+ut3 x3+ut2x2+ut1x +ut0 10 Compute Vt=vt3x3+vt2x2+vt1x+vt0 11 Compute U3=x3+u22x2+u21x+u20 12 Compute V3=v32x2+v31x+v30 4.3 Analysis 1) Harley’s algorithm in the most frequent case needs 2I+27M for an addition, 2I+30M for a doubling. This algorithm was improved by [KGMC 02][8] and [PWGP03][5]. [KGMC 02][8] in the most frequent case needs I+81M for an addition, I+74M for a doubling. [PWGP03] in the most frequent case needs I+70M+6S for an addition, I+61M+10S for a doubling. When genus 3 HECC is defined over a prime field, theoretical results shows our explicit formulae will cost I+72M for an addition of divisor, 100 M+10S for a doubling of divisor. 2) For most cryptographic applications based on HEC, the necessary group order is of size at least ≈ 2160. Thus, for HECC over Fq we will need at least g. log2 q ≈ 2160, where g is the genus of the curve. Therefore, we will need a field order q ≈ 240 for genus 4, q≈ 254for genus 3, and q ≈ 280for genus 2 HEC. Hence, one needs 40-bits to 80-bit long operands to compute the group operations for HEC. In the case of ECC and RSA, it is 160 bits and 1024 bits respectively in order to achieve the same security. Thus for its short operand size, HECC is more suitable for implementation in the constrained platforms like the PDA, smartcard, handheld devices etc.
3) We have considered HEC of genus 3 because curves of higher genera (g ≥ 4) are not suitable for cryptographic use, as Adleman et al. [10] found a subexponential time algorithm to solve the DL in the Jacobian of hyperelliptic curves of big genus over a finite field.
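The operand sizes quoted in point 2) of the analysis follow from requiring a group order of about 2^160, i.e. g · log2 q ≈ 160; a quick arithmetic check (our own snippet):

import math
for g in (2, 3, 4):
    bits = math.ceil(160 / g)
    print(f"genus {g}: field of about 2^{bits} (operands of ~{bits} bits)")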
5 Conclusion
HECC has advantages over the existing public key cryptosystems for its short operand size and is more suitable for security products such as smart cards, provided we can find efficient methods to select secure hyperelliptic curves and fast operations on their Jacobians. With this in view, we have explored in this paper various possible attacks that are to be considered to establish a secure HEC, analysed the prime steps of scalar multiplication, and proposed certain improvements in the existing explicit formula to obtain a performance gain for HECC of genus 3. In our view, HECC of genus 3 has the merit to be the preferred cryptosystem in constrained environments.
Acknowledgments
We acknowledge the guidance of Prof. Asok De for this paper.
References 1. Menezes, A., Wu, Y., Zuccherato, R.: An elementary introduction to hyperelliptic curves, http://www.cacr.math.uwaterloo.ca/techreports/1997/ tech-reports97.html 2. Koblitz, N.: Hyperelliptic cryptosystems. Journal of Cryptology 1(3), 139–150 (1989) 3. Cantor, D.G.: Computing in the Jacobian of a hyperelliptic curve. Mathematics of Computation 48, 95–101 (1987) 4. Gonda, M., Matsuo, K., Kazumaro, A., Chao, J., Tsuji, S.: Improvements of addition algorithm on genus 3 hyperelliptic curves and their implementations. In: Proc. of SCIS 2004, pp. 89–96 (2004) 5. Pelzl, J., Wollinger, T., Guajardo, J., Paar, C.: Hyperelliptic Curve Cryptosystems: Closing the Performance Gap to Elliptic Curves, 351–365 (2003), http://eprint.iacr.org/ 6. Fan, X., Wollinger, T., Wang, Y.: Inversion-Free Arithmetic on Genus 3 Hyperelliptic Curves and Its Implementation. In: ITCC 2005, April 4-6, vol. 1, pp. 642–647 (2005) 7. Fan, X., Wollinger, T., Gong, G.: Efficient explicit formulae for genus 3 hyperelliptic curve cryptosystems over binary fields. IET Inf. Secur. 1(2), 65–81 (2007) 8. Kuroki, J., Gonda, M., Matsuo, K., Chao, J., Tsujii, S.: Fast Genus Three Hyperelliptic CurveCryptosystems. In: The 2002 Japan — SCIS 2002, pp. 503–507 (2002) 9. Frey, G., Ruck, H.: A remark concerning m-divisibility and the discrete logarithm in the divisor class group of curves. Mathematics of Computation 62, 865–874 (1994) 10. Adleman, L., De Marrais, J., Huang, M.: A subexponential algorithm for discrete logarithms over the rational subgroup of the Jacobians of large genus hyperelliptic curves over finite fields. In: Huang, M.-D.A., Adleman, L.M. (eds.) ANTS 1994. LNCS, vol. 877, pp. 28–40. Springer, Heidelberg (1994) 11. Ruck, H.G.: On the discrete logarithms in the divisor class group of curves. Mathematics Computation 68, 805–806 (1999) 12. Smith, B.: Isogenies and the Discrete Logarithm Problem in Jacobians of Genus 3 Hyperelliptic Curves. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 163–180. Springer, Heidelberg (2008)
Reliability Improvement Based on Prioritization of Source Code Mitrabinda Ray and Durga Prasad Mohapatra Department of Computer Science and Engineering National Institute of Technology Rourkela, Orissa-769008, India [email protected] http://www.nitrkl.ac.in
Abstract. Even after thorough testing of a program, usually a few bugs still remain. These residual bugs are randomly distributed through out the code. It is observed that bugs in some parts of a program can cause more frequent and more severe failures compared to those in other parts. So, it is possible to prioritize the program elements at the time of testing according to their potential to cause failures. Based on this idea, we have proposed a metric to compute the influence of an object in an objectoriented program. Influence of an element indicates the potential of the element to cause failures. The intensity with which each element is tested is proportionate to its influence value. We have conducted experiments to compare our scheme with related schemes. The results establish that the failure rate can indeed be minimized using our scheme when the software is executed for some duration after the completion of testing phase. Our proposed metric can be useful in applications such as coding, debugging, test case design and maintenance etc. Keywords: Control Dependence Graph, Forward Slicing, Program Testing, Object Slicing, Object-Oriented Programming, Reliability.
1 Introduction
Higher quality means fewer defects remaining undiscovered when the software is handed over to customer. It is already found that software testing accounts for 50% of software development efforts throughout the history of software Engineering [1]. Thus possibility of increasing the testing effort any further appears bleak. As testing is a sample, it is always needed to make decisions about what to test and what not to test, what to do more or less. The problem with most systematic test methods, like white box testing, or black box methods like equivalence partitioning, boundary value analysis or cause-effect graphing, is that they generate too many test cases [1]. Also in traditional white box testing techniques such as statement coverage and branch coverage, each element of the software product is tested with equal thoroughness [1]. As it is not always possible to satisfy these coverage criteria within the testing budget, these testing techniques T. Janowski and H. Mohanty (Eds.): ICDCIT 2010, LNCS 5966, pp. 212–223, 2010. c Springer-Verlag Berlin Heidelberg 2010
do not able to expose all the bugs in the program. Since these standard testing techniques give equal importance to the discovery of all bugs, it is assumed that these residual bugs are uniformly distributed through out the source code. The presence of bugs in some parts cause more severe and frequent failures compared to other parts. This means we need to know bugs in which parts of the source code are causing more frequent failures than others. For example, if a method of a class produces crucial data that is used by methods of many other classes, then a bug in this method would affect many other classes. Upon analyzing the software failure history data from nine large IBM software products, Adams [2] demonstrated that a relatively small proportion of bugs in the software are responsible for majority of reported failures. The empirical reports like [3] propose a Pareto distribution such as 80% of all defects are found in 20% of the modules. Then, the question arises: how to identify the critical parts of the system which require more attention than others at the time of testing? Please note that we use the terms bug, fault and defect interchangeably when no confusion arises. A error done by a programmer results in a fault (bug) in the software source code. The execution of a fault(cause) may cause one or more failures(effect). Defect is a generic term can refer to either a fault or a failure. In this paper, we present a new metric for prioritizing the program elements in the source code on the basis of their involvement with other elements at run time. Our goal is to improve the reliability of a system by minimizing the failure rate, within the testing budget. For this, the test plan includes two items (1) What is most important in the application should be tested first. (2) Test the parts of the code thoroughly in which the presence of a single bug causes the probability of failure high. The first one can be determined by looking at visibility of functions, frequency of use and at the possible cost of failure. For the latter one, we have proposed an algorithm to identify the critical elements in the source code. Once the critical elements are identified, more exhaustive testing has to be carried out to minimize the bugs in those elements. This means we can cut down testing in less critical elements. Critical elements are those that are either executed more frequently than others during the normal operation of the software or the results produced by those elements are used extensively by a large number of other elements. According to Briand et al. [4], a class having high export coupling shows that the class is critical as it is used by more number of classes, but the authors have not explained how to find these critical classes. For further details on import coupling and export coupling of an object, we refer the readers to survey that have been published in [4] and [5]. Testing the behavior of an object is an important task in testing of objectoriented programs. The behavior of an object can be tested by analyzing the dynamic slice of that object at any execution step. In this paper, we have presented a novel approach to extract the dynamic slice of an object. Our slicing method overcomes the limitation of existing graph reachability methods for slicing [6], [7] and [8]. The main limitation in these existing slicing methods is that when the slicing criteria changes, we have to again start from the slicing point. 
The slices for different variables at different nodes are obtained by traversing the
graph several times starting from the slice point. Mund and Mall [9] solved this problem by proposing an inter procedural dynamic slicing algorithm to compute the dynamic slice of procedural programs using only control dependence graph. The advantage of their method is that, the previous results that are saved in memory can be reused instead of starting from the beginning every time. They did not consider object-orientation aspects. We have extended their work to get dynamic slice of object-oriented programs. With this new dynamic slicing algorithm, we have computed influence metric to get the dynamic influence of an object by checking it’s contribution at every execution step. The prioritization of source code of an object-oriented program is a major challenge due to the interdependencies among objects that add complexity to the debugging process (compared to traditional sequential programs). The influence value of an object helps in identifying the criticality of that object. We are prioritizing the objects based on their influence value. The influence metric is used at the time of testing to improve the reliability of a system. Once the influence metric is computed, the next step is to test thoroughly the objects having high influence value at the unit level and integration level. For example, the respective classes of those objects could be tested by generating more number of test sequences for inter class testing or, the path coverage requirement for intra method testing can be made more stringent than other classes. At the unit testing, a lot of models such as class flow graph [10], class control flow graph [11] and specification based class testing based on Finite State Machine [12] have been proposed. The rest of this paper is organized as follows. In Section 2, we review the related work. In Section 3, basic concepts and definitions are described which are used later in this paper. In Section 4, we discuss our proposed metric for objectoriented programs. We present experimental study in Section 5 and conclude the paper in Section 6.
2 Related Work
Our work encompasses on prioritized testing to ensure that the testing process results in great software within the testing budget. We therefore, in this section, specifically concentrate on research results reported in the context of prioritization techniques, at the time of test case selection in test suites and before the construction of test cases. Researchers have developed many test case prioritization techniques to make their testing process more effective. A meaningful prioritization of test cases can enhance the effectiveness of testing with the same testing effort. One of the many ways in which test case prioritization has been done in the past was by defining the core and the non-core functions of a program. This is typically done by using a tool such as a profiler to analyze the average time for which the different functions are executed [13]. The idea behind such execution profile analysis is that when a function is executed for longer time or more frequently compared to other functions during a typical run of the software, any existing errors in these functions are more likely to manifest themselves during the run. However, the
length of time a function is executed does not wholly determine the importance of the function in the perceived reliability of the system. The reason is that the results produced by a function which is executed only for a small duration is extensively used by many other functions. In this case, the functions whose results are widely used by many other functions would have a very high impact on the reliability of the system even though it is itself getting executed for a small duration only. The authors of the papers [14], [15] and [16] have proposed test case prioritization techniques to reduce the cost of regression testing based on total requirement and additional requirement coverage. Their basic aim is to improve a test suite’s fault detection rate. The authors in their research paper [16] have added two major attributes such as test cost and fault severity to each element of the test suit. They have conducted dozens of controlled experiments of prioritization techniques and found that the test suites executed based on prioritization technique always outperform unprioritized test suites in terms of fault detection rate. The authors of the paper [17] have proposed a test case prioritization technique for regression testing using relevant slicing. These discussed prioritization techniques are used to improve testing by selecting appropriate test cases from the pool of test cases, but it is found that test case generation is an expensive process. More time and effort are consumed to create and maintain a large number of test cases. Unlike these discussed test case prioritization techniques, our aim is not to increase the rate of fault detection at the time of regression testing which is applicable during maintenance phase, but to detect the faults from the critical parts of the source code, which are responsible for frequent failures during testing stage of software development. There has been another group of related works which we can classify them as Pre testing effort for improving testing before test cases are constructed. One such research area is code prioritization technique [18]. Li [18] has proposed a priority calculation method based on dominator analysis that prioritizes and highlights the important parts of the code that need to be tested to quickly improve code coverage. Code coverage is a metric that represents how much of the source code for an application is run when the unit tests for the application are run, but in practice, code coverage is giving equal importance to each element of a program and it is not possible to achieve 100% code coverage for a practical software product [1]. Another area of pre testing effort is the statistical testing technique [19]. In this technique, usage models are designed and then test cases are developed from the models. The test suite is designed such that the larger the probability value of an input class, the more number of test cases are designed to take their input values from this class. However, the principal aim of the statistical testing technique is to determine the reliability of a software product rather than testing the product to detect bugs. The statistical testing technique takes a black-box view of the system and does not consider the internal structure of the system in selecting the test cases. Musa is credited with the pioneering work in the field of test suite design using operational profile [20]. However, Musa’s work has concentrated on
selection of black-box test cases compared to our white-box approach. The need to capture the structural (syntactic and semantic) relationships that exist among the elements has been ignored.
3 Basic Concepts and Definitions
Before presenting our proposed algorithm, we first introduce a few definitions that would be used in the algorithm. Def (var) and U se(var) are the nodes in the intermediate graph for defining and using the variable var. During execution of a program, a statement always corresponds to a node N in CDG. Def. 1. RecDef V ar(v) and RecDef Control(N ): Each node defining a variable v is maintaining a data structure named RecDef V ar(v) and each control node (Predicate) N is maintaining RecDef Control(N ) at the time of execution. RecentDef V ar(v) or RecDef Control(N ) based on the node type (defining a variable, a predicate) is updated which is equal to {N ∪ RecentDef V ar(var1 ) ∪ RecentDef V ar(var2 ) . . . ∪ RecentDef V ar(varn ) ∪ RecDef Control(S) ∪ ActiveCallSlice } where { var1 , var2 , . . . vark } are the variables used at node N and S is the most recently executed control node under which node N is executing. ActiveCallSlice stores the information of calling function which is described later on. If N is a loop control node, and the present execution of the node N corresponds to exit from the loop, then RecDef Control(N ) = ∅. Def. 2. ActiveCallSlice: At the time of execution of a program, ActiveCallSlice data structure is maintained to keep the information of most recent function call. At a particular instance of execution time, it represents the node N corresponding to the most recent execution of calling a function. ActiveCallSlice={N ∪ ActiveCallSlice ∪ RecDef Control(S)}, where S is the most recently executed control node under which node N is executing. At the time of function call, the existing ActiveCallSlice is updated by adding some more information. This is explained by an example in Section 4.2. Def. 3. CallSliceStack: It is a stack which stores a relevant sequence of nested function calls during an actual run of a program. Def. 4. ActiveReturnSlice: During execution of a function, it represents the node N corresponding to a return statement. Before execution of the return node N , ActiveReturnSlice = { N ∪ ActiveCallSlice ∪ RecDef V ar(var1 ) ∪ RecDef V ar(var2 ) ∪ · · · ∪ RecDef V ar(vark ) ∪ RecDef Control(S)}, where {var1 , var2 , · · · , vark } are the variables used at node N and S is the most recently executed control node under which node N is executing. Def. 5. F ormal(N, f ), Actual(N, a) When a node N calls a function, some parameters may be passed by value or by reference. If at the calling node N , a is the actual parameter and its corresponding formal parameter is f then Actual(N, a) = f ⇔ F ormal(N, f ) = a.
4 Our Approach for Prioritization of Objects
The influence of an object is computed based on the number of other objects of the given program which use that object directly or indirectly. Sometimes the return value of a method of an object decides the execution of other objects in a program. So the influence of an object is computed based on the number of other objects of the given program which are both control and data dependent on that object, directly or indirectly. First, we construct an intermediate representation called the control dependence graph [21] from an analysis of the source code. Then we run the program with the given data set. In Section 4.1, we present our proposed algorithm that extracts the dynamic slice of an object at any execution point and also computes the influence value of any given object. The proposed algorithm is explained through an example in Section 4.2.
4.1 Computation of Influence Value
Our proposed algorithm updates data structures of the variables defined and used during execution of the program. A temporary set named temp is used to store the updated data structure of current node n temporarily and to check whether the current node is using any node which is used in defining the data members of the input object. Then the current node is added to the influence set of the input object for which influence is calculated, if temp is containing any such node. After the execution of a node is complete, temp is again initialized to null. We are using another data structure named Active object set which is containing all the existing objects during execution. Since all the data members of a class are accessible to it’s methods, we have taken implicitly the data members as call by reference parameters, whenever an object is calling it’s member function. Now, we present our proposed algorithm. Influence of Object Input: Any object used in the program. OutPut 1: Influence of given Object, OutPut 2: Dynamic slice of any existing Object. Influence Obj(Object O) { 1. Construct Control Dependence Graph (CDG) of the object-oriented program P and do initialization before execution of the program P starts. CallSliceStack = ∅, ActiveCallSlice=∅, inf luence(O) = {∅}, total executed nodes = 0, temp = {∅} and Active object set = {∅}. 2. Run the program P with the given set of inputs and repeat the following steps until the program ends or slicing command is given: Let the node n in CDG correspond to the statement s of P . 2.1 Carry out the following before each statement s of the program P is executed
a) If node n represents a method invocation do the following a.1) If node n is a call to a constructor class then store the object in active object set and for each data member d of O do RecDef V ar(O.d)= ∅. If the node n is a call to a destructor class then delete the object from active object set. a.2) Update CallSliceStack and ActiveCallSlice as in Def. 3 and Def. 2. a.3) For each actual parameter a explicitly defined in the calling node n do RecDef V ar(F ormal(n, a)) = RecDef V ar(a) { If the actual parameter is an object then each data member of that objects is actual parameter for that function call.} b) If n is the RET U RN node then update ActiveReturnSlice as in Def. 4. 2.2 Carry out the following after each statement s of the program P is executed. {update the data structure of node n} a) If node n is only defining a variable var and not a call node then Update RecDef V ar(var) as in Def. 1. b) Else if node n is a control node then update RecDef Control(n) as in Def. 1. c) Else if n is a node which represents a method invocation statement then c.1) If the corresponding invoked method returns a value which is defining a variable var in node n then RecDef V ar(var) = ActiveReturnSlice. c.2) Update CallSliceStack and ActiveCallSlice as in Def. 3 and Def. 2 and then set ActiveReturnSlice = ∅ c.3) For each local variable l var of that called function do RecDef V ar(l var) = ∅ d) Store the updated data structure of node n in set temp. e) total executed nodes + + 2.3 Carry out the following to include the current node n in the influence list of object O { Check Influence of the Object} a) If node n is a call to a member function by the input object O or object O is an actual parameter in the function call then mark the node n. b) Else check the set temp to find whether any node of RecDef V ar (O.di ) is used in current node n. If used then mark the node n. { RecDef V ar(O.di ) contains the set of nodes used in defining the data members of object O} c) If n is marked then add n and RecDefControl(S) to Inf luence(O), where S is the most recently executed control node under which node n is executing.
2.4 temp = ∅
{ Object Slicing } 2.5 If a slicing command < n; obj > is given where n is the currently executed node and obj is any object in the Active object set then Dynamic Slice (n; obj) = RecDefVar (obj.d1 ) ∪ RecDefVar (obj.d2 )∪ · · · ∪ RecDefVar (obj.dn ), where di is the i-th data members of obj and RecDefVar(obj.di ) is the updated data structure of di after execution of current node. 2.6 Exit if the execution of program is completed or aborted. 3. After the end of execution, calculate % of Influence of the object O = Number of elements in the set influence (O) × 100 total executed nodes } Complexity Measurement. Each statement of a program is either a control statement or defining a variable or calling a function or an output statement. The data structure used in this program are Active object set, RecDef V ar(var), RecDef Con(n), ActiveDataSlice, ActiveReturnSlice and CallSliceStack. Excluding the last one, each one has maximum size of N , where N is the total number of statements in the program. The size of stack is c ∗ N , where c is the maximum level of call nesting in a program. So, worst case Space complexity is dependent on the level of recursion. Time complexity is linear in the number of statements in a program at run time. 4.2
Working of the above Algorithm
Consider the C++ program in Fig. 1a, which creates a number of objects. The intermediate representation of the program is shown in Fig. 1b. During the initialization step, the algorithm sets CallSliceStack=∅ and ActiveCallSlice =∅. Here we have run the program with input 12 and computed the influence of object bx after the execution complete. Now we have the following for some executed nodes of the program. – After execution of node 10:RecDef V ar(bx.a) = {2, 10}, RecDef V ar(bx.b) = {3, 10} – After execution of node 11: RecDef V ar(cx.a) = {2, 11}, RecDef V ar(cx.b) = {3, 11} – After execution of node 12: RecDef V ar(dx.a)= {2, 12}, RecDef V ar(dx.b) = {3, 12} – Before execution of node 13: CallSliceStack = {13}, ActiveCallSlice = {13} – After execution of node 5: RecDef Control(5) = {5 ∪ RecDef V ar(bx.b) ∪ ActiveCallSlice}={5, 3, 10, 13} – After execution of node 7: RecDef V ar(bx.b) = {7 ∪ RecDef V ar(bx.b) ∪ ActiveCallSlice}={7, 3, 10, 13}
– After execution of node 13: CallSliceStack = {∅}, ActiveCallSlice = {∅} – After execution of node 14: RecDef V ar(n) = {14} – Before execution of node 15: CallSliceStack = {15}, ActiveCallSlice={15}, RecDef V ar(F ormal(n)) = {RecDef V ar(i)} = {14} . – After execution of node 4: RecDef V ar(bx.b) = {4 ∪ RecDef V ar(bx.b)∪ RecDef V ar(i) ∪ ActiveCallSlice}={4, 7, 3, 10, 13, 14, 15} After the end of execution of the program, Influence(bx)= {10,13,15,16,1,4,5, 6,7}. So the influence of bx =(9/20)*100=45%. Dynamic Slicing of Object. The dynamic slice of an object can be extracted at any execution step. The dynamic slice of bx after execution of node 16 is {RecDef V ar(bx.a) ∪ RecDef V ar(bx.b)} is {1, 2, 10, 4, 7, 3, 13, 14, 15, 5, 6, 16}. Proper inspection or review is required only for those nodes instead of whole program to find bugs in object bx.
(a) An Example Program
(b) It’s Control Dependence Graph
Fig. 1. Program with it’s Control Dependence Graph
5 Experimental Studies
We have implemented our proposed algorithm for calculation of influence for JAVA programs. The graphs used in the algorithm are obtained using ANTLR in the ECLIPSE frame work. We have considered two case studies. The programs that have been analyzed were selected from the programming assignments done by students. We made two copies of each of the case study and seeded same numbers and types of bugs in each copy. We have inserted class mutation operators such as Compatible Reference Type(CRT), Constructor(CON), Overriding Method(OVM) and AMC (Access Modifier Changes) [22] to seed bugs. These mutation operators are targeted at object-oriented specific features. Then one hundred number of test cases were designed for each copy. At this point, we emphasize the fact that at the time of testing, our aim is not to achieve complete fault-coverage
with a minimal test suite size, but to improve the reliability of a system using a test suite of a given size. The test suite size would typically be determined by the testers depending upon how much effort they can devote to testing. At this point, careful test case construction is a very important problem that needs to be addressed during the product development cycle, as the quality of the designed test suite largely determines the effectiveness of any testing effort. The uniform coverage based testing method was applied to the first copy, and prioritized testing based on our proposed influence metric was applied to the second copy. The basic difference between these two testing methods is that, at the time of designing test cases, in the second copy, if a method of the object having the highest influence value contains several paths, each path was covered by an individual call, and for the other objects the number of covered paths was decided by their influence value; whereas in traditional uniform testing a constant threshold was set for selecting the number of paths, depending on their importance. We want to show that bugs in objects having a high influence value tend more towards failures. For this, in the second copy of each case study, the seeded bugs and the generated test cases for each class were proportional to the highest influence value of its corresponding objects, whereas in the first copy, bugs were seeded randomly and test cases were selected for each class based on coverage of selected def-use paths. We did not consider library and framework objects. At the time of testing, we discovered some seeded errors and latent errors in both copies. Table 1 shows the analytical comparison of error detection by the different testing methods during the testing phase. After resolving the detected errors reported in Table 1, we tested both copies of each case study again by generating one hundred new test cases based on the operational profile to observe the effect of these testing processes. Table 2 shows the analytical result of the two different testing methods in the eyes of the end-user. We have taken catastrophic, major and minor kinds of failures to include almost all varieties of bugs.
Result Analysis. Table 1 shows that seeding more bugs in highly influenced objects causes more failures compared to seeding bugs randomly. Table 2 shows the number of failures in each copy of the case studies found after resolving all the detected errors reported in Table 1. It is found from Table 2 that the number of failures is greatly reduced by removing more bugs from the objects which are more critical in determining the reliability of a program. It is shown that, after the completion of the testing phase, the failure rate is minimized in our proposed scheme. We do agree that our results can have some inaccuracies, as the case studies considered are not selected from commercial products.
Table 1. Failures observed at testing phase
Case Study  TC   BS  CL   Coverage Based Testing   Proposed Prioritized Testing
                          Failures   BC            Failures   BC
Car Race    100  60  24   67         51            84         73
Brickles    100  60  32   56         43            92         59
Abbreviation: TC - Test Cases, BS - Bugs Seeded, CL - Classes, BC - Bugs Caught
Table 2. Failures observed after completion of the testing phase
Case Study  Test Cases   Coverage Based Testing   Proposed Prioritized Testing
                         Failures                 Failures
Car Race    100          09                       03
Brickles    100          07                       02
Threats to Validity of Results and Measures Taken. In order to justify the validity of the results of our experimental studies, we identified threats such as biased test suite design influencing the results, seeding of biased errors, testing only for selected failures, and using testing methods which may only be suitable for some particular bugs while not revealing other common and frequent bugs. In order to overcome the above-mentioned threats and validate the results for the most common and real-life cases, we designed the same test suite based on the operational profile to observe the reliability of the system under the two testing methods. We also seeded the same types of bugs in both copies of each case study. We have taken care that the seeded bugs match commonly occurring bugs. Using mutation operators, we ensure that a wide variety of bugs is systematically inserted in a somewhat impartial and random fashion.
6 Conclusion
We have proposed an influence metric for different objects in an object-oriented program. The influence metric prioritizes the objects for testing based on their criticality. A single fault in a more critical object has high influence to the reliability of the system during operation. We have compared our scheme with related schemes. Based on our analytical comparison, we found that the objects having higher influence value indeed determine the probability of failure to a large extent. From our experimental results, it is concluded that by spending more time and effort in key objects of an application compared to others at the time of testing, make the system more reliable within the testing budget. It is useful in test case design and test case prioritization. Our approach at present can not handle concurrency and exception. We are now augmenting our approach to handle these issues. The effectiveness of our approach also need to be validated on large industry size program. Finally we have confidence that whenever we stop testing, we have done the best thing with our testing budget.
References 1. Mall, R.: Fundamentals of Software Engineering, 3rd edn. Prentice-Hall, Englewood Cliffs (2009) 2. Adams, E.N.: Optimizing preventive service of software products. IBM Journal for Research and Development 28(01), 3–14 (1984)
3. Boehm, B., Basili, V.R.: Software defect reduction top 10 list. Computer 34(1), 135–137 (2001) 4. Briand, L.C., Daly, J.W., W¨ ust, J.K.: A unified framework for coupling measurement in object-oriented systems. IEEE Trans. Softw. Eng. G 25(01), 91–121 (1999) 5. Foyen, A.: Dynamic coupling measurement for object-oriented software. IEEE Trans. Softw. Eng. 30(8), 491–506 (2004); Member-Arisholm, Erik and MemberBriand, Lionel C 6. Larsen, L., Harrold, M.J.: Slicing object-oriented software. In: ICSE 1996: Proceedings of the 18th international conference on Software engineering, Washington, DC, USA, pp. 495–505. IEEE Computer Society, Los Alamitos (1996) 7. Liang, D., Harrold, M.J.: Slicing objects using system dependence graphs. In: ICSM 1998: Proceedings of the International Conference on Software Maintenance, Washington, DC, USA, pp. 358–367. IEEE Computer Society, Los Alamitos (1998) 8. Xu, B., Qian, J., Zhang, X., Wu, Z., Chen, L.: A brief survey of program slicing. SIGSOFT Softw. Eng. Notes 30(2), 1–36 (2005) 9. Mund, G.B., Mall, R.: An efficient interprocedural dynamic slicing method. J. Syst. Softw. 79(6), 791–806 (2006) 10. Parrish, A.S., Borie, R.B., Cordes, D.W.: Automated flow graph-based testing of object-oriented software modules. J. Syst. Softw. 23(2), 95–109 (1993) 11. Buy, U., Orso, A., Pezze, M.: Automated testing of classes. SIGSOFT Softw. Eng. Notes 25(5), 39–48 (2000) 12. Hoffman, D., Hoffman, D., Strooper, P., Strooper, P.: Classbench: A methodology and framework for automated class testing. Software—Practice and Experience 27, 27–35 (1996) 13. Profiler, http://www.gnu.org/manual/gprof-2.9.1 14. Rothermel, G., Untch, R.H., Chu, C., Harrold, M.J.: Prioritizing test cases for regression testing. IEEE Transactions on Software Engineering 27(10), 929–948 (2001) 15. Elbaum, S., Malishevsky, A.G., Rothermel, G.: Test case prioritization: A family of empirical studies. IEEE Trans. Softw. Eng. 28(2), 159–182 (2002) 16. Elbaum, S., Malishevsky, A., Rothermel, G.: Incorporating varying test costs and fault severities into test case prioritization. In: ICSE 2001: Proceedings of the 23rd International Conference on Software Engineering, Washington, DC, USA, pp. 329–338. IEEE Computer Society, Los Alamitos (2001) 17. Jeffrey, D., Gupta, N.: Experiments with test case prioritization using relevant slices. J. Syst. Softw. 81(2), 196–221 (2008) 18. Li, J.J.: Prioritize code for testing to improve code coverage of complex software. In: ISSRE 2005: Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering, Washington, DC, USA, pp. 75–84. IEEE Computer Society, Los Alamitos (2005) 19. Whittaker, J.A., Thomason, M.G.: A markov chain model for statistical software testing. IEEE Trans. Softw. Eng. 20(10), 812–824 (1994) 20. Musa, J.D.: Operational profiles in software-reliability engineering. IEEE Softw. 10(2), 14–32 (1993) 21. Harrold, M.J., Rothermel, G., Sinha, S.: Computation of interprocedural control dependence. In: ISSTA 1998: Proceedings of the 1998 ACM SIGSOFT international symposium on Software testing and analysis, pp. 11–20. ACM, New York (1998) 22. John, S.K., Clark, J.A., Mcdermid, J.A.: Class mutation: Mutation testing for object-oriented programs. In: Proc. Net. ObjectDays, pp. 9–12 (2000)
Secure Dynamic Identity-Based Remote User Authentication Scheme Sandeep K. Sood, Anil K. Sarje, and Kuldip Singh Department of Electronics & Computer Engineering Indian Institute of Technology Roorkee, India {ssooddec,sarjefec,ksconfcn}@iitr.ernet.in
Abstract. In 2000, Sun proposed an efficient smart card based remote user authentication scheme to improve the efficiency of Hwang and Li’s scheme. In 2002, Chien et al. demonstrated that Sun’s scheme achieves only unilateral user authentication, so that only the authentication server verifies the legitimacy of the remote user, and proposed a new remote user authentication scheme. In 2004, Hsu demonstrated that Chien et al.’s scheme is vulnerable to parallel session attack. In 2005, Lee et al. proposed an improved scheme to overcome this weakness while maintaining the merits of Chien et al.’s scheme. In 2005, Yoon and Yoo found that Lee et al.’s scheme is vulnerable to masquerading server attack and proposed an improved scheme. In 2009, Xu et al. demonstrated that Lee et al.’s scheme is vulnerable to offline password guessing attack and proposed an improved scheme. However, we found that Lee et al.’s scheme is also vulnerable to impersonation attack, malicious user attack and reflection attack. Moreover, Lee et al.’s scheme fails to protect the user’s anonymity over an insecure communication channel. This paper presents an improved scheme that resolves the aforementioned problems while keeping the merits of Lee et al.’s scheme. Keywords: Cryptology, Dynamic Identity, Password, Authentication Protocol, Smart Card.
1 Introduction The password is the most commonly used authentication technique in smart card based authentication protocols. A smart card is used to support users in storing some sensitive data and performing some operations securely. Privileged participants such as the server, the card issuer authority and the card reader machine can read or write data to smart cards. The user (card holder) submits his identity and password to his smart card. The smart card then performs some operations using the submitted arguments and the data stored inside its memory to authenticate the user. Smart cards have been widely used in many e-commerce applications and network security protocols due to their low cost, portability, efficiency and cryptographic properties. Most password authentication schemes use additional storage devices known as smart cards to assist users in their authentication. In 1981, Lamport [1] suggested a password based authentication scheme that authenticates remote users over an insecure
communication channel. Lamport’s scheme removes the problems of password table disclosure and communication eavesdropping. Since then, a number of remote user authentication schemes have been proposed to improve security, effectiveness and cost. In 2000, Hwang and Li [2] found that Lamport’s scheme is vulnerable to the risk of password table modification and that the cost of protecting and maintaining the password table is large. Therefore, they proposed a cost effective remote user authentication scheme using smart cards that is free from the mentioned risk. Hwang and Li’s scheme can withstand replay attack and also authenticates remote users without maintaining a password table. In 2000, Sun [3] proposed an efficient smart card based remote user authentication scheme to improve the efficiency of Hwang and Li’s scheme. In 2002, Chien et al. [4] found that Sun’s scheme achieves only unilateral user authentication, so that only the authentication server verifies the legitimacy of the remote user. Chien et al. also proposed a remote user authentication scheme and claimed that their scheme provides mutual authentication, requires no verification table, allows free password selection and uses only a few hash operations. In 2004, Ku and Chen [5] showed that Chien et al.’s scheme is vulnerable to reflection attack and insider attack, and proposed an improved remote user authentication scheme using smart cards to preclude the weaknesses of Chien et al.’s scheme. In 2004, Hsu [6] demonstrated that Chien et al.’s scheme is vulnerable to parallel session attack, so that an intruder, without knowing the user’s password, can masquerade as the legitimate user by generating a valid login message from the eavesdropped communication between the authentication server and the user. In 2005, Lee et al. [7] proposed an improved scheme to remedy this weakness while maintaining the merits of Chien et al.’s scheme. In 2005, Yoon and Yoo [8] found that Lee et al.’s scheme is vulnerable to masquerading server attack and proposed an improvement on Lee et al.’s scheme. In 2009, Kim and Chung [9] found that Yoon and Yoo’s scheme easily reveals a user’s password and is vulnerable to masquerading user attack, masquerading server attack and stolen verifier attack. Kim and Chung then proposed a new remote user authentication scheme that removes the aforementioned security flaws while keeping the merits of Yoon and Yoo’s scheme. In 2009, Xu et al. [10] demonstrated that Lee et al.’s scheme is vulnerable to offline password guessing attack in case the smart card is lost or stolen by an attacker, and proposed an improved mutual authentication and key agreement scheme. In this paper, we show that Lee et al.’s [7] scheme is also vulnerable to impersonation attack, malicious user attack and reflection attack. Moreover, the scheme fails to protect the user’s anonymity over an insecure communication channel. To remedy these pitfalls, this paper presents an efficient smart card based authentication scheme that is free from these flaws. The proposed scheme inherits the merits of Lee et al.’s scheme with improved security. The rest of this paper is organized as follows. In Section 2, a brief review of Lee et al.’s scheme is given. Section 3 describes the cryptanalysis of Lee et al.’s scheme. In Section 4, our improved scheme is proposed. The security analysis of the proposed scheme is presented in Section 5. Section 6 compares the cost and functionality of our proposed scheme with other related schemes, and Section 7 concludes the paper.
2 Review of Lee et al.’s Scheme [7] In this section, we examine the remote user authentication scheme proposed by Lee et al. in 2005. Lee et al.’s scheme consists of three phases viz. registration phase, login phase and verification phase as summarized in Fig. 1. 2.1 Registration Phase A user Ui has to submit his identity IDi and password Pi to the server S for registration over a secure communication channel. The server S computes Ri = H (IDi ⊕ x) ⊕ Pi, where x is the secret key of remote server S. Then the server S issues the smart card containing secret parameters (H ( ), Ri) to the user Ui through a secure communication channel. The server S also stores identity IDi of user Ui in its database.
Fig. 1. Lee et al.’s scheme
2.2 Login Phase The user Ui inserts his smart card into a card reader to log in to the server S and then submits his identity IDi* and password Pi*. The smart card computes C1 = Ri ⊕ Pi* and C2 = H (C1 ⊕ T1), where T1 is the current date and time of the input device, and sends the login request message (IDi*, T1, C2) to the service provider server S. 2.3 Verification Phase The service provider server S verifies the received value of IDi* against the stored value of IDi in its database. Then the server S verifies the validity of the timestamp T1 by checking (T2 – T1) <= δT, where T2 is the current date and time of the server S and δT is the expected time interval for a transmission delay. Afterwards, the server S computes C1* = H (IDi ⊕ x) and C2* = H (C1* ⊕ T1) and compares C2* with the received value of C2.
If they are not equal, the server S rejects the login request and terminates the session. Otherwise the server S acquires the current timestamp T3, computes C3 = H (H (C1* ⊕ T3)) and sends the message (T3, C3) back to the smart card of user Ui. On receiving the message (T3, C3), the smart card checks the validity of the timestamp T3 by checking (T4 – T3) <= δT, where T4 is the current date and time of the client’s smart card. The client’s smart card then computes C3* = H (H (C1 ⊕ T3)) and compares it with the received value of C3. This equivalence authenticates the legality of the service provider server S and the login request is accepted; otherwise the connection is interrupted.
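The computations above involve only hashing and XOR, so the whole message flow can be prototyped in a few lines. The following Python sketch is an illustration only, not part of Lee et al.’s paper: every quantity is modelled as a 128-bit integer, a truncated SHA-256 stands in for H, and the identity, password and server secret are placeholder values.

import hashlib, time

MASK = (1 << 128) - 1

def H(*vals):
    # 128-bit hash: SHA-256 of the concatenated 16-byte encodings, truncated
    data = b''.join((v & MASK).to_bytes(16, 'big') for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest()[:16], 'big')

x = H(0xC0FFEE)                  # server's secret key (placeholder)
ID_i, P_i = H(1001), H(2002)     # user identity and password (placeholders)

# Registration: the server computes R_i and stores ID_i; the card holds R_i.
R_i = H(ID_i ^ x) ^ P_i

# Login: the smart card computes C1, C2 and sends (ID_i, T1, C2).
T1 = int(time.time())
C1 = R_i ^ P_i                   # equals H(ID_i XOR x)
C2 = H(C1 ^ T1)

# Verification: the server recomputes C1*, C2* from x and checks C2.
C1_star = H(ID_i ^ x)
assert H(C1_star ^ T1) == C2

# Server reply (T3, C3); the smart card checks C3.
T3 = int(time.time())
C3 = H(H(C1_star ^ T3))
assert H(H(C1 ^ T3)) == C3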
3 Cryptanalysis of Lee et al.’s Scheme Lee et al. [7] claimed that their protocol can resist various known attacks. Unfortunately, the protocol is vulnerable to impersonation attack, malicious user attack and reflection attack. Lee et al.’s scheme also fails to protect the user’s anonymity over an insecure communication channel. 3.1 Impersonation Attack The attacker can intercept a valid login request message (IDi*, T1, C2) of user Ui from the public communication channel. He can then launch an offline dictionary attack on C2 = H (C1 ⊕ T1) to learn the value of C1, which is always the same for user Ui. After guessing the correct value of C1, the attacker can frame and send a fabricated but valid login request message (IDi*, Tu, C2) to the service provider server S without knowing the password Pi of the user Ui, where Tu is a current timestamp and C2 = H (C1 ⊕ Tu). Hence, the attacker can successfully make a valid login request and impersonate the legitimate user Ui to the service provider server S. 3.2 Malicious User Attack The attacker can extract the values stored in a smart card by techniques such as monitoring its power consumption and reverse engineering, as pointed out by Kocher et al. [11] and Messerges et al. [12]. Therefore, a malicious privileged user Uk having his own smart card can extract the stored value of Rk from its memory.
1. The malicious user Uk can launch an offline dictionary attack on Rk = H (IDk ⊕ x) ⊕ Pk to learn the secret key x of the server S, because Uk knows his own identity IDk and password Pk.
2. The malicious user Uk intercepts a valid login request message (IDi*, T1, C2) of user Ui from the public communication channel.
3. The malicious user Uk can compute C1 = H (IDi* ⊕ x) and C2 = H (C1 ⊕ Tu), where Tu is the current date and time of the input device.
4. The malicious user Uk then frames a fabricated login request message (IDi*, Tu, C2) corresponding to user Ui and sends it to the server S.
5. The service provider server S checks the validity of the timestamp Tu by checking (T’’ – Tu) <= δT, where T’’ is the current date and time of the server S and δT is the expected time interval for a transmission delay.
6. Afterwards, the server S computes C1* = H (IDi ⊕ x) and C2* = H (C1* ⊕ Tu) and compares the computed value of C2* with the received value of C2. This equivalence authenticates the legality of the user Ui and the login request is accepted by the service provider server S.
3.3 Reflection Attack The reflection attack on Lee et al.’s scheme can be demonstrated as follows. The attacker reuses user Ui’s login request message as the response message of a fake server.
1. By observing the login request message (IDi*, T1, C2) and the response message (T3, C3), it is clear that C2 and C3 are symmetric to each other; the only differences between them are the timestamps and one additional hash operation, where C2 = H (C1 ⊕ T1) and C3 = H (H (C1 ⊕ T3)).
2. An attacker intercepts a valid login request message (IDi*, T1, C2) of user Ui from the public communication channel.
3. The attacker computes C3 = H (C2) = H (H (C1 ⊕ T1)), impersonates the server, and immediately sends the response message (T1, C3) back to the user Ui.
The forged response message will pass the smart card’s verification with high probability.
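A small Python check of the observation in step 3 (again an illustration with placeholder values, using a truncated SHA-256 in place of H): the attacker never needs C1 to build a response that the card will accept.

import hashlib, time

MASK = (1 << 128) - 1
def H(*vals):
    data = b''.join((v & MASK).to_bytes(16, 'big') for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest()[:16], 'big')

C1 = H(0xABCDEF)          # secret value, unknown to the attacker (placeholder)
T1 = int(time.time())
C2 = H(C1 ^ T1)           # intercepted from the login request (ID_i, T1, C2)

C3_forged = H(C2)         # reflection: one extra hash, no knowledge of C1 needed

# The card accepts a response (T3, C3) iff H(H(C1 XOR T3)) == C3; here T3 = T1.
assert H(H(C1 ^ T1)) == C3_forged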
3.4 User’s Anonymity The user Ui inserts his smart card into a card reader to log in to the server S and submits his identity IDi* and password Pi*. The smart card computes C1 = Ri ⊕ Pi* and C2 = H (C1 ⊕ T1), where T1 is the current date and time of the input device, and sends the login request message (IDi*, T1, C2) to the service provider server S. The user’s identity IDi* is transmitted in the clear in the login request message, hence the different login request messages belonging to the same user can be traced and interlinked to derive information related to the user Ui. Hence Lee et al.’s scheme is not able to preserve the user’s anonymity.
4 Proposed Protocol In this section, we describe a new remote user authentication scheme which resolves the above security flaws of Lee et al.’s [7] scheme. Fig. 2 shows the entire protocol structure of the new authentication scheme. The proposed scheme consists of four phases, viz. the registration phase, the login phase, the authentication and session key agreement phase and the password change phase. 4.1 Registration Phase The user Ui selects a random number b, computes Ai = H (IDi | b) and submits Ai to the server S for registration over a secure communication channel, where IDi is the identity of user Ui. Step 1: C → S: Ai The server S computes the security parameters Fi = Ai ⊕ yi, Bi = Ai ⊕ H (yi) ⊕ H (x) and Ci = H (Ai | H (x) | H (yi)), where x is the secret key of the remote server S. The server
S chooses the value of yi corresponding to each user in such a way that the value of Ci is unique for each user. The server S stores yi ⊕ x corresponding to Ci in its database. Then the server S issues a smart card containing the security parameters (Fi, Bi, H ( )) to the user Ui through a secure communication channel. Step 2: S → C: Smart card Then the user Ui computes the security parameters Di = b ⊕ H (IDi | Pi) and Ei = H (IDi | H (Pi)) ⊕ Pi and enters the values of Di and Ei into his smart card. Step 3: C → Smart card: Di, Ei Finally, the smart card contains the security parameters (Di, Ei, Fi, Bi, H ( )) stored in its memory. 4.2 Login Phase A user Ui inserts his smart card into a card reader to log in to the server S and submits his identity IDi* and password Pi*. The smart card computes Ei* = H (IDi* | H (Pi*)) ⊕ Pi* and compares it with the stored value of Ei in its memory to verify the legality of the user. Step 1: Smart card checks Ei* ?= Ei After verification, the smart card computes b = Di ⊕ H (IDi | Pi), Ai = H (IDi | b), yi = Fi ⊕ Ai, H (x) = Bi ⊕ Ai ⊕ H (yi), Ci = H (Ai | H (x) | H (yi)), CIDi = H (H (x) | T) ⊕ Ci and Mi = H (H (x) | H (yi) | T), where T is the current date and time of the input device. Then the smart card sends the login request message (CIDi, Mi, T) to the server S. Step 2: Smart card → S: CIDi, Mi, T 4.3 Authentication and Session Key Agreement Phase After receiving the login request from the client C, the service provider server S checks the validity of the timestamp T by checking (T’ – T) <= δT, where T’ is the current date and time of the server S and δT is the expected time interval for a transmission delay. The server S computes Ci* = CIDi ⊕ H (H (x) | T) and extracts yi from the value yi ⊕ x stored against Ci* in its database. If the value of Ci* does not match any value of Ci in the database of server S, the server S rejects the login request and terminates the session. Otherwise, the server S computes Mi* = H (H (x) | H (yi) | T) and compares Mi* with the received value of Mi to check the authenticity of the received message. Step 1: Server S checks Mi* ?= Mi If they are not equal, the server S rejects the login request and terminates the session. Otherwise, the server S acquires the current timestamp T’’, computes Vi = H (Ci | H (x) | H (yi) | T | T’’) and sends the message (Vi, T’’) back to the smart card of user Ui. Step 2: S → Smart card: Vi, T’’ On receiving the message (Vi, T’’), the user Ui’s smart card checks the validity of the timestamp T’’ by checking (T’’’ – T’’) <= δT, where T’’’ is the current date and time of the user’s smart card.
Fig. 2. Proposed protocol
Then the smart card computes Vi* = H (Ci | H (x) | H (yi) | T | T’’) and compares it with the received value of Vi. Step 3: Smart card checks Vi* ?= Vi This equivalence authenticates the legality of the service provider server S and the login request is accepted; otherwise the connection is interrupted. Finally, the client C and the server S agree on the common session key H (Ci | yi | H (x) | T | T’’). 4.4 Password Change Phase The client C can change his password without the help of the server S. The user inserts his smart card into the card reader and enters his identity IDi* and password Pi* corresponding to his smart card. The smart card computes Ei* = H (IDi* | H (Pi*)) ⊕ Pi* and compares it with the stored value of Ei in its memory to verify the legality of the user. Once the legality of the cardholder is verified, the client can instruct the smart card to change his password. Afterwards, the smart card asks the cardholder to submit a new password Pi new, and then Di = b ⊕ H (IDi | Pi) and Ei = H (IDi | H (Pi)) ⊕ Pi stored in the smart card are updated to Di new = Di ⊕ H (IDi | Pi) ⊕ H (IDi | Pi new) and Ei new = Ei ⊕ H (IDi | H (Pi)) ⊕ Pi ⊕ H (IDi | H (Pi new)) ⊕ Pi new.
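The following Python sketch walks through the registration, login and authentication computations above end to end. It is an illustration only: all quantities are modelled as 128-bit integers, a truncated SHA-256 stands in for H, the "|" concatenation is modelled by hashing concatenated byte encodings, and the identity, password and server secret are placeholder values.

import hashlib, secrets, time

MASK = (1 << 128) - 1

def H(*vals):
    # 128-bit hash of the '|'-concatenation of 128-bit values
    data = b''.join((v & MASK).to_bytes(16, 'big') for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest()[:16], 'big')

x = secrets.randbits(128)                 # server secret key
ID_i, P_i = H(1001), H(2002)              # illustrative identity and password

# Registration
b   = secrets.randbits(128)
A_i = H(ID_i, b)
y_i = secrets.randbits(128)               # chosen so that C_i is unique per user
F_i = A_i ^ y_i
B_i = A_i ^ H(y_i) ^ H(x)
C_i = H(A_i, H(x), H(y_i))
server_db = {C_i: y_i ^ x}                # server stores y_i XOR x indexed by C_i
D_i = b ^ H(ID_i, P_i)
E_i = H(ID_i, H(P_i)) ^ P_i               # card contents: (D_i, E_i, F_i, B_i)

# Login (computed on the card)
assert H(ID_i, H(P_i)) ^ P_i == E_i       # local password check
b_c  = D_i ^ H(ID_i, P_i)
A_c  = H(ID_i, b_c)
y_c  = F_i ^ A_c
hx_c = B_i ^ A_c ^ H(y_c)                 # recovers H(x)
C_c  = H(A_c, hx_c, H(y_c))
T    = int(time.time())
CID_i = H(hx_c, T) ^ C_c
M_i   = H(hx_c, H(y_c), T)

# Authentication and session key agreement (on the server)
C_star = CID_i ^ H(H(x), T)
y_srv  = server_db[C_star] ^ x
assert H(H(x), H(y_srv), T) == M_i
T2  = int(time.time())
V_i = H(C_star, H(x), H(y_srv), T, T2)

# The card verifies the server; both sides derive the same session key.
assert H(C_c, hx_c, H(y_c), T, T2) == V_i
session_key = H(C_c, y_c, hx_c, T, T2)

Both assertions pass, confirming that a card holding (Di, Ei, Fi, Bi) and a server holding x and yi ⊕ x reach the same session key without the password ever leaving the card.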
5 Security Analysis A smart card is a memory card that uses an embedded micro-processor, driven through the smart card reader machine, to perform the operations specified in the protocol. Kocher et al. [11] and Messerges et al. [12] pointed out that existing smart cards cannot prevent the information stored in them from being extracted, for example by monitoring their power consumption. Some other reverse engineering techniques are also available for extracting information from smart cards. That means that once a smart card is stolen by the attacker, he can extract the information stored in it. A good password authentication scheme should provide protection against the different possible attacks relevant to that protocol. 1. Impersonation Attack: In this type of attack, the attacker impersonates a legitimate client and forges the authentication messages using the information obtained from the authentication scheme. The attacker can attempt to modify a login request message (CIDi, Mi, T) into (CIDi*, Mi*, T*), where T* is the attacker’s current date and time, so as to succeed in the authentication phase. However, such a modification will fail in Step 1 of the authentication and session key agreement phase because the attacker has no way of obtaining the values H (x), Ci and H (yi) needed to compute valid parameters CIDi* and Mi*. Therefore, the proposed protocol is secure against impersonation attack. 2. Malicious User Attack: A malicious privileged user having his own smart card can gather information such as Di = b ⊕ H (IDi | Pi), Ei = H (IDi | H (Pi)) ⊕ Pi, Fi = Ai ⊕ yi and Bi = Ai ⊕ H (yi) ⊕ H (x) from the memory of his smart card. This malicious user cannot generate the card specific values CIDk = H (H (x) | T) ⊕ Ck and Mk = H (H (x) | H (yk) | T) needed to masquerade as another legitimate user Uk to the service provider server S, because the values of CIDk and Mk are card specific and depend upon the values of IDk, Pk, bk, H (x) and H (yk). Although a malicious user can extract H (x) from his own smart card, he has no method to calculate the values of IDk, Pk, bk and H (yk). Therefore, the proposed protocol is secure against malicious user attack. 3. Reflection Attack: In this type of attack, the attacker reuses the login request message (CIDi, Mi, T) as the response message (Vi, T’’) of a fake server. In the proposed protocol, there is no symmetry in the values of CIDi = H (H (x) | T) ⊕ Ci, Mi = H (H (x) | H (yi) | T) and Vi = H (Ci | H (x) | H (yi) | T | T’’). Hence the reflection attack is not possible in the proposed protocol. 4. Identity Protection: Our approach provides identity protection in the sense that instead of sending the real identity IDi of user Ui for authentication, the dynamic pseudo-identity CIDi = H (H (x) | T) ⊕ Ci is generated by the smart card corresponding to the legitimate client for its authentication with the service provider server S. The real identity information about the client is not transmitted in the login request message. This approach provides privacy and unlinkability among the different login requests belonging to the same user. The attacker cannot link different sessions belonging to the same user. 5. Stolen Smart Card Attack: In case a user’s smart card is stolen by an attacker, he can extract the information stored in the smart card. An attacker can extract Di = b ⊕
H (IDi | Pi), Ei = H (IDi | H (Pi)) ⊕ Pi, Fi = Ai ⊕ yi and Bi = Ai ⊕ H (yi) ⊕ H (x) from the memory of the smart card. Even after gathering this information, an attacker has to guess IDi and Pi correctly at the same time. It is not possible to guess two parameters correctly at the same time in real polynomial time. Therefore, the proposed protocol is secure against stolen smart card attack. 6. Offline Dictionary Attack: In an offline dictionary attack, the attacker records messages and attempts to guess the user’s identity IDi and password Pi from the recorded messages. An attacker first tries to obtain some client or server verification information such as T, T’’, CIDi = H (H (x) | T) ⊕ Ci, Mi = H (H (x) | H (yi) | T) and Vi = H (Ci | H (x) | H (yi) | T | T’’), and then tries to guess the values of IDi, Pi, b, x and yi by an offline dictionary attack. Even after gathering this information, the attacker has to guess at least two parameters correctly at the same time. It is not possible to guess two parameters correctly at the same time in real polynomial time. Therefore, the proposed protocol is secure against offline dictionary attack. 7. Denial of Service Attack: In a denial of service attack, an attacker updates the password verification information on the smart card to some arbitrary value so that the legal user cannot log in successfully in subsequent login requests to the server. In the proposed protocol, the smart card checks the validity of the user identity IDi and password Pi before the password update procedure: the smart card computes Ei* = H (IDi* | H (Pi*)) ⊕ Pi* and compares it with the stored value of Ei in its memory to verify the legality of the user before it accepts the password update request. It is not possible to guess the identity IDi and password Pi correctly at the same time even after getting the smart card of a user. Therefore, the proposed protocol is secure against denial of service attack. 8. Replay Attack: In this type of attack, the attacker first listens to the communication between the client and the server and then tries to imitate the user and log in to the server by resending the captured messages transmitted between the client and the server. Replaying a message of one session in another session is useless because the client’s smart card and the server S use current timestamp values (T and T’’) in each new session, which makes all the messages CIDi, Mi and Vi dynamic and valid only for a small interval of time. Old replayed messages are not valid in the current session and hence the proposed protocol is secure against message replay attack. 9. Leak of Verifier Attack: In this type of attack, the attacker may be able to steal the verification table from the server. If the attacker steals the verification table from the server, he can use the stolen verifiers to impersonate a participant of the scheme. In the proposed scheme, the service provider server S knows the secret x and stores yi ⊕ x corresponding to the user’s value Ci = H (Ai | H (x) | H (yi)) in its database. The attacker has no technique to find out the value of x and hence cannot calculate yi from yi ⊕ x. Moreover, the attacker cannot compute IDi, b or yi from Ci. Therefore, the proposed protocol is secure against leak of verifier attack. 10. Server Spoofing Attack: In a server spoofing attack, the attacker can manipulate the sensitive data of legitimate users by setting up fake servers.
A malicious server cannot generate the valid value of Vi = H (Ci | H (x) | H (yi) | T | T’’) meant for the smart card, because the malicious server would have to know the values of IDi, b, H (x) and H (yi) to generate
the value of Vi corresponding to that client’s smart card. Therefore, the proposed protocol is secure against server spoofing attack. 11. Online Dictionary Attack: In this type of attack, the attacker pretends to be a legitimate client and attempts to log in to the server by guessing different words from a dictionary as the password. In the proposed scheme, the attacker has to obtain the valid smart card and then has to guess the identity IDi and password Pi corresponding to that client. It is not possible to guess the identity IDi and password Pi correctly at the same time. Therefore, the proposed protocol is secure against online dictionary attack. 12. Parallel Session Attack: In this type of attack, the attacker first listens to the communication between the client and the server. After that, he initiates a parallel session to imitate the legitimate user and log in to the server by resending the captured messages transmitted between the client and the server within the valid time frame window. He can masquerade as the legitimate user by replaying a login request message (CIDi, Mi, T) within the valid time frame window. However, the attacker cannot compute the agreed session key H (Ci | yi | H (x) | T | T’’) because the attacker does not know the values of IDi, b, yi and H (x). Hence the proposed protocol is secure against parallel session attack. 13. Man-in-the-middle Attack: In this type of attack, the attacker intercepts the messages sent between the client and the server and replays these intercepted messages within the valid time frame window. The attacker can act as the client to the server or vice versa with the recorded messages. In the proposed protocol, the attacker can intercept the login request message (CIDi, Mi, T) sent by a valid user Ui to the server S. He can then start a new session with the server S by replaying the login request message (CIDi, Mi, T) within the valid time frame window. The attacker can authenticate itself to the server S as well as to the legitimate client but cannot compute the session key H (Ci | yi | H (x) | T | T’’) because the attacker does not know the values of IDi, b, yi and H (x). Therefore, the proposed protocol is secure against man-in-the-middle attack. 14. Message Modification or Insertion Attack: In this type of attack, the attacker modifies or inserts some messages on the communication channel in the hope of discovering the client’s password or gaining unauthorized access. Modifying or inserting messages in the proposed protocol can only cause the authentication between the client and the server to fail; it cannot allow the attacker to gain any information about the client’s identity IDi and password Pi or to gain unauthorized access. Therefore, the proposed protocol is secure against message modification or insertion attack.
6 Cost and Functionality Analysis An efficient authentication scheme must take communication and computation cost into consideration during the user’s authentication. The performance comparison of the proposed scheme with the relevant smart card based authentication schemes is summarized in Table 1. Assume that the identity IDi, the password Pi, x, yi, the timestamp values and the output of the secure one-way hash function are all 128 bits long. Let TH, TE and TX denote the time complexity of a hash function, an exponential operation and an XOR operation respectively. Typically, the time complexity associated with these operations can be roughly expressed as TE >> TH >> TX.
Table 1. Efficiency comparison among smart card based schemes
Table 2. Functionality comparison among smart card based schemes
In the proposed scheme, the parameters stored in the smart card are Di, Ei, Fi and Bi, so the memory needed (E1) in the smart card is 512 (= 4*128) bits. The communication cost of authentication (E2) is the capacity of the messages transmitted in the authentication scheme. The capacity of the transmitted messages {CIDi, Mi, T} and {Vi, T’’} is 640 (= 5*128) bits. The computation cost of registration (E3) is the total time of all operations executed in the registration phase, which is 7TH + 6TX. The computation costs of the user (E4) and of the service provider server (E5) are the time spent by the user and the service provider server during the process of authentication. The computation cost of the user (E4) is 10TH + 6TX and that of the service provider server (E5) is 6TH + 2TX. The proposed scheme requires less computation than Xu et al.’s [10] scheme and more computation than Kim and Chung’s [9] scheme, but it is highly secure compared to the related schemes. The proposed scheme provides identity protection and is free from impersonation attack, malicious user attack and offline dictionary attack, while the latest schemes proposed in 2009 [9,10] suffer from these attacks. The functionality comparison of the proposed scheme with the relevant smart card based authentication schemes is summarized in Table 2.
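A quick sanity check of the storage and communication figures, under the stated assumption that every quantity is 128 bits long:

BITLEN = 128
E1 = 4 * BITLEN   # D_i, E_i, F_i, B_i stored on the card  -> 512 bits
E2 = 5 * BITLEN   # {CID_i, M_i, T} and {V_i, T''} on the wire -> 640 bits
print(E1, E2)     # 512 640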
7 Conclusion Smart card based password authentication is one of the most convenient ways to provide authentication for the communication between a client and a server. In this paper,
we presented a cryptanalysis of Lee et al.’s scheme and showed that their scheme is vulnerable to impersonation attack, malicious user attack and reflection attack. Moreover, Lee et al.’s scheme does not maintain the user’s anonymity on the communication channel. An improved scheme is proposed that inherits the merits of Lee et al.’s scheme and resists the different possible attacks. Our proposed scheme allows users to choose and change their password freely. One of the main features of the proposed scheme is that it does not allow the server to know the password of the user, even during the registration phase. The proposed protocol is simple and fast because only one-way hash functions and XOR operations are used in its implementation. The security analysis shows that the improved scheme is more secure and practical.
References 1. Lamport, L.: Password Authentication with Insecure Communication. Communications of the ACM 24(11), 770–772 (1981) 2. Hwang, M.S., Li, L.H.: A New Remote User Authentication Scheme using Smart Cards. IEEE Transactions on Consumer Electronics 46(1), 28–30 (2000) 3. Sun, H.M.: An Efficient Remote User Authentication Scheme using Smart Cards. IEEE Transactions on Consumer Electronics 46(4), 958–961 (2000) 4. Chien, H.Y., Jan, J.K., Tseng, Y.M.: An Efficient and Practical Solution to Remote Authentication: Smart Card. Computers & Security 21(4), 372–375 (2002) 5. Ku, W.C., Chen, S.M.: Weaknesses and Improvements of an Efficient Password based Remote User Authentication Scheme using Smart Cards. IEEE Transactions on Consumer Electronics 50(1), 204–207 (2004) 6. Hsu, C.L.: Security of Chien et al.’s Remote User Authentication Scheme using Smart Cards. Computer Standards & Interfaces 26(3), 167–169 (2004) 7. Lee, S.W., Kim, H.S., Yoo, K.Y.: Improvement of Chien et al.’s Remote User Authentication Scheme using Smart Cards. Computer Standards & Interfaces 27(2), 181–183 (2005) 8. Yoon, E., Yoo, K.: More Efficient and Secure Remote User Authentication Scheme using Smart Cards. In: Proc. of 11th International Conference on Parallel and Distributed System, vol. 2, pp. 73–77 (2005) 9. Kim, S.K., Chung, M.G.: More Secure Remote User Authentication Scheme. Computer Communications 32(6), 1018–1021 (2009) 10. Xu, J., Zhu, W.T., Feng, D.G.: An Improved Smart Card based Password Authentication Scheme with Provable Security. Computer Standards & Interfaces 31(4), 723–728 (2009) 11. Kocher, P., Jaffe, J., Jun, B.: Differential Power Analysis. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 388–397. Springer, Heidelberg (1999) 12. Messerges, T.S., Dabbish, E.A., Sloan, R.H.: Examining Smart-Card Security under the Threat of Power Analysis Attacks. IEEE Transactions on Computers 51(5), 541–552 (2002)
Theoretical Notes on Regular Graphs as Applied to Optimal Network Design Sanket Patil and Srinath Srinivasa International Institute of Information Technology Bangalore, Karnataka 560100, India {sanket.patil,sri}@iiitb.ac.in
Abstract. Although regular graphs have a long history, some of their properties such as diameter, symmetry, extensibility and resilience do not seem to have received enough attention in the context of network design. The purpose of this paper is to present some interesting theoretical results concerning regular graphs pertinent to optimal network design.
1 Introduction
Designing optimal networks is an important multiconstraint optimization problem across various domains. The operational objective of network design is to minimize the communication cost (or maximize the efficiency) of a network. At the same time, the lack of reliability on the part of machines and links poses issues of robustness of the network in the face of failures. The number of links that constitute a network poses infrastructure and maintenance costs. An asymmetry in the distribution of links across nodes poses issues of load balancing. In this paper, we focus on optimal network design under the symmetry constraint. We describe in brief two applications to motivate the relevance of regular graphs to the problem of network design under the symmetry constraint: (1) A distributed index (DI) topology for a distributed hash table (DHT) in a p2p network needs to be designed such that the lookup complexity is optimized, while maintaining symmetric load distribution. (2) The crucial challenge in network-centric warfare (NCW) is the high probability of targeted attacks. Thus, it is important to design a robust network in such a manner that the loss of any node or edge does not cause more harm than the loss of any other [2]. In DIs, symmetry translates to a topology where each node is expected to bear the same bookkeeping cost (in terms of finger tables) in managing the DI. Symmetry is addressed by modelling the index in the form of a regular graph. In NCW, symmetry translates to building networks that are ideally vertex transitive, which informally means that every node has the same local environment, so that no node can be distinguished based on its neighbourhood. Again, all vertex transitive graphs are necessarily regular graphs. A graph G = (V, E) is called r-regular if ∀v ∈ V (G), degree(v) = r. The term r is called the regularity of the graph G. If the graph is a directed graph, then r-regularity means that each node has r incoming and r outgoing edges.
Regular graphs have been used extensively in designing DHT topologies [3], since regular graphs are naturally suited to meet the main constraints of DHT design: symmetric load distribution and robustness in the face of perturbations. Gummadi et al. [4] suggest that “ring” topologies (which are regular) are the best in terms of flexibility and optimality. Qu et al. [5] propose Cayley graphs, which are regular hamiltonian graphs, as a model for the design and analysis of optimal DHTs. Perfect Difference Networks [6] are proposed in the context of communication networks to construct topologies with an asymptotically maximal number of nodes for a given fixed degree and diameter 2. Donetti et al. [7] introduce entangled networks, with highly homogeneous degree and betweenness centralities, as the optimal structures with respect to many criteria. Valente et al. [8] show that topologies with a symmetric degree centrality are the most robust in the face of node and link failures. In earlier work, we presented circular skip lists [9,10], which are regular or nearly regular, as optimal topologies in terms of balancing efficiency, symmetry and infrastructure cost. Owing to their universality in the design of optimal topologies, it is worthwhile to study regular graphs separately. The purpose of this paper is to present some interesting theoretical results regarding regular graphs that are pertinent to optimal network design.
2 Extending Regular Graphs
In the following sections, whenever we refer to regular graphs, we shall be referring to undirected connected regular graphs. We describe a simple deterministic algorithm to construct regular graphs over an arbitrary number of nodes and of arbitrary regularity, by a procedure called extension. Theorem 1. If r is even (i.e. r = 2q, where q ∈ N), it is always possible to build an r-regular graph over n nodes, where n ≥ r + 1. Proof. The smallest 2q-regular graph is a clique of 2q + 1 nodes. Let Gk = (V, E) be a 2q-regular graph over k nodes, where k ≥ 2q + 1. In order to add a new node v′ into V, first find a matching m of q edges (a matching on a graph G is a set of edges of G such that no two of them share a vertex in common). For every edge eij ∈ m, do the following: (1) disconnect it from one of its end nodes, say j, and connect the free end to v′, thus effecting the edge eiv′; (2) add an edge between v′ and j, ev′j. Note that the degrees of every node pair (i, j) remain 2q at the end of step (2), whereas the degree of v′ increases by 2. Thus, after q such operations, the degree of v′ is 2q, which is the same as all other nodes in the graph. Thus, we get Gk+1, a 2q-regular graph with k + 1 nodes. The crucial step in the above construction is to find a matching of size q. This is easily done since (1) the smallest 2q-regular graph is a clique of 2q + 1 nodes, (2) such a clique has q edge-independent hamiltonian cycles, and (3) the presence
of a hamiltonian cycle in a graph is sufficient to show the presence of q non-adjacent edges, i.e. a matching of size q. Thus, we can add the new node. Further, since there are 2q-cliques in the graph, it is always possible to find a matching of size q. Thus, we can extend the graph indefinitely. When r is even, there are no constraints on the number of nodes over which an r-regular graph can be built, except for the lower bound of r + 1. This has implications for network design. For example, it is possible to extend a DHT topology while keeping it regular when new nodes arrive. The above algorithm essentially needs to find a cycle C whose length is |C| ≥ 2q (such a cycle has two matchings, each of size at least q). This can be done by a modified version of the standard depth first search (DFS) algorithm. Thus the complexity of the above algorithm is O(|V| + |E|). For odd regularities, the technique used in Theorem 1 does not work. We can extend Theorem 1 as shown below. Theorem 2. Given an r-regular graph with n ≥ r + 1 nodes, where r is odd (i.e. r = 2q + 1, where q ∈ N) and n is even, there exists an r-regular graph with n + 2 nodes. Proof. Given a (2q+1)-regular graph Gk = (V, E) with |V| = k, where k is even, in order to extend it by adding two nodes v′ and v′′ to V, find two matchings m and m′ of size q in the graph. Use m and m′ to make 2q connections to v′ and v′′ respectively, as described previously. Finally, add an edge between v′ and v′′. Thus, we have a (2q + 1)-regular graph with k + 2 nodes. Note that the union of the two matchings, m ∪ m′, need not be a matching. We can prove that it is always possible to extend the graph in this manner, using arguments similar to the ones in Theorem 1. Theorem 2 represents the minimal extension we can make to a graph with odd regularity. The above algorithm needs to find a cycle C of length |C| ≥ 2q (O(|V| + |E|)). It is often desirable to build regular graphs much faster than by extending them through the minimal possible extension. Also, the minimal extensions that we described above are expensive, making them unattractive for extending a regular graph by several nodes in one operation. If there are a large number of nodes over which a DHT has to be constructed, it is desirable to start the DHT construction in parallel involving several subsets of nodes. An example is GHT [11], a geographic hash table implemented over wireless sensor networks, where locality plays a major role in the DHT performance. It is easier for local nodes to form networks amongst themselves and then merge into a global network. Merging is also necessary when a network has to repair itself after a partitioning. Here we address merging of regular graphs without losing regularity. Theorem 3. Given any r-regular graph with n nodes and r ≥ 2, there exists an r-regular graph with n + r − 1 nodes. Proof. We prove this by merging an (r − 2)-regular clique with an r-regular graph. Note that an (r − 2)-regular clique has r − 1 nodes and is fully connected. So the combined graph has n + r − 1 nodes.
Let Gr be the given graph and C r−2 be the (r − 2)-regular clique to be merged with Gr. For each node vi of C r−2, pick an arbitrary edge of Gr and insert vi in the middle of that edge. Node vi, which already had degree r − 2, now has degree r after being inserted in the middle of an existing edge. Once all nodes from C r−2 are inserted, we get one combined graph with regularity r. However, we need to address a caveat here. An edge (u, v) ∈ Gr can take insertions from nodes vi, vj ∈ C r−2 iff (vi, vj) is not an edge in C r−2; otherwise, we would have multiple edges between vi and vj in the combined graph. In order to ensure that the formation of multi-graphs can always be prevented, Gr needs to have at least as many edges as the number of nodes in C r−2. Since C r−2 is an (r − 2)-regular clique, it has r − 1 nodes. If the number of nodes in Gr is n, then it has nr/2 edges. Since n has a lower bound of r + 1, the size of the smallest r-regular graph, the inequality nr/2 > r − 1 is satisfied trivially. We can extend Theorem 3 by allowing the merging of any (r − 2)-regular graph (not necessarily a clique) with an r-regular graph, as long as the number of nodes in the former is no more than the number of edges in the latter. It is possible to merge graphs with a larger number of nodes than specified above, but the maximum node constraint is a safe upper bound below which merging is always possible.
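The extension step of Theorem 1 is easy to prototype. The Python sketch below is illustrative only: it stores the graph as adjacency sets and finds the q-edge matching greedily rather than along a DFS cycle as in the proof, with an assertion guarding that simplification.

def extend_even_regular(adj, r):
    """Add one node to an r-regular graph (r even), following Theorem 1.

    adj maps each node to the set of its neighbours.  The q = r // 2 matching
    is found greedily here; the proof finds it along a long cycle via DFS.
    """
    q = r // 2
    new = max(adj) + 1
    matching, used = [], set()
    for u in sorted(adj):
        for v in sorted(adj[u]):
            if u < v and u not in used and v not in used:
                matching.append((u, v))
                used.update((u, v))
        if len(matching) == q:
            break
    assert len(matching) == q, "greedy matching too small"
    adj[new] = set()
    for u, v in matching:
        adj[u].remove(v); adj[v].remove(u)   # break the matching edge (u, v)
        adj[u].add(new); adj[new].add(u)     # reconnect u to the new node
        adj[v].add(new); adj[new].add(v)     # and add the edge (new, v)
    return new

# Usage: start from the 4-regular clique on 5 nodes and grow it to 8 nodes.
r = 4
adj = {i: {j for j in range(r + 1) if j != i} for i in range(r + 1)}
for _ in range(3):
    extend_even_regular(adj, r)
assert all(len(nbrs) == r for nbrs in adj.values())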
3 Regularity, Connectivity and Hamiltonicity
The edge-connectivity of a graph is the smallest number of edges whose removal would partition the graph. Edge-connectivity has a direct correspondence with the resilience of a network in the face of connection failures. In the following sections, when we refer to the connectivity of a graph, we are referring to edge-connectivity.
3.1 Regularity and Connectivity
Theorem 4. Given an r-regular, k-connected graph (k ≤ r) over n nodes with the following constraints: r is even (i.e. r = 2q, where q ∈ N) and n ≥ 2r (i.e. n ≥ 4q); it is always possible to extend this graph by any number of nodes without altering the regularity r or edge connectivity k. Proof. In a k-connected, 2q-regular graph Gk,2q there exists at least one set of k edges, whose removal partitions the graph into two disconnected components. Let the graph be represented as two components C1 and C2 connected with k bridges across them. From the extension theorem (Theorem 1), we can extend either C1 or C2 arbitrarily without adding any edges across C1 and C2 . This keeps the graph 2q-regular and k-connected after extension. However, for the extension theorem to work for any component Ci , we see that Ci should have at least 2q nodes. Given the constraint that the number of nodes n ≥ 4q, at least one of C1 or C2 is guaranteed to have at least 2q nodes and can be arbitrarily extended.
The above theorem is important for network design because it is often cheaper to build r-regular graphs with a connectivity less than r. In NCW, while it is ideal to have the maximum possible connectivity r, it might not be cost effective; hence, building or extending a network with connectivity k < r might be a useful alternative. The connectivity k, even though it is less than r, may be the minimal connectivity guarantee that the design offers for such a network. Theorem 5. Given an r-regular, k-connected graph, where r is odd, n ≥ 2r and n is even, there exists an r-regular, k-connected graph with n + 2 nodes. A side effect of Theorems 4 and 5 is also a guarantee that we can always build and extend r-regular graphs that are r-connected, whenever r is even. Theorem 6. Starting from the r-regular clique, which is r-connected, it is always possible to build r-regular, r-connected graphs over any n, when r is even (i.e. r = 2q, where q ∈ N). Proof. We prove the theorem using the extension theorem for even regular graphs. A 2q-regular clique has only 2q + 1 nodes. When a new node v′ needs to be added to the graph, first a matching of q edges is found. Each of these q edges is disconnected at one end and connected to v′. Therefore, each of the q pairs of nodes now has one path fewer between them (the connectivity is 2q − 1, or r − 1). Next, v′ is connected to each of the “free” ends of the above q edges. Now, each of the q pairs of nodes has a new path between them through the newly added node v′, thus restoring their pairwise 2q (or r) edge independence. Also, the newly added node is connected to 2q distinct nodes in the graph, each of which has 2q edge-independent paths to all nodes. Thus, the connectivity of the graph remains 2q (or r). We can extend this result to odd regular graphs analogously, using the extension theorem for odd regularity (Theorem 2).
3.2 Regularity and Hamiltonicity
Hamiltonicity has interesting implications for network design. Hamiltonian networks are advantageous for routing algorithms that make use of the ring structure (many DHT algorithms do this). Hamiltonian decompositions allow the load to be equally distributed [12], also making the network robust. Theorem 7. It is always possible to build a hamiltonian r-regular graph over n nodes, where r is even and n ≥ r + 1. Proof. The smallest r-regular graph, a clique of r + 1 nodes, is trivially hamiltonian. Consider a hamiltonian graph G of n nodes. Let h = v1, v2, . . . , vn, v1 represent a hamiltonian cycle in G. Now extend G using the extension theorem for even graphs (Theorem 1). When a new node v′ is to be added into G, break at least one edge (vi, vj) that lies on the hamiltonian cycle h and connect vi to v′. Now connect v′ and vj. We have hence successfully inserted an extra node into the hamiltonian cycle and retained the hamiltonicity property of the new graph, which has n + 1 nodes. Hence the theorem follows by induction.
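As a small illustration of the proof idea (assuming the remaining Theorem 1 bookkeeping for the other matching edges is done separately), inserting the new node into the cycle is a one-line list operation:

def extend_hamiltonian(cycle):
    """Insert a new node into a hamiltonian cycle, as in Theorem 7.

    cycle: list of node ids v1..vn representing the cycle v1-v2-...-vn-v1.
    One matching edge used for the extension is taken from the cycle itself,
    so the cycle simply grows by one node.
    """
    new = max(cycle) + 1
    vi, vj = cycle[0], cycle[1]          # an edge (vi, vj) lying on the cycle
    return [vi, new, vj] + cycle[2:]     # vi - new - vj replaces vi - vj

# Usage: the 5-cycle of the 4-regular clique K5 grows into a 6-cycle.
print(extend_hamiltonian([0, 1, 2, 3, 4]))   # [0, 5, 1, 2, 3, 4]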
While hamiltonicity is a desirable property in applications such as DHTs and NCW, there are applications where not having a hamiltonian circuit is desirable. An example is hybrid networks that have a mix of fixed and mobile components, where it is desirable not to have a single overarching data structure over the entire network. It may be desirable to model a DHT for hybrid networks as a set of distinct components connected by bridges, with an appropriate key distribution mechanism. Changes to a DHT structure in the face of “churn” (frequent arrival and departure of nodes) are usually expensive. In hybrid networks, it is desirable to isolate the mobile part of the network from the fixed part. All this may need to be performed in a way that does not affect the symmetry property of the DHT. In such cases the absence of hamiltonicity becomes an important property. Thus, we have the following corollaries, whose proofs follow from Theorems 4 and 5. Corollary 1. Given an r-regular, non-hamiltonian graph with n nodes, where r is even and n ≥ 2r, it is always possible to extend this graph over any number of nodes and retain the regularity and non-hamiltonian properties. Corollary 2. Given an r-regular, non-hamiltonian graph over n nodes, where r is odd, n is even, and n ≥ 2r, there exists an r-regular, non-hamiltonian graph with n + 2 nodes.
4 Conclusions
The symmetry requirement is very common in many network design problems. The underlying universal structure of a symmetric topology is the regular graph. The objective of this paper has been to explore theoretical underpinnings of regular graphs that are pertinent to network design. This work is part of a larger effort to obtain a set of meta-heuristics for designing optimal network topologies in the face of arbitrary constraints.
References 1. Alberts, D.S., Garstka, J.J., Stein, F.P.: Network Centric Warfare: Developing and Leveraging Information Superiority, 2nd edn. (Revised). C4ISR Cooperative Research Program Publications Series. Department of Defense, USA (2000) 2. Dekker, A.H., Colbert, B.D.: Network Robustness and Graph Topology. In: Proc. Australasian Conference on Computer Science, pp. 359–368 (2006) 3. Loguinov, D., Casas, J., Wang, X.: Graph-Theoretic Analysis of Structured Peerto-Peer Systems: Routing Distances and Fault Resilience. IEEE/ACM Trans. on Networking 13(5), 1107–1120 (2005) 4. Gummadi, K., Gummadi, R., Gribble, S., Ratnasamy, S., Shenker, S., Stoica, I.: The impact of dht routing geometry on resilience and proximity. In: Proc. ACM SIGCOMM, pp. 381–394 (2003)
5. Qu, C., Nejdl, W., Kriesell, M.: Cayley DHTs — A Group-Theoretic Framework for Analyzing DHTs Based on Cayley Graphs. In: Cao, J., Yang, L.T., Guo, M., Lau, F. (eds.) ISPA 2004. LNCS, vol. 3358, pp. 914–925. Springer, Heidelberg (2004) 6. Rakov, M., Parhami, B.: Perfect Difference Networks and Related Interconnection Structures for Parallel and Distributed Systems. IEEE TPDS 16(8), 714–724 (2005) 7. Donetti, L., Neri, F., Muñoz, M.A.: Optimal network topologies: Expanders, Cages, Ramanujan graphs, Entangled networks and all that. J. Stat. Mech. 8 (2006) 8. Valente, A.X.C.N., Sarkar, A., Stone, H.A.: Two-Peak and Three-Peak Optimal Complex Networks. Phys. Rev. Let. 92(11) (2004) 9. Patil, S., Srinivasa, S., Mukherjee, S., Rachakonda, A.R., Venkatasubramanian, V.: Breeding Diameter-Optimal Topologies for Distributed Indexes. Complex Systems 18(2), 175–194 (2009) 10. Patil, S., Srinivasa, S., Venkatasubramanian, V.: Classes of Optimal Network Topologies under Multiple Efficiency and Robustness Constraints. In: Proc. IEEE SMC (to appear, 2009) 11. Ratnasamy, S., Karp, B., Shenker, S., Estrin, D., Yin, L.: Data-centric storage in sensornets with ght, a geographic hash table. Mobile Networks and Applications 8(4), 427–442 (2003) 12. Bermond, J.C., Darrot, E., Delmas, O., Perennes, S.: Hamilton circuits in the directed wrapped Butterfly network. Disc. App. Math. 84(1-3), 21–42 (1998)
Formal Approaches to Location Management in Mobile Communications Juanhua Kang1, Qin Li2, Huibiao Zhu2, and Wenjuan Wu1
1 Department of Computer Science and Technology, East China Normal University
2 Software Engineering Institute, East China Normal University
[email protected], {qli,hbzhu}@sei.ecnu.edu.cn, [email protected]
Abstract. One characteristic of mobile communications is the location management of mobile devices. If the location of a device changes, the base station updates its location information in time to keep the device within signal coverage. In particular, if the location changes during a call, a handover process is needed. In this paper, a formal model based on the pi calculus is used to describe the mobile communication system. Two properties of concern to users are proposed and given a formal description. By analyzing the model’s behaviour under a valid location change, we prove that it satisfies these properties.
1 Introduction
The mobile communication standards have developed to the 3rd generation. However, the major characteristic of mobile communications is still the same: analogue signals as well as digital signals can be exchanged between two mobile devices even if their locations change. Compared with fixed-line communications, the primary advantage of mobile communications lies in the fact that the location of the Mobile Station (MS) can change [9]. Devices are frequently connected to different stations in different locations at different times. This makes location management a very challenging problem in a mobile communication system. The current location management database is a two-level database which consists of the Home Location Register (HLR) and the Visitor Location Register (VLR). Usually the HLR stores the information of all registered users, including their scheduled operations, accounting information, location information, etc. The VLR, whose control region is named the Mobile Switching Center (MSC), stores the relevant information for mobile users who have registered in the MSC. MSCs are the core entities of the entire network, and their major jobs are authentication, location management, handovers, registration and the routing of calls to roaming mobile stations. Nowadays, much research on location management in mobile communication networks focuses on improving protocols, database construction, or security [1,12,13]. A formal model has been proposed to analyze the handover process in GSM [10], but it assumes that a network centre controls the process. In 3G-based
CDMA, the moving subject is the MS. The location update is initiated by the MS whenever necessary. This paper takes a practical point of view. It focuses on location updates and handover processes in location management. As a formal tool for mobile communications, the π calculus can reasonably be used to describe and verify the properties of concern to users. Following the structure of location management, our formal model contains four parts: HLR, VLR, MSC and MS. Moreover, the MS has different states corresponding to different situations; these are also reflected in the MSC. Using the formal model, we can discuss the location updates and the handover process when the MS’s location changes. We demonstrate two properties that all mobile communication systems must preserve. One is the MS’s mobility. The other is the mobility irrelevance of the whole system. Finally, through a detailed analysis, we demonstrate that our model satisfies these two properties. This paper is organized as follows. In Section 2, we give a brief introduction to the π calculus and location updates. Section 3 formalizes every part associated with location management and assembles all the basic parts into a simplified system model. The verification of properties of mobile communications is presented in Section 4. Finally, Section 5 summarizes the paper and puts forward some further improvements.
2 Theory and Location Management
2.1 π Calculus
The pi calculus [8], introduced by Robin Milner [7], is a primitive calculus for describing and manipulating processes which perform uninterpreted actions. It is based on the process algebra CCS [6]. The power of the extension comes from the fact that the values sent and received can be channel names. Moreover, the language allows the transmission of private channels between processes. Mobility is the crucial feature of the pi calculus which distinguishes it from static communication models such as CSP [3]. With the ability to set up new channels, the pi calculus is widely used to describe the changing structure of concurrent systems [4,5,11]. Therefore, it is appropriate for formalizing issues in a mobile network environment [2,15]. The basic grammar of the π calculus is as follows.
P ::= Σi∈I πi.Pi | P1 | P2 | [C]P | (new x)P | !P
where π ::= x(y)    receive y along x
         |  x̄⟨y⟩     send y along x
         |  τ        unobservable action
C: x = y or x ≠ y;  x, y, . . . ∈ X;  P1, P2, . . . ∈ P
Here X is the set of all actions, including the unobservable action. We use fn(P) and bn(P) to stand for the free and bound channels of P, respectively. In this paper, we use the notion of weak bisimulation to verify some properties: if the behaviour of two systems is identical with respect to the same observable actions, then the two systems are weakly bisimilar.

Weak bisimulation: a relation R over the states S is called a weak bisimulation if it satisfies the following transfer property:
• P1 R P2 and P1 --a--> P1′ implies P2 ==â==> P2′, for some state P2′ such that P1′ R P2′;
• P1 R P2 and P2 --a--> P2′ implies P1 ==â==> P1′, for some state P1′ such that P1′ R P2′.

Here L is the set of observable actions. If a ∈ X*, then â ∈ L* is the sequence obtained by deleting all occurrences of τ from a.
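The following small sketch is not part of the paper's π-calculus development; it only illustrates the weak bisimulation definition above on finite labelled transition systems. The representation of an LTS as (states, transitions, initial state) and all names are our own assumptions; it is a naive greatest-fixpoint check, not an efficient algorithm.

```python
from itertools import product

TAU = "tau"

def tau_closure(states, trans):
    """States reachable from each state via zero or more tau steps."""
    reach = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (s, a, t) in trans:
            if a != TAU:
                continue
            for r in states:
                if s in reach[r] and t not in reach[r]:
                    reach[r].add(t)
                    changed = True
    return reach

def weak_arrow(states, trans, a):
    """All pairs (s, t) with s ==a-hat==> t (tau* a tau*, or just tau* when a is tau)."""
    eps = tau_closure(states, trans)
    pairs = set()
    for s in states:
        if a == TAU:
            pairs |= {(s, t) for t in eps[s]}
        for (p, b, q) in trans:
            if b == a and p in eps[s]:
                pairs |= {(s, t) for t in eps[q]}
    return pairs

def weakly_bisimilar(lts1, lts2):
    """Greatest-fixpoint check of the transfer property between two finite LTSs."""
    s1, t1, init1 = lts1
    s2, t2, init2 = lts2
    labels = {a for (_, a, _) in t1 + t2} | {TAU}
    w1 = {a: weak_arrow(s1, t1, a) for a in labels}
    w2 = {a: weak_arrow(s2, t2, a) for a in labels}
    R = set(product(s1, s2))

    def respects(p, q):
        # every strong step of p must be weakly matched by q, and vice versa
        for (x, a, x2) in t1:
            if x == p and not any((q, q2) in w2[a] and (x2, q2) in R for q2 in s2):
                return False
        for (y, a, y2) in t2:
            if y == q and not any((p, p2) in w1[a] and (p2, y2) in R for p2 in s1):
                return False
        return True

    changed = True
    while changed:
        changed = False
        for pair in list(R):
            if not respects(*pair):
                R.discard(pair)
                changed = True
    return (init1, init2) in R

# Example: an 'a' step versus a 'tau' step followed by an 'a' step -- weakly bisimilar.
sys1 = ({"p0", "p1"}, [("p0", "a", "p1")], "p0")
sys2 = ({"q0", "q1", "q2"}, [("q0", TAU, "q1"), ("q1", "a", "q2")], "q0")
print(weakly_bisimilar(sys1, sys2))   # True
```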
2.2 Location Management
If an MS moves from one area to another, it should update its location by reporting it to the nearest MSC. Concretely, this proceeds as follows: when the MS moves into a new area, it sends a login request to the nearest MSC (called the target MSC) along a fixed channel between them. After verifying and authenticating the request, the target MSC sends its MSC location identity (msclai) to the MS. The MS stores the msclai as its lai and registers with the HLR and the corresponding VLR in the MSC. The HLR then notifies the old MSC (called the source MSC) to delete the MS's location information from the source VLR. The process is captured in Figure 1. There are special situations when the MS is talking as well as moving, especially when it moves into a new location. The MS uses channels between itself and the source MSC to transmit data, that is, to talk. When it moves into the target MSC, it needs a handover procedure to obtain a new channel with the target MSC and to close the old channel with the source MSC. The location update is performed after the talk ends.
Fig. 1. The process of the location update
The handover procedure is as follows: when the talking MS finds itself in a new area, it sends a request to the source MSC for handover. The source MSC notifies the target MSC, through the channel between MSCs, to assign a data channel for the incoming MS. It then notifies the VLR to release the old channels. Note that the connection to the target is established before the connection to the source is broken, so the handover is called make-before-break. The handover procedure may fail if the target MSC has no free channel to assign to the incoming MS; for simplicity, we do not consider this situation in this paper.
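As an informal illustration of the two procedures just described, the following toy simulation is not part of the formal model; all class and method names (MobileStation, Msc, Hlr, login, handover_to) are our own, and capacities, authentication and message channels are simplified away.

```python
class Hlr:
    def __init__(self):
        self.location = {}              # imsi -> MSC currently serving the MS

    def update(self, imsi, msc):
        old = self.location.get(imsi)
        self.location[imsi] = msc
        if old is not None and old is not msc:
            old.delete(imsi)            # HLR tells the source MSC/VLR to forget the MS

class Msc:
    def __init__(self, lai, hlr, capacity=8):
        self.lai, self.hlr = lai, hlr
        self.vlr = set()                # imsi values registered in this MSC's VLR
        self.channels = capacity

    def login(self, ms):                # location update of Fig. 1
        self.vlr.add(ms.imsi)
        self.hlr.update(ms.imsi, self)
        return self.lai                 # msclai returned to the MS

    def delete(self, imsi):
        self.vlr.discard(imsi)

    def handover_to(self, target, ms):  # make-before-break handover
        assert target.channels > 0, "target has no free channel (failure case not modelled)"
        target.channels -= 1
        target.vlr.add(ms.imsi)         # connection to the target is established first ...
        self.vlr.discard(ms.imsi)       # ... then the source releases its channel
        self.channels += 1
        ms.lai = target.lai
        # per the text, the HLR location update is deferred until the talk ends

class MobileStation:
    def __init__(self, imsi, msc):
        self.imsi, self.msc = imsi, msc
        self.lai = msc.login(self)
        self.talking = False

    def move(self, new_msc):
        if self.talking:
            self.msc.handover_to(new_msc, self)   # talk continues uninterrupted
        else:
            self.lai = new_msc.login(self)        # plain location update via the HLR
        self.msc = new_msc

# One HLR, two MSCs, one MS moving while idle and then while talking.
hlr = Hlr()
msc_a, msc_b = Msc("a", hlr), Msc("b", hlr)
ms1 = MobileStation("imsi1", msc_a)
ms1.move(msc_b)                         # location update
ms1.talking = True
ms1.move(msc_a)                         # handover
```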
3 Formalization
In this section, we give formal definitions of the components of the mobile communication system; the entire model is presented at the end of the section.
3.1 Model of Mobile Station
In the mobile communication system, the MS is the active party. When the MS moves into a new MSC, it actively sends a location update request to attach to the new MSC. The request message includes imsi, lai and ci: imsi is the identifier of the MS; the MS keeps the lai (Location Area Indicator) until it is shut down; ci is the identifier of the MSC the MS was previously connected to. The MS then receives the new lai along the response channel. After a series of actions, it enters the state MSAttach. In this state, the MS can communicate with any other MS whose state is also MSAttach, and then enters the state MSTalk. In the MSTalk state, the MS can send and receive three kinds of message: cancel expresses interruption of the transmission, newci is the identifier of the new MSC that the MS will move into, and data is the content (including image, audio and even video) to transmit. MS is defined as follows:

MS =df reqci⟨imsi, lai⟩.rspci(lai).MSAttach + move(ci).MS

MSAttach =df talk⟨imsij⟩.MSTalk + talk.MSTalk + move(ci).reqci⟨imsi, lai⟩.rspci(lai).MSAttach + apart.MS

MSTalk =df trans⟨msg⟩.([msg = cancel]MSAttach + [msg = data]MSTalk)
        + trans(msg).([msg = cancel]MSAttach + [msg = data]MSTalk)
        + move(ci).trans⟨ci⟩.MSTalk + apart.MS
The channels req and rsp are the ports the MS uses to get in touch with an MSC; which MSC is contacted is determined by the ci stored in the MS. The channels talkreq and talkrsp are used to send and receive requests for building a new call. When talking with another MS, MSTalk uses the channel trans to transmit and receive messages (msg). The two processes also use a channel apart to notify the MSC when the communication terminates.
Through the move action, the MS obtains the identifier of the new MSC and initiates a location update or a handover process. We discuss this concretely later in the paper.
3.2 Model of MSC and VLR
The MSC is the central functional component of the mobile communication system. It sets up and releases the end-to-end connection. During a call, it handles location updates and handover requirements, as well as establishing connections with the network. The definition of MSC is as follows, assuming it has a capacity of n channels and 0 < k < n:

MSC(0) =df req(imsi, lai).([lai = msclai] + [lai ≠ msclai]update⟨imsi, msclai⟩.)attach⟨imsi⟩.rsp⟨msclai⟩.({Ci/C}MSCAttach | MSC(1))
         + mm(imsi).attach⟨imsi⟩.({Di/D}MSCTalk | MSC(1))

MSC(k) =df req(imsi, lai).([lai = msclai] + [lai ≠ msclai]update⟨imsi, msclai⟩.)attach⟨imsi⟩.rsp⟨msclai⟩.({Ci/C}MSCAttach | MSC(k+1))
         + del.note.MSC(k−1) + note(imsi).del⟨imsi⟩.MSC(k−1)
         + mm(imsi).attach⟨imsi⟩.({Di/D}MSCTalk | MSC(k+1))
         + note(imsi, ci).mmci(imsi).del.MSC(k−1)

MSC(n) =df del.note.MSC(n−1) + note(imsi).del.MSC(n−1) + note(imsi, ci).mmci(imsi).del.MSC(n−1)
In a location update, the MSC decides whether an update to the HLR is needed, according to the lai received along the req channel. It then writes the information of the MS to the VLR and creates an instance of MSCAttach, renaming its channels to Ci. The set Ci includes talkreq, talkrsp, locreq, locrsp and tup. The channel mm between MSCs is used in the handover procedure: during the procedure, the source MSC notifies the target MSC to prepare a Di channel. The set Di includes data and trans for the incoming MS. The channels data between MSCTalks and the channel trans between MSTalk and MSCTalk are used to transmit msg.

MSCAttach =df talk(imsij).locreq⟨imsij⟩.locrsp(j).{tupj/tup}tup.{Di/D}MSCTalk
            + routereq.routersp⟨j⟩.tup.talk.{Di/D}MSCTalk + apart.note⟨imsi⟩.0

Here imsij is the identifier of the called MS, and tup is used to set up the connection between MSCs by renaming. After setting up an end-to-end connection, MSCAttach can also come into MSCTalk.
The imsij is the identifier of the M S which is called. The tup is used to set up connection between M SC by rename. After setting up an end-to-end connection, the M SAttach also can come into M ST alk. M SCT alk =df trans(msg).{dataj /data}([msg = cancel]datamsg.M SCAttach +[msg = data]datamsg.M SCT alk + [msg = ci]noteimsi, ci.0) +data(msg).([msg = data]transmsg.M SCT alk)
248
J. Kang et al. +[msg = cancel]transmsg.M SCAttach) + apart.noteimsi.0
MSCAttach requests and obtains the ci along the channels locreq and locrsp, according to imsij. The HLR notifies the MSC along the channels routereq and routersp so that the MSC can find out whether the MS being called is available. The VLR stores information about all MSs currently under the scope of the MSC. The definition of VLR is as follows:

VLR(Φ) =df attach(imsi).VLR({imsi})
VLR(A) =df attach(imsi).VLR(A ∪ {imsi}) + v(imsi).del⟨imsi⟩.VLR(A − {imsi}) + del(imsi).v⟨imsi⟩.VLR(A − {imsi})
The MSC and its corresponding VLR are referred to jointly as MV. The definition of MV is as follows:

MV(Φ) =df (new X)(MSC(Φ) | VLR(Φ))
MV(A) =df (new X)(MSC(A) | VLR(A))
X represents all the channels between the MSC and the VLR. The notation Φ means that no MS is logged in. A is the set keeping the imsi of the logged-in MSs; assuming A = {imsi1, imsi2, ..., imsik}, we have |A| = k ∈ [0, M] and MSC(A) = Π_{i=1..k} {Di/D}MSC (the parallel composition of k renamed copies), where each MSC copy can be in the MSCAttach or MSCTalk state.
3.3 Model of Home Location Register
In this system, the information about the MSs in each region is maintained centrally by the HLR. We use one HLR to control all the MSCs and VLRs in our system. All users' important static data are stored in the HLR. The HLR keeps two types of information about an MS. One is the permanent parameters: the identifiers of all local users registered in the area, their access priorities, and the types of subscribed services. The other is the parameters of the user's current location; this data needs to be updated at the proper time. Even if the user moves out of the HLR's region, the HLR still needs to record the location information.

HLR =df update(imsi, msclai).([vlai = off] + [vlai = on]voldci⟨imsi⟩.){on/vlai}HLR
      + locreq(imsi).routereqci⟨imsi⟩.routerspci(j).locrsp⟨j⟩.HLR
      + v(imsi).{off/vlai}HLR
The HLR receives a request from an MV along the update channel. Then, according to the value of its vlai, it determines whether or not to delete the old location information in the VLR.
3.4 System Composition
This section considers the composition of the communication units defined above. Assume that the authentication process of every MS always succeeds, and suppose the system has only two MSCs, each with a corresponding VLR, while the whole system is controlled by one HLR. The structure of the simple system
Fig. 2. The structure of the simple system
is shown in Fig. 2. The simple system with MS1 logged into MVa can be modelled as follows:

SYS =df MSAttach1 | MVa(A1) | HLR | MVb(A2)
Correspondingly, the system after MS1 has moved to MVb is:

SYS′ =df MVa(A1 − {imsi1}) | HLR | MVb(A2 ∪ {imsi1}) | MSAttach1
And when MS1 is talking with MSj, which is attached in MVb, the simple system changes into:

SYSTalk(1, j) =df MSTalk1 | MVa(A1) | HLR | MVb(A2)
Correspondingly, when MS1 is logged into MVb:

SYSTalk′(1, j) =df MVa(A1 − {imsi1}) | HLR | MVb(A2 ∪ {imsi1}) | MSTalk1
4 Verification
A correct mobile communication system must meet two requirements. Firstly, any MS can move between any two MSCs. Secondly, the movement of any MS does not affect the communication. In this section, we verify that our model satisfies these two requirements.
4.1 Mobility
Mobility is the key feature of mobile communication systems. It makes it possible to use a small portable terminal at any time and anywhere, communicating with any kind of communication system.

Theorem 1. The MS is mobile; formally, for every MS action move(ci) there exists a trace tr containing move(ci) such that SYS ==tr==> SYS′ and SYSTalk(1, j) ==tr==> SYSTalk′(1, j).
Proof. According to the reaction rules of the π-calculus, we can find the sequence tr as follows. We start with the initial situation where MS1 in MVa is the process willing to move into MVb; the move only changes the ci in MS1.
Case 1: MS1 moves in the MSAttach state. It will initiate a location update procedure. The process MSCAttach(imsi) has its own channels to contact MSAttach.
The parameters in MS1 are {imsi1, lai = MSCa.lai, ci = a}, and correspondingly the parameter in the HLR is vlai = on. To simplify the presentation, we use MET(i) to denote the other processes in MSC(A) except for {Di/D}MSC. According to Section 2.2, the location update process is performed as follows:

SYS
--move(b)--> MVa(A1) | HLR | reqb⟨imsi, lai⟩.rspb(lai).MSAttach1 | MVb(A2)
--reqb, updateb--> MVa(A1) | va⟨imsi1⟩.{on/vlai}HLR | rspb.MSAttach1 | attachb⟨imsi1⟩.rspb⟨msclai⟩.{Ci/C}MSCAttachb | METb(1) | MSCb(k2+1) | VLRb(A2)
--va, dela, note1--> METa(1) | MSCa(k1−1) | VLRa(A1) | HLR | rspb(lai).MSAttach1 | attachb⟨imsi1⟩.rspb⟨msclai⟩.{Ci/C}MSCAttachb | METb(1) | MSCb(k2+1) | attachb(imsi1).VLRb(A2 ∪ {imsi1})
--attachb, rspb--> MVa(A1 − {imsi1}) | HLR | MSAttach1 | MVb(A2 ∪ {imsi1}) = SYS′
When MS1 is switched off in MVa and turned on again after moving into MVb, the location update process is also needed. It differs from the one discussed above, but we can see that the system state reached is the same:

SYS
--apart1--> MS1 | note1⟨imsi1⟩.0 | METa(1) | MSCa(k1) | VLRa(A1) | HLR | MVb(A2)
--note1, dela--> METa(1) | MSCa(k1−1) | va⟨imsi1⟩.VLRa(A1 − {imsi1}) | va(imsi1).{off/vlai}HLR | MS1 | MVb(A2)
--va, move(b)--> MVa(A1 − {imsi1}) | HLR | rspb.MS1 | updateb⟨imsi1, msclai⟩.attachb⟨imsi1⟩.rspb⟨msclai⟩.{C1/C}MSCAttachb | METb(1) | MSCb(k2+1) | VLRb(A2)
--updateb, attachb, rspb--> MVa(A1 − {imsi1}) | HLR | MSAttach1 | MVb(A2 ∪ {imsi1}) = SYS′
Case 2: MS1 moves in the MSTalk state, so a handover process is performed. Assume MS1 in MVa is talking with MSj in MVb. The talk is transferred to MVb in order to avoid call interruption when the phone gets outside MVa's range. The handover process is as follows:

SYSTalk(1, j)
--move(b)--> trans1⟨b⟩.MSTalk1 | {D1/D}MSCTalka | METa(1) | MSCa(k1) | VLRa(A1) | HLR | {Dj/D}MSCTalkb | METb(j) | MSCb(k2) | VLRb(A2)
--trans1, notea, mmb--> METa(1) | dela⟨imsi1⟩.MSCa(k1−1) | VLRa(A1) | HLR | {D1/D}MSCTalkb | {Dj/D}MSCTalkb | METb(1, j) | attachb⟨imsi1⟩.MSCb(k2+1) | VLRb(A2) | MSTalk1
--dela, attachb--> MVa(A1 − {imsi1}) | HLR | MSTalk1 | MVb(A2 ∪ {imsi1}) = SYSTalk′(1, j)
From the above, we can see that the movement of MS1 has nothing to do with MSj, so no matter how the move happens, the handover processes are all the same. We have therefore found a minimal sequence tr that satisfies Theorem 1, and the verification of the mobility property is completed for the simple system.
4.2 Mobility Irrelevance
In order to show that the movement of an MS does not influence the whole system, we should prove that even if the structure or form of the whole system changes, its function remains the same. Using the notion of weak bisimulation in the π-calculus, we can check whether two systems behave the same, and we obtain the following theorem.

Theorem 2. The system is irrelevant of moving; formally, SYS ≈ SYS′ and SYSTalk(1, j) ≈ SYSTalk′(1, j).

Proof. In order to prove that SYS ≈ SYS′, we need to find a bisimulation containing the pair (SYS, SYS′). The set of states that MS1 can reach is limited, and the other MSs do not influence the state of the MVs, so the reachable state space is not very large. We can therefore verify this theorem by exhaustive case analysis. Depending on the communication process, the discussion is divided into the following two cases.

Case 1: Before moving, MS1 in SYS can set up a call to any other MS that has been attached. If the called party is MSm attached in MVa:
SYS
--talk1, locreq1--> MSTalk1 | locrsp(m).{tupm/tup}tup.{D1/D}MSCTalka | METa(1) | MSCa(k1) | VLRa(A1) | routereqci⟨imsim⟩.routerspci(m).locrsp1⟨m⟩.HLR | MVb(A2)
--routereqa, routerspa--> MSTalk1 | locrsp(m).{tupm/tup}tup.{D1/D}MSCTalka | tupm.talkm.{Dm/D}MSCTalka | METa(1, m) | MSCa(k1) | VLRa(A1) | locrsp1(m).HLR | MVb(A2)
--locrsp1, tupm--> MSTalk1 | talkm.{D1/D}MSCTalka | {Dm/D}MSCTalka | METa(1, m) | MSCa(k1) | VLRa(A1) | HLR | MVb(A2)
--talkm--> MSTalk1 | {D1/D}MSCTalka | {Dm/D}MSCTalka | METa(1, m) | MSCa(k1) | VLRa(A1) | HLR | MVb(A2) = SYSTalk(1, m)
So we can get SYS ==ε1==> SYSTalk(1, m) with ε1 = τ.talkm. If the called party is MSj attached in MVb, we similarly get SYS ==ε2==> SYSTalk(1, j) with ε2 = τ.talkj. After moving, MS1 can also set up a call along talk1: when the called party is MSm attached in MVa, SYS′ ==ε1==> SYSTalk′(1, m) with ε1 = τ.talkm, and when the called party is MSj attached in MVb, SYS′ ==ε2==> SYSTalk′(1, j) with ε2 = τ.talkj. The details are omitted for brevity. These are respectively matched by SYS ==ε1==> SYSTalk(1, m) and SYS ==ε2==> SYSTalk(1, j).
Case 2: Before moving, MS1 can receive calls from any other MS that is attached to the mobile system. When the caller is MSj in MVb:
SYS
--talkj--> MSAttach1 | MVa(A1) | HLR | locreqj⟨imsi1⟩.locrspj.{tup1/tup}tup.{Dj/D}MSCAttachb | METb(j) | MSCb(k2) | VLRb(A2)
--locreqj, routereqa--> MSAttach1 | routerspa⟨j⟩.tup1.talk1.{D1/D}MSCTalka | METa(1) | MSCa(k1) | VLRa(A1) | routerspa(j).locrspj.HLR | locrspj.{tup1/tup}tup.{Dj/D}MSCAttachb | METb(j) | MSCb(k2) | VLRb(A2)
--routerspa, locrspj--> MSAttach1 | tup1.talk1.{D1/D}MSCTalka | METa(1) | VLRa(A1) | HLR | tup1.{Dj/D}MSCAttachb | METb(j) | MSCb(k2) | VLRb(A2)
--tup1, talk1--> MSTalk1 | MVa(A1) | HLR | MVb(A2) = SYSTalk(1, j)

with ε3 = talkj.τ.
Then, when the caller MSm is attached in MVa, the process is similar to the above, and we get SYS ==ε4==> SYSTalk(1, m) with ε4 = talkm.τ. After moving, MS1 can also receive calls from any other MS: when the caller MSm is attached in MVa, SYS′ ==ε4==> SYSTalk′(1, m) with ε4 = talkm.τ, and when the caller MSj is attached in MVb, SYS′ ==ε3==> SYSTalk′(1, j) with ε3 = talkj.τ. The details are omitted here for lack of space. These are respectively matched by SYS ==ε3==> SYSTalk(1, j) and SYS ==ε4==> SYSTalk(1, m). We thus find that the derivatives of these two systems are paired in (SYSTalk(1, j), SYSTalk′(1, j)) and (SYSTalk(1, m), SYSTalk′(1, m)).
The former is just the second part we want to prove. The latter is a pair in which MS1 is talking with MSm (attached in MVa) and MS1 moves from MVb to MVa. It has exactly the same structure as the former, with differences only in the location part; their follow-up behaviours are therefore consistent, so we only need to prove the former, SYSTalk(1, j) ≈ SYSTalk′(1, j). For SYSTalk(1, j) and SYSTalk′(1, j): before moving, MS1 can send data to MSj.
SYSTalk(1, j)
--trans1, dataj--> MSTalk1 | {D1/D}MSCTalkb | METa(1) | MSCa(k1) | VLRa(A1) | HLR | {Dj/D}transj.MSCTalkb | METb(j) | MSCb(k2) | VLRb(A2)
--transj--> SYSTalk(1, j)
We can get SYSTalk(1, j) ==θ1==> SYSTalk(1, j) with θ1 = τ.transj(data). Similarly, MS1 can also receive data from MSj: SYSTalk(1, j) ==θ2==> SYSTalk(1, j) with θ2 = transj(data).τ. MS1 can end the talk by sending a cancel message, giving SYSTalk(1, j) ==θ3==> SYS with θ3 = τ.transj(cancel). If MSj wants to end the talk, we get SYSTalk(1, j) ==θ4==> SYS with θ4 = transj(cancel).τ; this specific process is omitted for lack of space.
After moving, MS1 in SYSTalk′(1, j) can also send data to MSj, and we get SYSTalk′(1, j) ==θ1==> SYSTalk′(1, j) with θ1 = τ.transj(data). Similarly, MS1 can also receive data from MSj: SYSTalk′(1, j) ==θ2==> SYSTalk′(1, j) with θ2 = transj(data).τ. These are respectively matched by SYSTalk(1, j) ==θ1==> SYSTalk(1, j) and SYSTalk(1, j) ==θ2==> SYSTalk(1, j).
After moving, MS1 can also send a cancel message to end the talk: SYSTalk′(1, j) ==θ3==> SYS′ with θ3 = τ.transj(cancel). When MSj wants to end the talk, the process is similar to the above one: SYSTalk′(1, j) ==θ4==> SYS′ with θ4 = transj(cancel).τ. These are respectively matched by SYSTalk(1, j) ==θ3==> SYS and SYSTalk(1, j) ==θ4==> SYS.
We thus find that the derivatives of these two systems are paired in (SYS, SYS′) and (SYSTalk(1, j), SYSTalk′(1, j)).
By this exhaustive case analysis, it can indeed be verified that SYS ≈ SYS′ and SYSTalk(1, j) ≈ SYSTalk′(1, j). We have therefore completed the proof of mobility irrelevance for the system.
5 Conclusion and Future Work
In this paper, we have studied the location update and handover processes in mobile communication systems. Using the π-calculus, we have formalized each individual component of the mobile communication system and composed these components into a formal model that is close to reality. Furthermore, we gave a formal description of two properties that all mobile communication systems must possess. The first property is the mobility of the MS: each MS can change its location at will. The second is the mobility irrelevance of the mobile communication system: the location change of an MS does not influence the state of the whole system. Finally, through detailed analysis, we have verified that our model satisfies these two properties, giving confidence that the model is correct. In future work, we intend to translate the definition of the model for automatic verification with a workbench such as the MWB [14]. In modelling the mobile communication system, we can also take into account more realistic features, such as the bandwidth of the network. A variety of further work can be done to make the model more powerful and mature.
Acknowledgement. This work is supported in part by National Basic Research Program of China (No. 2005CB321904), National High Technology Research and Development Program of China (No. 2007AA010302), National Natural Science Foundation of China (No. 90718004), Macau Science and Technology Development PEARL project (No. 041/2007/A3) and Shanghai Leading Academic Discipline Project (No. B412).
References
1. Aura, T., Roe, M.: Security of Internet location management. In: Proc. 18th Annual Computer Security Applications Conference, pp. 78–87 (2002)
2. Chaudhuri, A., Abadi, M.: Formal security analysis of basic network-attached storage. In: FMSE 2005: Formal Methods in Security Engineering, pp. 43–52. ACM, New York (2005)
3. Hoare, C.A.R.: Communicating sequential processes. Communications of the ACM 21(8), 666–677 (1978)
4. Lee, J.Y., John, I.I.: On modeling real-time mobile processes. In: Proceedings of the Twenty-Fifth Australasian Conference on Computer Science, pp. 139–147. Australian Computer Science Communications (2002)
5. Li, Q., Zhu, H., He, J.: An inconsistency-free formalization of B/S architecture, pp. 75–88. IEEE Computer Society, Los Alamitos (2007)
6. Milner, R.: A Calculus of Communicating Systems. Prentice-Hall, Englewood Cliffs (1989)
7. Milner, R.: Communication and Concurrency. Prentice Hall International Series in Computer Science (1990)
8. Milner, R.: Communicating and Mobile Systems: the π-calculus. Cambridge University Press, Cambridge (1999)
9. Mouly, M., Pautet, M.-B.: The GSM System for Mobile Communications. Cambridge University Press, Cambridge (1992)
10. Orava, F., Parrow, J.: An algebraic verification of a mobile network. Formal Aspects of Computing 4, 497–543 (1992)
11. Goldsmith, M., Lowe, G., Ryan, P., Schneider, S., Roscoe, B.: Modelling and Analysis of Security Protocols. Addison-Wesley, Reading (2001)
12. Chew, P.H., Yeo, B.S., Kuan, D.: Sensitivity study of location management area partitioning in cellular communication systems. Computer Networks (2007)
13. Sarker, J.H., Halme, S.J.: Optimizing the use of random access channels in GSM-GPRS. Wireless Personal Communications 22, 387–408 (2002)
14. Victor, B., Moller, F.: The Mobility Workbench – a tool for the π-calculus. In: Dill, D.L. (ed.) CAV 1994. LNCS, vol. 818, pp. 428–440. Springer, Heidelberg (1994)
15. Xia, Z., Zhong, Y., Zhang, S.: Analysis for active network security based on a π-calculus model, pp. 366–371. IEEE Computer Society, Los Alamitos (2003)
Automated Test Scenario Selection Based on Levenshtein Distance

Sapna P.G. and Hrushikesha Mohanty
University of Hyderabad, Hyderabad, India
sapna [email protected], [email protected]
Abstract. Specification based testing involves generating test cases from the specification, here UML. The number of test scenarios generated automatically from UML activity diagrams is large, and hence exhaustive testing is impossible. This paper presents a method for selecting test scenarios generated from activity diagrams using the Levenshtein distance. An activity diagram is transformed into a directed graph representing the sequence of activities, and a modified Depth First Search (DFS) algorithm is applied to obtain test scenarios. The Levenshtein distance is then calculated between the scenarios thus generated. The objective is to select the least similar test cases while providing maximum coverage. Keywords: Specification based testing, Test Scenario Selection, UML activity diagrams.
1 Introduction
The objective of testing is to check whether software meets customer requirements. Testing is one of the most expensive phases in the software development life cycle, accounting for a large proportion of software development effort. This phase has a major impact on the quality and reliability of the delivered software. The size and complexity of software make it impossible to test software exhaustively, and cost, effort and time are important factors that need to be taken into consideration. Hence, there is a need to perform testing in an optimized way that achieves maximum coverage and reveals faults as early as possible. Specification based testing is concerned with testing a system based on its specifications. Unified Modeling Language (UML) artifacts are used to represent system requirements. For example, use case, activity, sequence and class diagrams can be used to represent details of a system at different levels of abstraction. Besides being used for design and development, UML artifacts can be used for testing: test scenarios can be generated using UML diagrams as the basis. Testing based on the specification helps find mismatches between a specification and its implementation. Using the software specification for testing is advantageous for the following reasons: one, the same specification is used for design and testing, thereby reducing the effort involved in building the test specification; two, there is less chance of errors due to misinterpretation.
One of the important issues involved in testing is attaining full coverage. Automated generation of test scenarios from a specification creates all possible scenarios, thereby making the size of the test suite very large and infeasible for exhaustive testing. Due to the constraints of cost and time involved in exhaustive testing, there is a need to reduce the size of a test suite while maximizing throughput in terms of coverage and fault detection. The focus of our work on test case selection is reducing the number of test cases to be executed while maximizing test coverage. In our work, we use UML for system specification. UML use case and activity diagrams elaborate the specification in a form that is easily understandable to the stakeholders of the system. Activity diagrams model the dynamic behaviour of a system, covering both sequential and concurrent control flow between activities. Being developed at the analysis and design phase, activity diagrams represent various aspects of the system and hence can be used across all phases of the development cycle. In our case, we use activity diagrams to generate test scenarios. Functional requirements are recorded using use case diagrams. Each use case is elaborated using activity diagrams, i.e. a use case may have one or more activity diagrams representing the main, alternate and exception flows. Automated scenario generation from activity diagrams is done based on the work in [9][7][2]. We consider that loops in activity diagrams are traversed at most twice. Thus, for each use case, we have a set of scenarios whose size can be large, depending on the size and complexity of the activity diagrams. A threshold value 't' (the percentage of test scenarios to be selected) is obtained from the quality assurance engineers. In this paper, we propose an automated strategy for test scenario selection based on the use of the Levenshtein distance [21] as a measure of dissimilarity between the generated scenarios. For each pair of scenarios, the Levenshtein distance is calculated; the smallest value indicates the scenario pair that is least dissimilar, and vice versa. Hence, we base selection on the highest dissimilarity between scenarios. Our work differs from previous work, in which selection between two similar scenarios was done at random. In our case, we pick the scenario pair having minimum Levenshtein distance and select one of them based on a heuristic (random, priority, type, or subsumption based). The selected scenario is added to the test suite, and we continue the process until the threshold is met. The test suite thus obtained is used for testing. Heuristics offer the added advantage of using other information available on test scenarios, i.e. priority, type and the knowledge of the quality assurance team (preferred selection), in the selection process, thereby producing better results. In the next section, we discuss related work in the area. Section 3 briefly describes the relation between use cases, activity diagrams and scenarios. Section 4 presents our test scenario selection methodology, followed by a case study in Section 5. Section 6 concludes the paper and highlights future work.
2 Related Work
Test case selection provides several benefits in the automated testing process. Exhaustive testing being impossible, there is a need to determine a subset of test
cases that ensure the test objectives, namely maximum coverage and early fault detection. For this, there is a need to select an effective subset of the original test suite. Several techniques have been developed which attempt to maintain the quality of throughput by reducing the size of the test suite while increasing efficiency. [1], [3] and [11] use similarity measures for test case selection: test cases are selected based on the distance value computed between two test cases. Techniques used include pairwise comparison in [3] and Euclidean distance in [11]. Cartaxo et al. use Labeled Transition Systems (LTSs) as the model from which they obtain test cases. In [10], the authors extend the idea of distance to testing object-oriented programs by defining a distance for objects, the 'object distance', which computes distances between arbitrary objects. They describe a model for representing the differences between two objects, and use the Levenshtein distance to calculate the distance between class names, object names and their contents. [14] consider heuristic-driven techniques for test selection. They use four factors, namely risk, coverage, cost and efficiency, which help in the classification and selection of test cases according to different criteria. Risk-based test selection is the focus of the work done by [13]. Risk is based on the cost of each test case (valued by both customer and vendor) as well as severity; risk exposure (RE), the product of cost and severity, is taken as the basis for test selection, and scenarios that cover the most critical cases are considered first. [12] in their Cow Suite tool use UML use case and sequence diagrams to record requirements. Each use case diagram is elaborated using a sequence diagram into scenarios forming a tree structure. Weights are assigned based on functional importance such that the sum of the weights at any level equals one. The test generation algorithm used by Cow Suite generates all possible test cases; selection is done based on the weight of a scenario, obtained as the product of the weights of all nodes in the path leading to that scenario. [15] use activity diagrams for regression test selection. Regression test selection techniques involve selecting a subset of test cases to determine whether the modified program has the same behaviour as a previous, acceptable version of the program running on T, a set of test cases. [20] present a framework for analyzing regression test selection techniques. The framework consists of four categories: inclusiveness, precision, efficiency, and generality; different techniques are analyzed on these four factors. In [19], the authors construct control flow graphs (CFGs) for a procedure or program and its modified version, and use the CFGs to select tests from the original test suite that execute changed code. The input to our work involves use case diagrams capturing the functionality of a system. Each use case is elaborated using activity diagrams. Scenarios are generated from the activity diagrams and form the input of the test selection process. Our work also uses a distance measure to determine similar scenarios, like the work in [1][3][11]. But, unlike them, we use a heuristic to improve upon the results obtained by random selection between two scenarios. Information like scenario type (main, alternate, exception) and priority are available as inputs provided by
the quality assurance team or by customers. We believe that this information can be used to help in better selection of scenarios.
3 Test Scenario Generation

3.1 Definitions
Each use case represents a functionality of the system, represented through one or more activity diagrams. An activity diagram consists of a set of activities and transitions showing the flow of control from one activity to another. SUT, the system under test, may have a set of use cases UC and, for each use case, there may be a set of activity diagrams. AD is the set of activity diagrams that specify the system. A function UcAd(uci) → AD′ applied to a use case uci returns the activity diagrams AD′ associated with that use case. Similarly, for each activity diagram there is at least one associated use case, i.e. AdUc(adj) → UC′ gives the use cases associated with an activity diagram. An activity diagram adi can be thought of as a graph with nodes representing activities and edges showing transitions from one activity to another. An activity diagram has a start node and end nodes. A scenario specifies an atomic user requirement represented by a path in the activity graph from the start node to an end node; thus, a scenario S = ⟨start node, {intermediate nodes}, end node⟩. An activity diagram (activity graph) may have several scenarios, i.e. HasScen(adi) → {sj}, j = 1..n, and for each scenario there exists at least one activity diagram, i.e. IsIn(si) → {adj}, j = 1..n. Thus, for each use case, there is a set of scenarios derived from all the activity diagrams associated with the use case: HasScen(uc) = HasScen(UcAd(uc)) → {sj}, j = 1..n.
3.2 Test Adequacy Criteria
Measurement of test quality is a key issue in testing. A test adequacy criterion specifies the requirement of testing.
– Activity Coverage requires that all the activities in the activity diagram are covered. Activity coverage is calculated as the ratio of visited activities to all activities in the activity diagram.
– Key Path Coverage requires that all the main paths in the activity diagram be covered. The value of key path coverage is the ratio between the traversed main paths and all the main paths in the activity diagram.
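As a minimal sketch of how these two ratios might be computed, the snippet below represents a scenario as a sequence of activity names; treating a key path as covered only when it is itself a selected scenario is our assumption, not a prescription of the paper.

```python
def activity_coverage(selected, all_activities):
    """Ratio of activities visited by the selected scenarios to all activities."""
    visited = set()
    for scenario in selected:              # a scenario is a sequence of activity names
        visited.update(scenario)
    return len(visited & set(all_activities)) / len(all_activities)

def key_path_coverage(selected, key_paths):
    """Ratio of key (main) paths traversed by the selected scenarios to all key paths."""
    chosen = {tuple(s) for s in selected}
    return sum(1 for path in key_paths if tuple(path) in chosen) / len(key_paths)

# Example with the activities of Fig. 1:
all_acts = list("ABCDEFGHIJ")
sel = [list("ABCDFGHJ"), list("ABCDEBCDFGHJ")]
print(activity_coverage(sel, all_acts))    # 0.9 (activity I is not covered)
```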
4 Methodology
In this section, we introduce dissimilarity-based selection of scenarios obtained from UML activity diagrams. Our work is inspired by the work of [11] and [3], where similarity measures are used for test case selection: while in the former the similarity measure is used to calculate distances between objects, in the latter it is used to obtain a reduced test suite for test sequences obtained from LTSs. Our work uses the Levenshtein distance as a similarity measure for selecting test scenarios generated from UML activity diagrams.
4.1 Levenshtein Distance
Named after Vladimir Levenshtein, the Levenshtein distance [21] is a metric for measuring the amount of difference between two sequences (i.e., the so-called edit distance). The Levenshtein distance between two strings is the minimum number of operations needed to transform one string into the other, where an operation is an insertion, deletion, or substitution of a single character. A modification of the Levenshtein distance calculates the genetic distance: the genetic distance between two words is the edit distance divided by the number of characters of the longer of the two, and is thus a value between 0 and 1. The algorithm has a complexity of Θ(mn), where m and n are the lengths of the strings. For example, consider the two strings TESTING and TESTED:
Edit distance: 3 (substitute 'I' by 'E'; substitute 'N' by 'D'; delete 'G')
Genetic distance: 0.43 (edit distance 3 divided by 7, the length of the longer string)
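A straightforward sketch of both measures in Python is given below; it simply applies the standard dynamic-programming recurrence and reproduces the TESTING/TESTED example.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance, Theta(mn) time."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[m][n]

def genetic_distance(a, b):
    """Edit distance normalised by the length of the longer sequence (value in [0, 1])."""
    return edit_distance(a, b) / max(len(a), len(b))

print(edit_distance("TESTING", "TESTED"))               # 3
print(round(genetic_distance("TESTING", "TESTED"), 2))  # 0.43
```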
4.2 Application to Test Case Selection
For each use case uci, we arrive at a set of scenarios SC, deduced from all activity diagrams elaborating the use case. Each of these scenarios is considered as a string for our purpose. The Levenshtein distance (genetic distance) is used to calculate the dissimilarity between two scenarios. The genetic distance between each pair of scenarios pertaining to the use case is stored in a matrix M of size k × k, where k is the number of scenarios. The minimum value in the matrix belongs to the scenarios, say s1 and s2, having minimum dissimilarity. We make use of selection techniques to select between the two scenarios.
4.3 Selection Techniques
Random: One of the two scenarios having minimum Levenshtein distance is selected at random.
Priority based: The scenario with higher priority is selected. The priority of a scenario is calculated based on one or a combination of techniques: customer assigned, statement coverage, risk based, random, etc. [4][18][17][6]. In case of equal priority, a scenario is selected at random.
Subsumption: Given two test scenarios X and Y, we say that X subsumes Y if and only if scenario Y is contained in X in that order. In this case, X is selected.
Type: A main scenario (flow) gets priority over alternate and exception flows. In case both are main scenarios, both are included. In the case of alternate and exception scenarios, one is selected at random.
Preferred Scenarios: Preferred scenarios are the subset of all scenarios listed by the quality assurance engineers as those that must compulsorily be tested. All such scenarios are included in the test suite, followed by selection using one of the above techniques.
4.4 Algorithm
Algorithm 1 shows the steps involved in selecting test cases using the Levenshtein genetic distance. SC, the set of scenarios, and P, the percentage of test cases to be selected, are given as input. Tq denotes the selection technique to be adopted. D is the matrix, of size n × n, containing the Levenshtein distance calculated between scenarios. T is the threshold calculated from the percentage of test cases to be selected, i.e. the number of test cases to be selected. First, the Levenshtein genetic distance between the scenarios is calculated. Then, scenarios are selected based on the chosen technique (random, subsumption, priority, etc.). In case two scenarios are equal, e.g. two scenarios have the same priority, random selection is done.
Algorithm 1. Select(SC, P, Tq)
  SC : the set of scenarios
  P  : the percentage of test cases to be selected
  Tq : the selection technique adopted
  n  : size of SC, the set of scenarios
  T  : threshold for test case selection (n × P/100)
  D  : matrix of size n × n containing the Levenshtein distance between scenarios
  TS : test suite after selection
  min(D) returns the pair of scenarios having the minimum value in D
  Selectscenario(Tq, r, c) returns the scenario chosen according to the selection technique
  Initialize: count = 0
  D = levenshtein(SC)
  while (D <> ∅) do                        // while matrix D is not empty
    if (count < T) then                    // number of selected scenarios below threshold T
      r, c = min(D)                        // r and c are the scenarios with the minimum value in D
      result = Selectscenario(Tq, r, c)    // scenario selected w.r.t. Tq
      Add scenario 'result' to TS
      Remove scenario 'result' from D
      count = count + 1
    else
      exit
    end if
  end while
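The pseudocode leaves the selection heuristic abstract. Below is a rough Python sketch of the same loop (our own illustration, reusing genetic_distance from the sketch in Section 4.1) with the subsumption heuristic; reading "contained in X in that order" as a possibly non-contiguous subsequence is our assumption.

```python
import random

def contains_in_order(big, small):
    """True when 'small' occurs in 'big' in order (possibly with gaps)."""
    it = iter(big)
    return all(ch in it for ch in small)

def select_scenarios(scenarios, percentage, rng=random.Random(0)):
    """Greedy selection loop following Algorithm 1 with the subsumption heuristic."""
    threshold = round(len(scenarios) * percentage / 100)
    remaining = set(range(len(scenarios)))
    suite = []
    while len(suite) < threshold and len(remaining) > 1:
        # pair of remaining scenarios with minimum genetic distance
        i, j = min(((a, b) for a in remaining for b in remaining if a < b),
                   key=lambda p: genetic_distance(scenarios[p[0]], scenarios[p[1]]))
        if contains_in_order(scenarios[i], scenarios[j]):
            pick = i                       # scenario i subsumes j, so keep i
        elif contains_in_order(scenarios[j], scenarios[i]):
            pick = j
        else:
            pick = rng.choice((i, j))      # fall back to random selection
        suite.append(scenarios[pick])
        remaining.discard(pick)
    return suite
```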
Fig. 1. (i) Graph (ii) 5 out of 9 scenarios generated
5 Case Study
5.1 Example
Consider as a simple example the graph derived from an activity diagram (Fig. 1). The activity diagram yields nine scenarios (paths from the start node to an end node), five of which are shown in Fig. 1(ii). Here, we consider that loops are executed at most twice. For brevity, we consider only the activity names. Table 1 shows the set of scenarios generated.

Table 1. Set of scenarios generated for the graph of Fig. 1(i)
[1] A → B → C → D → F → G → H → J
[2] A → B → C → D → F → G → H → I → G → H → J
[3] A → B → C → D → E → B → C → D → F → G → H → J
[4] A → B → C → D → F → G → H → I → G → H → I → G → H → J
[5] A → B → C → D → E → B → C → D → E → B → C → D → F → G → H → J
[6] A → B → C → D → E → B → C → D → E → B → C → D → F → G → H → I → G → H → J
[7] A → B → C → D → E → B → C → D → F → G → H → I → G → H → J
[8] A → B → C → D → E → B → C → D → F → G → H → I → G → H → I → G → H → J
[9] A → B → C → D → E → B → C → D → E → B → C → D → F → G → H → I → G → H → I → G → H → J
The matrix D with the Levenshtein (genetic) distances for the scenarios listed above is shown in Table 2. The values indicate the level of dissimilarity between the scenarios. Consider that the desired path coverage percentage is 50%; then we need to select four test cases out of the total set of nine scenarios, hence the threshold value is 4.

Table 2. Distance matrix of the scenarios in Table 1
      1    2    3    4    5    6    7    8
2   .27
3   .33  .43
4   .42  .21  .50
5   .50  .57  .25  .56
6   .58  .42  .37  .37  .16
7   .47  .27  .20  .33  .31  .21
8   .56  .39  .33  .22  .39  .26  .17
9   .63  .50  .45  .36  .27  .14  .32  .32
The minimum dissimilarity is between scenarios 6 and 9, with Levenshtein distance 0.14. Assuming that 'Subsumption' has been chosen as the selection technique, we select scenario 9; it is added to the test suite and removed from the matrix. Table 3 shows matrix D after the elimination of scenario 9. The next minimum value is 0.16, between scenarios 5 and 6. We continue this procedure until we reach the threshold value. The test suite with 50% coverage consists of the scenarios 9, 6, 8 and 7.

Table 3. Matrix after elimination of scenario 9
      1    2    3    4    5    6    7
2   .27
3   .33  .43
4   .42  .21  .50
5   .50  .57  .25  .56
6   .58  .42  .37  .37  .16
7   .47  .27  .20  .33  .31  .21
8   .56  .39  .33  .22  .39  .26  .17
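The walk-through above can be reproduced approximately with the snippet below. Encoding each Table 1 scenario as a string of single activity letters is our own choice, and genetic_distance is the function from the sketch in Section 4.1; individual values may differ from Table 2 only in rounding.

```python
# Scenarios of Table 1 encoded as strings of activity letters (illustrative encoding).
scenarios = {
    1: "ABCDFGHJ",
    2: "ABCDFGHIGHJ",
    3: "ABCDEBCDFGHJ",
    4: "ABCDFGHIGHIGHJ",
    5: "ABCDEBCDEBCDFGHJ",
    6: "ABCDEBCDEBCDFGHIGHJ",
    7: "ABCDEBCDFGHIGHJ",
    8: "ABCDEBCDFGHIGHIGHJ",
    9: "ABCDEBCDEBCDFGHIGHIGHJ",
}
D = {(i, j): round(genetic_distance(scenarios[i], scenarios[j]), 2)
     for i in scenarios for j in scenarios if i < j}
print(D[(6, 9)], D[(5, 6)])   # 0.14 0.16 -- the two minima used in the walk-through
```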
We use a case study to compare our approach with the random test selection method, where test selection is done at random. Experimental results demonstrate that our method provides better coverage. The results are compared for the different selection techniques, namely priority, type, subsumption, random and preferred scenarios, considering that the desired coverage is 50%.
Table 4. Results showing our technique compared to random selection

Method        Selection Technique      Activity Coverage (%)   Key Path Coverage (%)
Random        —                        68                      50
Our Method    Random                   70                      55
              Subsumption              66                      66
              Scenario Type            81                      61
              Priority                 78                      61
              Preferred Scenarios      87                      64
The coffee vending machine is a small case study we have used to study the effectiveness of our technique. It consists of 7 use cases, 7 activity diagrams and a total of 62 unique activities. Table 4 shows the comparison between our approach and the random test selection method.
5.2 Results
We compare the results of our method with random selection (Table 4). Random selection based on the similarity measure is better than selecting scenarios randomly from the test suite. The subsumption technique achieves 66% coverage; this is because it requires one scenario to be completely subsumed by the other for selection (i.e. one functionality must be a sub-functionality of another). Selection based on scenario type (main/alternate/exception) has 81% coverage: main scenarios constitute the important functionalities of the system, and most activities are covered when traversing the main scenarios. The Preferred Scenarios technique achieves the highest coverage. This technique is based on the knowledge the quality assurance engineers have of the scenarios with high risk and their relative importance, and it requires that the preferred scenarios are input by the QA team. Given that preferred scenarios are provided as input, this technique is best suited for test selection. Key paths involve the main functionalities of the software. Test selection based on the proposed techniques provides better coverage compared to random selection. However, further empirical studies are needed to understand the attributes that influence test selection.
5.3 Threats to Validity
Threats affect the validity of the results and demand replication of the study. The main threats to the validity of this work are:
– Internal validity threats, concerning factors that may have affected the experimental measures. The software system considered for the case study is a small one. The results of the case study are encouraging but cannot be generalized. Hence, there is a need to apply the method to large case studies to substantiate the effectiveness of the technique.
– External validity threats, concerning the generalization of the findings. Generalization of the results obtained from the case study is difficult due to its limitations. In our case, we have used UML activity diagrams for test scenario generation; generalization of the results to scenarios obtained from other diagrams is difficult, and generalization to other kinds of software (e.g. embedded) is even more difficult.
The above threats, while limiting the validity of the conclusions, motivate further empirical studies and research in the area, looking at varied applications.
5.4 Discussion
We can interpret the results obtained from the case study as follows:
– The Levenshtein distance is a useful measure for calculating the similarity between scenarios. The method helps determine activity as well as key path coverage.
– The advantages of test selection are most evident when the number of scenarios is large and it is impossible to test exhaustively within cost and time constraints.
6 Conclusion and Future Work
An automated strategy for test case selection based on the Levenshtein distance has been presented. The strategy selects test cases based on distance measures among the scenarios found in the functional behaviour of a computing system. Following the proposed selection process, we discard minimally dissimilar scenarios, thereby reducing redundancy while increasing coverage. The process provides a generic framework that can be used for test case selection based on different criteria (priority, scenario type, preferred scenarios). As future work, we plan to study the scalability of the proposed technique, as well as whether other attributes, like cost and fault probability, help produce better results.
References 1. Cartaxo, E.G., Andrade, W.L., Neto, F.G.O., Machado, P.D.L.: LTS-BT: A tool to generate and select functional test cases for embedded systems. In: SAC 2008: Proceedings of the 2008 ACM symposium on Applied computing, pp. 1540–1544. ACM, New York (2008) 2. Sapna, P.G., Mohanty, H.: Automated Test Scenario Generation from UML Activity Diagrams. In: International Conference on Information Technology, ICIT, pp. 209–214 (2008) 3. Cartaxo, E.G., Neto, F.G.O., Machado, P.D.L.: Automated Test Case Selection Based on a Similarity Function. GI Jahrestagung (2), 399–404 (2007)
4. Srikanth, H., Williams, L.: On the economics of requirements-based test case prioritization. In: Proceedings of the Seventh International Workshop on Economics-Driven Software Engineering Research (2005)
5. Yoo, S., Harman, M.: Pareto efficient multi-objective test case selection. In: Proceedings of the 2007 International Symposium on Software Testing and Analysis, pp. 140–150 (2007)
6. Sapna, P.G., Mohanty, H.: Prioritization of scenarios based on UML Activity Diagrams. In: First International Conference on Computational Intelligence, Communication Systems and Networks (July 2009)
7. Chen, M., Mishra, P., Kalita, D.: Coverage-driven Automatic Test Generation for UML Activity Diagrams. In: ACM Great Lakes Symposium on VLSI (GLSVLSI) (May 2008)
8. Kim, H., Kang, S., Baik, J., Ko, I.: Test Cases Generation from UML Activity Diagrams. In: 8th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (ACIS-SNPD). IEEE Computer Society, Los Alamitos (2007)
9. Mingsong, C., Xiaokang, Q., Xuandong, L.: Automatic Test Case Generation for UML Activity Diagrams. In: International Conference on Software Engineering, Proceedings of the 2006 International Workshop on Automation of Software Test. ACM, New York (2006)
10. Ciupa, I., Leitner, A., Oriol, M., Meyer, B.: Object distance and its application to adaptive random testing of object-oriented programs. In: Proceedings of the 1st International Workshop on Random Testing, pp. 55–63. ACM, New York (2006)
11. Chen, T., Leung, H., Mak, I.: Adaptive random testing. In: Maher, M.J. (ed.) ASIAN 2004. LNCS, vol. 3321, pp. 320–329. Springer, Heidelberg (2004)
12. Basanieri, F., Bertolino, A., Marchetti, E.: The Cow Suite Approach to Planning and Deriving Test Suites in UML Projects. In: Jézéquel, J.-M., Hussmann, H., Cook, S. (eds.) UML 2002. LNCS, vol. 2460, p. 383. Springer, Heidelberg (2002)
13. Chen, Y., Probert, R.L.: A Risk-based Regression Test Selection Strategy. In: Proceedings of the 14th IEEE International Symposium on Software Reliability Engineering (ISSRE 2003), Fast Abstract, November 2003, pp. 305–306 (2003)
14. Burguillo, J.C., Llamas, M., Fernández, M.J., Robles, T.: Heuristic-driven Techniques for Test Case Selection. In: FMICS 2002, 7th International ERCIM Workshop on Formal Methods for Industrial Critical Systems (ICALP 2002 Satellite Workshop). Electronic Notes in Theoretical Computer Science, vol. 66(2), pp. 50–65 (2002)
15. Chen, Y., Probert, R.L., Sims, D.P.: Specification-based regression test selection with risk analysis. In: CASCON 2002: Proceedings of the 2002 Conference of the Centre for Advanced Studies on Collaborative Research. IBM Press (2002)
16. Burguillo-Rial, J.C., Fernández-Iglesias, M.J., González-Castaño, F.J., Llamas-Nistal, M.: Heuristic-driven test case selection from formal specifications. A case study. In: Eriksson, L.-H., Lindsay, P.A. (eds.) FME 2002. LNCS, vol. 2391, p. 57. Springer, Heidelberg (2002)
17. Elbaum, S., Malishevsky, A., Rothermel, G.: Test case prioritization: A family of empirical studies. IEEE Transactions on Software Engineering 28(2), 159–182 (2002)
18. Rothermel, G., Untch, R.H., Chu, C., Harrold, M.J.: Prioritizing test cases for regression testing. Software Engineering 27(10), 929–948 (2001) 19. Rothermel, G., Harrold, M.: A safe, efficient regression test selection technique. ACM Transactions on Software Engineering and Methodology 6(2), 173–210 (1997) 20. Rothermel, G., Harrold, M.J.: Analyzing Regression Test Selection Techniques. IEEE Transactions on Software Engineering 22(8), 529–551 (1996) 21. Levenshtein, V.I.: Binary codes capable of correcting deletions, insertions, and reversals. Doklady Akademii Nauk SSSR 163(4), 845–848 (1965)
Study of Diffusion Models in an Academic Social Network

Vasavi Junapudi¹, Gauri K. Udgata², and Siba K. Udgata³
¹,³ Department of Computer and Information Sciences, University of Hyderabad, Hyderabad, India — [email protected], [email protected]
² Department of Computer Science, Berhampur University, Berhampur, Orissa, India — [email protected]
Abstract. Models for the processes by which ideas and influences propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of 'word of mouth' in the promotion of new products. The problem of selecting a set of most influential nodes in a network has been proved to be NP-hard. We propose a framework to analyze the network in depth and to find the set of most influential nodes. We consider the problem of selecting, for any given positive integer k, the most influential k nodes in an Academic Social Network (ASN), based on criteria relevant in an academic environment, such as number of citations, working location of authors, cross reference and cross co-authorship. Based on the initial node set selection and the diffusion model, we study the spread of influence of the influential nodes in the academic network. Appropriate criteria are used in the proposed generalized diffusion models. In this paper, we use two different models, (1) the Linear Threshold Model and (2) the Independent Cascade Model, to find the set of influential nodes for the different criteria in an ASN and compare their performance. We constructed the ASN based on information collected from DBLP and Citeseer, and used the Java Social Network Simulator (JSNS) for experimental simulations.
Keywords: Academic Social Network, Influential Node Selection, JSNS, Diffusion Models.
1 Introduction
Recently there has been a great deal of interest in research involving social networks, including both modeling and analyzing. A social network describes actors,
their relationships and their participation in different events. It can be characterized by its relational structure, and the underlying graph structure dictates the structural properties of the network. These include everything from the density of the graph and the average degree of nodes to measures of centrality and information flow. Most research on social networks has focused on structural aspects of the networks. A social network is viewed as a social structure made of individuals or organizations linked by one or more specific types of interdependency, such as friendship, co-authorship, authorship, collaboration and web links, among many others. Typically, each individual is represented by a node in the network, and there is an edge between two nodes if there exists a social interaction between them [1]. Given a social network, there has been quite intensive interest in finding influential nodes based on a set of well-defined measures. In general, social networks play a key role in the spread of information or behavior within a population of individuals. The idea behind diffusion is to find the extent to which people are likely to be influenced by the decisions of their neighbors. For a given value of k, the question is which subset of nodes with cardinality k one should target to maximize the size of the information cascade [2,3]. For this type of problem, most of the work done is based on viral marketing.
1.1 Academic Social Network
In this paper, we look at networks that are a bit more complex than the classic 'who-knows-who' or 'friend-of-a-friend' (FOAF) network. We use event networks that include information about an author's publications with other co-authors, and we present a general formulation of these Friendship-Event Networks (FEN). To measure interesting structural properties of these networks, we define the notions of capital and benefit. Capital is a measure of an actor's social capital and is defined as the number of research publications. Benefit is defined as the number of citations of an article: the benefit received by an author is defined as the number of citations of that author's article, and the benefit received by others is defined as the number of times they cite an article. Here we view these as descriptive properties useful for understanding the data. The relationships between authors change over a period of time; these changes will in turn affect the social capital of an individual as well as the benefit received and the benefit given. An ASN mainly describes academic collaborations among different researchers. A friend is defined as a person with whom an author shares a co-authorship relation, or who shares the same working location or organization. The social capital varies from one selected criterion to another. We analyze this type of collaboration pattern among research communities in depth. There exists a natural social network here, where a node corresponds to an author who has published a paper. An edge between two authors denotes that they have coauthored a research article. An edge between an article and an author indicates that the researcher is an author of the article. An edge between an article i and another article j indicates that article i has cited article j. In such a network, it is desirable to find the most prolific researchers, since they are most likely the trend setters for new innovations. Thus, our objective is to find influential nodes
in the ASN with regard to a measure that can portray the behavior in which we are interested. This task is normally computationally intensive, as we have to consider the diffusion of information in the ASN. In this paper, we try to find the most influential nodes in an ASN using different relevant criteria. We concentrate on the initial active node selection, i.e. assigning different weights to the nodes by considering different criteria in the network, and then selecting the initial node set based on weight. After selection of the initial node set, we apply the Linear Threshold Model and the Independent Cascade Model for each criterion and analyze the results. In this work, we considered the following criteria for studying the diffusion process and influential node set selection:
– Selection of the active node set based on citations of the article
– Selection of author nodes that share a strong relationship with their neighbors
– Selection of author nodes based on cross reference of articles (cross reference is defined as the citation of an article which cited the article authored by this author)
– Selection of author nodes based on cross co-authorship (the cross co-authorship weight is increased if a coauthor of that article publishes more articles in the same research area)
The rest of the paper is organized as follows. In Section 2, we give a brief overview of related work, followed by a detailed discussion of the structure of the ASN in Section 3. In Section 4, we discuss the diffusion models and the various criteria used to find the most influential k nodes and the spread of information that activates the inactive nodes. In Section 5, we evaluate the performance of each of the considered criteria for the ASN. The paper ends with a conclusion and future scope in Section 6.
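The diffusion models themselves are detailed in Section 4; as a minimal illustration of the kind of spread process applied to each criterion, the sketch below (our own, not taken from the paper or from JSNS) simulates one run of the Independent Cascade Model on a toy graph. The uniform activation probability on every edge is an assumption; in the ASN, edge-specific weights could instead encode citations or co-authorship strength.

```python
import random

def independent_cascade(graph, seeds, prob=0.1, rng=random.Random(0)):
    """One simulated run of the Independent Cascade model.

    graph : dict mapping a node to the list of its neighbours
    seeds : initially active nodes (e.g. the k nodes chosen by a criterion)
    prob  : activation probability on every edge (assumed uniform here)
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, []):
            # each newly activated node gets a single chance to activate each neighbour
            if nbr not in active and rng.random() < prob:
                active.add(nbr)
                frontier.append(nbr)
    return active

# Toy example: authors as nodes of a small collaboration graph.
g = {"a1": ["a2", "a3"], "a2": ["a4"], "a3": ["a4"], "a4": ["a5"], "a5": []}
spread = independent_cascade(g, seeds={"a1"}, prob=0.5)
print(len(spread))   # the expected spread grows with better seed (initial node set) choices
```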
2 Related Work
A large portion of the work on analyzing social networks has focused on structural properties of the networks [4,5]. Much of this work has been descriptive in nature, but recently there has been more work that uses structural properties for prediction. Within this category, a number of papers focus on the spread of influence through the network [2,6]; these papers attempt to identify the most influential nodes in the network. Domingos and Richardson [6] consider the problem of finding a set of k nodes that maximizes the information cascade in viral marketing and propose predictive models to show that selecting the right set of users for a marketing campaign can make a big difference. Later, Kempe et al. [2] treated this problem from the perspective of two widely studied operational models for the diffusion of information, namely the threshold model and the cascade model. They show that the underlying problem is NP-hard and that the objective function is submodular, and they present a framework that generalizes the threshold and cascade models for reasoning about the performance guarantee
of algorithms for influential node selection. McCallum et al. propose rule discovery in social networks by looking at messages sent and received between entities [7]. Liben-Nowell and Kleinberg [8] attempt to predict future interactions between actors using the network topology. Fernandess and Malkhi [9] propose a gossip-based protocol for spreading information. Kossinets et al. [10] concentrate on the pathways of information flow, analyzing through which pathways information passes from one node to another. Suri and Narahari [1] propose an approach to finding the influential nodes in a network based on the Shapley value from game theory. In another paper, Kempe et al. [3] consider the problem of information diffusion in the presence of 'word-of-mouth' referral and propose a general model called the decreasing cascade model. In both these papers, the authors point out that the underlying objective function for information diffusion cannot be computed exactly, and it is approximated efficiently using greedy algorithms. Even though social capital is defined in slightly different ways in different contexts such as sociology and economics, most definitions agree that social capital is a function of ties between actors in a social network, whereas human capital refers to properties of individual actors. Portes argues that a systematic treatment of social capital must distinguish between the 'possessors of the capital' (actors who receive benefits), the 'sources of the capital' (actors who give benefits), and the resources that are received or given [11]. In our analysis, the 'sources of the capital' are the articles published by the author. Two related notions in social network analysis are position and role: position refers to subsets of actors who have similar ties to other actors, and role refers to patterns of relationships between these actors or subsets.
3 The Academic Social Network Formulation
We begin with a generic description of an Academic Social Network. An ASN has two types of nodes:
– Article nodes Ar = {Ar1, . . . , Arn}
– Author nodes Au = {Au1, . . . , Aun}
Here an author node represents an author of an article node. The relationships between these nodes are defined as follows:
– Article - Article: This relation holds when one article refers to another; the relationship between articles is based on citations. P(Ari, Arj): article Ari is cited in article Arj.
– Author - Article: This relation is based on the contribution of an author to an article and is established between an author and the articles he or she has published. The weight of the authors of an article increases as more articles cite that article. Q(Aui, Arj): researcher Aui is an author of article Arj.
– Author - Author: This relation is based on co-authorship. Let S = {Aui, . . . , Auj} be the set of authors of an article Arp; all members of S share the co-authorship relation. We define a strong relationship when the authors additionally share the same working location or organization, and a weak relationship between authors who are at the same place but are not co-authors. R(Aui, Auj): authors Aui and Auj either have the co-authorship relation or work in the same organization.

Weight Assignment in the ASN
– Article node: The number of citations of an article Ari is taken as the weight of the article; as the citation count increases, the weight of Ari also increases. We also consider cross-references when calculating weights at different levels: if article Ari has been cited by Ar1, . . . , Ark, and these articles are in turn cited by other articles, then Ari is being referred to indirectly and should receive some weight for it. The weight added decreases by a factor of two each time the level of indirect reference increases.
– Author node: The publication count of an author is used as the weight of the author node; we calculate the human capital of the author node from this count. As with articles, we also use weights based on cross co-authorship. Suppose authors Aui and Auj publish an article Ari in a research area Rp; if Auj then publishes another paper with some other author Auk in the same research area Rp, we increment the weight of Aui as cross co-authorship. The increment decreases by a factor of two with each increase in the level of cross co-authorship.

Assigning Weights to the Relations / Connections
– Author - Article: The weight of this relation is based on the citations of the article.
– Author - Author: The weight is based on co-authorship; if the authors are not related by co-authorship, a small weight is assigned for sharing the same working location or organization. As a group of authors publishes more and more articles, the relationship becomes stronger and stronger, all the more so if the co-authors work at the same location or organization.

Thus, in an ASN, the nodes are authors and articles. The actor is an author who plays a role through publications. The friendship relation is defined by whether two authors have co-authored a paper or work together. In this setting the friendship relation is symmetric, although this may not be true in other domains.
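To make the construction concrete, the following is a small illustrative sketch of an ASN with the weight rules above, assuming an adjacency-list representation; the class and method names are ours, and halving the credit for each further level of indirect reference is an assumption based on the description above rather than the authors' exact scheme.

```python
from collections import defaultdict

class ASN:
    def __init__(self):
        self.cited_by = defaultdict(set)    # article -> articles that cite it (P relation)
        self.authors_of = defaultdict(set)  # article -> its authors (Q relation)
        self.coauthors = defaultdict(set)   # author -> co-authors (R relation)

    def add_citation(self, citing_article, cited_article):
        # cited_article is cited in citing_article
        self.cited_by[cited_article].add(citing_article)

    def add_article(self, article, authors):
        # register authorship and pairwise co-authorship
        self.authors_of[article] = set(authors)
        for a in authors:
            self.coauthors[a].update(set(authors) - {a})

    def article_weight(self, article, max_level=4):
        # Direct citations count fully; indirect (cross) references at deeper
        # levels contribute with a credit assumed to halve at each level.
        weight, frontier, seen = 0.0, {article}, {article}
        for level in range(1, max_level + 1):
            nxt = set()
            for art in frontier:
                nxt |= self.cited_by[art] - seen
            weight += len(nxt) / (2 ** (level - 1))
            seen |= nxt
            frontier = nxt
        return weight

    def author_weight(self, author):
        # Publication count as a simple proxy for the author's human capital.
        return sum(1 for authors in self.authors_of.values() if author in authors)
```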
4 Diffusion Models and Criteria
Granovetter proposed models that capture such a diffusion process based on the use of node-specific thresholds [12]. Many models of this flavor have since been
investigated [12,13,14,15], but the diffusion models proposed by Kempe et al. start with an initial set of active nodes. Here we apply both models, with slight changes to suit the ASN environment, and study them in depth.

Linear Threshold Model: A node v is influenced by each neighbor w according to a weight bw. The dynamics of the process proceed as follows. Each node v chooses a threshold θv uniformly at random in the interval [0,1]; it represents the weighted fraction of v's neighbors that must become active in order for v to become active, i.e., how difficult it is to influence the node: the higher the threshold, the harder the node is to influence. The diffusion process unfolds deterministically in discrete steps t, with all nodes that were active in step t-1 remaining active. A node v becomes active when the total weight of its active neighbors is at least θv:

    ∑_{w ∈ A(v)} bw ≥ θv                                   (1)

where A(v) denotes the set of v's currently active neighbors.
Thus, the thresholds θv intuitively represent the different latent tendencies of nodes to adopt the innovation when their neighbors do so.

Independent Cascade Model: This model also starts with an initial set of active nodes A0. The process unfolds in discrete steps according to the following randomized rule. When node v first becomes active in step t, it is given a single chance to activate each currently inactive neighbor w; if w has multiple newly activated neighbors, their attempts are sequenced in an arbitrary order. If v succeeds, then w becomes active in step t+1, but v makes no further attempts to activate w in subsequent rounds. The process runs until no more activations are possible. Unlike the Linear Threshold Model, the Independent Cascade Model does not take previous activation history into account when activating new nodes.
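As an illustration of the two models described above, the following sketch assumes the network is given as adjacency lists, with a weight per edge for the linear threshold model and an activation probability per edge for the independent cascade model; the data structures and names are ours, not the authors' JSNS implementation.

```python
import random

def linear_threshold(neighbors, b, seeds, rng=random):
    # neighbors: node -> iterable of neighboring nodes
    # b: (w, v) -> influence weight of neighbor w on node v
    theta = {v: rng.random() for v in neighbors}   # random thresholds in [0, 1]
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v in active:
                continue
            if sum(b[(w, v)] for w in neighbors[v] if w in active) >= theta[v]:
                active.add(v)                      # condition (1) satisfied
                changed = True
    return active

def independent_cascade(neighbors, p, seeds, rng=random):
    # p: (v, w) -> probability that newly active v activates inactive w
    active, frontier = set(seeds), list(seeds)
    while frontier:                                # each node gets one chance
        nxt = []
        for v in frontier:
            for w in neighbors[v]:
                if w not in active and rng.random() < p[(v, w)]:
                    active.add(w)
                    nxt.append(w)
        frontier = nxt
    return active
```

Either function can be run repeatedly from the same seed set and the final active-set sizes averaged, which is how randomized results of this kind are usually reported.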
4.1 Criteria Considered for Selecting the Initial Active Node Set
Based on Number of Citations and Publications: Initially, we flag the k article nodes with the highest citation counts, where the number of articles k is given as user input, and then activate the k authors with the highest weights, i.e., overall publication counts, as the initial set of active nodes.

Based on Cross-Reference Citation Count and Publications: Weights are assigned to the articles by considering cross-references in addition to their direct citation counts; the previous step is then repeated with the updated citation information.

Based on the Number of Cross-References and Cross Co-Authorship: We increment the weight of an author X by 1/l, where l is the level of the
co-author of X, so the increment decreases as the level increases. We considered cross-references and cross co-authorship up to the fourth level.
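All of these selection criteria amount to ranking nodes by a weight computed under the chosen criterion and picking the top k. A minimal sketch, assuming the per-node weights have already been computed (for instance along the lines of the weighting sketch in Section 3):

```python
import heapq

def select_initial_active_set(node_weight, k):
    # node_weight: node -> weight under the chosen criterion (citations,
    # publications, cross-references, cross co-authorship, ...)
    return set(heapq.nlargest(k, node_weight, key=node_weight.get))
```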
4.2 Criteria for Activating Inactive Nodes
Assumption: Based on the activation process, we change the threshold value of the author; the process of activating the inactive nodes itself remains the same. The different criteria considered for activating inactive nodes, starting from an initial set of active nodes, are as follows (a generic sketch of the activation loop is given after the list):
1. Activation based on the weight of the author nodes
   – Set the status of the initial set of active nodes to 1.
   – Assign a random threshold to every node.
   – Select an inactive node and compute the sum of the weights of its active neighbors. If the sum reaches the threshold value of the inactive node, set its status to active, i.e., change it to 1.
   – Repeat this process until no more activations are possible.
2. Activation based on the weight of the relationships between authors: The process is the same as in (1), but an inactive node is activated when the total weight of its relationship edges to active neighbors reaches its threshold value.
3. Activation based on the strong relationship between authors: In the initial iteration, we try to activate those inactive nodes that are neighbors of active nodes and have a strong relationship compared with the other neighboring nodes.
4. Activation based on the cross-references of the articles: We increase the weight of an article depending on the level of cross-reference and then apply the same method for activating the inactive nodes.
5. Activation based on cross co-authorship: The weight takes cross co-authorship into account, and the same process is followed to activate the inactive nodes.
For (4) and (5), we consider cross-references and cross co-authorship up to the fourth level.
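These five criteria share a single activation loop and differ only in how the influence exerted on an inactive node is accumulated. The following is an illustrative sketch of that idea, not the authors' implementation; the function names and the dictionary-based graph layout are assumptions.

```python
def activate(nodes, seeds, threshold, influence):
    # influence(v, active) returns the accumulated influence on v from the
    # currently active nodes; changing the criterion only changes this function.
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v not in active and influence(v, active) >= threshold[v]:
                active.add(v)
                changed = True
    return active

# Criterion 1: sum of the node weights (e.g. publication counts) of active neighbors.
def node_weight_influence(neighbors, node_weight):
    return lambda v, active: sum(node_weight[w] for w in neighbors[v] if w in active)

# Criterion 2: sum of the weights of relationship edges to active neighbors.
def edge_weight_influence(neighbors, edge_weight):
    return lambda v, active: sum(edge_weight[(w, v)] for w in neighbors[v] if w in active)
```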
5 Experiments, Results and Discussions
We constructed an ASN with 600 nodes using the Java Social Network Simulator (JSNS). To start with, we collected data from the references of an article published at the Knowledge Discovery and Data Mining (KDD) International Conference. We mainly used DBLP and CiteSeer for data collection as well as for finding cross co-authorship and cross-references up to the fourth level. To avoid aliasing and duplication of nodes, we adopted the DBLP naming format. The ASN contains very few article nodes that are not cited. We simulated all criteria to spread the information and find the resulting set of active nodes.
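Both diffusion models are randomized, so the figures reported below are averages over repeated simulation runs (500 runs, as stated later in this section). A minimal sketch of such an averaging harness, where spread_fn stands in for one run of either model and is assumed to be supplied by the caller; names are illustrative.

```python
import statistics

def average_spread(spread_fn, seed_sets, runs=500):
    # spread_fn(seeds) performs one randomized diffusion (LT or IC) and returns
    # the final active set; we report the mean final size per initial set size.
    return {len(seeds): statistics.mean(len(spread_fn(seeds)) for _ in range(runs))
            for seeds in seed_sets}
```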
Fig. 1. Spread of influence based on number of publications of the author and citations of the article
Here the spread of information is measured in terms of the active node count, and the most influential node is determined by how many other nodes it is able to activate or influence. During the simulation, we updated the information about the citations of the articles and the publications of the authors; whenever a publication or citation count changes, the network goes through the diffusion and activation process again. We compared both basic diffusion models, considering three criteria to find the best k nodes in the network and five criteria to spread the information, i.e., to activate the inactive nodes in the network. The simulation was carried out 500 times and we report the average results. In Fig. 1, we compare the results of the two diffusion models under the criterion that selects the most influential k nodes based on the citations of the articles and the number of publications of the authors. A slight increase in k results in a much better spread, which shows that a few more articles or authors are able to influence many more authors and researchers in the network. It is also observed that the linear threshold model has a slight edge over the independent cascade model. Figure 2 depicts the spread of influence based on author relationship; here we observe a trend similar to Fig. 1. Figure 3 shows the spread of influence when only nodes having a strong relationship are considered; it shows that the increase in the spread of influence with a larger initial node set is not always linear, and in this case the performance of the two models is comparable. Figure 4 shows the spread of influence under the cross-reference criterion. Here we observe that even with a single initial active node we are able to spread the influence in the network quite effectively and efficiently compared to the previous two criteria, and the increase in the spread of influence with the size of the initial active node set is uniform and rapid. Figure 5 shows the spread of influence under the cross co-authorship criterion. In addition to the number of nodes that become active, we analyzed which nodes are influenced by the initial active node set. We found that both models
Fig. 2. Spread of influence based on author relationship
Fig. 3. Spread of influence based on the strong relationship between the authors
Fig. 4. Spread of influence based on the cross-reference criterion
Fig. 5. Spread of influence based on the cross co-authorship
Fig. 6. Spread of influence with variable threshold value based on edge weight (initial active set sizes 1-10)
activate almost the same nodes (authors or articles); only in very few cases do the final active sets of the linear threshold and independent cascade models differ. We also assessed the impact of the initial set of active nodes on the final set of active nodes: the influence of the initial set extends at most two levels. We then simulated the ASN environment with the threshold fixed to a value in the interval [0,1] instead of being chosen randomly during the experiment; a fixed threshold value for the whole ASN means the same difficulty level for all nodes to become influenced. In this scenario, the initial active node set is selected randomly or through user input instead of according to some criterion, and we studied the spread of influence for both the independent cascade model and the linear threshold model. Figure 6 shows that as the threshold value increases, the size of the final set of active nodes decreases; here the activation process is based on the edge weight between the active and inactive node sets. All criteria except the edge-weight criterion yield essentially the same final active node set, and the number of finally activated nodes is also the same. From Fig. 7, we can conclude that
Fig. 7. Spread of influence with variable threshold value based on cross-reference and cross co-authorship (initial active set sizes 1-10)
except for the edge-weight criterion, the spread of influence is not affected by the threshold values of the nodes in the ASN. From the experimental simulations, we find that the criteria based on cross-reference and cross co-authorship give better results for spreading influence from the initial best k nodes in the network than all the other criteria. For all criteria, we observe that the linear threshold model has an edge over the independent cascade model. The query processing component of the model lets users pose queries over the ASN, for example to find the most influential node in the network, a set of experts without a conflict of interest with a given group of authors, or the pioneering research article in a field of research, among others.
6 Conclusions and Future Scope
We conceived, designed and constructed an Academic Social Network based on publications, citations, co-authorship, cross-references and cross co-authorship. We defined social capital, human capital, and benefits received and given in accordance with academic collaboration. Two diffusion models, the linear threshold model and the independent cascade model, were used along with different criteria relevant to the academic environment to study the spread of influence in the Academic Social Network. We observed that the two diffusion models are comparable and perform similarly; differences in the spread are observed only across the criteria for assigning weights to the nodes. In a social network, the influence of persons and the relationships between persons may change over time, so it is difficult to find the exact influence set and to measure the spread of influence in a static network. We therefore designed a dynamic network for measuring the spread as the parameters change. The model is designed in such a way that the user can obtain active node information for all criteria, and
it also facilitates query processing. In future work, we propose to view social networks as a set of clusters; by considering these clusters, we expect the influence to spread more easily and to be measured more accurately than from the initial active set alone.
References
1. Suri, N., Narahari, Y.: Determining the top-k nodes in social networks using the Shapley value. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, vol. 3, pp. 1509–1512. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC (2008)
2. Kempe, D., Kleinberg, J., Tardos, É.: Maximizing the spread of influence through a social network. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137–146. ACM, New York (2003)
3. Kempe, D., Kleinberg, J., Tardos, É.: Influential nodes in a diffusion model for social networks. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 1127–1138. Springer, Heidelberg (2005)
4. Newman, M.: The structure and function of complex networks. arXiv preprint cond-mat/0303516 (2003)
5. Jensen, D., Neville, J.: Data mining in social networks. In: Dynamic Social Network Modeling and Analysis: Workshop Summary and Papers, pp. 7–9 (2002)
6. Domingos, P., Richardson, M.: Mining the network value of customers. In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 57–66. ACM, New York (2001)
7. McCallum, A., Corrada-Emmanuel, A., Wang, X.: Topic and role discovery in social networks. In: Proceedings of IJCAI, vol. 19, pp. 786–791 (2005)
8. Liben-Nowell, D., Kleinberg, J.: The link prediction problem for social networks. In: Proceedings of the Twelfth International Conference on Information and Knowledge Management, pp. 556–559. ACM, New York (2003)
9. Fernandess, Y., Malkhi, D.: On spreading recommendations via social gossip. In: Proceedings of the Twentieth Annual Symposium on Parallelism in Algorithms and Architectures, pp. 91–97. ACM, New York (2008)
10. Kossinets, G., Kleinberg, J.M., Watts, D.J.: The structure of information pathways in a social communication network. In: KDD 2008, pp. 435–443 (2008)
11. Portes, A.: Social capital: its origins and applications in modern sociology. Annual Review of Sociology 24(1), 1–24 (1998)
12. Granovetter, M.: Threshold models of collective behavior. American Journal of Sociology 83(6), 1420–1443 (1978)
13. Valente, T.: Network models of the diffusion of innovations. Computational & Mathematical Organization Theory 2(2), 163–164 (1996)
14. Young, H.: The diffusion of innovations in social networks. In: Blume, L.E., Durlauf, S.N. (eds.) The Economy As an Evolving Complex System III: Current Perspectives and Future Directions. Oxford University Press, USA (2003)
15. Young, H.: Individual strategy and social structure: An evolutionary theory of institutions. Princeton University Press, Princeton (1998)
First Advisory and Real-Time Health Surveillance to Reduce Maternal Mortality Using Mobile Technology

Satya Swarup Samal1, Arunav Mishra1, Sidheswar Samal2, J.K. Pattnaik3, and Prachet Bhuyan4

1 Hamaradoc, Centre of Innovation and Entrepreneurship, KIIT University, Bhubaneswar, India
2 Hope Fertility Clinic, Bhubaneswar, India
3 Office of Chief District Medical Officer, Kandhamal, Orissa, India
4 School of Computer Engineering, KIIT University, Bhubaneswar, India
[email protected], [email protected], [email protected], [email protected], [email protected]
1 Introduction

Maternal mortality is the death of a woman while pregnant or within 42 completed days of the end of pregnancy. The Maternal Mortality Rate (MMR) measures the risk of dying from pregnancy and is expressed as the number of deaths per 100,000 live births. According to Patel et al. [1], 70.69% of maternal deaths within 24 hours of admission to hospital are due to geographic conditions, lack of qualified medical attention, delayed referral and late intervention; a vast majority of these deaths are therefore preventable [2]. Our current study involves developing a "risk assessment method" [Fig. 1] based on a simple health scoring scheme [3], [4]. Using this method we can provide first advisory and referral services to patients in rural and remote areas through health workers in sub-centres, PHCs and CHCs [Fig. 2]. These services are delivered through SMS [5] or WAP [6]. Similar scoring schemes have already been tested for efficacy under Indian conditions on 490 mothers [4]. The previous scoring schemes lacked a transport variable, which is important because around 56.67% of patients die within 6 hours of admission to hospital, highlighting the need for quick transport [7]; we therefore refined the existing model by adding a transport variable. When we introduced this system in Phiringia Block of Kandhamal District, Orissa, India in the form of printed manuals, we saw a 100% decline in MMR [Table 1] over a period of two years; the scoring scheme was one of the factors behind this decline, although its specific impact was not quantitatively measured. Total risk scores (calculated from Table 2) map to risk categories as follows: 0-2 (Low), 3-5 (Moderate), ≥6 (High); based on the risk category, the relevant advisory is provided [8] (an illustrative sketch of this scoring logic is given after Table 2).

Table 1. Maternal Mortality Rate (MMR) during the study period. Source: Office of Chief District Medical Officer, Kandhamal, Orissa, India.

Sl. No.  Name of Block   Year  Maternal Deaths  MMR
1        PHC-Phiringia   1999  2                109.11
2        PHC-Phiringia   2000  0                0.00
Table 2. Characteristics with associated score

  Factors                                     Score
  1. Age
     <16                                      1
     16-35                                    0
     >35                                      2
  2. Parity
     0                                        2
     1-4                                      0
     5+                                       2
  3. Past obstetric history
     Abortion/Infertility                     1
     PPH                                      1
     Baby >4000 gms                           1
     Baby <2500 gms                           1
     PET/Hypertension                         1
     Previous caesarean                       2
     Stillbirth/neonatal death                3
     Prolonged/difficult labor                1
  4. Associated disease
     Diabetes mellitus                        3
     Cardiac disease                          3
     Ch. renal failure                        2
     Previous gyn. surgery                    1
     Infective hepatitis                      1
     Pulmonary tuberculosis                   2
     Under nutrition                          2
     Other diseases (according to severity)   1-3
  5. Present pregnancy
     Bleeding <20 weeks                       1
     Bleeding >20 weeks                       3
     Anemia <10 gm%                           1
     Hypertension                             2
     Edema                                    3
     Albuminuria                              3
     Multiple pregnancy                       3
     Breech                                   3
     RH-isoimmunisation                       3
  6. Transport
     Rural area                               1
     Semi-urban area                          0

Fig. 1. System Architecture
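As an illustration only, the risk scoring and categorization described in the introduction could be encoded as follows; the factor keys and the handful of scores shown are taken from Table 2, while the function name and the treatment of a total of exactly 6 as High are assumptions, not part of the authors' system.

```python
# Hypothetical encoding of a few rows of Table 2 (the full table is encoded the same way).
FACTOR_SCORES = {
    "age_over_35": 2, "parity_5_plus": 2, "previous_caesarean": 2,
    "stillbirth_or_neonatal_death": 3, "diabetes_mellitus": 3,
    "bleeding_after_20_weeks": 3, "anemia_below_10gm": 1,
    "rural_area": 1, "semi_urban_area": 0,
}

def risk_advisory(present_factors):
    # Sum the scores of the factors present and map to the categories in the
    # text: 0-2 low, 3-5 moderate, 6 and above high (boundary assumed).
    total = sum(FACTOR_SCORES[f] for f in present_factors)
    if total <= 2:
        category = "Low"
    elif total <= 5:
        category = "Moderate"
    else:
        category = "High"
    return total, category

# Example: a rural patient over 35 with a previous caesarean scores 2 + 2 + 1 = 5.
print(risk_advisory(["age_over_35", "previous_caesarean", "rural_area"]))  # (5, 'Moderate')
```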
Fig. 2. Use Case Diagram
References
1. Patel, D.A., Gangopadhyay, S., Vaishnav, S.B., et al.: Maternal mortality at Karamsad - the only rural medical college in Gujarat. J. Obstet. Gynaecol. India 51, 63–66 (2001)
2. Ram, F., Ram, U., Singh, A.: Maternal Mortality: Is the Indian Programme Prepared to Meet the Challenges? Health and Development Systematic Review (January-June 2006)
3. Morrison, I., Olsen, J.: Obstet. Gynec. 53, 362 (1979)
4. Datta, S., Das, K.S.: Identification of High Risk Mothers by a Scoring System and its Correlation with Perinatal Outcome. J. Obstet. Gynaecol. India 40(2), 181–190 (1990)
5. Kannel: Open Source WAP and SMS gateway, http://www.kannel.org
6. OMA: The WAP 2.0 conformance release, http://www.wapforum.org
7. Nikhil, P., et al.: Maternal Mortality at a Referral Centre: A five year study. J. Obstet. Gynaecol. India 57(3), 248–250 (2007)
8. Dawn, C.S.: Textbook of Obstetrics and Neonatology, 10th edn.
Appendix: SMS: Short Message Service. WAP: Wireless Application Protocol. PHC: Primary Health Centre. CHC: Community Health Centre.
Author Index
Agarwal, Arun 134
Balaga, Sirish Kumar 188
Banerjee, Anuradha 55
Barooah, Maushumi 84
Bhuyan, Prachet 279
Cai, Hongming 122
Chakraborty, Ayon 98
Chanda, Jayeeta 194
Chatterjee, Kakali 206
Davies, Jim 40
Dutta, Paramartha 55
Gan Chaudhuri, Sruti 170
Ghike, Swapnil 157
Gibbons, Jeremy 40
Gore, M.M. 72
Gupta, Daya 206
Haribabu, K. 188
Herlihy, Maurice 1
Hoffman, D. 176
Hota, Chittaranjan 188
Hurst, W. 176
Junapudi, Vasavi 267
Kaiiali, Mustafa 134
Kang, Juanhua 243
Kanjilal, Ananya 194
Kotwal, Dhananjay 84
Kumar, Amrit 91
Kumar, Mukul 91
Kumar, N.V. Narendra 21
Kundu, Anirban 104
Kurra, Rajesh 140
Laxmi, P. Vijaya 152
Lenin, R. 176
Li, Qin 243
Mishra, Arunav 279
Mishra, Manas Kumar 72
Misra, C. 180
Mitra, Swarup Kumar 98
Mohanty, Hrushikesha 140, 255
Mohanty, J.R. 184
Mohapatra, Durga Prasad 212
Mukhopadhyaya, Krishnendu 157, 170
Mukhopadhyay, Debajyoti 104
Mussa, Yesuf Obsie 152
Nandi, Sukumar 84
Naskar, M.K. 98
Padmanabh, Kumar 91
Patil, Sanket 236
Pattnaik, J.K. 279
P.G., Sapna 255
Ramamritham, Krithi 13
Ramaswamy, S. 176
Rao, C.R. 134
Ray, Mitrabinda 212
Samal, Satya Swarup 279
Samal, Sidheswar 279
Sarje, Anil K. 224
Sengupta, Sabnam 194
Shah, Harshit 21
Shyamasundar, R.K. 21, 140
Singh, Kuldip 224
Sinha, Sukanta 104
Skroch, Oliver 110
Sood, Sandeep K. 224
Srinivasa, Srinath 236
Swain, P.K. 180
Udgata, Gauri K. 267
Udgata, Siba K. 267
Wankar, Rajeev 134
Wu, Wenjuan 243
Xu, Boyi 122
Yu, Hao 122
Zhu, Cheng 122
Zhu, Huibiao 243