Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering Editorial Board Ozgur Akan Middle East Technical University, Ankara, Turkey Paolo Bellavista University of Bologna, Italy Jiannong Cao Hong Kong Polytechnic University, Hong Kong Falko Dressler University of Erlangen, Germany Domenico Ferrari Università Cattolica Piacenza, Italy Mario Gerla UCLA, USA Hisashi Kobayashi Princeton University, USA Sergio Palazzo University of Catania, Italy Sartaj Sahni University of Florida, USA Xuemin (Sherman) Shen University of Waterloo, Canada Mircea Stan University of Virginia, USA Jia Xiaohua City University of Hong Kong, Hong Kong Albert Zomaya University of Sydney, Australia Geoffrey Coulson Lancaster University, UK
57
Gerard Parr Philip Morrow (Eds.)
Sensor Systems and Software Second International ICST Conference, S-Cube 2010 Miami, FL, USA, December 13-15, 2010 Revised Selected Papers
Volume Editors Gerard Parr Philip Morrow University of Ulster, School of Computing and Information Engineering Cromore Road, Coleraine, Co. Londonderry Northern Ireland, BT52 1SA, UK E-mail: {gp.parr; pj.morrow}@ulster.ac.uk
ISSN 1867-8211 ISBN 978-3-642-23582-5 DOI 10.1007/978-3-642-23583-2
e-ISSN 1867-822X e-ISBN 978-3-642-23583-2
Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011935462 CR Subject Classification (1998): C.2, K.6, K.4.2, D.2, D.4, J.3
© ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The Second International ICST Conference on Sensor Systems and Software (S-Cube) was held at the National Hotel, Miami Beach, during December 13–14, 2010. The conference was a two-day event covering many aspects of wireless sensor network systems and their management and control software. Wireless sensor systems are increasing in importance in both the academic and commercial worlds: the availability of innovative sensing technologies and low-cost, low-power wireless communications has made it possible to create and tailor hardware designs for specific applications such as smart homes, assisted living, railway track and signal management, environmental monitoring and measurement, and next-generation healthcare monitoring. However, the control and management software for such systems has, to a large extent, been bespoke, and the development of novel programming paradigms that support the modularity, scalability and reusability of generic software components is a challenge the research community is addressing in pursuit of scalable solutions. The conference offered a forum for industrial and commercial sensor design and applications developers to interact with those from other research groups. There is a need to foster a greater understanding of the technical and operational constraints of real sensor deployments and the difficulties present in the ever-increasing complexity of real systems that may have to deal with sensing in real time, acquiring large amounts of raw data and operating in sometimes harsh communications environments. The conference included a keynote talk by Sumi Helal, Professor in the CISE Department at the University of Florida, and Director of its Pervasive and Mobile Computing Laboratory.
He presented ongoing research efforts in defining and supporting programmable pervasive spaces, from “assistive environments” for the elderly to ATLAS, a middleware architecture and sensor platform that was used as the foundation for the Gator Tech Smart House. Seventeen papers were accepted for presentation at the conference across six main categories: Sensor Application Programming Paradigms; Novel Sensor Applications; Sensor Network Middleware; Trust, Security and Privacy; Wireless Sensor Network Management and Monitoring; and Sensor Application Development Support Systems. This conference could not have happened without the support of many people. I am grateful for the support of our sponsors: the ICST, who made this conference possible, and Create-Net. I am particularly indebted to my colleague Philip Morrow in his role as TPC Chair for his unfailing eye for detail and planning and for his assistance in organizing the technical programme. I am also very grateful to Florida International University for assistance with the local arrangements and for advising on such a stunning venue for the event. In particular I wish to thank Ming Zhao of Florida International University for assistance with the local arrangements in Miami.
I also wish to express my appreciation to Tarja Ryynanen of ICST for her organizational guidance and support in managing the interactions with the various stakeholders and conference supporters, and to JeongGil Ko of Johns Hopkins University, who did a sterling job with the conference website. On behalf of all attendees, I would like to thank the S-Cube 2010 Steering Committee and the members of the Technical Programme Committee and Track Chairs for their support and efforts. In particular I wish to acknowledge Steve Hailes, who invited me to take on the role of Conference Chair. Finally, thanks are also due to Brian J. Bigalke and Tamas Deli of ICST for their assistance with the production of the conference proceedings and CD. Gerard Parr
Organization
Steering Committee Imrich Chlamtac Sabrina Sicari Stephen Hailes
Create-Net, Italy (Chair) Università degli Studi dell'Insubria, Italy University College London, UK
Conference General Chair Gerard Parr
University of Ulster, UK
Web Chair JeongGil Ko
Johns Hopkins University, USA
Conference Coordinator Tarja Ryynanen
ICST
Technical Programme Committee Chair Philip Morrow
University of Ulster, UK
Local Chair Ming Zhao
Florida International University, USA
Track Chairs Sensors for Assisted Living Chris Nugent
University of Ulster, UK
Scalable Deployment Strategies for Sensor Clouds Stephen Hailes
University College London, UK
Sensor Data Fusion Tools and Techniques Sally McClean
University of Ulster, UK
TPC Members Alberto Coen Porisini Alicia Asín Pérez Animesh Pathak Bryan Scotney Chris Nugent Eiko Yoneki Eli Katsiri Gerd Kortuem Houda Labiod Krishna M. Sivalingam Luís Lopez Masum Hasan Mattia Monga Mirco Musolesi Philip Morrow Sally McClean Sam Michiels Subrat Kar Tatsuo Nakajima Uday Desai Jane Tateson Ming Zhao
Università degli Studi dell'Insubria, Italy Libelium, Spain INRIA, France University of Ulster, UK University of Ulster, UK University of Cambridge, UK Birkbeck College University of London, UK Lancaster University, UK Telecom ParisTech, France Indian Institute of Technology, India Universidade do Porto, Portugal Cisco Systems, USA Università degli Studi di Milano, Italy University of St. Andrews, UK University of Ulster, UK University of Ulster, UK Katholieke Universiteit Leuven, Belgium Indian Institute of Technology Delhi, India Waseda University, Japan Indian Institute of Technology Hyderabad, India BT Innovate & Design, UK Florida International University, USA
Table of Contents
Sensor Application Programming Paradigms Wireless Sensors Solution for Energy Monitoring, Analyzing, Controlling and Predicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marius Marcu, Cristina Stangaciu, Alexandru Topirceanu, Daniel Volcinschi, and Valentin Stangaciu Policy-Driven Tailoring of Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . Nelson Matthys, Christophe Huygens, Danny Hughes, Jó Ueyama, Sam Michiels, and Wouter Joosen
1
20
Novel Sensor Applications Integration of Terrain Image Sensing with UAV Safety Management Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Timothy Patterson, Sally McClean, Gerard Parr, Philip Morrow, Luke Teacy, and Jing Nie A Study on the Wireless Onboard Monitoring System for Railroad Vehicle Axle Bearings Using the SAW Sensor . . . . . . . . . . . . . . . . . . . . . . . . Jaehoon Kim, K.-S. Lee, and J.-G. Oh
36
52
Sensor Network Middleware Middleware for Adaptive Group Communication in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Klaas Thoelen, Sam Michiels, and Wouter Joosen
59
A Middleware Framework for the Web Integration of Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hervé Paulino and João Ruivo Santos
75
Expressing and Configuring Quality of Data in Multi-purpose Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pedro Javier del Cid, Daniel Hughes, Sam Michiels, and Wouter Joosen A Lightweight Component-Based Reconfigurable Middleware Architecture and State Ontology for Fault Tolerant Embedded Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jagun Kwon and Stephen Hailes
91
107
Distributed Context Models in Support of Ubiquitous Mobile Awareness Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jamie Walters, Theo Kanter, and Roger Norling
121
Trust, Security and Privacy SeDAP: Secure Data Aggregation Protocol in Privacy Aware Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alberto Coen Porisini and Sabrina Sicari
135
A Centralized Approach for Secure Location Verification in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Abbas Ghebleh, Maghsoud Abbaspour, and Saman Homayounnejad
151
How Secure Are Secure Localization Protocols in WSNs? . . . . . . . . . . . . . . Chérifa Boucetta, Mohamed Ali Kaafar, and Marine Minier
164
Wireless Sensor Network Management and Monitoring DANTE: A Video Based Annotation Tool for Smart Environments . . . . . Federico Cruciani, Mark P. Donnelly, Chris D. Nugent, Guido Parente, Cristiano Paggetti, and William Burns Pro-active Strategies for the Frugal Feeding Problem in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Elio Velazquez, Nicola Santoro, and Mark Lanthier Integrating WSN Simulation into Workflow Testing and Execution . . . . . Duarte Vieira and Francisco Martins
179
189 205
Sensor Application Development Support Systems Revisiting Human Activity Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Eunju Kim and Sumi Helal Deriving Relationships between Physiological Change and Activities of Daily Living Using Wearable Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shuai Zhang, Leo Galway, Sally McClean, Bryan Scotney, Dewar Finlay, and Chris D. Nugent Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
219
235
251
Wireless Sensors Solution for Energy Monitoring, Analyzing, Controlling and Predicting Marius Marcu, Cristina Stangaciu, Alexandru Topirceanu, Daniel Volcinschi, and Valentin Stangaciu “Politehnica” University of Timisoara, Faculty of Automation and Computer Science 2 V. Parvan Blv., 300223 Timisoara, Romania
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. The work presented in this paper addresses the problem of energy wasted through the irresponsible usage of computers and electrical devices, as well as waste encouraged by companies that do not implement any kind of power-awareness plan. It is meant to help increase power usage efficiency in the locations where it is installed by completing an extensive survey of power usage patterns over a short period and then presenting clear, expressive power consumption results for the monitored location, together with recommended action plans that would save energy if applied. Based on this consumption analysis and on the study of energy profiles of different electrical devices, predictions of power consumption and savings can be made. We propose a new wireless sensor network solution that can be used to profile the power consumption of both mains-powered devices and battery-powered devices running different applications. The proposed solution offers a better and easier way to monitor the energy consumption of each target device and to get real-time feedback on the effect of different usage patterns applied to the target devices. Keywords: power consumption, power characterization, power profiles, power signatures, wireless sensor network.
1 Introduction The information and communication technology industry is responsible for 2% of global carbon dioxide emissions, a figure equivalent to that of the aviation industry [1]. In Europe, energy consumption in data centers was 46 terawatt hours in 2006, while in the USA it was 70 terawatt hours in 2007. This is the equivalent of one hundred million 100-watt light bulbs running 24 hours a day, 365 days a year [2]. Moreover, reducing the energy consumed by IT equipment has the greatest impact on overall consumption, because the savings cascade across all supporting systems [3]. Another aspect of the problem is the high energy consumption of data centers. Gartner's research showed that data centre and IT managers are interested in internal projects like rationalization and virtualization. In order to achieve improvements in G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 1–19, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
the energy efficiency of data centers, it is necessary to gain a better understanding of server power consumption and how it influences server performance [4]. As far as the household domain is concerned, in 2006 the BBC predicted that the irresponsible behavior of home users in the UK would cause emissions of 43 million tonnes of carbon dioxide by 2010 and a cost of £11 billion, coming only from electrical energy wasted during this time [5]. Therefore, important institutions like the European Parliament have discussed reducing energy waste and new standards for the standby power consumption of electronic devices [6]. The European Union action plan to increase energy efficiency is also oriented towards changing the energy behavior of consumers, public awareness being one of the priorities [7]. Besides the European Parliament, other important institutions (such as the European Commission, the Economic Committee, etc.) have participated in 'M'illumino di meno' (meaning 'I'm using less light'), an awareness-raising campaign on energy saving held on February 12 [8]. This action shows their interest in reducing energy usage by eliminating energy waste. Only advanced monitoring, modeling and measuring techniques can lead to effective energy management [9]. A system to monitor the electrical consumption of individual devices in a home or office building could help people make more informed decisions on how to alter their usage patterns and choice of electrical devices in order to conserve electricity. This detailed picture would help users better identify those devices or usage patterns that lead to needless electricity consumption. Therefore, we propose a system that monitors, analyzes and controls the energy consumption of individual devices in a home, office building or data centre, providing the necessary tools for superior energy management.
The proposed solution is a better way for individuals or companies to see how much electricity each of the devices in their homes and offices consumes, to make predictions based on consumption habits, and to try to control the amount of electrical energy that is used. One practical aspect of our solution is that users can receive more detailed feedback on the energy they use: an electric bill with a line for every individual device that details how much electricity that device used over the monitored period, together with a graph of its power consumption over time. We aimed for a configurable system that takes into consideration the three types of users (office buildings, data centers, home users) and their particularities: − In office buildings, energy waste can be reduced by offering a system that monitors and controls energy consumption. A simple action such as shutting down PCs at night saves $15 to $20 per computer annually [10]. Office buildings are characterized by a large number of systems of the same class, usually desktop computers with the same kind of usage profile. Therefore, the electrical and computing devices in office buildings need profiles aligned with the company's working program in order to optimize energy consumption over the weekly schedule. Each such profile offers the needed level of usability for the devices at minimum energy, so energy efficiency can be obtained; − For data centers we want to offer a less expensive solution that monitors and analyzes consumption at different levels (server level, cooling system, etc.). We are
motivated by the fact that, at present, there is a lack of detailed information about power consumption at the rack or row level [11]. Therefore, without accurate measurement, there is no basis for trying to optimize data centers [10]. Data center equipment is characterized by high availability under different load levels; therefore, existing power management solutions must not influence the quality level of the services they are implemented for. One possible solution in such environments is to use the proposed system to implement a dynamic balancer between virtual machines and physical machines in order to obtain optimum energy efficiency at the currently requested performance and quality level; − As for home users, we propose a solution that will make people aware of their consumption habits, helping them reduce costs and become more responsible. Home buildings are characterized by a much larger variety of electrical, electronic and computing devices: TV sets, DVD players, desktop and laptop computers, washing machines, air conditioning, refrigerators, microwaves, radio sets, etc. This variety of devices implies a variety of power signatures and usage patterns, which makes energy consumption difficult to control [12]. The most important aspect of home energy efficiency is to identify users' bad habits and to make them aware of their effects. These goals can be achieved by means of four actions implemented as core features in the proposed solution: monitor, analyze, control and predict. − The monitor feature permits online measurement of all or selected devices, saving power consumption, voltage and current values into the database for further analysis, and provides a real-time image of instantaneous overall or per-device power consumption; − The analyze feature provides a way to extract relevant data from the measurements already in the database.
This module offers various reports and charts that show different perspectives on overall or device-specific energy consumption for a given period of time; − The control feature is an important mechanism to programmatically switch specific devices on or off according to their class or usage profile. However, some problems may appear when controlling certain devices, such as computer systems, because of the specific shut-down process required to avoid data loss; − Prediction is needed mainly for the implementation of dynamic power management, in order to select management decisions that optimize overall power consumption under given requirements. Prediction can be implemented as the generation of long-term and short-term trends starting from the measurements stored in the database. The remainder of this paper is organized as follows. Section 2 discusses work related to the domain addressed in our project. The overall architecture of the power consumption monitoring solution, covering both the hardware and software design, is presented in Section 3. Section 4 introduces the power signatures of electrical devices together with their characteristics, and presents and discusses the power signatures extracted for one type of electrical device. The conclusions are presented in the last section.
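To make the monitor and analyze features concrete, the sketch below integrates timestamped power samples into per-device energy figures and the kind of per-device bill line described above. It is an illustrative Python sketch, not the actual implementation; the device name, tariff and sample values are hypothetical.

```python
def energy_kwh(samples):
    """Trapezoidal integration of (timestamp_s, power_w) samples into kWh."""
    total_ws = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total_ws += (p0 + p1) / 2.0 * (t1 - t0)  # watt-seconds per interval
    return total_ws / 3_600_000.0  # Ws -> kWh

def itemized_bill(per_device_samples, tariff_per_kwh):
    """One bill line per device: energy over the monitored period times tariff."""
    return {device: round(energy_kwh(s) * tariff_per_kwh, 2)
            for device, s in per_device_samples.items()}

# One hour at a constant 100 W is 0.1 kWh
office_pc = [(0, 100.0), (3600, 100.0)]
print(energy_kwh(office_pc))  # 0.1
print(itemized_bill({"office_pc": office_pc}, tariff_per_kwh=0.50))
```

The same integration step also feeds the prediction feature: trends can be fitted over the per-period kWh figures this function produces.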
2 Related Work Reducing the energy consumption of electrical devices has both economic and ecological benefits, but it also opens new research directions related to the interpretation of power consumption data. The research directions we are interested in exploring, starting from the power consumption monitoring of various devices, are: the definition of energy and power signatures and their characteristics; the relation between usage patterns, power profiles and energy signatures; and new application-aware power management techniques for energy efficiency. In this section we first discuss current tools for monitoring electrical consumption and then present some research papers that address power and energy monitoring and analysis. Although there are a number of options for measuring power consumption at the level of a whole building (the electric meter being the most obvious), a cheap, scalable option for measuring the power consumption of each device individually does not exist [13]. The current way to measure the power consumption of an electric system is to use an off-the-shelf device like Kill A Watt [14] or Watts Up Pro [15]. Both of these devices make it difficult to develop a practical, real-time, system-specific picture of energy usage covering many individual devices, because the user must go to each monitor in order to record the data [13]. Most of the available solutions [16] are oriented toward the consumer market and therefore do not offer complex features or power management support and integration. On the other hand, professional solutions oriented toward servers and network equipment in data centers are expensive, and they are aimed mainly at alarms and failure avoidance rather than dynamic power management. Ten energy monitoring tools are presented in [16]: EnergyHub, Tendril, Onzo, Agilewaves, Google PowerMeter, GreenBox, The Energy Detective, PowerMand, Green Energy Options, and Energy Aware.
It is beyond the scope of this paper to present and compare all these devices, but [16] is a good starting point for exploring these solutions. The author of [13] proposed an electrical power monitoring system containing distributed units that transmit power consumption data wirelessly via RF radios to a central base station. The monitoring units plug into an outlet and the device being monitored is then plugged into them. The base station parses the incoming data from multiple monitors to determine the power consumption of each device and obtain an overview of overall power consumption. Our solution is similar to the one described in [13] in terms of overall hardware architecture, but it is much more oriented toward the four core features implemented in software (monitor, analyze, control and predict). In [17] the authors proposed a virtual instrument for electric power quality monitoring, aiming to detect, monitor and record in real time all typical disturbances superimposed on the ideal signal. Their goal is to extract the voltage and power quality parameters of the power distribution network. The authors present a completely digital method for fast and accurate monitoring of electrical power quality, useful for producing real-time quality/disturbance reports. In the paper, the mathematical basis of the proposed estimation algorithm is discussed in terms of reliability and uncertainty. This work is different from ours because we do not address the quality or disturbances of power lines; instead, we consider power signatures and their relation to the usage pattern of the device.
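The base-station role described for [13], namely grouping incoming readings by monitor so that per-device and overall consumption can be derived, can be sketched as follows. This is illustrative Python; the "id:watts" frame format is an assumption for the example, not the format used in [13] or in our system.

```python
from collections import defaultdict

def parse_frame(frame: str):
    """Parse a hypothetical 'monitor_id:power_w' text frame into (id, watts)."""
    monitor_id, watts = frame.split(":")
    return monitor_id, float(watts)

def demultiplex(frames):
    """Group incoming readings by monitor id for per-device accounting."""
    per_device = defaultdict(list)
    for frame in frames:
        monitor_id, watts = parse_frame(frame)
        per_device[monitor_id].append(watts)
    return per_device

readings = demultiplex(["node1:60.5", "node2:120.0", "node1:59.8"])
print(sum(readings["node1"]) / len(readings["node1"]))  # 60.15
```

Overall consumption is then just the sum over all device lists, while each per-device list feeds individual profiles.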
An interesting idea is presented in [18], where the authors try to find a way to obtain detailed information about electricity consumption in a building at low cost, without using a distinct power meter for every target device. They intend to achieve this goal using a non-intrusive method, maximizing the use of the existing infrastructure rather than imposing the need to install various new devices in the building, thus reducing the associated hardware and labor costs. In order to disaggregate the overall power consumption data, they built a data acquisition system that samples voltage and current at 100 kHz and calculates real and reactive power, harmonics, and other features at 20 Hz. The authors showed that, under certain conditions, disaggregating the total power consumption among different plugged-in devices is an achievable task and can be done with a relatively high degree of accuracy. However, the problem is much more challenging in the real world, where not only do the number and types of appliances increase, but the measurements are also susceptible to more noise, and obtaining ground-truth data becomes more difficult [18]. Furthermore, they did not take into account complex devices, like computers, which do not have constant levels of power consumption; their consumption varies significantly with the running applications. The original aspects of our work are: the identification of the specific power consumption patterns of different types of users and of their energy management requirements; the definition and implementation of the core features required by a complete energy management solution; the definition of power signatures and their relation to usage patterns; and the design and implementation of an extensible and cost-effective wireless sensor network and software application for power consumption monitoring, analysis, control and prediction.
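The per-cycle features computed in systems like [18] follow from standard AC power relations: real power is the mean of the instantaneous product v(t)i(t), apparent power is the product of the RMS voltage and current, and reactive power follows from the power triangle. A minimal illustration in Python; the sampled waveforms are synthetic, not data from [18].

```python
import math

def power_features(v, i):
    """Real, apparent and reactive power from sampled voltage/current waveforms."""
    n = len(v)
    p_real = sum(vk * ik for vk, ik in zip(v, i)) / n            # P = mean(v*i)
    v_rms = math.sqrt(sum(vk * vk for vk in v) / n)
    i_rms = math.sqrt(sum(ik * ik for ik in i) / n)
    s_apparent = v_rms * i_rms                                   # S = Vrms * Irms
    q_reactive = math.sqrt(max(s_apparent**2 - p_real**2, 0.0))  # power triangle
    return p_real, s_apparent, q_reactive

# One synthetic 50 Hz cycle, current lagging voltage by 60 degrees (PF = 0.5)
n = 1000
v = [325.0 * math.sin(2 * math.pi * k / n) for k in range(n)]
i = [2.0 * math.sin(2 * math.pi * k / n - math.pi / 3) for k in range(n)]
p, s, q = power_features(v, i)
print(round(p / s, 2))  # power factor, approximately 0.5
```

Features such as these, computed per device, are exactly the raw material from which the power signatures discussed later in this paper are built.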
3 Solution Architecture The proposed solution aims to address a wide range of needs in this domain, and that was the starting principle in the design process. We had to take into account different usage scenarios in order to satisfy as many requests as possible. Also, given the use of wireless sensors, the topology of the location has to be accounted for. That is why an incremental design process was selected, which allows for developing a base solution that serves only a small segment of clients and then extending it with more functionality at each step. We started by addressing office buildings, which present a high density of computers and workstations. There are two advantages to this approach. Firstly, the types of devices to be monitored do not vary considerably, because all computers can be abstracted as single-phase AC-powered devices. This means that the hardware needed to implement the solution is the same for all devices, which lowers development costs. A second advantage is that there are many devices to be monitored in such locations, in comparison with home or industrial locations. This stresses the software, because more data is processed and transported through the system at each architectural level. The added strain on the software allows for a more reliable development process, because more code is covered and exercised under high load, so faults are caught earlier and the final result is more reliable.
The development process for this project carried us through many knowledge domains, from electrical engineering and computer network principles to database management, web design and event-based programming. We faced challenges in developing different parts of the system and were able to learn as we advanced through the design. The software development has different characteristics depending on the level of the system it was done for. The low-level code for the microcontroller in the sensor nodes was written in the AVR Studio development environment. This environment was chosen because it allows writing the code to the microcontroller's flash from the same interface, and it offers a real-time debugging option. Both of these characteristics speed up the development and testing of the written code. The code for both the eBox and the server was written in C# using the .NET platform and Visual Studio 2010 Professional. The difference resides in the type of the projects. The user interface web application on the server is an ASP.NET MVC 2 project because it uses the model-view-controller design pattern. Windows Communication Foundation (WCF) was used for communication with the server, so WCF projects were created separately for the eBox and for the server, and the IIS service was activated. The rest of the functionality of the code on the eBox is integrated in the mentioned project and addresses Xbee communication, the configuration module, database administration and the data processing modules. 3.1 Hardware Architecture Figure 1 shows the overall hardware architecture of the project. The design is built around the eBox, which is the central embedded element of the system. This is a ready-built device and no hardware alterations were made to it. Together with the sensor network coordinator, the eBox forms the central unit. The interface between the two is implemented through a USB port.
Also, the components of a sensor node are specified in the figure: an Xbee module, an ATmega16 microcontroller and a measuring device adapted to the type of element that needs measuring. As far as the server is concerned, the hardware is a standard Windows Server 2008 machine. The custom hardware components used in this project are further detailed in this section. The hardware layer of this project is mainly represented by monitoring devices with wireless communication capabilities (Figure 2), with each device being connected between the AC line and an electrical consumer. Together, these devices form a sensor network designed to measure the power consumption and AC line parameters within an office building, a data center or a personal home. The wireless sensor network is responsible for acquiring power measurements from the devices attached to the AC line and transmitting them to a central unit. Each node is composed of a controller specialized in measuring AC line parameters such as voltage, current, active power and frequency, a wireless communication device, and a microcontroller which controls the whole activity of the device. Besides its measurement capabilities, each node has secondary tasks such as routing information towards the central unit. Each measuring node is identified through a unique serial number provided by the wireless device.
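As an illustration of what one node report might carry, the sketch below packs a node's unique serial number together with the four measured AC line parameters into a fixed binary frame. The layout is a hypothetical example chosen for clarity; it is not the actual over-the-air format used by our nodes.

```python
import struct

# Hypothetical frame: 64-bit node serial, then voltage (V), current (A),
# active power (W) and line frequency (Hz) as 32-bit floats, little-endian.
FRAME_FMT = "<Qffff"

def pack_report(serial, voltage_v, current_a, power_w, freq_hz):
    """Encode one measurement report for transmission to the central unit."""
    return struct.pack(FRAME_FMT, serial, voltage_v, current_a, power_w, freq_hz)

def unpack_report(payload):
    """Decode a report on the central unit side."""
    serial, v, i, p, f = struct.unpack(FRAME_FMT, payload)
    return {"serial": serial, "voltage": v, "current": i, "power": p, "freq": f}

report = pack_report(0x0013A20040A1B2C3, 230.0, 0.43, 92.5, 50.0)
decoded = unpack_report(report)
print(hex(decoded["serial"]), decoded["power"])
```

Keying every report on the wireless module's serial number is what lets the central unit attribute each measurement to a specific consumer without any manual configuration.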
Wireless Sensors Solution for Energy Monitoring, Analyzing, Controlling
Fig. 1. Overall solution architecture
Fig. 2. Measuring device architecture
3.2 Software Architecture

The product is structured in three separate stand-alone applications:
− the eBox component, which collects data from the sensors and also controls them;
M. Marcu et al.
− the Web portal, which represents the data in a logical and friendly fashion;
− the Web service, which holds the logic to access and modify the data for outside consumers (clients), for example the eBox.

Everything was centered around the data logic, modeled as a separate project whose logic is shared between the other components (service, reports, web portal, admin applications). As a development platform we used .NET 4 with its enhancements for Entity Framework and ASP.NET MVC. The data layer was modeled with Entity Framework 4 and Unity 2.0 in order to achieve a persistence- and context-ignorant scenario for each application it was deployed to. Persistence ignorance was achieved with POCO objects (a feature introduced in Entity Framework 4), while context "ignorance" was handled by making use of the dependency injection found in the Unity 2.0 framework. On top of this comes the service, with its logic to access and modify sensitive database content. The service was built with WCF; if the client supports WS binding, it can make use of sessions (a different instance per session), enabling caching and logic enhancements. The website was meant to provide front-end business capabilities and was developed with ASP.NET MVC 2.0, making use of the same logic as the service. Rich client graphics and effects were accomplished by using jQuery with some of its popular controls/plugins (Apple-like menus, tabs, grids), which we enhanced to support different business scenarios, for example paging in grids and autocomplete in combo boxes.
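As a language-neutral sketch of this persistence and context ignorance (all class and method names here are our own illustration, not part of the actual C#/Unity code), the key idea is that a data-access component receives its storage context from outside instead of constructing one internally:

```python
class InMemoryContext:
    """A stand-in storage context; the real system used SQL Server via EF4."""
    def __init__(self):
        self._tables = {"readings": {}}

    def query(self, table, key):
        return self._tables[table].setdefault(key, [])

    def save(self, table, key, value):
        self._tables[table].setdefault(key, []).append(value)


class SensorRepository:
    """Persistence-ignorant data access: the context is injected from outside
    rather than constructed internally, so the same repository logic can be
    reused by the service, the reports and the web portal."""
    def __init__(self, context):
        self._ctx = context

    def latest_reading(self, device_id):
        return self._ctx.query("readings", device_id)[-1]


ctx = InMemoryContext()
ctx.save("readings", "dev1", 120.5)
repo = SensorRepository(ctx)  # any context exposing query/save would work here
print(repo.latest_reading("dev1"))
```

Swapping `InMemoryContext` for a database-backed context requires no change to `SensorRepository`, which is the point of the dependency-injection arrangement described above.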
Fig. 3. Overall software architecture
The software part of the system is described in Figure 3 [12]. As mentioned before, the various components follow the general system architecture, with the code on the eBox offering the main functionality. The applications written at different levels of the system have distinct characteristics and are described below. First, we have the low-level code written for the microcontroller in the sensor nodes. A separate part concerns the coordinator: it consists of the serial communication protocol for the connection with the eBox. The code on the endpoints differs from that written for the coordinator, because the controller on an endpoint has to communicate with the measurement device and read data from it; the serial interface is therefore replaced by the interface with the devices. Both types of sensor nodes implement a communication protocol between them, which builds on top of the already existing layers. Second, we have the software on the eBox, built on top of a Windows Embedded 6.0 image. The eBox gathers all data from the monitored devices and stores it using the compact version of SQL Server 2008 for embedded devices. A monitoring module groups data into clusters of measurements coming from the same source and adds a timestamp to make chronological ordering possible. Then a mild analysis is done to remove inconsistent data from the clusters, such as misreads or values that could not practically exist but are reported due to specific events in the power line, especially with alternating current. After the analysis, the data can be stored into the database. In this state, the data is structured into chronological data samples from the same source, and each source has its own data. Reporting and analysis features were provided by Microsoft Reporting Services 2008. Data was stored in Microsoft SQL Server 2008 R2.
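The clustering-and-cleaning step above can be sketched in a few lines (the record layout, the 4000 W plausibility bound and the function name are our assumptions for illustration, not the paper's actual C# code):

```python
from datetime import datetime, timezone

def cluster_and_clean(readings, max_watts=4000.0):
    """Group raw (source, watts) readings by source device, drop implausible
    values (misreads, power-line glitches), and timestamp each cluster so the
    samples can be ordered chronologically before storage."""
    clusters = {}
    for source_id, watts in readings:
        if watts < 0 or watts > max_watts:  # physically impossible reading
            continue
        clusters.setdefault(source_id, []).append(watts)
    stamp = datetime.now(timezone.utc)
    return [{"source": s, "timestamp": stamp, "values": v}
            for s, v in clusters.items()]

samples = [("dev1", 120.5), ("dev1", -3.0), ("dev2", 60.0), ("dev1", 118.9)]
print(cluster_and_clean(samples))
```

The negative reading is discarded as a misread, and each surviving cluster carries the timestamp used for chronological ordering.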
Though not yet implemented in the web portal, support for localization (e.g., Bing Maps) is present. The database structure we designed is presented in Figure 4. Fifteen tables resulted from the normalization process. We have a table cluster that represents the hierarchical structure of an organization (company–department–locations (rooms)) and a set of corresponding maps for each location. On the other side, an abstract representation of the physical elements can be seen (monitored device, eBox, consumption). The central table is the user table, which links to identity, rights, location, consumption and device information. For security reasons, the password field contains the MD5 hash of the user-selected password, a scheme widely used in many domains and applications. In this way, the only place the plain password is available is the login form textbox where the user enters it. Each device has an ID derived from the XBee unique serial number, the name given by the user, and the name of the location it belongs to. Each measurement entry has a timestamp, the values of the current and voltage from which the power will be calculated, and a reference to the device that provided the measurement. The same structure is used in the database on the server. For the communication with the server, Windows Communication Foundation is used. The server exposes several WCF services regarding data transfer. One service ensures that the two SQL Servers present in the system stay synchronized. Another is used to tell the eBox which devices from the user's list are enabled and which are disabled. The third service permits the eBox to upload the clusters of data to the server.
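The password handling can be sketched as follows (shown with Python's hashlib purely for illustration; the system itself is written in C#). Note that the field stores a hash, so verification means re-hashing the submitted password and comparing digests:

```python
import hashlib

def hash_password(password: str) -> str:
    """Return the hex MD5 digest that would be stored in the password field."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

stored = hash_password("s3cret")
# At login, the submitted password is hashed again and compared to the stored
# digest; the plain-text password itself is never persisted.
print(stored, hash_password("s3cret") == stored)
```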
Fig. 4. Database architecture
The security measures taken at this point are described next. First, the communication through WCF is done using HTTPS with, for the moment, a self-signed certificate. More importantly, at each access from the eBox to the server, the registration protocol is used, meaning that the username and password stored in the database of the eBox are transmitted to the server. Only after these credentials are verified does the server allow the communication to start. If the provided username and password are found in the database on the server but the eBox serial number is not present, the system assumes that a new eBox was added for the specified user and registers it. Finally, there is the software running on the server. This is split into two parts. One consists of the discussed WCF services: synchronization and data storing operate on the local SQL Server, while the user-parameter setting transmits the user preferences to the eBox. The second part is a web application that addresses the users of the system. This application is designed using the model-view-controller pattern available in ASP.NET. The data regarding the current user is loaded from the global database containing measurements from all users and is stored in the model. Then, when the user requests certain information to be displayed, the controller enables the view containing the respective data. There is a view for the list of devices, and one for each device, with a graphical representation of the consumed power over a period of time.
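The registration check described above can be sketched as follows (the in-memory structure, field names and serial format are illustrative stand-ins for the actual server database and WCF contract):

```python
def authorize_ebox(db, username, password_hash, ebox_serial):
    """Verify the credentials sent by an eBox; auto-register an unknown
    serial number for a valid user, as the registration protocol describes."""
    user = db["users"].get(username)
    if user is None or user["password_hash"] != password_hash:
        return False  # unknown user or wrong password: refuse communication
    if ebox_serial not in user["eboxes"]:
        user["eboxes"].add(ebox_serial)  # valid user, new eBox: register it
    return True

db = {"users": {"alice": {"password_hash": "abc", "eboxes": {"EB001"}}}}
print(authorize_ebox(db, "alice", "abc", "EB002"))  # True: EB002 is registered
```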
3.3 Analysis and Prediction

Analysis and prediction are two of the four main functionalities of our solution. Moreover, they are the ones that most influence the user into changing his behavior. Analysis offers the user four graphical representations of the consumption of one or more devices.
− A first option is to generate an intuitive graphical representation of the consumption of all devices in a selected location, over a defined time span. The total consumption is depicted with the help of bar charts.
− Another chart shows the consumption distribution for a selected location, highlighting the contribution of each device to the location's total power draw. If, for any reason, a device is responsible for a major proportion of the consumption, the user may realize that he is misusing that device.
− Third, the user can generate an individual consumption distribution pie chart showing how much time the selected device spends in each of the four consumption categories: low-power, normal, high and very high. The pie-generation algorithm is straightforward: given the minimum and maximum consumption over the selected time interval, every consumption reading (in watts) is attributed to one of the four categories by the following criteria:

  interval = maximum - minimum
  if (watts <= minimum + 0.20 * interval) -> low-power (0-20%)
  else if (watts <= minimum + 0.60 * interval) -> normal mode (20-60%)
  else if (watts <= minimum + 0.80 * interval) -> high mode (60-80%)
  else -> very high consumption (80-100%)
− Finally, a classic XY chart showing the consumption evolution over a selected time span can be generated. Note that if the user selects a very broad time span, it is automatically narrowed to the interval in which readings actually occurred, so that the representation area is filled by the graphics. All four analysis graphics can have their representation step set to a day, a week, a month or a year.

Prediction is again a simple yet powerful tool which offers the user insight into the possible consumption evolution over a selected period of one day, one week, one month or a whole year. Of course, the accuracy will depend on the amount of data that has already been collected. The generated reports or graphical representations are based on the same algorithm:
− All power (W) readings are clustered into groups corresponding to the prediction range (e.g., if "weekly prediction" is selected, all readings for the selected device are grouped into clusters for each week up until the last finished week).
− For every cluster an average consumption is computed for each temporal subunit (e.g., for "weekly prediction", the subunit is the day: an average consumption is computed for each Monday, Tuesday, ..., Sunday. The subunit for a day is an hour, for a week a day, for a month a week, and for a year a month).
− In order to predict the consumption for each temporal subunit, the following computations are done:
N = number of available subunits with recordings

predicted subunit (N+1) = {subunit(N) - [subunit(N) - subunit(N-1)] / 2 + average(subunits)} / 2

E.g., given the consumptions Monday(N) = 100 W, Monday(N-1) = 60 W and an overall Monday average of M = 70 W, the predicted Monday(N+1) equals [100 - (100 - 60)/2 + 70] / 2 = (100 - 20 + 70) / 2 = 150/2 = 75 W.
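Both algorithms of this section, the four-way categorization and the subunit prediction, can be sketched in a few lines (the function names are ours; the thresholds and the formula follow the text):

```python
def categorize(watts, minimum, maximum):
    """Assign one reading to one of the four consumption categories."""
    interval = maximum - minimum
    if watts <= minimum + 0.20 * interval:
        return "low-power"   # 0-20% of the observed range
    if watts <= minimum + 0.60 * interval:
        return "normal"      # 20-60%
    if watts <= minimum + 0.80 * interval:
        return "high"        # 60-80%
    return "very high"       # 80-100%

def predict_next(last, previous, overall_average):
    """Predict the next subunit's consumption from the last two subunits
    and the overall average for that subunit."""
    return (last - (last - previous) / 2 + overall_average) / 2

# The worked example from the text: Monday(N)=100 W, Monday(N-1)=60 W,
# overall Monday average 70 W.
print(predict_next(100, 60, 70))  # 75.0
```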
As can be seen, the prediction algorithm builds on the idea that our consumption habit for the next time interval depends on the habit over the last two intervals, further smoothed with the help of the overall average consumption.

3.4 User Features

The targeted clients of our service have very different profiles, from company managers or representatives who want to reduce power consumption in a certain branch, to individuals who want to analyze the energy consumption in their homes. That is why we designed the solution to require little training on the user's part and not to rely on any previous knowledge other than internet browsing. Once the system is powered on, the client only needs an internet connection to access the server and use the application. The user interface presents a list of all the registered devices for the logged-in user. The device name and ID are displayed, together with the power consumed by the device since the start-up of the system, both in absolute values and as a percentage of the total power consumed.

Fig. 5. Features overview, grouped by the four browser-accessible functionalities: monitoring (view current/previous power data, single node information, node signal strength, consumption statistics, highest/lowest consumption per node, overloaded nodes, cluster information, topology, set consumption threshold alerts), analysis (reports, graphics, daily/weekly/monthly consumption, specific devices/nodes consumption), control (switch nodes on/off, limit node parameters such as voltage and current, power option commands to PCs, energy-saver profiles, plugged device control) and prediction (predict energy consumption)
Any user may opt for one of the four main functionalities (depicted in Fig. 5) supported by the user interface (Fig. 6). The monitoring option offers a 2D spatial view of the installed network of plugs. Interaction is possible with all nodes, and data may be obtained from them in the form of reports or suggestive graphics. The prediction option is a special asset which can give an idea of, for example, the value of the next electricity bill. The control option enables remote on/off switching of specific consumers and offers the possibility to apply a certain energy-saver profile. These profiles make decisions such as turning off lights at night or putting a PC into hibernation.
Fig. 6. Graphical user interface
4 Power Signatures and Test Results

4.1 Power Signature Definition

The power signature of an electrical, electronic or computing device is defined as the power consumption response to a certain workload executed by the target device. A power signature is the variation in time of the power consumption measurements when a certain usage pattern is applied to the device. Some devices show distinct power signatures as a function of the type of workload they execute or of the applications executed on them. Therefore, the power consumption of a device is not constant but depends on various hardware parameters and user applications. The test methodology we used to extract power consumption signatures addresses two directions: the power states of the target devices and the usage patterns of top-level applications. Every device has at least two power states (on and off), while most devices implement more: off - the device is turned off, but remains plugged in; sleep or stand-by - the device is in one of its power-saving states, where it waits for certain commands to switch to the active state again (in sleep states a device is not completely switched off, in order to retain at some level the last active device state or context, so that the active state can easily be restored); active - the device is turned on and executes its activities. The application-level influence on power consumption is much more difficult to model, but it has an important impact on the power signature. Considering the relation between the type of workload executed by a device and its power consumption variation when executing that workload, we grouped consumer electrical devices into three classes:
− Low-intelligence devices are the systems whose power signatures depend only or mostly on the hardware power states the device passes through when used.
In this class we include consumer electrical devices such as refrigerators, washing machines, heating devices and air conditioning units. The power signatures of these devices are little influenced by their usage parameters or the type of workload they execute.
− Medium-intelligence devices are the systems whose power signatures depend on both the hardware power states and the workload the device executes. In this class we include electronic devices containing some level of electronic control: TV sets, radios, CD players, DVD players, set-top boxes, fixed phones, etc. The power signatures of these devices are moderately influenced by the workload they execute.
− High-intelligence devices are the systems whose power signatures depend mainly on the type and parameters of the workload they execute. We include in this class computing systems containing at least one central processing unit, such as a microprocessor or microcontroller: desktop PCs, notebooks, PDAs, smartphones, printers, network devices, etc. The power signatures of these devices are strongly influenced by the workload they execute.
For every type of device which can be seen as a power consumption source, we can establish different power signatures (or power fingerprints) which denote the power consumption of the device for a given utilization profile (e.g., usage pattern, applied stimuli or workload). In our tests we observed that every electronic device has a specific power consumption profile for different workload types or usage patterns [12]. We started from the results obtained in [12], tried to identify the power signature characteristics and their benefits, and took one simple device as a case study.

4.2 Test Workbench

The performance criteria by which we benchmarked our system refer to specific metrics depending on the component being analyzed. The system as a whole has to be accurate, fast and scalable. During the tests, we observed the behavior of our system, paying special attention to faults occurring in its components. The communication distances between two XBees were measured. In open space the range was about 30 meters. When the modules are separated by a 25-centimeter-thick concrete wall, the communication between them is interrupted; a thinner wall of 15-20 centimeters only shortens the range to about 10 meters. Glass doors do not affect the communication in any way, which is a good thing, especially when using the system in office buildings. The transmission times are satisfactory, especially because we only need to transmit a sample from a device every five minutes. This allows enough time to gather measurements from all devices present, and thanks to the processing power of the eBox the samples are analyzed and stored in time, so that the workload on the eBox does not stay at constantly high levels. Considering all these parameters, we tried to execute the tests in the same environment. For every type of device we considered a specific set of tests in order to extract power signature characteristics.
We examined two aspects of power signatures: first, whether they are the same or similar when executing similar workloads and, second, whether they are distinct when executing different workloads. In our tests we considered one programmable device in the first class - a washing machine.

4.3 Test Results

The first test was executed to identify the elements of the washing program and their effect on power consumption. There are seven phases of a complete washing program (Figure 7):
− (I) the prewashing phase;
− (II) the water heating phase (40 °C);
− (III) the first washing phase;
− (IV) an optional heating phase between the two washing phases;
− (V) the second washing phase;
− (VI) the rinsing phase;
− (VII) the drain and drying phase (1000 rotations/minute).
Fig. 7. Washing program phases and their power signatures

Fig. 8. Power signature repeatability: (a) power signature from test 1, (b) power signature from test 4 (P [W] over t [s])
The second test emphasizes the stability and repeatability of power signatures when running the same washing program with the same clothes workload. We ran the washing program four times and monitored the power consumption (Figure 8a). The obtained power consumption signatures were essentially the same, but some differences can be observed due to the different amounts of items washed. During test 4 the washing machine was loaded to half its full capacity; the heating phase is therefore shorter, because the washing machine automatically adapts to the workload (Figure 8b). During test 2 we observed that the washing machine did not drain the clothes correctly. This malfunction can be observed in the power signature in Figure 9(a). In Figure 9(b) two differences can be observed compared with the standard power signatures: an extra spike to heat the water between washing phases and a longer drain phase. In our tests we obtained both similar signatures for similar programs and workloads, and distinct signatures for different programs or workloads.
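The paper judges "same or similar" signatures visually; as one possible quantitative stand-in (our illustration, not the authors' method), a Pearson correlation between two sampled power traces distinguishes same-shape from different-shape signatures:

```python
def similarity(sig_a, sig_b):
    """Pearson correlation of two power traces; 1.0 means identical shape.
    Traces are truncated to the shorter length; assumes non-constant traces."""
    n = min(len(sig_a), len(sig_b))
    a, b = sig_a[:n], sig_b[:n]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    spread_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    spread_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (spread_a * spread_b)

# Two runs of the same program should score near 1.0; different programs lower.
print(similarity([0, 2000, 1500, 200], [0, 1900, 1600, 250]))
```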
Fig. 9. Power signature differences: panels (a) and (b)
Fig. 10. Washing programs power signatures: (a) power signature from test 5, (b) power signature from test 6 (P [W] over t [s])
In Figure 10 the power signatures of two washing programs are presented. Figure 10(a) corresponds to a long program with the machine loaded at full capacity, while the second chart depicts the power signature of a short washing program with only half of the capacity loaded. Figure 10(b) shows a shorter heating phase due to the smaller quantity of water needed in test 6. The absence of washing phase I in test 6 can also be observed; this is determined by the selected washing program.
6 Conclusions

In this paper we presented our proposed architecture for online monitoring of different consumer electronic devices. We defined the concept of power signatures for different types of devices and ran a number of tests for a device in every proposed class. Based on these preliminary results, for every power signature class further
specific analysis can be done in order to observe usage flaws and malfunctions of the devices. For certain types of devices, power signatures can be used to monitor the usage pattern of that device, and further power management assumptions can be made.

Acknowledgments. This work was supported by Romanian Ministry of Education CNCSIS grant 680/19.01.2009.
References

1. Gartner, Inc.: Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2 Emissions. Gartner, Inc. (April 27, 2007)
2. Falcon Electronics LTD: Measurement of Data Center Power Consumption, http://www.fe.co.za
3. Emerson: Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems, http://www.emerson.com/edc
4. Giri, R., Vanchi, A.: Increasing Data Center Efficiency with Server Power Measurements. Intel Information Technology, IT@Intel White Paper (2010)
5. BBC: UK 'tops energy wasters league' (October 23, 2006), http://news.bbc.co.uk/2/hi/uk_news
6. European Parliament: Action Plan for Energy Efficiency: Realizing the Potential (2008)
7. European Commission, Directorate-General for Energy and Transport: 2020 vision: Saving our energy. European Commission (2007), http://ec.europa.eu
8. Energy Saving Campaign: 'M'illumino di meno'. Press release (February 2010), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+IMPRESS+20100212IPR68943+0+DOC+PDF+V0//EN&language=EN
9. Gartner, Inc.: Gartner Says Measurement and Monitoring of Data Centre Energy Use Will Remain Immature Through 2011. Gartner (September 24, 2009)
10. Wong, W.: Green Tech Tips. Biztech (March 2009), http://www.biztechmagazine.com
11. Giri, R., Vanchi, A.: Increasing Data Center Efficiency with Server Power Measurements. Intel Information Technology, IT@Intel White Paper (2010)
12. Marcu, M., Popescu, B., Stancovici, A., Stangaciu, V., Certejan, C.: Power Characterization of Electric, Electronic and Computing Devices. Scientific Bulletin of the Electrical Engineering Faculty (1) (2009)
13. LeBlanc, J.: Device-Level Power Consumption Monitoring. In: Proceedings of the 9th International Conference on Ubiquitous Computing, Austria (September 2007), http://www.awarepower.com/ubicomp07.pdf
14. Kill-A-Watt, http://www.p3international.com
15. Watts-Up-Pro, http://www.doubleed.com/products.html
16. Fehrenbacher, K.: 10 Monitoring Tools Bringing Smart Energy Home. Business Week (April 2009), http://www.businessweek.com/technology/content/apr2009/tc20090414_446611.htm
17. Adamo, F., Attivissimo, F., Cavone, G., Lanzolla, A.M.: A Virtual Instrument for the Electric Power Monitoring in the Distributing Network. In: 15th Symposium on Novelties in Electrical Measurements and Instrumentation, Romania (2007)
18. Berges, M., Goldman, E., Matthews, H.S., Soibelman, L.: Learning Systems for Electric Consumption of Buildings. In: Proceedings of the ASCE International Workshop on Computing in Civil Engineering, Austin, Texas (2009)
Policy-Driven Tailoring of Sensor Networks

Nelson Matthys1, Christophe Huygens1, Danny Hughes2, Jó Ueyama3, Sam Michiels1, and Wouter Joosen1

1 IBBT-DistriNet, Katholieke Universiteit Leuven, 3001 Heverlee, Belgium
[email protected]
2 Department of Computer Science and Software Engineering, Xi'an Jiaotong-Liverpool University, 215123 Suzhou, China
[email protected]
3 University of São Paulo (USP), São Carlos, 13566-585, Brazil
[email protected]
Abstract. The emerging reality of wireless sensor networks deployed as long-lived infrastructure mandates an approach to tailor developed artefacts at run-time to avoid costly reprogramming. Support for dynamic concerns, such as adaptation, calibration or tuning of the functional and non-functional behaviour by application users and infrastructure managers raises the need for fine-grained run-time customization. This paper presents a policy-based paradigm to realize the diverse concerns of the involved actors by enabling fine-tuning and optimization of the runtime environment. Integration of the policy paradigm into various main programming models is analyzed. A prototype implementation of the paradigm in the context of an event-component based wireless sensor network platform is evaluated on the SunSPOT sensor platform. Keywords: Policy, Component models, Reconfiguration, Multi-paradigm Programming.
1 Introduction
Over the last few years, Wireless Sensor Networks (WSN) have evolved into long-lived infrastructure on which various applications from multiple actors may be executing concurrently [25,24,21]. This trend of moving away from the traditional monolithic application paradigm towards a general-purpose execution platform capable of hosting a multitude of applications has already been exemplified by several scenarios that explore the advantages of using WSNs, such as environmental monitoring [16], road monitoring [4], or logistics [21]. In these application scenarios, WSN devices play a role which is not merely data-centric but expands to the execution of a localized part of the holistic application, in which the sensor acts as a general-purpose execution platform, albeit with limited execution capabilities. As such, WSN infrastructure is becoming another tier of enterprise infrastructure on which various software components can be deployed over time, and which is potentially used and administrated by different actors.

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 20-35, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
Both the multi-actor and long-lived usage modes drive the need for run-time customization or adaptation. More specifically, the need for run-time adaptation can be due to (i) changing requirements of a single application end-user in a long-lived setting, since it is impossible to capture all (future) requirements at development time, (ii) the benefits of reuse obtained by running customized versions of a single application for various application users, and (iii) plain change in the system environment over time. In the first case (i), the demands of the user with respect to the system change: a sampling frequency sufficient today may no longer be appropriate tomorrow. This ultimately comes down to the same requirement raised by the multi-actor perspective (ii), where services may differ only through customization of service behaviour, for instance in sampling frequency, persistence or security of collected data. These objectives are typically very specific to each user, and therefore cannot always be fixed at platform development time. To illustrate (iii), it is straightforward to envision different behaviour depending on location or energy status. As a result, appropriate programming abstractions must be foreseen that allow each individual actor to tailor its portion of business functionality at run-time. In this paper, we propose a policy-based abstraction and associated programming model to accommodate the adaptation requirement, thereby enabling variations in functional and non-functional concerns of application end-users and system administrators. We demonstrate how our policy-driven system integrates in a non-intrusive way with (the interaction models of) the application logic programmed in the WSN and how it operationally aligns with the activities of the WSN administrative stakeholder.
A prototype implementation on the SunSPOT [23] sensor platform is presented, enabling flexible and fine-grained customizability of the WSN while respecting its resource-constrained nature in terms of memory footprint and performance overhead. We validate the approach through a case study of a WSN-based flood monitoring application in which the policy-driven system is used to tailor the individual concerns of the various actors. The remainder of this paper is structured as follows. Section 2 motivates the case for, and highlights the requirements of, fine-grained customization in long-lived WSN scenarios involving multiple actors. Section 3 discusses how policies are used as a paradigm to express and support the various concerns of different stakeholders and how to integrate them into existing systems. Section 4 validates a prototype implementation through a flood monitoring case, while its performance and overhead are evaluated in Section 5. Section 6 discusses related work. Finally, Section 7 concludes and sketches future work.
2 Fine-Grained Customization in Multi-actor WSNs: Motivation and Requirements
WSNs are moving away from the monolithic application model towards a shared-infrastructure, multi-application usage mode where every device has an administrative owner controlling the device [10]. In this setting, multiple applications reflecting the functional (business) goals of their respective owners may execute concurrently on a node [25,21].
N. Matthys et al.
Consider for instance an environmental river-monitoring scenario where multiple entities may leverage a WSN to gather temperature, pollution, and water level data. Both governmental agencies and ecological scientists from universities are interested in pollution and temperature data, whereas the local government is interested in flooding data to protect the livestock of the community along the river. During commissioning, certain activities are executed: first and foremost, the application is composed of subprograms. These programs are then deployed. Finally, an initial configuration defining fixed sensor sampling and reporting rates is established. From this scenario, we can identify three drivers for run-time customization:
1. Since at commissioning time ecologists do not know what base pollution values to expect, the alerting threshold will need updating. In fact, this value will change frequently as water quality evolves. Within a long-lived application, customization happens frequently, reflecting tuning or changing requirements.
2. The data gathering activities of the government and the scientists are fundamentally similar, yet subtle differences in level of detail need to be accommodated. Large efficiency gains (footprint, programming effort) can be made in the WSN by using customized variants of the same application, for example with different operational parameters. Within a multi-actor setting, the possibility to customize greatly facilitates reuse and resource efficiency.
3. Alerting thresholds need to be adjusted depending on the location of the sensor and the time of season; for example, during extreme rainfall events pollution will typically be very high because of sewerage overrun. Customization rules describe change in system behaviour in response to system dynamics.
Due to the dynamism in many of these scenarios, WSN customization is not solely a one-time process, as previous real-world WSN deployments [16] have revealed, but rather a continuous one. Many operational parameters of the WSN (or its constituent system or application parts) will change continuously in response to changes in the objectives of application users and system administrators, or in response to internal change such as mobility or available energy. In addition, the change cycle of the above customizations is closely related to the type of abstraction used to realize the particular part of the application impacted by the change. For instance, pure business functionality, such as a generic sensing component, is changed less often than the rules governing the sampling behaviour of that component during extreme rainfall. Finally, since the above customizations are performed by application end-users or system administrators during the run-time phase of the application, it should be recognized that these users prefer to express their goals differently than developers do [2]. Abstractions offered for run-time customization must therefore be intuitive and sufficiently high-level. In this context, rule-based declarative or imperative approaches are typically considered appropriate abstractions [10,3]. From this, the requirements of the mechanism to support these run-time customizations can be identified:
Policy-Driven Tailoring of Sensor Networks
1. Fine-grained: supporting small and specific changes in behaviour, and as such complementary to the development and composition activities of the application, where behaviour is described at large.
2. Light-weight: since the changes are frequent, it is beneficial to the WSN that their representation is compact and their execution imposes little overhead.
3. End-user abstraction: adequate abstraction so that less technical end-users, rather than expert WSN programmers, can express their goals.

These requirements are in contrast with typical WSN reprogramming approaches such as TinyOS [15] that work monolithically, or even component-based engineering [8], where only coarse-grained change is supported. Component models do provide an attractive programming model for building multi-actor WSNs, since new applications can be constructed in a cost-effective and resource-efficient manner by reusing existing components. Component models also offer some support for the dynamism and long-livedness of the WSN through the ability to add or delete functionality or to modify the existing composition at run-time. Yet components are typically implemented as coarse-grained generic artefacts applicable to a wide variety of applications, with little support for domain-specific customization. Thus, while component-based reconfiguration provides a generic mechanism for enacting changes, it is inefficient when only a few lines of code may represent that change. This is particularly critical for WSNs, where memory is limited and software updates are costly. So, while the component approach has clear merits, the combined requirements regarding change granularity, frequency, and end-user abstraction demand a specific solution with low development overhead, memory footprint, and performance overhead. Furthermore, the solution should integrate in a non-intrusive manner with the existing main development paradigm.
Therefore, we introduce a lightweight framework for adapting application behaviour based on policies. Policies for this framework are high-level, declarative, and platform-independent, allowing end-users to easily tailor behaviour.
3 Policy as Paradigm for Fine-Grained Customization
Like regular programs, policies are abstractions that govern the behaviour of applications. By using policies functionally, an application actor can fine-tune the behaviour of a business function so that it better serves its purpose. Non-functionally, an infrastructure manager can realize security or energy concerns through policy specification. For instance, sharing policies may indicate which actor may use which piece of functionality. In this section, we first discuss the life cycle of using policies as a programming paradigm for fine-tuning WSN behaviour. Secondly, we focus on possible strategies for integration in existing systems.

3.1 Policy Life Cycle
Several activities take place between policy specification by the application user and actual enforcement or execution on the target device. Policies start their life cycle as high-level user-specified policies and subsequently undergo several levels of transformation and refinement, ending up as code or configuration specifications that can be directly deployed and executed on the elements of an IT infrastructure [1]. To be applicable in the context of resource-constrained systems such as WSNs, it is crucial to make optimal use of the capabilities of the resource-rich back-end to perform most of these inherently heavyweight activities.

Fig. 1. Policy life cycle: specification, analysis, distribution, and enforcement

Figure 1 illustrates the chain of activities that constitute the policy life cycle in the context of WSNs:

1. Policy specification is performed by (non-technical) application users using tools that help them create syntactically and semantically correct policies written in a policy language. These policies are then submitted to the administrative actor together with a list of applications and target nodes where they should be applied.
2. Policy analysis is a heavyweight activity which should happen exclusively in the resource-rich back-end of the administrative actor. It involves checking for conflicts with already applied policies and verifying whether the policy is transparent for other users. If a policy is found to be correct, it is marked for transformation and distribution to the nodes.
3. Next, the policy is transformed into a more compact and optimized representation better suited for energy-efficient dissemination inside the WSN.
4. Since policies may potentially govern control over all types of functionality deployed inside the network, secure policy distribution is an essential part of the life cycle. It is therefore critical that only authorized actors can deploy policies and that they are disseminated in a secure and reliable fashion.
5. Installation and policy enforcement on a node happen upon reception and verification of the policy.
After reception of the compact policy representation, an implementation-specific data structure better suited for efficient evaluation is constructed and stored locally.
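The gateway-side encoding and node-side decoding of such a compact representation can be sketched as follows. This is a minimal illustration: the field layout, opcodes, and names are hypothetical and do not reflect the prototype's actual wire format.

```java
// Hypothetical compact wire format for a simple ECA policy:
// [eventType | operator | threshold (2 bytes, big-endian) | action].
// All field names and widths are illustrative, not the paper's format.
class CompactPolicy {
    final byte eventType;   // e.g. PRESSURE
    final byte operator;    // e.g. GREATER_THAN
    final short threshold;  // condition operand
    final byte action;      // e.g. ALLOW

    CompactPolicy(byte eventType, byte operator, short threshold, byte action) {
        this.eventType = eventType;
        this.operator = operator;
        this.threshold = threshold;
        this.action = action;
    }

    // Gateway side: serialize into a few bytes for energy-efficient dissemination.
    byte[] encode() {
        return new byte[] {
            eventType,
            operator,
            (byte) (threshold >> 8), (byte) threshold,  // big-endian short
            action
        };
    }

    // Node side: rebuild an evaluation-friendly object from the wire bytes.
    static CompactPolicy decode(byte[] w) {
        short t = (short) (((w[2] & 0xFF) << 8) | (w[3] & 0xFF));
        return new CompactPolicy(w[0], w[1], t, w[4]);
    }
}
```

The small fixed layout mirrors why a policy update (tens of bytes) is far cheaper to disseminate than redeploying a component binary.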
3.2 Integration Strategies
Integration of the proposed policy-based programming paradigm with the main application development paradigm of the WSN should be transparent for the application developer whilst guaranteeing execution of the policies. At some point, execution must be transferred from application code to policy code. The exact point in an application where redirection to the set of applicable policies (or corresponding refined code) is performed depends on the interaction model used for application composition.

Classical interaction paradigms [5] in traditional distributed applications include, amongst others, (remote) procedure calls, method invocations, and event-based messaging, and the base entities (components in these systems) are considered reusable, identified solely by their type along with their interfaces and dependencies [8]. At development time, all applications are composed by combining individual components through a set of connections which model interface dependencies. At run-time, these connections are represented by a series of inter-component procedure calls or communication messages, depending on the interaction model used.

Since both the syntax and semantics of these component interactions are well standardized, it is advantageous to limit policy enforcement to these component-to-component interaction points. For tailoring and customization purposes of the application, this set of points is sufficient. Hence, the integration of the policy-based paradigm must a priori be supported by the interaction model, or must be inserted into the interaction model before deployment through selective instrumentation. Regarding the interaction model, since (remote) procedure calls and method invocations are resource-heavy, they are not suited to the WSN target environment, as scalability is compromised [14].
Therefore, we limit the scope of our research to event-based messaging, which is commonly used as the interaction model in WSNs [15,18], and integrate our policy-based paradigm directly inside this interaction model.

3.3 Paradigm Benefits
Policies offer an attractive development paradigm complementary to the main application development paradigm used inside the WSN system. First, they provide a powerful abstraction for additional customization, as they allow various functional and non-functional concerns to be specified and enforced at run-time. Secondly, since policies are focused on a single concern, they can be lightweight (see Section 4); customization is thus achieved at a reasonable energy cost. Finally, the fine-grained, independent distribution model complements the update model used by the main development paradigm, since it supports small changes to application compositions, hence accommodating evolving application demands and dynamic environmental conditions.
4 Research Prototype
A prototype of a framework supporting the proposed policy paradigm was implemented for an event-based run-time reconfigurable component model. For testing
and evaluation purposes, we applied the resulting framework in the context of a small-scale real-world river-monitoring case.

4.1 Implementation
Base Paradigm: Component Model. The Loosely-coupled Component Infrastructure (LooCI) [9] is a lightweight, event-based, run-time reconfigurable component model for WSNs. In the LooCI model, components are indirectly bound over an event bus abstraction, implementing a decentralized publish/subscribe interaction model. All LooCI components define their provided interfaces as the set of events that they publish, whereas the required interfaces of a LooCI component are similarly defined as the events to which it subscribes. Reconfiguration in LooCI is enacted by mechanisms to dynamically deploy, start, stop, or remove components, together with dynamic re-wiring of component bindings. Defining a LooCI component implicitly provides access to the event bus for inter-component communication and to the underlying connectionless network framework. As a result, all communication between LooCI components is carried by events that allow for asynchronous and indirect communication between a pair of components. For our research, the key benefits of LooCI are, along with a small footprint and good performance, that it promotes this event-based interaction paradigm and a loose coupling between cooperating components. The set of introspective facilities includes support to discover components and their bindings on a node or between two nodes.

Policy Framework and Language. The supporting framework for our policy-based programming paradigm is illustrated in Figure 2. This framework is deployed on every node inside the WSN and is integrated with the LooCI event bus. Since all interactions between LooCI components occur as events over the event bus, it is possible to tailor multiple aspects of the system by modifying event content or by configuring the manner in which events are propagated. To facilitate this type of management, the LooCI run-time is extended with a compact policy engine, which executes a lightweight Event-Condition-Action (ECA) policy specification language.
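The core integration idea, an event bus that routes every published event through a policy-engine interception point before delivery to subscribers, can be sketched as below. Class and method names are illustrative; the real LooCI API differs.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal sketch of LooCI-style indirect binding with policy interception.
// Components never reference each other directly: they publish and subscribe
// by event type, and every event first passes the installed interceptors.
class EventBus {
    // Stand-in for the policy engine's per-event decision.
    interface Interceptor {
        boolean allow(String type, int value);
    }

    private final Map<String, List<Consumer<Integer>>> subscribers = new HashMap<>();
    private final List<Interceptor> interceptors = new ArrayList<>();

    void subscribe(String type, Consumer<Integer> component) {
        subscribers.computeIfAbsent(type, k -> new ArrayList<>()).add(component);
    }

    void addInterceptor(Interceptor i) {
        interceptors.add(i);
    }

    // Every published event passes through the policy engine first.
    void publish(String type, int value) {
        for (Interceptor i : interceptors) {
            if (!i.allow(type, value)) {
                return;  // event dropped by policy
            }
        }
        for (Consumer<Integer> c : subscribers.getOrDefault(type, List.of())) {
            c.accept(value);
        }
    }
}
```

Because interception happens at the bus rather than inside components, policies can be added or removed without touching component code, which is the transparency property Section 3.2 argues for.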
Every ECA policy consists of a description of the triggering events, a condition (a logical expression typically referring to event criteria or external system aspects), and a list of actions to be enforced in response. Every time an event is sent between a pair of components via the event bus, the policy engine intercepts the event and evaluates how it should be handled based upon the set of policy rules installed on the node, which are stored inside a repository component. If the incoming event matches a policy rule, its associated action(s) will be applied. At run-time, the set of policies can be dynamically updated, accommodating evolving application requirements.

Our prototype policy language allows various functions to be invoked inside the condition and action parts of a policy. These include actions to allow or deny the event to pass, change its contents, replicate and reroute the event towards an intermediate component for additional processing, or publish a custom user-defined event, for example containing a configuration value destined for a particular component.

Fig. 2. Policy framework prototype

In this sense, policies offer a simple yet powerful abstraction for end-users to fine-tune generic component behaviour, or for system administrators to inject various flavours of non-functional concerns.

Policy Distribution Model. As shown in Figure 1, the policy distribution model forces all policy administration to go through the gateway, which is under the control of the infrastructure manager and has a direct trust relation with each sensor. In our research prototype, this trust between the gateway and the individual sensors is established using a pre-deployed public/private key scheme. Application users submit their (high-level) policies to the infrastructure manager in a secure manner using standard enterprise-grade security technologies such as TLS/SSL. Upon reception, these policies are analysed for consistency and transformed into a compact binary representation, more suitable for energy-efficient dissemination inside the WSN. In order to deploy a policy on a sensor node, the infrastructure manager then transfers this compact policy representation, together with its associated deployment instructions, to the gateway over a secure connection, ensuring authenticity and confidentiality of the policies distributed to the gateway. Next, to securely deploy the policy inside the WSN, the gateway constructs a message M that securely encapsulates the following content D:

D = [deploy_instr, dest_addr, id, timestamp, policy_data]
M = [D, MAC(K_GW, D)]

In this message, deploy_instr contains the deployment instructions, such as install or remove a policy; dest_addr is the node where the policy needs to be deployed; id is a sequence number used between the gateway and the destination; timestamp is the current timestamp, used as a nonce; and policy_data is the binary representation of the policy sent to the WSN. For integrity purposes, a Message Authentication Code (MAC) over the policy data D, denoted MAC(K_GW, D), is attached to the message M. This MAC is signed with the private key K_GW of the gateway. Once received, the destination node can check whether the message has been sent by the gateway by using the pre-deployed public/private key pair to verify MAC(K_GW, D). In this sense, non-repudiation of origin, integrity, and protection against replay attacks can be guaranteed. When this verification is successful, the policy data can be extracted from the message, transformed by the policy engine into a data structure more suitable for efficient evaluation, and added to the list of policies installed on the node. Note that this scheme only ensures integrity of the policy data and does not provide confidentiality. Policy authenticity can be assured between each node and the gateway, and between the gateway and the network administrator. Optionally, confidentiality can be ensured inside the WSN via symmetric encryption using a session key generated by the pre-deployed public/private key scheme.
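The sign-at-gateway, verify-at-node step can be sketched with standard Java security APIs. This is a simplified stand-in: the actual prototype uses the SunSPOT's pre-deployed key scheme, whereas here an RSA key pair generated on the fly plays the role of K_GW, and all class and method names are illustrative.

```java
import java.security.*;

// Sketch of the gateway-signed deployment message M = [D, sig(K_GW, D)].
// RSA key generation below is illustrative only; the real prototype relies
// on the SunSPOT's pre-deployed public/private key scheme.
class PolicyDeployMessage {

    // Illustrative helper standing in for the pre-deployed gateway key pair.
    static KeyPair demoKeyPair() {
        try {
            return KeyPairGenerator.getInstance("RSA").generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Gateway side: sign D = deploy_instr | dest_addr | id | timestamp | policy_data.
    static byte[] sign(PrivateKey gatewayKey, byte[] d) {
        try {
            Signature s = Signature.getInstance("SHA1withRSA");
            s.initSign(gatewayKey);
            s.update(d);
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Node side: integrity and origin check with the gateway's public key.
    static boolean verify(PublicKey gatewayKey, byte[] d, byte[] sig) {
        try {
            Signature s = Signature.getInstance("SHA1withRSA");
            s.initVerify(gatewayKey);
            s.update(d);
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            return false;
        }
    }
}
```

As in the prototype, any bit flipped in D after signing makes verification fail, so a node silently discards tampered or mis-originated deployment messages.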
4.2 Real World Case Study
The combination of the lightweight component framework LooCI and the policy framework was evaluated in the context of a small-scale real-world river-monitoring case in the city of São Carlos, São Paulo state, Brazil. In this scenario, the WSN consisted of four SunSPOT [23] sensor nodes deployed to monitor river water quality. Different local environmental science partners monitored three environmental factors over a two-week period: (i) water depth was monitored using a hydrostatic level sensor in order to provide early warning of flood events, (ii) water conductivity was monitored using a standard conductivity sensor in order to infer pollution levels, and (iii) methane levels were monitored using a simple CH4 sensor in order to detect decaying organic matter. Finally, tamper and theft detection was implemented using the built-in three-dimensional accelerometer of the SunSPOT.

Application Composition. The river-monitoring composition consisted of seven LooCI components that implemented generic functionality, including logging, encapsulation of hardware resources such as sensors, and alert reporting. At platform commissioning time, these components were wired to each other as depicted in Figure 3(a):

– A pressure sensor component periodically polls the hydrostatic level sensor and exposes these readings through events of type ‘PRESSURE’.
– A conductivity sensor component periodically measures water conductivity and exposes these readings through events of type ‘CONDUCTIVITY’.
– A methane sensor component exposes readings from the CH4 sensor via an interface of type ‘METHANE’.
– An environmental alert component processes these readings, adds additional context information such as node ID and timestamp, and forwards the readings to the environmental scientists.
– A generic logging component encapsulates access to the flash memory and processes ‘LOG’ events.
Fig. 3. River monitoring composition: (a) default component-based application composition; (b) policy-augmented application composition
– Finally, to detect sensor tampering and theft, a tampering alert component in the back-end is remotely wired to an accelerometer component that provides readings from the built-in accelerometer via an interface of type ‘ACCEL’.

Policy-augmented Composition. In contrast to the generic functionality offered by components, policies are used to tailor this functionality, for example allowing scientists to set the specific level at which sensor readings should generate an alert. As a result, the basic application composition can be augmented with various types of policies, as illustrated in Figure 3(b). An example policy for flood detection is provided in Listing 1.1; it filters readings at the source node and only allows remote publication where the value exceeds a pre-defined threshold. The policy thus allows very specific customization of the flood alerting composition by the scientist, possibly depending on the time of season or the location of the sensor. It will not only prevent the publication of spurious alerts, but also conserve battery power. Similarly, an administrator can deploy a policy specifying an energy-aware alert reporting strategy for all environmental alerts inside the network. For instance, when the nodes are low on battery power, everything must be stored locally, which is achieved through the publishing of a custom LOG event (Listing 1.2).

policy "Flood detection" {
  on PRESSURE as p;        // PRESSURE contains parameter value
  if (p.value > 500) then (
    allow p;               // by default other PRESSURE events are blocked
  )
}

Listing 1.1. Example of flood detection policy
policy "Energy-aware reporting policy" {
  on ENVIRONMENTAL_ALERT as e;
  if (POWER_STATE == LOW) then (
    deny e;                  // do not propagate event
    publish(LOG, e.data[]);  // but store it locally
  )
}

Listing 1.2. Example of energy management policy set by the administrator
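To make the intended semantics of the two listings concrete, they can be hand-coded as plain Java predicates. This is purely illustrative: the real engine interprets the compact policy form at event-interception time, and the constant names below are assumptions.

```java
// Hand-coded equivalents of Listings 1.1 and 1.2, showing the intended
// evaluation semantics; names such as LOW are illustrative assumptions.
class EcaDemo {
    static final int LOW = 0;

    // Listing 1.1: only PRESSURE events above the threshold pass.
    static boolean floodDetection(int pressureValue) {
        return pressureValue > 500;  // allow p
    }

    // Listing 1.2: under low power, redirect the alert to local LOG storage.
    static String energyAwareReporting(int powerState, String alertData) {
        if (powerState == LOW) {
            return "LOG:" + alertData;   // deny e; publish(LOG, e.data[])
        }
        return "ALERT:" + alertData;     // propagate unchanged
    }
}
```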
Other unanticipated concerns can similarly be addressed by injecting policies at run-time. Figure 3(b) therefore also illustrates policies for conductivity detection, tampering detection, and methane detection, possibly written by different users.
5 Evaluation and Discussion
We have implemented and evaluated the performance of our policy-based programming paradigm on Java ME CLDC 1.1-compliant SunSPOT nodes [23] (180 MHz ARM9 CPU, 512 kB RAM, 4 MB flash, Squawk VM version ‘RED100104’). We investigate the overhead in terms of memory footprint, development overhead, and the performance of policy evaluation and secure policy distribution.

5.1 Memory Footprint and Cost of Change
As presented in Table 1, the footprint of the policy framework is small. The run-time consumes 28 kB of ROM, which represents 0.6% of the total flash memory available on the SunSPOT. The dynamic memory requirements (RAM) of the policy framework are also small, at only 0.2% of the total available RAM. The footprint of the component model run-time and the example components is higher, but still small with respect to resource capabilities. The disparity between the ROM and RAM requirements of a component can be explained by SunSPOT-specific overhead due to the inter-isolate RPC server and the establishment of proxies.

Representing a policy is efficient, as the compact representation of the policies used in the case study occupies only 72 bytes of static memory on average. Combined with the distribution protocol header of 15 bytes, this is the amount of data transmitted per policy from the gateway to an individual node. As a result, policy updates are lightweight compared to traditional component-based reconfiguration. Upon reception, the policy data is transformed into a Java object more suitable for efficient evaluation, requiring 320 bytes of RAM on average for the policies applied in the case study.

5.2 Development Overhead
Table 1 also provides a quantitative assessment of development overhead in terms of Source Lines of Code (SLoC) of all components and policies. As can be seen,
Table 1. Comparison of memory requirements and development overhead

                          Static footprint (ROM)  Dynamic footprint (RAM)  SLoC
Components:
  LooCI run-time          52 kB                   37 kB                    N/A
  Conductivity Sensor     1.8 kB                  26 kB                    59
  Methane Sensor          1.7 kB                  26 kB                    59
  Pressure Sensor         1.7 kB                  26 kB                    59
  Accelerometer           1.9 kB                  26 kB                    53
  Environmental Alert     2.1 kB                  26 kB                    64
  Logging Component       1.9 kB                  27 kB                    68
Policies:
  Framework run-time      28 kB                   1 kB                     N/A
  Flood detection         64 bytes                284 bytes                7
  Energy-aware reporting  102 bytes               440 bytes                8
  Theft detection         65 bytes                292 bytes                7
  Conductivity detection  63 bytes                296 bytes                7
  Methane detection       62 bytes                286 bytes                7
both paradigms are relatively compact and impose limited overhead on developers. The identical size of some artefacts listed in the table may be attributed to their simplicity and similarity: in all cases, the components read a simple value from the SunSPOT Analog-to-Digital Converter (ADC) and transmit it over the event bus, whereas the policies perform simple filtering of spurious events.

5.3 Policy Engine Performance
Figure 4(a) illustrates the overhead of evaluating a number of policies against an event flowing between two components. The performance of policy evaluation includes the time to intercept and redirect the event to the policy engine (0.5 ms), followed by the evaluation of matching policies, which took on average 0.5 ms per policy in our case study. As can be seen from the figure, this scales linearly with the number of policies. Finally, it takes 6 ms on average to construct a policy from its compact representation into a suitable data structure, whereas 7420 ms are needed on average to initialize and start a regular component after deployment (due to registration with the LooCI run-time and the establishment of inter-isolate proxies on the SunSPOT).

Fig. 4. Performance evaluation: (a) policy evaluation; (b) secure policy deployment
5.4 Secure Policy Distribution Performance
Next, we investigate the overhead of secure policy distribution for varying lengths of policy data. Since policy signing happens at the gateway, which in the case study is a standard Internet-connected PC located near the river, it can be done efficiently. We therefore concentrate mainly on the time it takes to verify MAC(K_GW, D) at the receiving node. For signing and verification, we use the SHA-1 hashing algorithm, which makes use of the pre-deployed public/private key scheme [22] of the SunSPOT. As can be seen from Figure 4(b), verifying the authenticity of the message takes constant time (on average 720 ms). Only the time to calculate the signature increases slightly with the input size (which could hypothetically be large), albeit this signing happens at the resource-rich gateway tier. As policy updates in the field test involved less than 80 bytes of policy data on average plus 15 bytes of header data, we believe this overhead is acceptable.

5.5 Lessons Learned from the Field Test
The case study uncovered several points. Firstly, the ability to deploy policies with different coverage was found advantageous. The distribution model allows a single policy, implementing a specific concern, to be deployed to different entities inside the network, ranging from an individual component interface on a single device to the entire network.

Secondly, because of diversity in functional objectives, the nature of the concerns, resource constraints, and change cycles, WSN applications benefit from mixing and matching multiple programming paradigms. In particular, artefacts of varying granularity are incorporated to reduce the impact of change on the system. Both relatively coarse-grained components and lightweight, fine-grained policies are key elements of effective solutions: base artefacts reflect the dynamism of the application functionality, while control rules govern the behaviour of that functionality. This paradigm mix also supports the conceptual decoupling of stakeholder abstractions. Ordinary application users who often have little experience in programming, such as the ecologists in our case, were able to express how components developed by embedded-systems experts should behave.

However, applying multiple programming paradigms next to each other must be done with care. One paradigm can introduce additional behaviour opaque to the other, or create tight coupling between the various artefact developers, both leading to unwanted side effects. An operational model in which, for example, the network administrator analyzes the interplay between both paradigms can mitigate this problem.
Regardless of the potential issues raised, it is our belief that programming WSN applications with multiple paradigms holds great potential for large-scale multi-actor scenarios such as the river-monitoring application described in this paper. The approach respects both the skill set of each actor and the resource constraints of the WSN.
6 Related Work
In recent years, a number of programming abstractions [18] have been proposed to simplify WSN application development, including programming models adopting the principles of component-based engineering. Their associated update models can be classified as either static or dynamic. nesC [7], perhaps the best-known component model in the WSN field, adopts an event-driven programming approach as the interaction model in combination with a static update model: reconfiguration translates to deploying a monolithic image, allowing for whole-program analysis and optimization. Run-time reconfigurable component models such as RUNES [4] or OpenCOM [6] follow a dynamic update model, allowing the composition to be changed at run-time through the deployment of new components and the modification of component bindings. Despite their benefits regarding reusability, components remain artefacts supporting only coarse-grained modifications.

In order to use the component paradigm, additional support from the sensor operating system is equally needed. Modular updates at run-time can be provided, for example, by modular Virtual Machine (VM) solutions [13,25] or component-oriented operating systems [20]. A major drawback of VMs applied in energy-constrained systems is that they interpret byte code, as opposed to components executing native instructions, which results in overhead. Similar to our research on combining a lightweight policy-based paradigm with a more generic main development paradigm, Koshi et al. [13] describe a hybrid approach that efficiently combines VM byte-code interpretation with native code execution. Platon and Sei [19] emphasize the need for policy-based management of WSNs to provide for scalability of security. Marsh et al. [17] provide a flexible and memory-efficient policy specification language, validating policy-based approaches for WSN management.
Finger [26] provides support for the policy-based management of TinyOS [15] nodes with a relatively small footprint, though it offers no support for run-time reconfiguration of applications.
7 Conclusions and Future Work
This paper has made the case for a fine-grained, policy-driven approach to managing the diverse concerns of multiple actors involved in realistic WSN applications. The feasibility of this approach has been demonstrated through a prototype implementation on the SunSPOT platform, using an event-based component model as the base programming paradigm, and validated in a real-world river-monitoring scenario. The evaluation has shown that our approach is sufficiently lightweight and has clear benefits for effective application in WSN environments.
In the short term, our future work will focus on further investigating the interplay of multiple co-existing programming paradigms. Furthermore, investigation of when to apply which paradigm is required, as this is not always obvious. In this light, further collaboration with different WSN actors should give us the necessary insights.

Acknowledgement. This research is partially funded by the Interuniversity Attraction Poles Programme of the Belgian State (Belgian Science Policy) and the Research Fund K.U.Leuven, and is conducted in the context of the IWT-SBO-STADiUM project No. 80037 [11] and the IWT-SBO-SymbioNets project No. 090062 [12].
References

1. Agrawal, D., Calo, S., Lee, K.-w., Lobo, J., Verma, D.: Policy Technologies for Self-Managing Systems. IBM Press (2008)
2. Bai, L.S., Dick, R.P., Dinda, P.A.: Archetype-based design: Sensor network programming for application experts, not just programming experts. In: IPSN, pp. 85–96 (2009)
3. Chu, D., Popa, L., Tavakoli, A., Hellerstein, J.M., Levis, P., Shenker, S., Stoica, I.: The design and implementation of a declarative sensor network system. In: SenSys 2007: Proceedings of the 5th International Conference on Embedded Networked Sensor Systems, pp. 175–188. ACM, New York (2007)
4. Costa, P., Coulson, G., Mascolo, C., Mottola, L., Picco, G.P., Zachariadis, S.: Reconfigurable component-based middleware for networked embedded systems. International Journal of Wireless Information Networks 14(2), 149–162 (2007)
5. Coulouris, G., Dollimore, J., Kindberg, T.: Distributed Systems: Concepts and Design, 4th edn. Addison-Wesley, Boston (2001)
6. Coulson, G., Blair, G., Grace, P., Taiani, F., Joolia, A., Lee, K., Ueyama, J., Sivaharan, T.: A generic component model for building systems software. ACM Trans. Comput. Syst. 26(1), 1–42 (2008)
7. Gay, D., Levis, P., von Behren, R., Welsh, M., Brewer, E., Culler, D.: The nesC language: A holistic approach to networked embedded systems. In: Proceedings of the 2003 PLDI, pp. 1–11. ACM Press, New York (2003)
8. Heineman, G.T., Councill, W.T. (eds.): Component-Based Software Engineering: Putting the Pieces Together. Addison-Wesley, Boston (2001)
9. Hughes, D., Thoelen, K., Horré, W., Matthys, N., del Cid Garcia, P.J., Michiels, S., Huygens, C., Joosen, W.: LooCI: A loosely-coupled component infrastructure for networked embedded systems. In: Proceedings of the 7th International Conference on Advances in Mobile Computing & Multimedia, pp. 195–203. ACM, New York (2009)
10. Huygens, C., Hughes, D., Lagaisse, B., Joosen, W.: Streamlining development for networked embedded systems using multiple paradigms. IEEE Software (September 2010)
11. IWT-SBO-STADiUM project No. 80037: Software technology for adaptable distributed middleware, http://distrinet.cs.kuleuven.be/projects/stadium/
12. IWT-SBO-SymbioNets project No. 090062: Symbiotic networks, http://symbionets.intec.ugent.be/
Policy-Driven Tailoring of Sensor Networks
13. Koshy, J., Wirjawan, I., Pandey, R., Ramin, Y.: Balancing computation and communication costs: The case for hybrid execution in sensor networks 6(8), 1185–1200 (2008)
14. Kuorilehto, M., Hännikäinen, M., Hämäläinen, T.D.: A survey of application distribution in wireless sensor networks. EURASIP J. Wirel. Commun. Netw. 2005(5), 774–788 (2005)
15. Levis, P., Madden, S., Gay, D., Polastre, J., Szewczyk, R., Woo, A., Brewer, E.A., Culler, D.E.: The emergence of networking abstractions and techniques in TinyOS. In: Proc. 1st Symposium on NSDI, pp. 1–14 (2004)
16. Mainwaring, A., Culler, D., Polastre, J., Szewczyk, R., Anderson, J.: Wireless sensor networks for habitat monitoring. In: Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, New York, USA, pp. 88–97 (2002)
17. Marsh, D., Baldwin, R., Mullins, B., Mills, R., Grimaila, M.: A security policy language for wireless sensor networks. Journal of Systems and Software 82(1), 101–111 (2009)
18. Mottola, L., Picco, G.: Programming wireless sensor networks: Fundamental concepts and state of the art. ACM Computing Surveys (2010)
19. Platon, E., Sei, Y.: Security software engineering in wireless sensor networks. Progress in Informatics 5(1), 1–19 (2008)
20. Porter, B., Coulson, G.: Lorien: A pure dynamic component-based operating system for wireless sensor networks. In: Proceedings of the 4th International Workshop on Middleware Tools, Services and Run-Time Support for Sensor Networks, pp. 7–12. ACM, New York (2009)
21. Steffan, J., Fiege, L., Cilia, M., Buchmann, A.: Towards multi-purpose wireless sensor networks. In: Proceedings of the International Conference on Systems Communications, Washington, DC, USA, pp. 336–341 (2005)
22. Sun Microsystems: Sun SPOT security library, https://spots-security.dev.java.net/
23. Sun Microsystems: Sun SPOT world, http://www.sunspotworld.com/
24. Wang, M., Cao, J., Li, J., Das, S.K.: Middleware for wireless sensor networks: A survey 23(3), 305–326 (2008)
25. Yu, Y., Rittle, L., Bhandari, V., LeBrun, J.: Supporting concurrent applications in wireless sensor networks. In: Proc. of the 4th International Conference on Embedded Networked Sensor Systems, pp. 139–152. ACM, New York (2006)
26. Zhu, Y., Keoh, S., Sloman, M., Lupu, E., Dulay, N., Pryce, N.: Finger: An efficient policy system for body sensor networks. In: 5th IEEE International Conference on Mobile Ad-hoc and Sensor Systems (September 2008)
Integration of Terrain Image Sensing with UAV Safety Management Protocols Timothy Patterson, Sally McClean, Gerard Parr, Philip Morrow, Luke Teacy, and Jing Nie School of Computing and Information Engineering University of Ulster Cromore Road, Coleraine, BT52 1SA, Northern Ireland {patterson-t,si.mcclean,gp.parr,pj.morrow,l.teacy,j.nie}@ulster.ac.uk
Abstract. In recent years there has been increased interest in the development of lightweight rotor-based UAV platforms which may be deployed as single or multiple autonomous UAV systems in support of applications such as ground surveillance, search and rescue, environmental monitoring in remote areas, bridge inspection and aerial imaging of crops. With the increased complexity of the UAV platforms comes a legal requirement that any UAV operates in a safe manner and is able to land safely, while control and power are still available, when flight task exception conditions are raised. No standards currently exist for the in-line discovery and designation of UAV Safe Landing Zones (SLZs) for rotor-based platforms, and this paper describes a novel approach which has been developed as part of a wider UAV Safety Management Protocol. Aspects relating to SLZ sensing, classification and designation are described, together with the methodology for deciding on SLZ attainability.

Keywords: Quadrotor UAV, UAV safety management protocol, UAV safe landing zone detection.
1 Introduction
For many sensing applications such as monitoring atmospheric pollution or surveillance, Unmanned Aerial Vehicles (UAVs) provide a versatile and often inexpensive method of gathering data. UAVs offer many advantages over manned aircraft, the most notable of which is the removal of humans from situations which may be deemed dull, dangerous or dirty. There is a wide range of commercially available UAVs which can be equipped with many types of sensors, for example infrared cameras for oil slick detection [1] or video cameras for traffic monitoring [2]. The Sensing Unmanned Autonomous Aerial Vehicles (SUAAVE) project [3] is concerned with the development of swarms of coordinating 'autonomous' UAVs. The UAVs are autonomous in that low-level flight controller commands are generated in response to high-level goals, for example GPS waypoints. Currently, Ascending Technologies Quadrotor Hummingbird UAVs [4] are used within the

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 36–51, 2011.
© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
SUAAVE project. These UAVs have a flight time of approximately 23 minutes, or 12 minutes with a 200g payload. They have four flexible rotors and can be equipped with a variety of sensors. The Hummingbird UAVs used within the project are currently equipped with a Point Grey Chameleon colour camera, GPS, an Inertial Measurement Unit (IMU) and wireless communication capabilities. An initial application scenario for the SUAAVE project is that of mountain search and rescue. In this scenario swarms of collaborating UAVs offer many advantages over a single UAV working in isolation. These include:

1. Heterogeneous sensors - UAVs may be fitted with different types of sensors and their respective actions coordinated based on the data detected by other members of the swarm. For example, a UAV equipped with an IR camera flying at a relatively high altitude may be able to identify heat signatures on the ground. A UAV equipped with a colour camera could subsequently be dispatched to areas which have a high probability of containing the casualty given the observations by the IR camera [5].
2. Efficient searching - One of the most important constraints in a search and rescue scenario is time. By utilizing swarms of coordinating UAVs an area can be searched relatively quickly and efficiently.
3. Robustness against mission failure - In the event of a UAV malfunction a mission can continue to be executed by other members of the swarm.

There are many possible situations which may trigger a UAV malfunction. For the most serious of UAV malfunctions it is desirable to land the UAV as safely and quickly as possible. Such scenarios include prolonged loss of GPS signal, a sudden change in operating conditions resulting in insufficient battery life to navigate to the base station, and a loss of communication capabilities. Presented in this position paper is the current state of work within the SUAAVE project related to the safe operation of the UAVs.
A Safety Management Protocol (SMP) is outlined which incorporates a method of autonomously detecting safe landing areas from image sensor data. The remainder of the paper is structured as follows. Section 2 presents an overview of related work. Section 3 describes the components of the SMP. Section 4 outlines an algorithm for the autonomous detection of safe landing sites. Section 5 discusses the process of choosing a safe landing site from the available alternatives. Finally, conclusions and proposed future work are outlined in Section 6.
2 Related Work
For the most serious scenarios it may be the safest course of action to instruct the UAV to land. These scenarios may be caused by a variety of reasons, for example prolonged loss of GPS signal due to the profile of the terrain [6] or a hardware error. The definition of a safe landing site varies depending on the UAV’s size and type, for example the Ascending Technologies Hummingbird UAV will require
a much smaller landing site than an MQ-9 Reaper UAV. However, it is proposed by [7] that a safe landing system should: minimize the expectation of human casualty, minimize external property damage, maximize the chance of aircraft survival, and maximize the chance of payload survival. Under certain circumstances it may not be possible to satisfy all of these requirements; for example, it may be safer to land the UAV in a lake as opposed to a school yard.

One method of detecting safe landing areas is to create a 3D map of the surrounding terrain. In these approaches a user is required to provide the GPS coordinates of the suitable area. The UAV then navigates to this area and creates a 3D map of the terrain, which enables suitable areas, for example flat, smooth surfaces, to be identified. This 3D map can be created by a variety of approaches including stereo ranging [8], structure from motion [9] and laser scanning [10]. The work in [10] addresses the effect of obscurants on safe landing zone identification by utilizing a laser range finder to create a 3D reconstruction of the terrain. Accurate 3D terrain reconstruction is influenced by the position and pose estimation of the UAV at any given time. In this approach the pose measurements are fused with the laser scan using a probabilistic model of pose error and the likelihood of an accurate point in 3D space given two successive scans. However, [10] found that in some cases the laser beam reflected off smoke, resulting in an inaccurate reading of the terrain profile. Perhaps the main disadvantage of methods which attempt to reconstruct terrain is the required equipment. In [8] a stereo pair of cameras is required. Whilst this may be achieved using two low-cost cameras whose pose and relative position are known, it increases equipment payload and power consumption. Similarly, the use of a laser range finder as proposed by [10] would be impractical for a small quadrotor UAV.
In the work by [11] the terrain is reconstructed using a single camera. However, in order to achieve this, multiple passes of the same area are required. In the scenario of an emergency forced landing this may not be achievable due to limited battery life. A further disadvantage is the requirement of an accurate estimation of camera movement; for a UAV with constantly changing velocity this may be difficult. An improvement on these approaches in terms of required equipment is presented by [12]. In this work a user chooses a safe landing area via a series of navigation waypoints, either from an aerial image or from the live UAV camera feed. The optical flow between two successive images is estimated using Scale Invariant Feature Transform (SIFT) features and used to estimate depth structure information. An assumption is made that areas with low variance between optical flow vectors indicate flat areas and are therefore deemed to be safe landing sites. A threshold for determining the boundary between safe and unsafe is calculated during a supervised training phase.

Each of the approaches discussed provides a degree of autonomy in that the UAV is able to detect landing sites and land without receiving low-level commands from the operator. However, many of the systems described require significant human input, such as manually identifying suitable landing sites via a
GUI. In the SUAAVE project, where an operator is responsible for multiple UAVs, it may not be feasible for the operator to choose a landing site whilst possibly ignoring the status of other UAVs in the swarm.
3 A Safety Management Protocol
Understandably, safety is given utmost priority within the SUAAVE project. An implemented safety management protocol (SMP) defines a set of operational constraints on the UAV platform to help ensure that a mission is executed as safely as possible. In the event of the SMP issuing an abort command the UAV identifies the safest possible area to land in, either from its current location, from a database of previously identified safe landing sites, or by contacting neighbouring UAVs. The safety management protocol is responsible for monitoring UAV location, health, connectivity and risks, for example from surrounding UAVs.

3.1 UAV Location
Flying a small autonomous quad-rotor UAV over heavily populated areas currently presents an unacceptable risk of causing damage to property or, in extreme cases, human fatalities. Whilst the envisaged initial application for the SUAAVE project is mountain search and rescue, it is desirable to include a mechanism whereby areas which are known to be unsuitable for flying over can be avoided. One such example is a school which may be in close proximity to the operational area. Areas that are unsuitable to fly through are indicated within the SMP by specifying the corner points of a bounding box via a series of GPS coordinates. When the application layer sends a GPS waypoint it is verified against the unsuitable areas specified in the SMP, and unsafe requests are subsequently denied. This provision reduces the probability of causing damage to property and people; however, it increases the complexity of the path planning algorithms used.

3.2 UAV Health
The system health of the UAV can be influenced by many factors including operating and environmental conditions, for example decreased battery life due to wind or loss of GPS signal due to the profile of the terrain. In a related SUAAVE publication [3] it is proposed that the UAV periodically checks for and diagnoses errors. The UAV passes through several states including:

1. Pre-Flight Bootstrap - This phase of operation ensures that the necessary communication links, GPS, and on-board sensors are functional. The successful execution of this phase is a prerequisite to flight.
2. In-Flight Self Diagnostics - During a flight the UAV periodically executes this phase to detect and diagnose errors.
3. Operation - The operation phase is the most common state of the UAV, during which the UAV executes its assigned mission.
4. Recover - In the event that an error is encountered the UAV will attempt to recover from that error; for example, loss of a communication link may be resolved by relocating within range of another UAV or the base station.
5. Abort - Should the UAV encounter an irrecoverable error then the safest course of action may be to land as soon as possible.

In the event of an abort command being issued by the SMP or the human operator the UAV will attempt to land as safely as possible. The relationship between each of these states is outlined in Figure 1.
Fig. 1. State transition diagram for the phases of operation
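The phases and transitions above can be sketched as a small state machine; the allowed-transition sets below are an assumption inferred from the text and Figure 1, not taken from the SMP implementation, and the state names are illustrative:

```python
# Illustrative UAV phase state machine; the allowed-transition sets are
# an assumption inferred from the text, not taken from the SMP itself.
TRANSITIONS = {
    "pre_flight_bootstrap": {"operation"},
    "operation": {"in_flight_diagnostics", "abort"},
    "in_flight_diagnostics": {"operation", "recover", "abort"},
    "recover": {"operation", "abort"},
    "abort": set(),  # terminal: land as safely as possible
}

def step(state, target):
    """Return the new state if the transition is allowed, else raise."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target
```

Encoding the transitions as data keeps the safety logic auditable: any phase change requested by the application layer can be validated against the table before it is acted upon.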
3.3 Maintaining Connectivity
An important constraint imposed by the SMP is that the potential for connectivity between a UAV and the base station is maintained at all times. This may be either via direct communication or a multi-hop link between neighbouring UAVs. From a safety perspective this constraint is significant as it helps ensure that there is a human-in-the-loop at all times who can abort a mission or command a single UAV to land. Balancing the limited flight time of the UAVs, the requirement of constant connectivity and the need to maximize information gain results in a project requirement of resource-aware path planning algorithms. To achieve maximum information gain given the platform constraints, the algorithm presented in [13] has been implemented and extended by incorporating two new features. Firstly, the algorithm is modified to account for the changing communication range of
the UAVs in response to environmental and topographic conditions. Secondly, a multi-hop routing protocol has been incorporated. In the event of a loss of communication link the UAV will attempt to recover by relocating within range of other UAVs or the base station. Should this loss of communication link continue the SMP will switch to an abort state during which it will attempt to land the UAV as safely as possible.

3.4 Collision Avoidance
One potential hazard which is especially pertinent to UAVs operating as members of a swarm is that of mid-air collisions. Within the SUAAVE project an approach to multi-UAV collision detection using the IEEE 802.11 wireless networking protocol has been designed and implemented. In this work the received signal strength between UAVs is used to estimate their distance. The sampling rate is dynamically adjusted based on the speed of the UAV broadcasting the signal. The distance between two UAVs is estimated assuming ideal propagation conditions and a clear line-of-sight path between the transmitter and the receiver. As the UAVs are operating as members of a swarm it is desirable that they are aware of the position of other UAVs, which may only be accessible via a multi-hop connection. This knowledge enables UAVs to pre-emptively adjust their path to avoid breaching the safe operating distance threshold. One of the constraints placed upon the collision avoidance strategy is that it should not depend on GPS. In the absence of GPS the location of a UAV can be estimated from three nodes whose positions are known. Once this location is known, the distance D(i, j) between UAVi with coordinates (Xi, Yi, Zi) and UAVj with coordinates (Xj, Yj, Zj) is estimated using the Euclidean distance measure,

D(i, j) = √((Xi − Xj)² + (Yi − Yj)² + (Zi − Zj)²)    (1)

This distance is stored in a dynamically updated table (Table 1) along with a unique UAV identifier, timestamp and coordinates. The table is updated with new information upon receiving a "Coords" message from a neighbouring UAV.

Table 1. Stored attributes of neighbouring UAVs

UAV name | Timestamp | Coordinates  | Distance
UAV1     | T1        | (X1, Y1, Z1) | D1
...      | ...       | ...          | ...
UAVn     | Tn        | (Xn, Yn, Zn) | Dn
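As an illustration, Eq. (1) and the neighbour table of Table 1 might be maintained as follows; this is a hedged sketch, and the class and method names are hypothetical rather than taken from the SUAAVE implementation:

```python
import math
import time

def euclidean_distance(a, b):
    """Eq. (1): distance between two coordinate triples (Xi, Yi, Zi)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

class NeighbourTable:
    """Dynamically updated table of neighbouring UAVs (cf. Table 1)."""

    def __init__(self, own_coords):
        self.own_coords = own_coords
        self.rows = {}  # uav_id -> (timestamp, coords, distance)

    def on_coords_message(self, uav_id, coords, timestamp=None):
        """Update the table on receipt of a "Coords" message; return distance."""
        d = euclidean_distance(self.own_coords, coords)
        self.rows[uav_id] = (time.time() if timestamp is None else timestamp,
                             coords, d)
        return d

    def breaches(self, threshold):
        """Neighbours closer than the safe operating distance threshold."""
        return [u for u, (_, _, d) in self.rows.items() if d < threshold]
```

A `breaches` query against a dynamically chosen threshold then corresponds to the threshold check described in the following paragraph.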
The safe operating distance threshold refers to the minimum allowable distance between UAVs and is dynamically changed based on the speed of the UAV and number of neighbours. A breach of this safe operating distance threshold between two UAVs may be the result of operating conditions, for example wind, or
a hardware error, for example GPS inaccuracies. Therefore, in this initial work, and until robust see-and-avoid and sense-and-avoid technologies are available, the UAV is issued with an abort command.
4 Detection of Landing Sites
In the event of receiving an abort command it is not sufficient to assume that the area directly beneath the UAV is suitable for landing. Furthermore it cannot be assumed that the UAV has the required resources to safely navigate to the base station. It is therefore desirable to provide a means of detecting a safe landing area which considers the surrounding terrain and the available resources of the UAV. This section and subsequent subsections discuss the detection of a landing site from a colour aerial image captured from the UAV. An overview of the processes used for the detection and storage of landing sites can be found in Fig. 2.
Fig. 2. Safe landing site detection overview (sample image → image useable? → identify potential landing sites → determine attribute values for each potential landing site → assign a safety classification to each potential landing site → store landing site locations and relevant information)
Sample Image and Test for Quality. The first stage in the safe landing site detection algorithm is to sample a frame from the live video stream. To avoid needlessly expending processing time by executing the algorithm on previously seen images, this sampling rate is related to the altitude and velocity of the UAV. In this initial work an assumption is made that the UAV is travelling in a forward motion and is located at the centre of the image. Furthermore, the attitude of the UAV is not taken into account. The image sampling rate S from the video stream is therefore

S = (Iy/2 ∗ R)/V    (2)

where Iy is the resolution of image I along the y axis, R is the ground pixel resolution and V is the velocity of the UAV estimated from GPS and IMU
data. The ground pixel resolution, R, refers to the size of each pixel and can be calculated using [14]

R = (AW ∗ H)/(FL ∗ Ix)    (3)

where AW is the sensor array width in mm, H is the height above ground level, FL is the lens focal length in mm and Ix is the width of the image.

Identify Potential Landing Sites. The sampled image is then analysed to identify regions which are of a suitable size and shape for landing. An edge detection operator is executed on the sampled image to identify object boundaries. In comparison to many image segmentation techniques, edge detection in an unconstrained environment is relatively computationally inexpensive and provides reasonable results. However, an assumption is made that object boundaries exhibit a steep change in intensity gradient. Currently a Canny edge detector [15] is used to identify object boundaries. This operator requires three parameters: σ, which denotes the standard deviation of the Gaussian filter, a low threshold for high edge sensitivity, and a high threshold for low edge sensitivity. These thresholds are currently determined empirically and are statically defined; however, in future work it is planned that the parameters will be dynamically adjusted according to altitude. The resulting image is then dilated to increase the size of object boundaries and to close small gaps. The motivation behind this step is to provide a margin of error when performing the actual landing. Following edge detection and dilation, areas which are of a suitable size for landing in are identified. The Ascending Technologies Hummingbird UAV is approximately 0.5 m² in size which, depending on the altitude of the UAV, corresponds to varying numbers of pixels in the input image.
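Equations (2) and (3) can be sketched directly; the sensor and flight parameter values in the usage note are illustrative stand-ins, not figures taken from the Hummingbird platform:

```python
def ground_pixel_resolution(aw_mm, h_m, fl_mm, ix_px):
    """Eq. (3): R = (AW * H) / (FL * Ix), metres of ground per pixel."""
    return (aw_mm * h_m) / (fl_mm * ix_px)

def sampling_interval(iy_px, r_m_per_px, v_m_per_s):
    """Eq. (2): S = (Iy / 2 * R) / V, seconds between sampled frames
    (the half-image ground coverage divided by the ground speed)."""
    return (iy_px / 2.0 * r_m_per_px) / v_m_per_s
```

For a hypothetical 4.8 mm sensor array, 6 mm focal length and 1280×960 image at 50 m altitude and 5 m/s, this gives R ≈ 0.031 m/pixel and a sampling interval of about 3 s.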
The process of identifying potential landing sites can be represented by the following pseudo code:

begin
    execute Canny edge detector on input image, i
    dilate detected edges by 1.5m
    for each group of pixels, p in input image, i
        analyse a rectangular area corresponding to 20m2 surrounding p
        if the area does not contain edges
            set p as a potential landing site
        end
    end
    for all potential landing sites, pi
        for all potential landing sites, pj
            if pi is adjacent to pj
                merge
            end
        end
    end
    assign a unique ID to each potential landing site
end
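A minimal, dependency-free sketch of the window-scan and merging steps in the pseudo code above; the Canny and dilation stages are assumed to have already produced a boolean edge map, and the window size parameter is a stand-in for the 20 m² area:

```python
def find_landing_sites(edges, win):
    """Mark cells whose (2*win+1)-square neighbourhood is edge-free,
    then merge adjacent candidates into uniquely labelled sites."""
    h, w = len(edges), len(edges[0])
    candidate = [[False] * w for _ in range(h)]
    for y in range(win, h - win):
        for x in range(win, w - win):
            window = (edges[j][i]
                      for j in range(y - win, y + win + 1)
                      for i in range(x - win, x + win + 1))
            candidate[y][x] = not any(window)
    # merge 4-adjacent candidate cells into sites via flood fill
    sites = {}
    seen = [[False] * w for _ in range(h)]
    next_id = 0
    for y in range(h):
        for x in range(w):
            if candidate[y][x] and not seen[y][x]:
                stack, cells = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    cells.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and candidate[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sites[next_id] = cells  # unique ID per merged site
                next_id += 1
    return sites
```

In practice the edge map would come from an image-processing library, and `win` would be derived from the ground pixel resolution of Eq. (3).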
Determine Attribute Values. The previous stages of edge detection, dilation and identification of areas of suitable size result in a set of potential landing sites. The suitability of these potential landing sites is determined by a number of factors including terrain classification and roughness. Currently a Maximum Likelihood Classifier (MLC) is used for the classification of terrain. This classifier requires training data from which class spectral signatures are estimated. The MLC estimates the probability of a pixel, represented by a vector of spectral values x, belonging to class ωi, given as [16]

p(x|ωi) = (2π)^(−1/2) |Σi|^(−1/2) exp{−(1/2)(x − mi)ᵀ Σi⁻¹ (x − mi)},    (4)

where mi is the vector of mean spectral values and Σi is the covariance matrix for class i. Intuitively, different terrain types have varying degrees of suitability for landing in. Current classes used in mountainous terrain are grass, gorse, rock, trees and water. These classes are assigned a numeric suitability measure in the range [0..1] by a human expert familiar with the operational area. This suitability measure is used to determine a fuzzy classification of unsuitable, risky or suitable (Figure 3a).

In aerial images of many rural scenarios, man-made structures typically exhibit a high greyscale contrast deviation in comparison to the surrounding terrain. Landing a UAV near these structures presents a higher risk of damaging property and possibly harming people. Therefore the greyscale intensity deviation of the area surrounding each potential landing site is analysed and assigned a fuzzy classification of low, medium or high (Figure 3b). In the event of a fuzzy classification of high, the potential landing site is discounted as unsafe. In future work it is planned that man-made structures will be more robustly detected by fusing map information with the aerial image sensor data.
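Equation (4) can be sketched for two spectral bands as follows. This is a hedged illustration: the class statistics are invented, and the normalisation uses the standard N-dimensional factor (2π)^(−N/2) with N = 2:

```python
import math

def mlc_likelihood(x, mean, cov):
    """Eq. (4) for N = 2 bands: p(x|w_i) with a 2x2 class covariance
    matrix, using the closed-form inverse and determinant."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    # Mahalanobis term (x - m)^T inv(Sigma) (x - m)
    m = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return (2 * math.pi) ** -1.0 * det ** -0.5 * math.exp(-0.5 * m)

def classify(x, classes):
    """Assign pixel x to the class with the maximum likelihood.
    `classes` maps a class name to (mean, covariance), e.g. estimated
    from training data as the text describes."""
    return max(classes, key=lambda name: mlc_likelihood(x, *classes[name]))
```

With hypothetical signatures such as `{"grass": ((0.0, 0.0), ((1.0, 0.0), (0.0, 1.0)))}`, each pixel is simply assigned to the class whose Gaussian gives it the highest likelihood.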
Potential landing sites which are exceptionally rough, for example areas which are very stony, represent a risk to the safety of the UAV and its payload. As with man-made structures these areas typically exhibit relatively high greyscale intensity deviation, and so this measure is used as an estimate of roughness. A fuzzy classification of smooth, rough and very rough is used to describe the roughness property (Figure 3c). The greyscale intensity deviation of a landing site's neighbourhood, and hence the landing site's roughness, is calculated using [17]:

Im = (Σ_{i,j∈r} I_{i,j}) / (N ∗ M),    V = (Σ_{i,j∈r} (Im − I_{i,j})²) / (N ∗ M)    (5)

where Im is the average pixel intensity within the region, r is the region under consideration, (i, j) is the location of a pixel in the image, I_{i,j} is its intensity, N ∗ M is the size of region r and V is the resulting variance.

Landing Site Safety Classification. The fuzzy input parameters of terrain suitability, neighbourhood deviation and roughness are aggregated using a series
Fig. 3. The membership functions of the fuzzy logic parameters: a) input terrain suitability (unsuitable, risky, suitable; breakpoints at 0, 0.1, 0.5, 1), b) input landing site roughness (smooth, rough, very rough; breakpoints at 0, 15, 30), c) input neighbourhood deviation (low, medium, high; breakpoints at 0, 15, 30), d) output safety weighting (unsafe, risky, safe; breakpoints at 0, 0.1, 0.5, 1)
of rules to produce a fuzzy output (Figure 3d), for example: if terrain is suitable and neighbourhood deviation is low and roughness is smooth then landing site = safe. These rules are generated from expert knowledge which is captured during a training phase prior to deployment. The centroid defuzzification method is used to provide a crisp numeric value for the safety weighting.

Storage of Previously Classified Landing Sites. It is desirable to store all previously seen landing sites for future use. The stored attributes of each landing site are outlined in Table 2.

Table 2. Stored attributes of classified landing sites

Attribute          | Description
ID                 | Primary key, used to uniquely identify each landing site
Time               | Each landing site is time-stamped
Latitude/Longitude | Used to estimate attainability
Grid reference     | The corner coordinates of the landing site in the image
Safety weighting   | The numeric safety weighting of each landing site
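A minimal sketch of the fuzzy aggregation and centroid defuzzification described above, with a single illustrative rule and guessed triangular membership breakpoints (the paper's actual calibrated membership functions are those shown in Figure 3):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative membership functions; breakpoints are guesses, not the
# calibrated values of Figure 3.
def suitable(s): return tri(s, 0.1, 1.0, 1.9)     # terrain suitability
def low_dev(v):  return tri(v, -15.0, 0.0, 15.0)  # neighbourhood deviation
def smooth(r):   return tri(r, -15.0, 0.0, 15.0)  # roughness
def safe_out(w): return tri(w, 0.1, 1.0, 1.9)     # output set "safe"

def safety_weighting(s, v, r, steps=100):
    """One-rule Mamdani inference with centroid defuzzification:
    IF terrain suitable AND deviation low AND roughness smooth THEN safe."""
    strength = min(suitable(s), low_dev(v), smooth(r))  # rule firing level
    num = den = 0.0
    for i in range(steps + 1):           # discretized centroid over [0, 1]
        w = i / steps
        mu = min(strength, safe_out(w))  # clipped output membership
        num += w * mu
        den += mu
    return num / den if den else 0.0
```

A full system would aggregate several such rules (one per output class) before taking the centroid; the single-rule version above is only meant to show the min-AND clipping and the centroid step.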
These attributes enable the UAV to locate a previously identified landing site in the event of receiving an abort command from the SMP in an area which is unsuitable for landing in. A time-stamp on each landing site may be used as an
indication of the safety classification accuracy which, in a dynamic environment such as farmland, may change over time. The database is updated when a new landing site is identified. Many other processes such as path planning are executed in parallel with the safety module, which results in processing and storage constraints. Under certain conditions it may therefore be feasible to store only landing sites with a safety weighting above a given threshold. During the course of a mission a large number of classified landing sites may be accumulated. The potential usefulness of these landing sites may decrease over time and with distance from the UAV's location. To avoid sorting through a large number of unattainable landing sites in the event of an emergency, the database is periodically pruned of such sites.
5 Choosing a Landing Site
In the event of the SMP issuing an abort command the UAV will consider the state of its resources and the suitability of surrounding and previously sensed terrain to choose a suitable landing site. An overview of the decisions taken by the UAV is outlined in Figure 4 and discussed in the subsequent subsections.
Fig. 4. Safe landing system overview (abort command → if the base station is safely attainable, land at the base station; otherwise, if a landing site is available from the current position, land at the site with the highest weighting; otherwise check the database of previously classified landing sites and request aid from surrounding UAVs, choose a landing site, dynamically evaluate it, and land)
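Assuming illustrative data structures, the decision flow of Figure 4, combined with the attainability rule R = C − D ∗ P and the 75% threshold described in this section, might be sketched as:

```python
def attainable(c_volts, dist_m, p_volts_per_m, reserve=0.25):
    """Eq. (6) recast as a test: a site is reachable if the power needed,
    D * P, is at most (1 - reserve) of the current battery C."""
    return dist_m * p_volts_per_m <= (1.0 - reserve) * c_volts

def choose_landing_action(c, p, base_dist, visible_sites, stored_sites):
    """Decision flow of Fig. 4; sites are (distance_m, safety_weighting)
    pairs, an illustrative stand-in for the stored attributes of Table 2."""
    if attainable(c, base_dist, p):
        return "land_at_base_station"
    for candidates in (visible_sites, stored_sites):
        reachable = [s for s in candidates if attainable(c, s[0], p)]
        if reachable:
            # land at the attainable site with the highest safety weighting
            return ("land_at_site", max(reachable, key=lambda s: s[1]))
    return "request_aid_from_neighbouring_uavs"
```

The dynamic re-evaluation step of Figure 4 (re-checking the chosen site on approach) is omitted here; it would wrap the returned action in a loop that falls back to the next-best site if new hazards are detected.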
Attainability. A key attribute when choosing a landing site is its attainability, which is determined by remaining battery life and distance from the UAV's current position. The Ascending Technologies Hummingbird UAV used in the SUAAVE project has a battery life of approximately 23 minutes, or 12 minutes with a 200g payload. However, this can be significantly influenced by environmental conditions such as wind. In the absence of models which characterize the effects of specific flight manoeuvres and environmental conditions upon the platform's battery, the travelled distance and current battery voltage may be used as an estimate of the required power per m of the UAV. Given an estimate of the required power per m, the potential attainability of a landing site can be determined by:

R = C − D ∗ P    (6)
where R is the remaining battery life in volts (v) after navigating to the landing site, C is the current battery life in v, D is the distance of the landing site from the UAV’s position in m and P is the required battery power in v per m. The required power to navigate to a landing site is estimated as a percentage of remaining battery life. A landing site is considered unattainable if it requires more than 75% of the remaining battery life to navigate to that area. Therefore, in emergency situations the UAV reserves 25% of battery life to ensure that it has sufficient power to perform a controlled descent and, if possible transmit its location following an emergency landing. Part of the future work within the SUAAVE project will involve characterisation of the UAV platform. This will enable the impact of environmental conditions and specific flight manoeuvres upon battery life to be modelled. Furthermore, the power required by the UAV to perform a controlled descent and transmit its location can be estimated from these models enabling the attainability thresholds of a landing site to be more accurately defined. Neighbouring Landing Sites. It is possible that a landing site which appears suitable for landing in from a high altitude may, upon closer inspection contain hazards. Preference is therefore given to landing sites which have surrounding areas which are suitable for landing in. Therefore in the event of a chosen landing site containing hazards, alternative, attainable landing sites are available. 5.1
5.1 Base Station
In the first instance the UAV will assess whether it has sufficient battery life to safely navigate to the base station. Landing at the base station enables easy recovery of the UAV, which is an important advantage given that a single operator may be responsible for an entire swarm.
5.2 Current Location
In the event of the base station being unattainable the UAV will attempt to locate a landing site from its current location using aerial image data captured
T. Patterson et al.
from the onboard camera. The sequence of events executed in this scenario is similar to that outlined in Figure 2; however, rather than storing landing site locations, a decision is made as to the most suitable landing site in the input image. The distance of landing sites detected from the UAV's current location is estimated by calculating the Euclidean distance from the UAV's current position to the centre of each landing site. This distance is used in conjunction with remaining battery life to estimate attainability.
5.3 Check Database
In the event of no suitable landing site being available from the current location the UAV will query the database of previously classified landing sites. This database contains the unique id, longitude/latitude position, safety classification and time stamp for each landing site. The distance, d, from the UAV's current position to the location of the landing site is calculated using the haversine formula [18],

d = Rc (7)

where R is the Earth's radius in m and c is calculated as,

Δlat = lat2 − lat1, Δlong = long2 − long1, (8)
a = sin²(Δlat/2) + cos(lat1) cos(lat2) sin²(Δlong/2), (9)
c = 2 atan2(√a, √(1 − a)) (10)
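Equations (7)-(10) translate directly into code. The sketch below assumes coordinates in degrees and a mean Earth radius of 6371 km (a common choice; the text does not specify a value):

```python
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean Earth radius in m

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance d = R*c of Eqs. (7)-(10); inputs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return EARTH_RADIUS_M * c
```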
The fields in the database are subsequently sorted into ascending order by distance from the UAV's current position and are compared against any landing sites which are detected by neighbouring UAVs.
5.4 Neighbouring UAVs
One of the requirements placed upon the UAVs by the SMP is that the potential for connectivity is maintained at all times. Therefore, in the event of a member of the swarm performing a forced landing for reasons other than connectivity problems, it is possible that a neighbouring UAV may be able to detect a landing site which has not been previously identified and stored by the UAV. If a neighbouring UAV can identify a safe landing site it will transmit the location of that site, along with its associated safety weighting and number of neighbouring landing sites, via 802.11.
5.5 Landing
The result of searching through the safe landing site database and requesting aid from surrounding UAVs is a list of attainable safe landing sites and their
Integration of Terrain Image Sensing
associated attributes. In this initial work the safe landing site with the greatest number of attainable, neighbouring safe landing sites is chosen. As the UAV descends it is possible that at lower altitudes a hazard may be identified in the landing site. Therefore, the chosen landing site is dynamically evaluated. In the event of the landing site containing hazards, a neighbouring landing site is chosen. A constraint placed upon the UAVs by the SMP is that connectivity with the base station must be maintained. This connectivity can be either directly between the UAV and the base station or via neighbouring UAVs using a multihop routing protocol. In the event of a UAV performing an emergency landing, the configuration of other swarm members will adapt to ensure that connectivity is maintained. A further constraint imposed by the SMP concerns the maximum allowable distance between swarm members. A possible scenario is one in which multiple UAVs attempt to land at the same landing site. It is therefore desirable that a UAV retains a portion of battery life to periodically transmit a "Coords" message to other swarm members. This is used in conjunction with the collision avoidance module of the SMP to help decrease the risk of multiple UAVs landing in close proximity to each other. In the example shown in Figure 5, three UAVs are dispatched to sense the environment in search of a missing person. Due to a GPS failure, UAV1 navigates out of multi-hop communication range with the base station. As it cannot safely navigate to the base station without GPS, UAV1 executes the safe landing
Fig. 5. Example scenario where an emergency safe landing is required (legend: UAV, communication range, direction of travel, base station, link to base station; terrain: grass, water, trees)
site detection algorithm and determines that the ground directly beneath it is suitable for landing in. UAV1 subsequently lands and periodically transmits a "Coords" message notifying other UAVs of its presence should they fly within communication range.
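The selection policy described in Section 5.5 (pick the attainable site with the greatest number of attainable neighbouring sites) could be sketched as follows; the dictionary keys are hypothetical, not from the paper:

```python
def choose_landing_site(sites):
    """Return the attainable site with the most attainable neighbouring
    safe sites, or None if no site is attainable.

    `sites`: list of dicts with (hypothetical) keys 'id', 'attainable'
    and 'neighbours' (count of attainable neighbouring safe sites)."""
    candidates = [s for s in sites if s["attainable"]]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s["neighbours"])
```

If a hazard appears during descent, the chosen site can simply be removed from the list and the function called again on the remainder, mirroring the dynamic re-evaluation described in the text.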
6 Conclusions/Future Work
In this position paper a safety management protocol which incorporates connectivity constraints, collision avoidance and safe landing site detection from aerial image data has been presented. A novel algorithm is described for the detection, storage and subsequent selection of safe landing sites. Preliminary results indicate the potential of the approach for the detection of landing sites. In future work it is planned to validate and improve all components of the SMP through experiments conducted on a Hummingbird quadrotor UAV. A further piece of future work will be the characterisation of the platform in varying environmental conditions, which will enable the attainability components of the algorithm to be defined more accurately.
Acknowledgements. This research was supported by a Department for Employment and Learning studentship and through the Sensing Unmanned Autonomous Aerial Vehicles (SUAAVE) project under grants EP/F064217/1, EP/F064179/1 and EP/F06358X/1.
References
1. Binenko, V.I., Andreev, V.L., Ivanov, R.V.: Remote sensing of environment on the base of the micro-aviation. In: Remote Sensing of the Environment, St. Petersburg, Russia, pp. 10–15 (2005)
2. Srinivasan, S., Latchman, H., Shea, J., Wong, T., McNair, J.: Airborne traffic surveillance systems: video surveillance of highway traffic. In: Proceedings of the ACM 2nd International Workshop on Video Surveillance & Sensor Networks, pp. 131–135. ACM, New York (2004)
3. Cameron, S., Parr, G., Nardi, R., Hailes, S., Symington, A., Julier, S., Teacy, L., Mclean, S., Mcphillips, G., Waharte, S., Trigoni, N., Ahmed, M.: SUAAVE: Combining Aerial Robots and Wireless Networking. In: Unmanned Air Vehicle Systems, Bristol, pp. 7–20 (2010)
4. Ascending Technologies: AscTec Hummingbird AutoPilot (accessed July 30, 2010)
5. Quaritsch, M., Kruggl, K., Wischounig, D., Bhattacharya, S., Shah, M., Rinner, B.: Networked UAVs as aerial sensor network for disaster management applications. E & I Elektrotechnik und Informationstechnik 127(3), 56–63 (2010)
6. Ochieng, W.Y., Sauer, K., Walsh, D., Brodin, G., Griffin, S., Denney, M.: GPS Integrity and Potential Impact on Aviation Safety. Journal of Navigation 56(1), 51–65 (2003)
7. Cox, T.H., Nagy, C.J., Skoog, M.A.: Civil UAV Capability Assessment (2004)
8. Theodore, C., Rowley, D., Ansar, A., Matthies, L., Goldberg, S., Hubbard, D., Whalley, M.: Flight trials of a rotorcraft unmanned aerial vehicle landing autonomously at unprepared sites. In: Annual Forum Proceedings, American Helicopter Society, Phoenix, AZ, vol. 62, pp. 67–73 (2006)
9. Johnson, A., Montgomery, J., Matthies, L.: Vision guided landing of an autonomous helicopter in hazardous terrain. In: Proceedings of IEEE International Conference on Robotics and Automation, pp. 3966–3971 (2005)
10. Sevcik, K., Kuntz, N., Oh, P.: Exploring the Effect of Obscurants on Safe Landing Zone Identification. Journal of Intelligent and Robotic Systems 57(1-4), 281–295 (2009)
11. Templeton, T., Shim, D.H., Geyer, C., Sastry, S.: Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft. In: 2007 IEEE International Conference on Robotics and Automation, pp. 1349–1356 (2007)
12. Cesetti, A., Frontoni, E., Mancini, A., Zingaretti, P., Longhi, S.: A Vision-Based Guidance System for UAV Navigation and Safe Landing using Natural Landmarks. Journal of Intelligent and Robotic Systems 57(1-4), 233–257 (2009)
13. Stranders, R., Farinelli, A., Rogers, A., Jennings, N.R.: Decentralised control of continuously valued control parameters using the max-sum algorithm. In: Proceedings of AAMAS 2009, pp. 601–608 (2009)
14. Booth, D., Cox, S., Berryman, R.: Precision measurements from very-large scale aerial digital imagery. In: Environmental Monitoring and Assessment, pp. 293–307 (2006)
15. Canny, J.: A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 679–698 (1986)
16. Richards, J.A., Xiuping, J.: Remote Sensing Digital Image Analysis, 3rd edn. Springer, New York (1999)
17. Howard, A., Seraji, H.: Multi-Sensor Terrain Classification for Safe Spacecraft Landing. IEEE Transactions on Aerospace and Electronic Systems 40, 1122–1131 (2004)
18. Sinnott, R.W.: Virtues of the haversine. Sky and Telescope 68 (1984)
A Study on the Wireless Onboard Monitoring System for Railroad Vehicle Axle Bearings Using the SAW Sensor
Jaehoon Kim1, K.-S. Lee1, and J.-G. Oh2
1 Korea Railroad Research Institute, 360-1, Woram-dong, Uiwang-city, Gyeonggi-do, 437-757, Korea
{lapin95,kslee}@krri.re.kr
2 Corechips, Shin-dong, Youngtong-gu, Suwon-city, Gyeonggi-do, 443-734, Korea
[email protected]
Abstract. This study aimed to replace the current discontinuous rail monitoring system by applying “Plug and Play” technology to rail system monitoring to enable real-time monitoring, and by confirming on-condition maintenance efficiency and reliability. It examined a wireless sensor monitoring system which uses SAW (Surface Acoustic Wave) technology to monitor temperature changes in the axle box bearing of railroad vehicles during operation. The results of the experiment were compared with HBD measurements to confirm the reliability of the real-time monitoring results measured on vehicles during operation. Keywords: Monitoring, Wireless, Surface Acoustic Wave, Railroad.
1 Introduction
In the railroad system measurement field, real-time measurements are an essential feature for the various sensors used for vehicle maintenance. These sensors are currently powered by batteries or through electric wires. However, such power supply methods can only be installed in certain locations, and many improvements can be made in terms of long-term maintenance cost efficiency. This means that vehicle maintenance system developments require smaller sensors and technical improvements which enable “Plug and Play” so that sensors can be installed in any location. Axle box bearing heating and adhesion during vehicle operation damage the axle, causing derailments and other accidents. However, the current limitations in monitoring system installation locations and technology do not allow direct axle bearing temperature monitoring during vehicle operation. Instead, the temperature is monitored using wayside Hot Box Detectors (HBD) installed at fixed distances along the track. This is discontinuous monitoring, and there have been reported incidents in which vehicles that passed the HBD with no problems suddenly derailed due to axle box bearing damage [1, 2]. This study aimed to replace the current discontinuous rail monitoring system by applying “Plug and Play” technology to rail system monitoring to enable real-time
A Study on the Wireless Onboard Monitoring System
monitoring, and by confirming on-condition maintenance efficiency and reliability. It examined a wireless sensor monitoring system which uses SAW (Surface Acoustic Wave) technology to monitor temperature changes in the axle box bearing of railroad vehicles during operation. The results of the experiment were compared with the existing HBD measurements to confirm the reliability of the real-time monitoring results measured on actual vehicles during operation [3-5].
2 SAW Sensor for Real-time Wireless Axle Box Bearing Temperature Monitoring
For axle bearing boxes of vehicles that travel at high speeds, it is difficult to employ temperature sensors that use standard power supplies due to the influence of the surrounding environment, interference from high voltages and electronic parts, and the unique location of the axle box bearing. Moreover, if such a temperature sensor were employed in vehicles currently in commercial operation, the sensor and bearing structure would need to be changed, incurring significant replacement costs. Sensors such as the real-time wireless SAW temperature sensor, which do not require a power supply and can wirelessly monitor temperature immediately after semipermanent installation, fully overcome the limitations described above and do not require changes in the parts structure of commercial vehicles. They can be employed in both existing and newly-produced vehicles, giving them high research value.
2.1 SAW Sensor Design
In general, SAW (Surface Acoustic Wave) sensors do not require any power supplies and can take measurements wirelessly. They are being studied intensively as power-free wireless sensors, and are expected to be of great use in areas such as axle box bearings where it is not easy to install and operate sensors that require a standard power supply. Researchers in Europe published a research paper on measuring braking disc temperature in high-speed railroad vehicles using SAW sensors [6].
Fig. 1. SAW Sensor Concept (ZL: variable impedance temperature sensor)
This study looked at the SAW temperature sensor, which allows temperature monitoring on the axle box bearing. As shown in Figure 1, the SAW temperature sensor in this study has a substrate surface that can transfer acoustic waves, on which the input and output Interdigital Transducers (IDTs) are located. At the input IDT, electric signals applied to the comb finger electrodes are transformed into acoustic
J. Kim, K.-S. Lee, and J.-G. Oh
waves by the inverse piezoelectric effect. The waves are transferred along the substrate to arrive at the output IDT, where a voltage is created at the electrode by the piezoelectric effect. The signal produced at the output IDT reacts with the IDT finger and substrate material. Substrate characteristics are applied to the acoustic wave transfer process, and the waves are converted to electric signals after reacting with the substrate material at the output IDT. The characteristics that can be expected from the SAW temperature sensor in this study are, firstly, a filter characteristic and, secondly, a delay characteristic. In actual application, there are many products such as the VCO and Band Pass Filter in which communication systems composed of devices with filter characteristics play a key role. However, though devices with delay characteristics are being used as communication devices such as Delay Lines, the delay characteristic of SAW devices is a key feature of the power-free/wireless sensor to be developed through this study. In an actual device, the medium used on the substrate which transfers acoustic waves has a certain amount of thickness. Acoustic waves that are generated on the IDT on the substrate surface are not all transferred along the surface, but are partially transferred as bulk waves. The amount of bulk waves depends on the substrate material and electrode design. If bulk waves transferred into the substrate reflect off the bottom surface of the substrate, they disrupt the surface acoustic waves and decrease the filter and delay characteristics of the SAW device. This means that it is extremely important in the development of power-free/wireless SAW sensors and transponders to select an appropriate IDT finger shape and substrate material. In this study, YZ-LiNbO3, a popular SAW filter substrate material, was used to produce power-free/wireless SAW transponders and the IDT finger was designed to operate at a center frequency of 433 MHz.
A numerical analysis was carried out in order to confirm the design of the SAW transponder shown in Figure 2. The results showed a matching performance of near 50 ohms at a center frequency of 433 MHz with 24 pairs of Reflector Finger IDTs (Interdigital Transducers) and a Finger IDT overlap length of 500 ㎛. A SAW mask pattern as shown in Figure 2 was designed and produced based on these results. The estimated performance based on the results was a one-way traveling wave insertion loss of around -9 dB. When used as a SAW power-free/wireless transponder, the two-way loss and the 2-step piezoelectric and inverse piezoelectric conversions result in a further -6 dB loss. Therefore, the basic design policy was set to have a structure with a loss of around -24 dB (-9 dB - 9 dB - 6 dB).
Fig. 2. SAW Model and Actual Production Sample Mask Pattern (Sensor IDT (nλ), open grating (d), Transceiver IDT (mλ))
The loss varies according to the SAW IDT number and the distance between the Sensor IDT and Transceiver IDT, and this was also analyzed. The analysis conditions and results are shown in Table 1. Note that the conditions and results apply when aluminum (Al) is used as the metal layer, with the external impedance of the Sensor and Transceiver IDT matched at 50 Ω, a finger overlap (h) of 3 mm, and an open grating distance (d) of 7 mm. As Table 1 shows, insertion loss is low when the IDT number is 25 or fewer.

Table 1. Insertion Loss by Sensor IDT and Transceiver IDT Number

N(Sensor IDT)   N(Transceiver IDT)   Insertion loss (dB)   3 dB BW (MHz)
 1               5                   -25                   60
 5               5                   -13                   40
10               5                   -11                   30
15              25                   -11                   20
20              25                   -11                   14
25              25                   -11                   14
30              40                   -15                   14
35              40                   -15                   12
40              40                   -18                   12
50              50                   -22                   10
Fig. 3. Power-free SAW Temperature Sensor Prototype Combined with a Thermistor (Resistive Temp. Sensor)
A validation test and vehicle application test were carried out in order to monitor axle box bearing temperature in real-time, by producing a SAW transponder with an excellent measuring distance and noise coherence at 1 °C precision, and a SAW temperature sensor with a variable impedance structure, as shown in Figure 3. The variable impedance temperature sensor was developed into a form which combines the Thermistor (Resistive Temp. Sensor), which is commonly used in measurements that require precision, with the SAW transponder as shown in Figure 3; the characteristics of the Thermistor are shown in the graph in Figure 4. For the validation test, the current
study also measured changes in sensor value with temperature changes on a precision hot-plate. Changes in sensor temperature were measured on the hot-plate instead of the standard incubation chamber because, for axle box bearings, the monitoring target of this study, the existing HBD monitoring device measures the axle box bearing surface temperature during operation. Surface temperature was measured using hot-plates instead of incubation chambers in order to create similar conditions.
Fig. 4. Changes in Thermistor Resistance by Temperature
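To illustrate how the thermistor curve of Figure 4 can be turned into a temperature reading, the sketch below linearly interpolates over tabulated calibration points. The (temperature, resistance) pairs are approximate values read off the figure, not exact calibration data:

```python
# Approximate (temperature degC, resistance ohm) pairs read off Figure 4
CALIBRATION = [(40, 517), (50, 344), (60, 234), (70, 162), (80, 115),
               (90, 83), (100, 61), (110, 45), (120, 34)]

def temperature_from_resistance(r_ohm):
    """Linearly interpolate temperature from a measured resistance.

    Resistance falls monotonically with temperature, so we clamp at
    the table ends and otherwise interpolate between the two points
    that bracket the measurement."""
    if r_ohm >= CALIBRATION[0][1]:
        return CALIBRATION[0][0]
    if r_ohm <= CALIBRATION[-1][1]:
        return CALIBRATION[-1][0]
    for (t0, r0), (t1, r1) in zip(CALIBRATION, CALIBRATION[1:]):
        if r1 <= r_ohm <= r0:
            return t0 + (t1 - t0) * (r0 - r_ohm) / (r0 - r1)
```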
2.2 System Application Test Using High-speed Railroad Vehicles
An application test was carried out for axle box bearing temperature monitoring during vehicle operation using the SAW temperature sensor from the validation test described above. The aim of the application test was to replace the existing HBD (Hot Box Detector), which detects axle box bearing damage through a discontinuous temperature monitoring method, with a SAW temperature sensor to enable continuous temperature monitoring. It also compared SAW temperature sensor measurements with HBD temperature measurements to confirm the reliability and validity of the real-time monitoring data.
Fig. 5. SAW Temperature Sensor and Antenna Installation for Axle Box Bearing Temperature Measurement
As shown in Figure 5, a SAW temperature sensor was installed on the axle box bearing of a vehicle that travels at 300 km/h to wirelessly monitor temperature changes in real-time. To minimize errors due to installation location, the sensor was placed at the location closest to where the wayside HBD makes contactless temperature measurements on the axle box bearing surface.
Fig. 6. SAW Sensor Temperature Results vs HBD Temperature Results (HBD installed in 7 locations along the route)
Figure 6 shows that during the two-way operation of the vehicle at the maximum speed of 300 km/h, the SAW temperature sensor measurements were similar to the HBD measurements. For the entire period, both the SAW temperature sensor measurements and HBD measurements show fluctuations in axle box bearing temperature according to operation conditions such as stopping at stations or passing through tunnels, and both show an overall increase in temperature with operation time. Furthermore, the SAW temperature sensor and HBD temperature measurements differed only by 0.02 ℃ to 4.02 ℃ for each section. This confirmed the reliability of the SAW temperature sensor developed for real-time wireless monitoring of axle box bearings on railroad vehicles. The real-time monitoring validity was also compared with the HBD. The HBD is only installed in 7 locations on each of the upward and downward routes. This does not allow continuous monitoring, as the axle box bearing temperature is measured only when the vehicle passes these locations, and errors may occur in data analysis. In particular, it is difficult to determine whether sudden temperature changes in vehicles operating at 300 km/h indicate problems in the axle box bearing with only HBD measurements. For example, in Figure 6, there is a sudden change in HBD data between 10:15 and 10:30. In order to determine whether this indicates a problem in the axle box bearing, another HBD measurement must be taken after 15 minutes. 15 minutes is a long period of time for a high-speed vehicle and if the axle box bearing has in fact been damaged, there may be an accident before the problem can even be checked. However, for real-time SAW sensor measurements, increases and decreases in axle box bearing temperature can be analyzed continuously, allowing quick and
accurate damage detection. The SAW temperature sensor measurements in Figure 6, unlike the HBD measurements, show continuous changes in axle box bearing temperature between 10:15 and 10:30, allowing accurate monitoring of the axle box bearing. This study confirmed the reliability of the real-time wireless SAW temperature sensor, and demonstrated that the temperature sensor can be made useful in vehicle integrity assessment and maintenance by using it to monitor axle box bearings and take continuous temperature measurements.
References
1. Rail Safety Investigation Report - Derailment of Pacific National Freight Services CB76 and 1WB3. Office of Transport Safety Investigations, Sydney, Australia (2007)
2. Rail Safety Investigation into the Derailment of Train 6WP2. Australian Transport Safety Bureau (2003)
3. Sodano, H.A., Inman, D.J., Park, G.: Comparison of piezoelectric energy harvesting devices for recharging batteries. Journal of Intelligent Material Systems and Structures 16, 799–807 (2005)
4. Horowitz, S., Kasyap, K.A., Liu, F., Johnson, D., Nishida, T., Ngo, K., Sheplak, M., Cattafesta, L.: Technology Development for Self-Powered Sensors. In: Proc. of 1st Flow Control Conference, AIAA-2002-2702 (2002)
5. Roundy, S.: On the Effectiveness of Vibration-based Energy Harvesting. Journal of Intelligent Material Systems and Structures 16, 809–824 (2005)
6. Pohl, A., Seifert, F.: Wirelessly interrogable surface acoustic wave sensors for vehicular applications. IEEE Transactions on Instrumentation and Measurement 46, 1031–1038 (1997)
Middleware for Adaptive Group Communication in Wireless Sensor Networks
Klaas Thoelen, Sam Michiels, and Wouter Joosen
IBBT - DistriNet, Katholieke Universiteit Leuven, 3001 Leuven, Belgium
[email protected]
Abstract. While the size and heterogeneity of wireless sensor networks confirm the need for and benefit of group communication, an intelligent approach that exploits the interaction pattern and network context is still missing. This paper introduces sensor middleware to dynamically select the most efficient alternative from a set of group communication mechanisms. The proposed solution builds on an empirical analysis of the ODMRP multicast protocol and was evaluated by a proof-of-concept prototype running on the SunSPOT platform. Results show that network overhead is considerably reduced when using the sensor middleware for software deployment, reconfiguration and periodic data monitoring. Keywords: Group Communication, Context-Awareness, Middleware, Wireless Sensor Networks, Multicast.
1 Introduction
As wireless sensor network (WSN) platforms mature and standardization in their networking stack advances, WSN deployments become larger and more heterogeneous. This heterogeneity is exposed by differences in hardware, software and even ownership of the individual sensor nodes. As a result, sensor nodes emerge with different responsibilities (provided services, quality levels), capabilities (processing power, sensor accuracy) and general properties (ownership, types of sensors, energy source). Paradoxically, this heterogeneity also leads to the opportunity of defining groups of sensor nodes with common characteristics. Group communication support is needed so that a subset of nodes adhering to a certain commonality can easily be contacted. In the scope of large-scale, long-lived and dynamic WSN environments, we identify three interaction patterns that benefit from group communication [9,13]: (1) software deployment, (2) dynamic configuration and (3) periodic actuation and monitoring. Evolving requirements on deployed services and the network as a whole trigger both software deployment and configuration. The difference lies, however, in the frequencies of these interactions and the amount of data to be exchanged. (1) Software deployment involves the dissemination of relatively large chunks of data and should be kept to a minimum to extend the battery lifetime of the sensor nodes. (2) Reconfigurations can be performed more frequently than deployment, given that the
K. Thoelen, S. Michiels, and W. Joosen
network overhead is considerably smaller. (3) The exchange of monitoring and actuation data is highly frequent and often periodic in nature; in addition, the frequency of this periodic interaction is not static but susceptible to (changing) application preferences. The size of the data that is exchanged for monitoring, however, is typically small in comparison with software deployment. These differences in interaction pattern characteristics influence the applicability of the various group communication mechanisms at hand. In traditional networking, group communication is provided by multicast protocols that represent a group of nodes by a single address. The member nodes are not always bound by geographical properties and can be scattered throughout the network. The primary goal of multicast protocols is to deliver the intended packets to all members of the group with the least possible number of transmissions per packet. This is typically achieved by the use of an overlay tree or mesh in which all sources, members and needed intermediary nodes are connected. Creating these overlays typically involves broadcasting across the network in search of members, with subsequent replies to the initiator of this so-called discovery phase. This exchange of control packets is commonly called protocol overhead, as it does not take part in delivering the actual data to the group members. In the face of mobility, changing connectivity and changing group memberships, these discovery phases need to be repeated at an appropriate frequency to keep the overlay up-to-date. The problem that we address in this paper is that the protocol overhead introduced by multicast protocols is not always justifiable. Alternatives like broadcast and gossip mechanisms introduce little or no protocol overhead [2,4] and, although not group-aware, can be more efficient than multicast protocols.
We argue that both the characteristics of the current interaction pattern and the network context need to be taken into account when performing group communication. Only in this way can we ensure that the most efficient communication mechanism is used. The contributions of this paper are (1) the design of a context-aware middleware layer that allows the selection of the most efficient mechanism at hand to perform group communication; to enable this, we performed (2) an efficiency analysis of a representative multicast protocol with regard to protocol overhead; (3) the evaluation of our prototype on the SunSPOT platform confirms the efficiency benefits of the proposed solution. The remainder of this paper is structured as follows. In Section 2 we report on our efficiency analysis of the ODMRP multicast protocol and discuss the main results. We introduce our Communication Management Middleware in Section 3. We evaluate our prototype in Section 4 and elaborate on related work in Section 5. We conclude in Section 6.
2 Multicast Efficiency Analysis
With respect to the interaction patterns identified in the introduction, we compare a multicast and a broadcast approach in terms of the number of transmissions
Middleware for Adaptive Group Communication in WSNs
needed to deliver data to all group members. This, of course, is dependent on the selected multicast protocol and how its overlay is created. In the following subsection we first discuss why we selected the ODMRP protocol and how it operates. We continue with a report on an efficiency analysis of ODMRP and conclude by defining a set of rules to follow in order to achieve efficient group communication.
2.1 Multicast Protocol Selection
We selected the On-Demand Multicast Routing Protocol (ODMRP) [14] because it is on-demand and source-initiated. A multicast overlay is created only when a source needs to send data to a group, thereby restricting protocol overhead to situations in which it is truly useful. This suits the identified interaction patterns, as both deployment and reconfiguration require only short-lived overlays to fulfill their duties. Furthermore, periodic data publication only requires a constantly active overlay when the periodicity is high enough. ODMRP is a mesh-based multicast protocol that uses the concept of a forwarding group in which only a subset of nodes forward data packets to the group members. The mesh overlay is created during a so-called discovery phase, which involves broadcasting a Join Query packet to find all possible members of the group. At the reception of a Join Query, members of the group create a Join Reply and unicast it back to the source. Intermediate nodes receiving the Join Reply realize that they are on a path between the source and at least one member of the group and set a forwarding group flag before forwarding the Join Reply. In this way, a mesh of forwarding nodes is created that establishes routes between the source and the group members. Multicast data transmitted by the source is only retransmitted by nodes which have the forwarding flag set. Periodically, but only while the overlay is in effective use, discovery phases are repeated to refresh the mesh overlay. This period is defined as the discovery phase interval. The mesh overlay is maintained only in soft-state and fades away when no longer in use. No explicit control packets are thus required to leave the group.
Implementations. We implemented ODMRP on the SunSPOT [11] platform, stable RED version 090706.
The SunSPOT version consumes 920 bytes, or 0.17% of the total available RAM on a SunSPOT, compared to the 1040 bytes consumed by the built-in LQRP (Link Quality Routing Protocol) unicast routing protocol, which is a derivative of AODV (Ad hoc On-demand Distance Vector) [7]. To test the performance of our implementation in larger networks, we ported it to a Cooja [6] compatible version. Cooja is a simulator primarily created to simulate sensor nodes running the Contiki operating system [1]. However, since it is developed in Java, it also allows the simulation of networks of Java-based application-level sensor nodes.
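The forwarding-group soft state described above can be sketched in a few lines of Java. This is a minimal illustration of the protocol steps, not the SunSPOT implementation; the 60-second lifetime matches the forwarding group refresh interval used in our tests.

```java
// A minimal sketch (not the SunSPOT implementation) of the ODMRP
// forwarding-group soft state on a single node.
public class OdmrpNode {
    // Forwarding group refresh interval; 60 seconds in our experiments.
    static final long FORWARDER_LIFETIME_MS = 60_000;

    private boolean forwardingFlag = false;
    private long flagSetAtMs;

    /** Forwarding a Join Reply means this node lies on a path between
     *  the source and at least one group member. */
    public void onJoinReplyForwarded(long nowMs) {
        forwardingFlag = true; // soft state: no explicit leave packet required
        flagSetAtMs = nowMs;
    }

    /** Multicast data is retransmitted only while the soft state is fresh;
     *  otherwise the forwarding flag has faded away. */
    public boolean shouldForwardData(long nowMs) {
        return forwardingFlag && (nowMs - flagSetAtMs) < FORWARDER_LIFETIME_MS;
    }
}
```

Because the state is soft, no control traffic is needed when the overlay falls out of use; a node simply stops forwarding once its flag expires.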
2.2 Tests and Simulations
Driven by the interaction patterns introduced in Section 1, the variables in scope are the amount of data to exchange and the frequency of interactions with a
62
K. Thoelen, S. Michiels, and W. Joosen
Fig. 1. The 3x3 grid topology with an indication of the radio range around node 5 (a), and the accompanying graph of the number of transmissions needed to deliver a certain amount of data packets to the specified group (b)
multicast group. We define efficiency as the cumulative number of transmissions of all nodes in the network needed to deliver a certain number of data packets to a group of nodes. This includes the transmissions of control as well as data packets. We compare the efficiency of the following cases: the SunSPOT implementation of ODMRP, the Cooja port of ODMRP, the theoretical optimum of ODMRP, and a simple broadcast mechanism. In our experiments, the discovery phase interval of ODMRP was kept constant at 30 seconds and the forwarding group refresh interval at 60 seconds. We consider node mobility as future work. The theoretical optimum of ODMRP is the case in which ODMRP uses the minimum amount of protocol overhead to select the minimum number of forwarders. This case was used as a point of reference for the SunSPOT and Cooja implementations. In the case of the simple broadcast mechanism, we calculated the number of transmissions executed when every node retransmits every packet it receives once. The number of transmissions per packet is thus equal to the number of nodes in the network. The SunSPOT network under test was a 3x3 grid (see Figure 1(a)). Node 1 was used as the source of multicast data, with nodes 3, 7 and 9 as group members. In Cooja, we repeated the tests on a 3x3 grid with the same topology and radio range. As can be seen in Figure 1(b), the difference in efficiency was small and can be attributed to the difference in the MAC and PHY layers used in the two cases. This causes the two setups to favor slightly different overlay meshes, as members reply to the first Join Query they receive. The difference from the theoretical case stems from the fact that, in theory, the ideal mesh overlay is created with the fewest forwarders; this is not always the case in the SunSPOT and Cooja versions.
Middleware for Adaptive Group Communication in WSNs
63
Fig. 2. Topologies of the 10x10 grid networks. The different sets of members (shaded circles) were chosen to represent a very dispersed group (a), a very condensed group (b) and an evenly spread-out group (c). Panel (a) also shows the radio ranges used during subsequent simulations.
Additional simulations were performed on a 5x5 grid and a 10x10 grid. In the first case we used the same radio range and the same network configuration as previously (one corner node as the source and all other corner nodes as members of the group). The 10x10 grid, however, was used to perform more extensive simulations with various network configurations. Figure 2 shows the radio ranges and sets of group members used. Node 1 was again used as the source of multicast data and the different sets of members were chosen to represent (a) a very dispersed group, (b) a very condensed group and (c) an evenly spread-out group. To show that our ODMRP implementation is satisfactory and that the measured transmissions effectively deliver the data packets to the group members, we include Table 1. This table shows the delivery ratio, or the average ratio of the number of data packets effectively received by the group members to the number of data packets sent by the source. The theoretical multicast and broadcast cases obviously have a perfect delivery ratio of 1, but the experimental SunSPOT and Cooja cases come very close.

2.3 Multicast vs. Broadcast: Protocol Overhead
In this subsection we evaluate how the two alternatives of multicasting and broadcasting relate to each other in terms of protocol overhead.

Table 1. Delivery ratios of the tests and simulations

Network  Implementation  Delivery Ratio
3x3      SunSPOT         0.976
3x3      Cooja           0.994
5x5      Cooja           0.999
10x10    Cooja           0.980
Figure 3(a) and Figure 3(b) show the measured (or required) number of transmissions per number of data packets sent out by the source, with and without the broadcast results respectively. These specific figures are the average results of measurements in the 10x10 grid network, with medium radio range and evenly spread group members; all other cases, however, resulted in similar graphs. The simple, yet wasteful, broadcast mechanism has no protocol overhead. Every data packet transmitted by the source is retransmitted exactly once by every node that receives it. This is represented by the linear function nx, in which n is the number of nodes in the network and x the number of data packets transmitted by the source. In the multicast case, the protocol overhead introduces a variable factor. A discovery phase is executed before the first data packet is transmitted and is periodically repeated. Such a discovery phase requires a number of transmissions equal to O:

O = JQ_N + JR_N + Ack_N    (1)

where JQ_N, JR_N and Ack_N are defined as the total number of Join Queries, Join Replies and Acknowledgements transmitted by all nodes in the network. JQ_N is equal to n, the number of nodes in the network. JR_N and Ack_N, however, depend on the actual overlay mesh that is created at execution time. They are both functions of the size of the network, the position and number of members, the selected forwarders, and the radio range of the nodes. The total number of transmissions in the multicast case, during a single discovery interval, can be represented by the function O + kx, in which k is the number of nodes selected as forwarders. We can compare both functions with the graphs in Figure 3(a). As the number of forwarders in the multicast case is typically much smaller than the total number of nodes, and thus k < n, we see that the slope of the multicast graph is
Fig. 3. The number of transmissions needed per number of data packets sent out by the source, with (a) and without (b) the broadcast results, for the 10x10 grid network with medium radio range and evenly spread group members
much less steep than the broadcast graph. A second discovery phase, performed between the 250th and 500th data packets, causes the multicast graphs to bend a little. In the theoretical case, the optimal set of forwarders is reselected, so the graph bends only slightly because of the additional protocol overhead. In the practical cases, however, the selection of a different set of forwarders causes a greater bend in the graph. Subsequent discovery phases influence the slope to a lesser extent, since after two discovery phase intervals forwarders are deactivated. From Figure 3(a), we can conclude that multicast generally requires far fewer transmissions than broadcast, especially when the amount of data to disseminate is large. For a small number of data packets, however, the benefit of multicast is less clear. Two of the three interaction patterns we identified involve the dissemination of small amounts of data. In the following subsection we therefore investigate the influence of protocol overhead on the number of transmissions needed to disseminate an increasing number of data packets.
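The two cost functions can be made concrete in a short sketch, using our notation with n, k, x and the overhead O as defined above. The overhead figures in the usage example are hypothetical, not measured values.

```java
// Sketch of the transmission-count model from this subsection:
// broadcast needs n*x transmissions, multicast needs O + k*x,
// with O = JQ_N + JR_N + Ack_N (Equation 1).
public class TransmissionModel {
    /** Broadcast: each of the n nodes retransmits each of the x data packets once. */
    public static long broadcastTx(int n, long x) {
        return (long) n * x;
    }

    /** Multicast during one discovery interval: a fixed discovery overhead O,
     *  after which only the k selected forwarders retransmit each data packet. */
    public static long multicastTx(long jqN, long jrN, long ackN, int k, long x) {
        long o = jqN + jrN + ackN; // Equation (1)
        return o + (long) k * x;
    }
}
```

For k much smaller than n, the multicast line O + kx crosses below the broadcast line nx after only a few packets, which is exactly the efficiency threshold studied next.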
2.4 The Efficiency Threshold
In this section, we empirically determine the number of data packets that need to be disseminated by the source in order to compensate for the multicast protocol overhead. We define this as the efficiency threshold, visualized in Figure 4. The horizontal axis shows the number of data packets disseminated by the source. On the vertical axis, we depict the total number of transmissions (control and data packets) as a percentage of the broadcast constant. To compensate for the various network sizes, we normalize the broadcast constant to 100%. Only a single discovery phase is executed in the experiments. It takes place before the first data packet is transmitted, but its protocol overhead is divided over the number of data packets disseminated by the source. For a single data packet, the total number of transmissions is thus the sum of all protocol overhead transmissions and the data packet transmissions. For a larger number
Fig. 4. The normalized number of transmissions needed per data packet, for an increasing number of data packets sent by the source
Table 2. The average efficiency threshold for the various grids under test

Network  Threshold (in number of data packets)
3x3      7
5x5      5
10x10    2
of data packets, the protocol overhead transmissions are divided over all the data packets, thereby reducing their impact. In Figure 4, the curves represent the average of all our tests and simulations on the various grids (see Section 2.2). We see that for a few packets, the multicast protocol overhead is not justifiable and broadcast is the better option. The multicast curves, however, quickly drop below the broadcast constant and level off to the average number of forwarders selected by the protocol in each of the three grids. The number of data packets on the horizontal axis that corresponds to the intersection of the multicast curve with the broadcast constant is the efficiency threshold (see Table 2). Its variation is explained in the following subsection.
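The normalization behind Figure 4 can be sketched as follows. The overhead and forwarder counts in the usage example are hypothetical, not measured values; the method names are ours.

```java
// Sketch of the normalization used in Figure 4 (our reconstruction):
// total transmissions per data packet, as a percentage of the broadcast
// constant n (which is normalized to 100%).
public class EfficiencyCurve {
    public static double normalizedCostPercent(long overhead, int forwarders,
                                               long packets, int n) {
        // A single discovery phase precedes the first packet; its overhead
        // is divided over all packets sent.
        double perPacket = (overhead + (double) forwarders * packets) / packets;
        return 100.0 * perPacket / n;
    }

    /** Smallest packet count for which the multicast curve drops below the
     *  broadcast constant. Assumes k < n so the curve eventually falls below 100%. */
    public static long efficiencyThreshold(long overhead, int forwarders, int n) {
        long x = 1;
        while (normalizedCostPercent(overhead, forwarders, x, n) >= 100.0) x++;
        return x;
    }
}
```

For instance, with a hypothetical overhead of 150 transmissions and 20 forwarders in a 100-node grid, the curve starts well above 100% at one packet and crosses the broadcast constant at two packets.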
2.5 Network Influences on the Efficiency Threshold
In this subsection we look at how various network parameters influence the efficiency threshold and explain its variation between the various grids. The discovery phase interval and periodic data interval have no influence on the efficiency threshold: a change in either of them does not affect the number of data packets required to justify the multicast protocol overhead. For a fixed group of members, the radio range also has no practical influence. It affects the number of forwarders selected and thus the overall total of transmissions, but in most practical cases this results in only a few more transmissions per data packet. Due to the steep decline of the curves in Figure 4, this has practically no influence on the efficiency threshold. The member distribution and member ratio (the percentage of members in the network) in general also do not influence the efficiency threshold. The contribution of the broadcast Join Queries is too large for the variation in members and forwarders to have a clear impact on the total number of transmissions. The number of transmissions per data packet does change slightly in most cases, but not enough to influence the efficiency threshold. The 3x3 and 5x5 grids, however, expose extreme cases which do clearly affect the efficiency threshold. The small size of the grid inherently causes the member ratio to be higher. In combination with the small radio range used, this causes nearly all nodes to be either a member or a forwarder, which drastically increases the number of transmissions during both the discovery phase and the subsequent data dissemination. As a result, the efficiency threshold increases. In larger grids this effect is also present, but it only appears in extreme situations with a
very low radio range combined with a very high member ratio. The efficiency thresholds in Table 2 were empirically determined and do not take such extreme situations into account.

2.6 Lessons Learned
With respect to the results described in the previous subsections, we deduce a number of rules which guide us towards efficient use of multicast. For this purpose, we define the following variables:

d = the discovery phase interval
e = the efficiency threshold
p = the periodic data interval

Using these variables, we can state the following rules:

1. For a number of data packets below the threshold e, the protocol overhead is not justified. In this case simply broadcasting is more efficient.
2. For a number of data packets above the threshold e, during a single discovery interval, the multicast protocol overhead is justifiable and multicast becomes the most efficient alternative.
3. For periodic transmissions to a multicast group, we can select a threshold for p below which multicast is the most efficient communication mechanism. Otherwise, broadcast is the more efficient alternative. p must obey the following equation:

p < d / e    (2)
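The rules can be written out directly in Java. This is our illustration, not the middleware code; the usage values d = 30 s and e = 2 correspond to the 10x10 grid configuration used in our evaluation.

```java
// The three selection rules above, with d, e and p as defined in the text.
public class SelectionRules {
    /** Rules 1 and 2: multicast pays off only above the efficiency threshold e. */
    public static boolean useMulticastForBulk(int dataPackets, int e) {
        return dataPackets > e;
    }

    /** Rule 3: periodic traffic with interval p justifies multicast
     *  when p < d / e (Equation 2). */
    public static boolean useMulticastForPeriodic(double p, double d, int e) {
        return p < d / e;
    }
}
```

Intuitively, rule 3 requires that enough periodic packets fall within one discovery phase interval d to amortize the overhead of refreshing the overlay.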
3 Communication Management Middleware
In this section we introduce our Communication Management Middleware (CMM), which builds on the conclusions of the previous section. It efficiently delivers application data to a specified destination, taking into account the interaction pattern and network context. The CMM is situated above a traditional networking layer and below any application functionality (see Figure 5). The networking layer provides an extensible set of communication mechanisms such as unicast, broadcast and multicast. The need for a middleware layer arises from the fact that neither the applications nor the multicast protocols should be held responsible for selecting the most efficient communication mechanism. Applications may target their data to a group of interested nodes, but should not be concerned with how the data is delivered to this group. Multicast protocols, in turn, allow for the dissemination of data to a group of nodes but are not, and should not be, concerned with whether they are to be used or not. We continue this section with an overview of the key CMM building blocks and a discussion of the adaptations required at the networking layer.
Fig. 5. Architectural representation of the Communication Management middleware in the network stack. The white blocks depict the contributions of this paper.
3.1 Component View
The Communication Management Middleware consists of a Data and a Management Interface, a Network Monitor, a Context representation and a Decision Tree. The Data Interface (see Table 3) is used by applications to pass data and the recipient's address to the middleware. The address can be a unicast, broadcast or multicast address; further delivery of the data is taken care of by the CMM. Through the same interface, applications can also notify the middleware of periodic transmissions by registering themselves together with the multicast address concerned and the sending interval. The Management Interface and Network Monitor are both used to update the context representation. The latter contains a small database with tuples for various network parameters such as the network size, the group size and the radio range. These parameters are used to determine the efficiency threshold and can be manually updated by a network manager through the Management Interface (see Table 4). The context representation is also interfaced with the various communication mechanisms in the network layer to query their state or update various settings. With regard to the multicast mechanism, for instance, it can check for active multicast overlays or update the discovery phase interval.

Table 3. The Data Interface

void send(NetworkAddress address, byte[] data)
void registerPeriodicSender(NetworkAddress address, int interval, int ApplicationID)
void unregisterPeriodicSender(NetworkAddress address, int ApplicationID)
Table 4. The Management Interface

void setNetworkSize(int size)
int getNetworkSize()
void setGroupSize(NetworkAddress address, int size)
int getGroupSize(NetworkAddress address)
void setRadioRange(int range)
int getRadioRange()
void setEfficiencyThreshold(int value)
int getEfficiencyThreshold()
We conceptually include the Network Monitor as a future automated alternative to the Management Interface, which relieves the network manager of the burden of manually updating the context representation. Further research is, however, required on how the network parameters can be extracted from the network in an automated manner. We do foresee that the Network Monitor will not be installed in the same form on all nodes of the network: a full-fledged Network Monitor will operate on the more powerful nodes, such as gateways, while lightweight monitors or proxies will be deployed on the more resource-constrained nodes. The logic of the CMM is defined in the Decision Tree. The interaction pattern currently in use is reflected by the data received via the Data Interface. In combination with the information in the context database, the Decision Tree evaluates which communication component at the network layer will deliver the data in the most efficient manner. A detailed representation of the Decision Tree is presented in Figure 6. If the destination address specified is a broadcast or unicast address, the corresponding mechanism of the network layer is activated and the middleware layer simply passes the data and destination address to the network layer. The real logic comes into play when the destination address is a multicast address. In this case, the size of the message is first compared against the product of the efficiency threshold e and the available payload size of the network packets. The
Fig. 6. The Communication Management Middleware decision tree
latter product indicates the amount of data that results in enough transmitted data packets to justify the protocol overhead of a discovery phase. If the message is large enough, a discovery phase is executed if no multicast overlay is active, and the message is fragmented and transmitted via the multicast mechanism. If the message is smaller than the mentioned product, we check whether an overlay is active, in which case we multicast anyway. If no multicast overlay is active, we further check whether the multicast group is registered as the target of periodic communication with a high enough periodicity. If so, we multicast as well; if not, we broadcast the message over the network and let the recipients detect whether they should process it.
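The decision tree can be sketched as a single selection function. This is our simplification: booleans stand in for the context database and overlay-state lookups the middleware performs, and the 80-byte payload size used in the example is an assumption, not a value from our experiments.

```java
// A runnable sketch of the Figure 6 decision tree.
public class DecisionTree {
    public enum Mechanism { UNICAST, BROADCAST, MULTICAST, FRAGMENT_AND_MULTICAST }

    public static Mechanism select(boolean broadcastAddr, boolean multicastAddr,
                                   int messageSize, int payloadSize, int effThreshold,
                                   boolean overlayActive, boolean highPeriodicity) {
        if (broadcastAddr) return Mechanism.BROADCAST;
        if (!multicastAddr) return Mechanism.UNICAST;
        // Multicast address: enough data packets to justify a discovery phase?
        if (messageSize > payloadSize * effThreshold)
            return Mechanism.FRAGMENT_AND_MULTICAST;
        if (overlayActive) return Mechanism.MULTICAST;    // overlay already paid for
        if (highPeriodicity) return Mechanism.MULTICAST;  // registered periodic group
        return Mechanism.BROADCAST;  // recipients filter via the multicast flag
    }
}
```

For example, a 4 KB component destined for a multicast group lands in the Fragment and Multicast leaf, while a single small reading with no active overlay and no registered periodicity falls through to Broadcast.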
3.2 Network Layer Adaptations
The network layer provides an extensible set of communication mechanisms. Besides the unicast, broadcast and multicast mechanisms depicted in Figure 5, other mechanisms such as gossip protocols and anycast can be provided. To allow multicast data to be broadcast, the broadcast mechanism has to be adapted to reflect the multicast membership of nodes. The Packetizer component adds a small header to all broadcast traffic. This header consists of a multicast flag and an optional multicast address. If the broadcast data is targeted at a multicast group, the multicast flag is set to 1 and the multicast address is added to the header. In the case of a normal broadcast, the multicast flag is set to 0 and no multicast address is added. At the receiving end, the Dispatcher component checks the destination address of all incoming packets. Packets destined for the node's unicast address or one of its multicast addresses are dispatched to the upper-layer applications for further processing. When a broadcast packet is received, its multicast flag is inspected. If the flag is not set, the packet is dispatched to the upper layers as a normal broadcast packet. If the multicast flag is set, the packet is only dispatched to the upper layers if the node itself is a member of the specified multicast group. The multicast flag thus allows receiving nodes to refrain from processing a broadcast packet intended for a multicast group they are not a member of. The Dispatcher furthermore makes appropriate use of the Unicast, Broadcast and Multicast mechanisms for packet forwarding to the (other) destination(s).
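The Packetizer header and the Dispatcher check can be sketched as follows. The 2-byte multicast address width is our assumption for illustration; the paper does not specify the encoding.

```java
import java.util.Set;

// Sketch of the Packetizer header and the Dispatcher membership check.
public class Packetizer {
    /** Prepend the multicast flag and, if applicable, the multicast address.
     *  A negative address denotes a plain broadcast. */
    public static byte[] addHeader(byte[] payload, int multicastAddress) {
        boolean toGroup = multicastAddress >= 0;
        int headerLen = toGroup ? 3 : 1;
        byte[] packet = new byte[headerLen + payload.length];
        packet[0] = (byte) (toGroup ? 1 : 0);            // multicast flag
        if (toGroup) {
            packet[1] = (byte) (multicastAddress >>> 8); // address, big-endian
            packet[2] = (byte) multicastAddress;
        }
        System.arraycopy(payload, 0, packet, headerLen, payload.length);
        return packet;
    }

    /** Dispatcher side: process the packet, or drop it when it targets a
     *  multicast group this node is not a member of. */
    public static boolean shouldDispatch(byte[] packet, Set<Integer> memberships) {
        if (packet[0] == 0) return true; // normal broadcast: always dispatch
        int addr = ((packet[1] & 0xFF) << 8) | (packet[2] & 0xFF);
        return memberships.contains(addr);
    }
}
```

The header costs one byte for plain broadcasts and three bytes for group-targeted ones, which keeps the per-packet overhead small on constrained radios.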
4 Prototype Evaluation
We implemented a prototype version of the Communication Management Middleware on the SunSPOT platform (version Red-090706) and ported it to the Cooja simulator. The SunSPOT prototype consumes 124 bytes of RAM and 2997 bytes of flash memory on a SunSPOT. This is, respectively, 0.02% of the 512 KB of available RAM and 0.07% of the 4 MB of available flash memory on a SunSPOT. We consider
Fig. 7. The number of transmissions needed to perform the actions of the described use case
this very low given the efficiency advantage that is gained. Furthermore, it indicates the applicability of our approach to more constrained sensor devices. The following use case serves as a practical evaluation. In the 10x10 grid depicted in Figure 2(a), node 1 is used as a management gateway to the WSN, through which deployments and reconfigurations are performed. The small radio range depicted in the same figure is used and the discovery phase interval is set to 30 seconds. We performed the following set of actions using the Cooja prototype.

1. Deploy a temperature monitoring service to the group of opposite corner nodes 10, 91 and 100. The size of the component is 4 KB, which is fragmented over 50 packets.
2. During a 200-second period, the temperature monitoring services publish temperature readings to a group consisting of nodes 5 and 51. The temperature readings are published with a periodic interval of 10 seconds and are small enough to fit in a single packet.
3. A reconfiguration is executed which changes the periodic interval of the temperature publications from 10 seconds to 30 seconds. This reconfiguration is performed on nodes 10, 91 and 100 and requires the dissemination of a single data packet.
4. For another 200 seconds, the temperature components publish their readings, this time using the new periodic interval of 30 seconds.

We compare the results of our middleware layer to the alternatives of always multicasting and always broadcasting. Figure 7 shows the total number of transmissions in the network needed to execute this set of actions. During the first action, the evaluation of the decision tree results in the selection of the Fragment and Multicast leaf. The temperature monitoring component is thus multicast to the group members, which requires an average of 736 transmissions, or about 15% of the broadcast alternative.
During the second action, the periodic interval of the temperature readings is short enough to justify the protocol overhead of multicasting. When the Communication Management Middleware is used, the components register their periodic publication to the defined group and the evaluation of the decision tree selects multicast as the most efficient alternative. This results in an average of 3124 transmissions, or about 52% of the broadcast alternative. Concerning the third action, the single packet needed for reconfiguration causes the Communication Management Middleware to refrain from using the multicast mechanism. The efficiency threshold e for a 10x10 grid is 2, as shown in Table 2, so the evaluation of the decision tree results in the selection of a Broadcast leaf. This results in 100 transmissions, compared to an average of 161 transmissions if multicast were used; this equals 62% of the multicast alternative, a reduction of 38%. During the fourth action, the temperature components are registered with the relatively high periodic interval of 30 seconds. This means that if multicast were used, a discovery phase would have to be executed for each transmission of a temperature reading. The evaluation of the decision tree, however, results in the selection of a Broadcast leaf. This results in 1800 transmissions, which is about 77% of the transmissions needed if multicast had been used. We can conclude that the Communication Management Middleware always selects the most efficient mechanism for group communication, and does so at a very low cost in terms of memory consumption. The last two actions indicate the real added value of our middleware. While a multicast mechanism can be provided, and will in some situations be the most efficient alternative, it is rather wasteful to use it by default for group communication, certainly considering that the overall task of a WSN is generally the periodic gathering of sensor data. The introduction of a middleware layer that takes the current context and interaction patterns into account shows that considerable reductions in transmissions are possible.
5 Related Work
The work described in this paper is a first step towards more elaborate group communication support in the field of WSNs. While considerable work has been done on multicast routing in WSNs [10], little attention has been paid to the runtime selection and efficient application of these protocols. In this section we discuss related work that adds to the functionality of the Communication Management Middleware. First of all, the functionality of the Communication Management Middleware can be expanded to include group definition and creation. This would allow the middleware to create groups outside of the awareness of the applications. In [12], a reconfigurable group management middleware is presented. Group management is defined as the combination of identifying the need for new groups and discovering their members. The authors argue that different strategies need to be employed, depending on the user services requiring group communication
and the system conditions (power level, connectivity, bandwidth, etc.). A similar mechanism can be included in our context-aware middleware. Secondly, as a possible outcome of the decision tree in Figure 6, broadcasting is a frequent alternative to multicasting. The efficiency of group communication, and of broadcasting in general, can be further improved by supporting more intelligent broadcast mechanisms such as gossiping [2] or clustering [4]. A further efficiency gain can be realized by adopting a framework like ManetKit [8]. While we consider the multicast routing protocols to be black boxes, ManetKit provides reusable components and supports composition, decomposition and hybridisation of possibly multiple MANET routing protocols. It allows for runtime transitions between unicast routing protocols while retaining reusable protocol state. Combining a multicast version of ManetKit with our Communication Management Middleware would enable finer-grained mechanism selection by replacing the fixed set of communication mechanisms with runtime-composable alternatives that are better adapted to the current usage and network conditions. Thirdly, improving the context awareness of the Communication Management Middleware requires more elaborate studies of how multicast protocols are affected by the network context. Additionally, further work is needed on network monitoring that extracts this information from the network [5]. Node mobility is just one of the context parameters that need to be studied.
6 Conclusion
This paper presented the Communication Management Middleware, which enables the selection of the most appropriate group communication mechanism based on the current interaction pattern and network context. The Communication Management Middleware incorporates rules that follow from an efficiency analysis of the ODMRP multicast protocol. We evaluated the middleware by performing a set of deployment, reconfiguration and periodic data monitoring actions. In all cases, the middleware selected the most efficient alternative, thereby considerably reducing the network overhead.

Acknowledgement. This research is partially funded by the Interuniversity Attraction Poles Programme of the Belgian State, Belgian Science Policy, and by the Research Fund K.U.Leuven. It is conducted in the context of the IWT-SBO STADiUM project No. 80037 [3].
References

1. Dunkels, A., Grönvall, B., Voigt, T.: Contiki - a lightweight and flexible operating system for tiny networked sensors. In: Proceedings of the First IEEE Workshop on Embedded Networked Sensors, Tampa, Florida, USA (November 2004)
2. Haas, Z.J., Halpern, J.Y., Li, L.: Gossip-based ad hoc routing. In: Proceedings of IEEE INFOCOM 2002, The 21st Annual Joint Conference of the IEEE Computer and Communications Societies, June 23-27. IEEE, New York (2002), ISBN 0-7803-7477-0
3. IWT STADiUM project 80037, Software Technology for Adaptable Distributed Middleware (2010), http://distrinet.cs.kuleuven.be/projects/stadium/ (visited June 2010)
4. Kwon, T.J., Gerla, M.: Efficient Flooding with Passive Clustering (PC) in Ad Hoc Networks. ACM SIGCOMM Computer Comm. Rev. 32(1), 44–56 (2002)
5. Lee, W.L., Datta, A., Cardell-Oliver, R.: Network Management in Wireless Sensor Networks. In: Handbook of Mobile Ad Hoc and Pervasive Communication (2007)
6. Österlind, F., Dunkels, A., Eriksson, J., Finne, N., Voigt, T.: Cross-level sensor network simulation with COOJA. In: SenseApp 2006, Tampa, Florida, USA (2006)
7. Perkins, C., Belding-Royer, E., Das, S.: Ad hoc On-Demand Distance Vector (AODV) Routing. RFC 3561 (2003)
8. Ramdhany, R., Grace, P., Coulson, G., Hutchison, D.: MANETKit: Supporting the Dynamic Deployment and Reconfiguration of Ad-Hoc Routing Protocols. In: Proceedings of IFIP/ACM/USENIX Middleware 2009 (2009)
9. Sá Silva, J., Camilo, T., Pinto, P., Ruivo, R., Rodrigues, A., Gaudêncio, F., Boavida, F.: Multicast and IP Multicast support in Wireless Sensor Networks. Journal of Networks 3(3), 19–26 (2008)
10. Simek, M., Komosný, D., Burget, R., Sá Silva, J.: Multicast Routing in Wireless Sensor Networks. In: Telecommunication and Signal Processing (2008)
11. SunSPOT, http://www.sunspotworld.com/ (visited June 2010)
12. Vieira, M.S., Rosa, N.S.: A reconfigurable group management middleware service for wireless sensor networks. In: Proceedings of the 3rd International Workshop on Middleware for Pervasive and Ad-Hoc Computing, Grenoble, France (2005)
13. Wagenknecht, G., Anwander, M., Brogle, M., Braun, T.: Reliable Multicast in Wireless Sensor Networks. In: 7. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze, Berlin, Germany, September 25-26, pp. 69–72. Freie Universität Berlin, Fachbereich Mathematik und Informatik, Tech. Report B 08-12 (2008)
14. Yi, Y., Lee, S., Su, W., Gerla, M.: On-Demand Multicast Routing Protocol (ODMRP) for Ad Hoc Networks. IETF Draft, draft-ietf-manet-odmrp-04.txt (February 2003)
A Middleware Framework for the Web Integration of Sensor Networks Hervé Paulino and João Ruivo Santos CITI / Departamento de Informática Faculdade de Ciências e Tecnologia, FCT Universidade Nova de Lisboa 2829-518 Caparica, Portugal
[email protected]
Abstract. This paper introduces SenSer, a generic middleware framework that allows for the Web access and management of sensor networks by virtualizing their functionalities as services, in a way that is independent of both the programming language and the sensor network development platform. We present the SenSer architectural model and provide a concrete Java-based implementation that exposes the framework as two Web services, cleanly separating regular user operations from administration operations. This prototype implementation has been validated with the development of two applications, and evaluated against the initial functional and non-functional requirements, namely modularity, performance and scalability. We have also performed two integration exercises targeting Callas [17] and SWE [3] networks. Keywords: Wireless sensor networks, Web services, Middleware.
1 Introduction
Sensor networks are currently one of the hottest topics in computer science research, spanning areas as diverse as programming language design and implementation, computer networks, information processing, security, and physical sensor manufacturing, to name a few. The focus of our work is on Web integration, namely on the remote Web access and management of such networks. Since they are commonly deployed at distant and/or unreachable locations (e.g. environmental monitoring), the widespread use of wireless sensor networks is bound to the ability to present them, to the application developer, as just another easily composable software component. The main challenges in this endeavor are to provide a generic Web-accessible interface for common sensor network functionalities, and to cope with heterogeneity at both endpoints of the interaction. The integration of heterogeneous sensor networks is a crucial concern that is still underachieved. There are many operating systems (e.g. TinyOS [14], SOS [13], Contiki [5], and Nano-RK [6]) and programming languages (e.g. nesC [7], Pushpin [16], TinyScript [15], and Callas [17]) available to develop and support
G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 75–90, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
sensor networks, which makes their integration in mainstream programming languages a cumbersome task. Moreover, heterogeneity must also be dealt with on the client side. The lack of standardized interfaces further contributes to the difficulty of systematically incorporating sensor networks in everyday applications. The state of the art in the porting of wireless sensor networks to the Web is almost entirely restricted to the Sensor Web Enablement (SWE) specifications [3] of the Open Geospatial Consortium. As will be further detailed in Section 7, the scope of the SWE specifications is more on the integration of existing sensor networks in a Web of sensors than on the actual client/network interaction. There are other platforms that enable remote access to a sensor network [10,11], but these are custom-made solutions that address a particular setting, and thus do not contribute to solving the problem at hand. Other proposals, such as [1] and [9], focus on sensor network integration. In this paper we propose SenSer, a generic middleware framework that allows for the remote (Web) access and management of sensor networks by virtualizing their functionalities as services, in a way that is independent of both the programming language and the sensor network development platform. The featured operations are divided into two categories: regular user and administrator. Regular users may interact with a network by posting queries, requesting data-streams, subscribing to notifications, and performing operations (if the network contains actuator nodes). Administrators are allowed to configure networks by registering and unregistering sinks, and by reprogramming the network's behavior. The concrete implementation of the SenSer framework relies on the Web service technology, solving client heterogeneity by providing a Web-based interoperable platform.
With SenSer, a sensor network can be integrated into an application as just another Web service, hence enabling its inclusion in business processes by resorting to BPEL [18], a feature most welcome in businesses whose work-flow comprises the monitoring of merchandise. The heterogeneity of the sensor network endpoint is handled by network-specific drivers, which must be compliant with the SenSer sensor network interface. The remainder of this paper is structured as follows: Sections 2 and 3 present, respectively, the SenSer architectural model and its concrete implementation; Section 4 focuses on an application example; Section 5 evaluates the framework against the functional and non-functional requirements; Section 6 presents two integration exercises with Callas networks and with SWE-compliant sensor webs; Section 7 discusses related work; and, finally, Section 8 presents our conclusions and guidelines for future work.
2 The SenSer Middleware Framework
The main functional requirements of the SenSer middleware framework are: 1. to virtualize one or more sensor networks as a set of World Wide Web accessible services; 2. to seamlessly integrate multiple heterogeneous sensor networks; 3. to collect data from the registered sensor networks, either in real-time or by retrieving archived data, and; 4. to manage the middleware layer and the registered networks. The main non-functional architectural requirements are: 1. modularity
Fig. 1. The SenSer architectural model. (The figure shows the three layers: a Presentation Layer with the WSN Service and WSN Admin Service; a Logic Layer with the Middleware Manager, Async Message Producer, Filter Manager, Notification Listener, Stream Listener, Query Manager and Network Registry Manager; and a Data Layer with the Repository Manager and Network Manager, connected to the repository and the sensor networks.)
in the design and configuration of the framework; 2. scalability in the handling of large numbers of client requests and registered networks, and; 3. performance in the processing of client requests. The proposed architecture comprises several components organized in a three-tier model that, as illustrated in Figure 1, cleanly separates client interaction, logic, and data management. The lower layer manages the two possible data sources: the registered networks or a repository of previously archived data. In this section we briefly describe each component, listing the interfaces of only the ones that are visible to the outside world. The complete specification can be found in [20].

2.1 Presentation Layer
This layer exposes a set of sensor networks as two service interfaces that cleanly separate the operations available to regular users and administrators. SenSer manipulates two concepts: the network type, which stands for a kind of sensor network, e.g. a temperature or air-humidity monitoring network, and; the sink, which abstracts a sink of a particular network type. Sink identifiers are qualified with their network type (sinkId@networkType), being that, by itself, networkType denotes all the sinks of that given type. snId denotes the whole identifier (sink and network type included) and, for the sake of simplicity, is throughout this text abusively referred to as sink.

Regular User Service: Service WSNService (Listing 1)¹ features the operations available to regular users: getNetworkTypes returns the identifiers of the network types currently registered; getSinks returns the identifiers of the sinks of either one or all network types; query posts a query on a given sink; requestStream and requestNotification request a topic upon which the stream or the notification service can be subscribed; subscribe and unsubscribe manage the actual topic subscription; getSensorNetworkInterface retrieves the interface supported by the sinks of a network type and the respective parameters; and, finally, executeOperation executes an operation on a network, via its sink.

interface WSNService {
  NetworkTypeId[] getNetworkTypes();
  SinkId[] getSinks();
  SinkId[] getSinks(NetworkTypeId networkType);
  Observation query(SinkId sink, Query query, Date date);
  Topic requestStream(SinkId sink, Condition condition, Date startDate, Date endDate, int rate);
  Topic requestNotification(Condition condition, Date validityDate);
  boolean subscribe(Topic topic);
  boolean unsubscribe(Topic topic);
  NetworkInterface getSensorNetworkInterface(NetworkTypeId networkType);
  void executeOperation(SinkId sink, Operation op, Parameter[] params);
}

Listing 1. Interface WSNService

¹ For the sake of readability, the throwable exceptions were removed from all listings.
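As an aside, the sinkId@networkType addressing convention lends itself to a small helper. The class below is a hypothetical illustration of that convention, not part of the actual SenSer API:

```java
// Hypothetical helper illustrating the sinkId@networkType addressing
// convention described above; not part of the actual SenSer API.
public class SinkIds {
    // Returns the network-type part of a qualified sink identifier,
    // or the whole string when it names a network type on its own.
    public static String networkTypeOf(String id) {
        int at = id.indexOf('@');
        return at < 0 ? id : id.substring(at + 1);
    }

    // Returns the sink part, or null when the identifier denotes
    // all the sinks of the given network type.
    public static String sinkOf(String id) {
        int at = id.indexOf('@');
        return at < 0 ? null : id.substring(0, at);
    }

    public static void main(String[] args) {
        System.out.println(networkTypeOf("LivingRoom@LightSwitchingNetwork")); // LightSwitchingNetwork
        System.out.println(sinkOf("LivingRoom@LightSwitchingNetwork"));        // LivingRoom
    }
}
```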
Conditions are used to parameterize both data-stream and notification requests. In the context of data-streams, the condition, along with the specification of a maximum data rate, operates as a filter on the data to be received. When it comes to notification requests, the condition specifies the scenarios in which a notification event is to be triggered. Notifications may encompass more than a single sink or network type, thus no explicit target sink argument is included. Condition specification borrows the conditional and logic expression syntax used in Java and C. An example that operates on all sinks of type Temperature and Light follows:

((Temperature > 30 && Light >= 70) || (Temperature < 5 && Light < 20))

Administration Service: Service WSNAdminService (Listing 2) specifies the administration operations. This includes the getNetworkTypes and getSinks operations described above plus: registerNetworkType and unregisterNetworkType to manage the network type registry (the registration requires a set of properties and the network's interface); registerSink and unregisterSink to manage the sink registry (the registration requires the sink's network type, an adapter to bridge the communication between the framework and the sink (more on this in Subsection 2.2), and a set of properties used to provide, for instance, the location of the sink); install to reprogram a network by deploying a new program in the target sink; setFilter to set the filter for a given sink (more on this in Subsection 2.2); and, finally, operations to manage the configuration (properties map) of the middleware, of a registered network type, or of a registered sink. We only list the operations available for managing the configuration of the middleware: getPropertyList, getPropertyValue, and setPropertyValue. The remainder are analogous.
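A minimal sketch of how a condition such as the Temperature/Light example above could be evaluated against the latest readings follows. It handles only conjunctions of simple comparisons; the real SenSer parser also supports ||, parentheses and sink-qualified names, which this illustration omits:

```java
import java.util.Map;

// Sketch of a condition evaluator for conjunctions of simple
// comparisons, e.g. "Temperature > 30 && Light >= 70".
// This is an illustration, not the actual SenSer condition parser.
public class ConditionSketch {
    public static boolean eval(String condition, Map<String, Double> readings) {
        for (String clause : condition.split("&&")) {
            String[] tok = clause.trim().split("\\s+"); // e.g. ["Temperature", ">", "30"]
            double lhs = readings.get(tok[0]);
            double rhs = Double.parseDouble(tok[2]);
            boolean holds;
            if (">".equals(tok[1]))       holds = lhs > rhs;
            else if (">=".equals(tok[1])) holds = lhs >= rhs;
            else if ("<".equals(tok[1]))  holds = lhs < rhs;
            else if ("<=".equals(tok[1])) holds = lhs <= rhs;
            else if ("==".equals(tok[1])) holds = lhs == rhs;
            else throw new IllegalArgumentException("bad operator: " + tok[1]);
            if (!holds) return false; // a conjunction: every clause must hold
        }
        return true;
    }
}
```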
Note that the management of the inclusion in the framework of the adapters that bridge sink communication is not included in the interface. This happens because these operations require the uploading of code to the middleware (more on this in Subsection 2.2). This is naturally implementation dependent; for instance, a Java-based implementation will not accept C# code.

interface WSNAdminService {
  NetworkTypeId[] getNetworkTypes();
  SinkId[] getSinks();
  SinkId[] getSinks(NetworkTypeId networkType);
  void registerNetworkType(NetworkTypeId netType, NetworkInterface interf, Map<String, String> props);
  void unregisterNetworkType(NetworkTypeId networkType);
  void registerSink(SinkId sink, NetworkTypeId netType, AdapterId adapter, Map<String, String> props);
  void unregisterSink(SinkId sink);
  void install(SinkId sink, byte[] program);
  void setFilter(SinkId sink, Pipeline pipeline);
  Map<String, String> getPropertyList();
  String getPropertyValue(String property);
  void setPropertyValue(String property, String value);
  ...
}

Listing 2. Interface WSNAdminService
2.2 Logic Layer
The components of the logic layer provide the core functionalities of the framework. A brief description of each follows.

NetworkRegistryManager: all requests are processed by the logic layer, which interacts with the data layer to relay the queries or to perform actions upon a target sink. Each kind of network (e.g. a temperature monitoring network) constitutes a network type that may comprise many physical networks composed of one or more sinks, of distinct technologies, each of them explicitly addressable.

interface SensorNetworkAdapter {
  String query(String query);
  Map<String, String> getPropertyList();
  String getPropertyValue(String property);
  void setPropertyValue(String property, String value);
  void install(byte[] program);
}

Listing 3. Interface SensorNetworkAdapter
Sensor technology specifics are decoupled from the middleware and encapsulated in a sensor-network-specific adapter, which has the task of translating the communication in both directions (requests and data). The registry of a sink requires the existence of such an adapter, which must be compliant both with the framework (respecting the SensorNetworkAdapter interface depicted in Listing 3) and with the respective network type. For instance, the adapter for a temperature-control network that offers the operation int maxTemperature() must implement a method with that signature.
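As a sketch, an adapter for a hypothetical temperature-monitoring network type could look as follows. The sensing back-end is simulated, and the class and operation names are illustrative; only the operations of Listing 3 are taken from the paper:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical adapter for a temperature-monitoring network type,
// sketching how the SensorNetworkAdapter contract of Listing 3 could
// be honored. The sensing back-end is simulated here.
public class TemperatureAdapter /* implements SensorNetworkAdapter */ {
    private final Map<String, String> props = new HashMap<>();
    private int lastReading = 21; // simulated sensed value, in Celsius

    // Translates a SenSer query into the network's own protocol; here
    // we only recognize the network type's maxTemperature operation.
    public String query(String query) {
        if ("maxTemperature".equals(query))
            return Integer.toString(maxTemperature());
        return "ERROR: unknown query " + query;
    }

    // Operation required by the (hypothetical) temperature network type.
    public int maxTemperature() { return lastReading; }

    public Map<String, String> getPropertyList() { return new HashMap<>(props); }
    public String getPropertyValue(String property) { return props.get(property); }
    public void setPropertyValue(String property, String value) { props.put(property, value); }
    public void install(byte[] program) { /* reprogramming not simulated */ }
}
```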
The NetworkRegistryManager component manages the registry of sensor network adapters. How it is made available to administrators is considered an implementation detail and will be dealt with in Section 3.

FilterManager: supports the ability of SenSer to periodically collect data from a given sink and archive it in a history repository. This data is processed as a data-stream subjected to a pipeline of filters, whose purpose is to refine the information to be stored (Figure 2). In the end, the processed information may be either actual sensed data or simply computed statistics. Currently, pipelines can only be associated with sinks, through the setFilter method of the administration interface. Ongoing work is extending their application to client-requested streams (Section 8).

Fig. 2. Filter pipeline. (Data flows from the sensor network through the filter stages into the repository.)
The pipeline specification syntax is presented in Table 1, where pipeline identifiers, filter identifiers, and primitive values are denoted, respectively, by p, f, and v. A pipeline (P) is a sequence of filters (F) that are applied sequentially, whereas records can be used to create complex values. Types (T) ensure compliance between two consecutive pipeline stages.

Table 1. Pipeline definition syntax

P := p = F                                          Pipeline definition
F := f ( v̄ ) : T                                    Filter application
   | F > F                                          Filter composition
   | [ F̄ ]                                          Record definition
T := String | Boolean | Integer | Float | Double    Data-types
The following example denotes a pipeline, named MyPipeline, that aggregates all the data read from the network in clusters of 30 minutes (filter Aggregate instantiated with value 30) and creates a record holding the minimum and average values (computed, respectively, by the filters Min and Avg) of each of these clusters: MyPipeline = Aggregate(30) : Integer > [Min : Integer, Avg : Integer]

StreamListener: opens a data-stream between the framework and the client application. According to the dates supplied in the stream's request (present or past), the data is retrieved from the target sink or from the repository. This
decision is taken by the component, in a way that is transparent to the client. The date range may even require a rerouting of the data path, when crossing the boundary between the past and the present. The actual emission of the data is performed by the AsyncMessageProducer component.

NotificationListener: supports the subscription and handling of notifications. These events are triggered whenever the condition specified in the request evaluates to true. Similarly to StreamListener, this component resorts to AsyncMessageProducer to create the notification topics and to publish the events.

AsyncMessageProducer: factorizes the support for producer/consumer relationships. It stores all the currently active data-stream and notification topics, allowing the client applications to subscribe to them, and the StreamListener and NotificationListener components to publish their items. The interface includes operations to create, remove, subscribe to, and publish on data-stream and notification topics.

QueryManager: processes queries to the registered networks. Once again, the supplied date determines whether the query is forwarded to the repository or to the network. In either case, the operation is synchronous, in the sense that it only concludes when the result is sent back to the client.

MiddlewareManager: the behavior of the SenSer framework can be regulated through a set of parameters. This component is responsible for loading these configuration parameters and for allowing their on-the-fly modification, without disrupting the framework's availability.

2.3 Data Layer
This layer comprises three components: one responsible for the storage of the framework’s configuration settings, and two more responsible for the integration of the system’s data sources: sensor networks and the repository. Only the latter two justify a more detailed description. NetworkManager: relays the upper level requests to the target sink. The network type identifier is used to retrieve the network type’s interface and ensure protocol compliance between the request and the target sink. The sink identifier selects the required adapter (SensorNetworkAdapter implementation) from the network registry. RepositoryManager: supports the persistent storage of the data collected from the sensor network sinks. It manages the integration of a specific repository (e.g. a database system or a file-system) in the framework, virtualizing it in a service interface accessible to the upper level components. Furthermore, it manages the connections for data storing and retrieval.
3 Implementation
This section presents the most relevant implementation-specific details of a Java-based prototype instantiation of the SenSer architectural specification. We
center our discussion on the following major topics: overall instantiation of the model; integration with the World Wide Web; sensor network registry; support for data-streams and notifications; and the implementation of filter pipelines.

Model Instantiation: to promote loose coupling between components, allowing these to be autonomous and to run on different machines, we instantiated the three-tier conceptual model in an Enterprise Service Bus. All inter-component communication is built on top of Java-RMI, hence the bus takes the form of a Java-RMI registry.

Web Integration: all client-platform interaction is based on the Web service technology, providing an Internet-scale interoperable platform. The WSNService and WSNAdminService components are, thus, exposed as two Web services. To make it easy to access the SenSer middleware layer from Java applications, we have transposed these interfaces to the Java world by implementing a client-side API that hides the Web service communication details. The interface of this API is almost identical to the one presented in Subsection 2.1, as can be checked in the example of Section 4.

Network Registry: the registry of sensor network adapters is performed by a client that directly accesses the NetworkRegistryManager by plugging into the service bus. The registry requires all the user-implemented classes needed by the adapter to be uploaded to the framework and stored locally. Ongoing work focuses on the integration of these operations in the Java client API by featuring a Web service dedicated to them. Several of the functionalities that must be offered by these adapters crosscut the sensor technology, operating system and programming language, e.g. the properties map. We factorize these functionalities in an adapter development kit.
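The role of the service bus can be conveyed with a toy stand-in: components bind themselves under a name and look each other up. A real deployment uses the Java-RMI registry (java.rmi.registry.LocateRegistry); the in-memory sketch below only illustrates the loose coupling, and its names are our own:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy in-memory stand-in for the Java-RMI registry that plays the role
// of the service bus: components bind under a name and look each other
// up. Illustration only; a real deployment uses java.rmi.registry.
public class ServiceBus {
    private final Map<String, Object> bindings = new ConcurrentHashMap<>();

    // Register a component under a well-known name.
    public void bind(String name, Object component) {
        bindings.put(name, component);
    }

    // Look a component up by name; fails fast when it is not bound.
    @SuppressWarnings("unchecked")
    public <T> T lookup(String name) {
        Object c = bindings.get(name);
        if (c == null) throw new IllegalStateException("not bound: " + name);
        return (T) c;
    }
}
```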
Data-streams and Notifications: both these features are supported by WS-Notification [19], a topic-based publish/subscribe Web service standard. Each distinct data-stream and notification request has an associated dedicated topic in the AsyncMessageProducer component. This topic is used both for subscription (client side) and for publication purposes (components StreamListener and NotificationListener). The implemented Java client API features a component, AsyncMessageConsumer, that hides the details of the publish/subscribe protocol. As illustrated in Figure 3, the client no longer has to be aware of the topic subscription protocol, simply posting the request and obtaining the stream or the notification event handler. The latter can be used to associate an action with the reception of the notification, such as displaying or persistently storing it. An example is given in Section 4. Figure 3 illustrates the whole stream request process. The client invokes the requestStream method on the API. The request is relayed to the WSNService Web service that, in turn, relays it to the StreamListener logic layer component. A new topic is then created by the AsyncMessageProducer (if one does not yet exist) and sent back to the API, which automatically subscribes to it. From this point
Fig. 3. Data-stream request diagram
on, the AsyncMessageConsumer component receives all the messages published on the subscribed topic.

Filter Pipelines: filters are instantiated as Java classes that implement the Filter interface depicted in Listing 4. Filter provides methods to define the filter's input arguments (setArgs), to program the filter's data manipulation process (process), and to obtain a stream to the filter's output (getStream). The last method is used to connect two pipeline stages: each filter processes the data it reads from the stream generated by the previous stage in the pipeline.
interface Filter {
  void setArgs(String[] args);
  void process(java.io.InputStream in);
  java.io.InputStream getStream();
}

Listing 4. Interface Filter
The name of a filter in the pipeline definition string must match the name of the associated Java class. The latter is dynamically loaded as soon as the string is successfully parsed. Thus, adding a new filter to SenSer is simply a matter of placing a Java filter class with the matching name in a folder pre-determined by the framework's configuration parameters. The output of the pipeline must be persistently stored in the repository (a MySQL database in our prototype). For this purpose, a Java class that encapsulates the record to be stored is dynamically generated. This class resorts to the Java Data Objects technology to abstract database records as Java objects.
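As an illustration, a concrete filter in the spirit of the Avg stage of the MyPipeline example might look as follows. This is a sketch under the Filter contract of Listing 4 with a simplified wire format (newline-separated integers); the paper does not show the actual filter implementations:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Sketch of a concrete filter in the spirit of the Avg stage of the
// MyPipeline example: it reads newline-separated integer samples from
// the previous stage's stream and exposes their average on its own
// output stream. The wire format is an assumption of this sketch.
public class AvgFilter /* implements Filter */ {
    private InputStream out;

    public void setArgs(String[] args) { /* Avg takes no arguments */ }

    public void process(InputStream in) {
        long sum = 0, n = 0;
        Scanner sc = new Scanner(in, StandardCharsets.UTF_8);
        while (sc.hasNextInt()) { sum += sc.nextInt(); n++; }
        long avg = n == 0 ? 0 : sum / n; // integer average, as in Avg : Integer
        out = new ByteArrayInputStream(
                Long.toString(avg).getBytes(StandardCharsets.UTF_8));
    }

    // The next pipeline stage (or the repository writer) reads from here.
    public InputStream getStream() { return out; }
}
```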
4 Automated Home Management Application
This section illustrates how SenSer can be used to develop an automated home management application that incorporates lighting control and security systems. We have also used SenSer to interact with a network of medical body-sensors,
such as the one described in [2], but the home management application better illustrates the framework’s capabilities. The sensor networks used in these examples were simulated. The lighting control system automatically regulates the lights of the rooms, according to human presence and daylight intensity, while the security system closes all outside doors and windows when no human presence is detected inside the house. Due to space restrictions, the application cannot be fully presented. We will restrict the code to two snippets that cover the registry of network types and sinks, and the management of notifications. As is illustrated in Figure 4, this implementation assumes the existence of sensor networks for human presence detection and for the measurement of light intensity, both comprising a sink node for each house division. It also assumes the existence of networks to control the intensity of each illumination point (e.g. a X10 module network), and to control the lock of every outside door and window.
Fig. 4. The networks registered in the automated management application. (The figure shows the presence detection, light intensity, light switching, and lock activation networks, each with a sink per house division, window or door, all relayed through the NetworkManager and RepositoryManager.)
Once the framework is equipped with all the adapters needed by the deployed sensor networks, the remote interfaces can be used to manage the network types and sink nodes, and to program the intended behaviors. Listing 5 illustrates the registry of the LightSwitchingNetwork network type and its living-room sink. The network interface includes, among others, the switchOn operation, which has no parameters. The properties map in the sink registry is used to indicate the latter's contact port².

SenserWSNAdminService wsnAdminService = new SenserWSNAdminService(senserURL);
wsnAdminService.registerNetworkType("LightSwitchingNetwork",
    new NetworkInterface() {{ ...; put("switchOn", null); ...; }});
wsnAdminService.registerSink("LivingRoom@LightSwitchingNetwork", "LSNAdapter",
    new PropertiesMap() {{ put("Port", "A1"); }});

Listing 5. Sink registry example

² The port value range is network type specific.
A notification request returns a handler to which an action can be bound. This binding causes the AsyncMessageConsumer client API module to associate the behavior with the correspondent notification topic, and to trigger its execution whenever a message is received. Listing 6 exemplifies how to perform a notification request and bind an action to its reception. These actions are programmed by providing a concrete implementation of the NotificationResponse abstract class, namely of its response method. In this particular example, the response is to switch on the lights of a given house division.

NotificationHandler livingRoomHandler = wsnService.requestNotification(
    "LivingRoom@HumanPresenceDetector == false && LivingRoom@LightIntensity < 0.4");
livingRoomHandler.setResponse(
    new NotificationResponse(wsnService, "LivingRoom@LightSwitchingNetwork") {
        void response() {
            wsnService.executeOperation(this.sink, "switchOn", null);
        }
    });

Listing 6. Notification setup and management example
5 Evaluation
In this section we evaluate SenSer against the requirements enumerated in Section 2. The functional requirements have all been met and individually and collectively tested [20]. We just highlight the sensor network adapter approach, which enables interaction with sensor networks independently of their technology, and the introduction of the date field in the user requests, a tool that allows the framework to route the requests either to the target sensor networks or to the repository, providing transparent access to real-time and archived data. Regarding the non-functional requirements, special focus was given to modularity, namely to the clean separation between client interaction, logic, and data retrieval and storage. The use of the ESB model was also an important contribution to meeting this modularity requirement. Moreover, it enables the physical separation of the components: SenSer does not have to run in a single processor (core) or machine, thus enabling scalability. Another key aspect of modularity was parameterization. The framework is highly configurable, and this can be done remotely, through the administration API. To certify that SenSer does not pose a bottleneck on user-sensor network interaction, we performed some performance tests. The first test measured the overhead imposed on client requests. Figure 5 presents the mean and the standard deviation for three scenarios: 30, 150 and 300 simultaneous queries. We can observe that for each of these the mean of the overheads is under 25 milliseconds, and the maximum value under 35 milliseconds. We can also observe that there are no scalability problems: the overhead remains steady with the increase in requests.
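The reported statistics can be reproduced with a straightforward harness. The sketch below computes the mean and standard deviation of per-request overheads; the timing points and sample values are our assumptions, as the paper does not detail its instrumentation:

```java
// Sketch of the kind of harness behind Figure 5: each client request is
// timed, and the mean and (population) standard deviation of the
// per-request overheads are computed. Sample values are illustrative.
public class OverheadStats {
    public static double mean(long[] samples) {
        double s = 0;
        for (long v : samples) s += v;
        return s / samples.length;
    }

    public static double stdDev(long[] samples) {
        double m = mean(samples), s = 0;
        for (long v : samples) s += (v - m) * (v - m);
        return Math.sqrt(s / samples.length);
    }

    public static void main(String[] args) {
        long[] overheadMillis = {22, 24, 21, 25, 23}; // example samples
        System.out.printf("mean=%.1f ms stddev=%.2f ms%n",
                mean(overheadMillis), stdDev(overheadMillis));
    }
}
```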
Fig. 5. Overhead imposed on query processing
Fig. 6. Mean of the overhead imposed on stream processing

Fig. 7. Standard deviation of the overhead imposed on stream processing
The next two graphics, depicted in Figures 6 and 7, present the mean and the standard deviation of the overhead that SenSer imposes on stream processing. We measured the time elapsed between the instant a new data item arrives from a sensor network and the instant it is sent to the client, i.e., the time the platform takes to process it. We analyzed streams requested with rates of 5 and 10 milliseconds. The graphics present the actual rate of the stream after being processed by SenSer and the overhead relative to the originally requested rate. We can observe that the overhead never reaches the 2% barrier; in fact, for streams with rates of 5 milliseconds it is lower than 1%. This overhead is negligible when taking into account the latency of an Internet connection.
6 Integration
We carried out two integration exercises. The first, directed at the Callas WSN programming language [17], consists of the implementation of a Java class that communicates with Callas network sinks. This class can now be extended to be compliant with the SensorNetworkAdapter interface and, thus, be used to register Callas networks with specific capabilities. Callas sink nodes were equipped with a set of system calls that enable them to accept TCP connections at a configurable port. Requests are in the form of
Fig. 8. Callas communication protocol. (The NetworkManager submits a Callas request to the sink over a socket; the sink replies OK or ERROR and, if OK, follows with the DATA.)
a Callas program that is submitted to the sink (Figure 8). Parsing the program generates an OK or ERROR response; in the former case, the request is spawned to the network and the consequent result is replied.

The second exercise targeted the SWE specification. It is a simple theoretical exercise that illustrates how SenSer and SWE can be integrated. The integration of a SenSer instance in a SWE network (Figure 9) only requires the addition of an extra component in the SenSer presentation layer, with the task of translating SWE requests to SenSer requests and SenSer data to SWE data. The opposite, i.e. to register a SWE network in SenSer (Figure 10), is also feasible. The motivation is to compare local measurements with data retrieved from other locations; a concrete example is the comparison of air pollution indicators: how do the local measurements compare with the mean of the remainder of the city or country? In this scenario, the translator would have to be compliant with the sensor network adapter interface and convert SenSer requests to SWE and SWE data to SenSer. Figures 9 and 10 illustrate both processes.
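Returning to the Callas exercise, the sink-side steps of the protocol in Figure 8 can be sketched as follows. The parser and the network round-trip are simulated stand-ins of our own, not the actual Callas implementation:

```java
// Sketch of the sink-side handling of the Callas request protocol of
// Figure 8: the sink parses the submitted program, answers OK or ERROR,
// and, on success, spawns the request to the network and replies the
// resulting data. Parsing and the network round-trip are simulated.
public class CallasSinkSketch {
    // Step 2 of the protocol: acknowledge or reject the program.
    public static String acknowledge(String program) {
        return parses(program) ? "OK" : "ERROR";
    }

    // Step 3: only an accepted program yields data from the network.
    public static String respond(String program) {
        if (!parses(program)) return null;
        return "DATA:" + runInNetwork(program);
    }

    // Stand-in for the Callas parser: reject empty submissions.
    private static boolean parses(String program) {
        return program != null && !program.isBlank();
    }

    // Stand-in for spawning the request into the sensor network.
    private static String runInNetwork(String program) {
        return "42"; // simulated sensed result
    }
}
```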
Fig. 9. Integrate SenSer in SWE
Fig. 10. Integrate SWE in SenSer
H. Paulino and J.R. Santos

7 Related Work
IrisNet [8] stands out as one of the first projects to address sensor webs at an Internet scale. It is a software infrastructure that supports distributed queries on an Internet-wide service-oriented platform that abstracts common sensing hardware, such as web-cams. Services are described through XML documents that are published on dedicated distributed databases. These operations, as well as querying, are performed through a high-level API. Sensor Web Enablement (SWE) [3] is a set of models and Web service interfaces proposed by the Open Geospatial Consortium for the Internet integration of sensor systems. The goal is to provide a specification for the integration of sensor networks in a Web of sensors accessible via Internet technologies. The focus is on the integration of several sensor networks in a web of sensors, rather than on the actual remote access and management of such networks. The featured services have several limitations, of which we emphasize: 1. the lack of management operations, such as network registry and network reprogramming; 2. the lack of user levels or roles, such as administration privileges or different access permissions for different users; 3. the overall complexity, when the purpose may be simply to set up a single network; and 4. the disregard for community-adopted Web service standards, defining its own protocol for concerns such as service orchestration to perform planning. In our opinion, a sensor network should be virtualized as just another service on the Web, and therefore be easily integrable with standard Web service technologies. Some SWE implementations are available³. Among the best known are NOSA [4] and 52°North [21]. NOSA is a research-oriented implementation of a first iteration of the specification that had many limitations, some of which were overcome with non-SWE extensions [12]. 52°North is a more recent implementation with a more industrial focus.
The documentation is mostly in German and thus not easily digestible for the majority of the scientific community. GSN [1] also targets sensor network integration. The objective is to build a sensor Internet by connecting virtual sensors that abstract data streams originating either from a sensor network or from another virtual sensor. SQL queries can then be performed on top of these virtual sensors. There is no focus on Web interoperability. Agimone [9] integrates IP and sensor networks by resorting to mobile agents that communicate through tuple spaces. The focus is on sensor network integration (agents may migrate between wireless sensor networks) rather than on Web exposure and interoperability.
8 Conclusions
We have presented SenSer, a framework that provides a generic middleware for the remote access and management of sensor networks. Two distinct interfaces
³ http://www.sensorsmag.com/networking-communications/government-military/new-implementations-ogcsensor-web-enablement-standards-1437
are defined to cleanly separate the operations available to regular users and administrators. A major effort was placed on the support for modularity and heterogeneity. The framework's architecture follows a three-tier model, decoupling the presentation, logic and data layers, the latter comprising the two possible data sources: registered sensor networks and a history repository. A Java-based prototype implementation exposes the framework's presentation layer as two Web services, providing an Internet-scale interoperable platform and thus solving client heterogeneity. Due to the lack of standards, sensor network heterogeneity is handled by a dedicated framework compliance interface that specifies the SenSer/sensor network interaction protocol. The prototype has been validated with the development of applications [20], one of which was briefly described in Section 4, and with an evaluation of the functional and non-functional requirements. Regarding the latter, a performance study revealed that SenSer is a lightweight framework that scales well as the number of requests grows. Our final efforts were the realization of two integration exercises, with Callas sensor networks and SWE sensor webs. Regarding future work, our goals include: 1. the addition of the registry of network types to the Java client API, which requires the migration of Java code on top of Web service technology, an ongoing effort; 2. the application of the filter pipeline functionality to modify client-requested streams; 3. the definition of different access permissions, for instance to distinguish users that may only retrieve data from those that can perform actions; 4. the integration with real-life applications; and 5. the actual incorporation of an authentication module in the current implementation.

Acknowledgments. This work was partially funded by FCT MCTES under project CALLAS (contract PTDC/EIA/71462/2006) and the CITI research center.
References

1. Aberer, K., Hauswirth, M., Salehi, A.: A middleware for fast and flexible sensor network deployment. In: Dayal, U., Whang, K.-Y., Lomet, D.B., Alonso, G., Lohman, G.M., Kersten, M.L., Cha, S.K., Kim, Y.-K. (eds.) Proceedings of the 32nd International Conference on Very Large Data Bases, Seoul, Korea, September 12-15, pp. 1199–1202. ACM, New York (2006)
2. Baldus, H., Klabunde, K., Müsch, G.: Reliable set-up of medical body-sensor networks. In: Karl, H., Wolisz, A., Willig, A. (eds.) EWSN 2004. LNCS, vol. 2920, pp. 353–363. Springer, Heidelberg (2004)
3. Botts, M., Percivall, G., Reed, C., Davidson, J.: OGC Sensor Web Enablement: Overview and high level architecture. Technical report, OGC (2007)
4. Chu, X., Kobialka, T., Durnota, B., Buyya, R.: Open sensor web architecture: Core services. In: Proceedings of the 4th International Conference on Intelligent Sensing and Information Processing, pp. 98–103 (2006)
5. Dunkels, A., Grönvall, B., Voigt, T.: Contiki – a lightweight and flexible operating system for tiny networked sensors. In: LCN 2004: Proceedings of the 29th Annual IEEE International Conference on Local Computer Networks, pp. 455–462. IEEE Computer Society, Los Alamitos (2004)
6. Eswaran, A., Rowe, A., Rajkumar, R.: Nano-RK: An energy-aware resource-centric RTOS for sensor networks. In: RTSS 2005: Proceedings of the 26th IEEE International Real-Time Systems Symposium, pp. 256–265. IEEE Computer Society Press, Los Alamitos (2005)
7. Gay, D., Levis, P., von Behren, R., Welsh, M., Brewer, E., Culler, D.: The nesC language: A holistic approach to networked embedded systems. In: PLDI 2003: Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation, pp. 1–11. ACM, New York (2003)
8. Gibbons, P., Karp, B., Ke, Y., Nath, S., Seshan, S.: IrisNet: An architecture for a worldwide sensor web. IEEE Pervasive Computing 2(4), 22–33 (2003)
9. Hackmann, G., Fok, C.-L., Roman, G.-C., Lu, C.: Agimone: Middleware support for seamless integration of sensor and IP networks. In: Gibbons, P.B., Abdelzaher, T.F., Aspnes, J., Rao, R. (eds.) DCOSS 2006. LNCS, vol. 4026, pp. 101–118. Springer, Heidelberg (2006)
10. Hillenbrand, M., Verney, A., Muller, P., Koenig, T.: Web services for sensor node access. In: 5th CaberNet Plenary Workshop (2003)
11. Jiang, G., Chung, W., Cybenko, G.: Semantic agent technologies for tactical sensor networks. In: Carapezza, E.M. (ed.) Proceedings of the SPIE Conference on Unattended Ground Sensor Technologies and Applications V, vol. 5090, pp. 311–320. SPIE, San Jose (2003)
12. Kobialka, T., Buyya, R., Leckie, C., Kotagiri, R.: A sensor web middleware with stateful services for heterogeneous sensor networks. In: 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, ISSNIP 2007, pp. 491–496 (2007)
13. Han, C.-C., Kumar, R., Shea, R., Kohler, E., Srivastava, M.: A dynamic operating system for sensor nodes. In: MobiSys 2005: Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, pp. 163–176. ACM, New York (2005)
14. Levis, P., Madden, S., Polastre, J., Szewczyk, R., Whitehouse, K., Woo, A., Gay, D., Hill, J., Welsh, M., Brewer, E., Culler, D.: TinyOS: An operating system for sensor networks. Ambient Intelligence, 115–148 (2005)
15. Levis, P.: The TinyScript Language. UC Berkeley (2004)
16. Lifton, J., Seetharam, D., Broxton, M., Paradiso, J.A.: Pushpin computing system overview: A platform for distributed, embedded, ubiquitous sensor networks. In: Mattern, F., Naghshineh, M. (eds.) PERVASIVE 2002. LNCS, vol. 2414, pp. 139–151. Springer, Heidelberg (2002)
17. Martins, F., Lopes, L., Barros, J.: Towards safe programming of wireless sensor networks. Electronic Proceedings in Theoretical Computer Science 17, 49–62 (2010)
18. OASIS: Web Services Business Process Execution Language (WSBPEL) TC, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpel
19. OASIS: Web Services Notification (WSN) TC, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsn
20. Santos, J.R.: Um middleware para acesso e gestão de redes de sensores em ambientes web. Master's thesis, Faculdade de Ciências, Universidade Nova de Lisboa, supervised by Hervé Paulino (2009)
21. Stasch, C., Walkowski, A., Jirka, S.: A geosensor network architecture for disaster management based on open standards. In: Digital Earth Summit on Geoinformatics 2008: Tools for Climate Change Research, pp. 54–59 (2008)
Expressing and Configuring Quality of Data in Multi-purpose Wireless Sensor Networks

Pedro Javier del Cid, Daniel Hughes, Sam Michiels, and Wouter Joosen

IBBT - DistriNet, Katholieke Universiteit Leuven, 3001 Leuven, Belgium
{name.lastname}@cs.kuleuven.be
Department of Computer Science and Software Engineering, Xi'an Jiaotong-Liverpool University, 215123 Suzhou, China
[email protected]
Abstract. Wireless Sensor Networks (WSNs) are evolving towards interconnected sensing, processing and actuating infrastructures that are expected to provide services for multiple concurrent applications. In a multi-purpose WSN, concurrently running applications share network resources and each may have varying Quality of Data (QoD) requirements. Our middleware targets these multi-purpose WSN deployments. Specifically, this paper discusses how one should express and configure QoD properties for multi-purpose WSNs. We contribute by presenting our approach, which leverages per-instance QoD configuration and a separation of operational concerns to achieve simpler configuration and improve the adaptability and customize-ability of the WSN. A prototype implementation and a comparison to the related state of the art in WSNs are provided.

Keywords: Wireless Sensor Networks, Resource Management, Middleware, Adaptive, Context Aware, Quality of Data.
1 Introduction

Wireless sensor networks (WSNs) support the integration of environmental data into applications, from mobile devices to backend enterprise infrastructure. WSNs are evolving towards interconnected sensing, processing and actuating infrastructures that are expected to provide services for multiple concurrent clients [1]. In a multi-purpose WSN, applications share network resources and each may have varying Quality of Data (QoD) requirements. Notably, QoD requirements, i.e., the required data reliability, resolution and the importance of a single reading, vary from application to application [2]. Previous approaches address the challenge of multiple applications on shared infrastructure by decoupling the applications and the network, e.g., Milan [3], Servilla [4], TinySOA [1] and TinyCOPS [5]. In early approaches, such as Milan, QoD is expressed at the application level, i.e., once for every application. It is assumed that: (i) there is no run-time variability in required QoD, (ii) considerable collaboration between applications is possible and (iii) a-priori knowledge of all

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 91–106, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
applications that will use the network is available. In multi-purpose WSNs these assumptions are not reasonable; thus, application-level configuration of QoD is not an adequate approach. Approaches such as TinySOA [1], Servilla [4] and TinyCOPS [5] configure QoD in a per-instance manner. An instance refers to each service request or query an application may have, where an application may have multiple concurrent requests or queries. Per-instance configuration allows for higher flexibility and optimization. These multi-purpose WSNs may serve different types of applications with arbitrary requests or query patterns, with no a-priori knowledge needed. They provide application developers the flexibility to meet the variable QoD requirements of new applications while still delivering the levels of performance that would result from an application-specific deployment [1]. Fine-grained optimization is possible because every instance may be customized with specific QoD requirements, allowing for higher component re-usability, more efficient parameterization and improved reliability through lightweight run-time capacity planning [6]. We present our approach, which leverages per-instance QoD configuration and a separation of operational concerns to achieve simpler configuration and improve the adaptability and customize-ability of the WSN. Adaptability refers to the system's capacity to enact context-aware run-time reconfiguration. Customize-ability refers to the capacity the system has to fine-tune or extend its functionality at run-time without service interruption. This paper is structured as follows: Section 2 elaborates on the operational setting of our research. Section 3 presents an overview of our middleware. Section 4 presents our prototype implementation. Section 5 further describes WSN configuration and highlights the achieved benefits. Section 6 discusses these benefits in the context of the current state of the art. Section 7 concludes the paper and maps the road ahead.
2 Operational Setting

One of the main objectives of shared enterprise deployments is to maximize the return on investment in the WSN infrastructure [1,7,8]. We leverage per-instance QoD configuration and focus on maximizing the number of concurrent and varying QoD-aware requests the network can successfully support, while providing simple configuration abstractions. Quality of Data (QoD) can be broken down into reliability and resolution [2]. The former specifies the accuracy of the data, i.e., that reported data corresponds to reported phenomena; the latter refers to the granularity of data and its temporal and spatial qualities. These properties are specified by the application and may vary for each request instance. In multi-purpose WSNs, the main operational concerns involved in application development and use should be undertaken by the following operational roles, as defined by Huygens et al. in [9]: application developers, service developers and network administrators. Application developers (application owners in [9]) will be concerned with achieving high-level business goals and will undertake the implementation of domain-specific business logic. Service developers (component developers in [9]) will be concerned with developing prepackaged functionality to support the goals of the network administrators and application developers. They will
undertake the implementation of application-independent and platform-specific common use services, e.g., temperature sensing on a SunSPOT [10] sensor node. Network administrators (infrastructure owners in [9]) will be concerned with monitoring network QoS and with configuring and maintaining common use software services, e.g., temperature or aggregation. They also have high-level goals, usually system-wide requirements driven by concerns such as system lifetime optimization or service level agreements with application stakeholders. Consider a WSN deployed in a corporate warehouse (see Fig. 1). The deployment is shared by multiple applications, each with different QoD requirements. An HVAC application monitors environmental conditions to determine cooling and heating requirements and periodically gathers sensing information. A tracking application is used to provide information on package movement and environmental conditions during storage.
Fig. 1. Deployment scenario
The HVAC application periodically requests temperature and light readings throughout the warehouse and deploys specialized components to specific nodes that locally determine if an actuating action needs to be taken, e.g., if the temperature exceeds 30 degrees, increase power to the AC unit in this area. The tracking application continuously monitors the temperature and position of packages. Additionally, a specialized component is deployed to high-value packages, which uses light and accelerometer readings to locally detect package handling and tampering. An application developer would be concerned with implementing the required functionality of each deployed application, that is, HVAC and tracking, as well as the respective application-specific components. The implementation of the temperature, light and accelerometer services would be undertaken by the service developer, but monitoring their run-time performance would be undertaken by the network administrator. The network administrator is in charge of monitoring the infrastructure and taking corrective action when needed to ensure expected quality levels are achieved. Current approaches do not properly separate these operational concerns and do not provide appropriate abstractions to achieve the required adaptability and customize-ability.
3 Middleware Overview

We propose a middleware approach based on configurable components that may be used in multiple concurrently running compositions and allow different QoD parameterizations for each composition. Figure 2 illustrates an overview of the different elements that compose our middleware. Through the decoupling of applications from the components implementing the underlying application functionality, and the provision of structure and behavior patterns, we achieve simple compositions and per-service instance parameterization. We consider a service instance to be: each service request from the moment it is submitted to the middleware until it has been processed. The service request specification is used to express the desired functionality and QoD for every service instance.
Fig. 2. Middleware overview
3.1 The Mediation Layer

The mediation layer runs in the backend and in cluster heads in the WSN. It is implemented by the Service Management Component (SMC), which automatically interprets requests, selects the optimal service providers and instantiates an individual service composition involving the specified services from a shared pool of components interacting in a loosely coupled manner. Every application may submit multiple service requests, each representing a service instance. As such, every composition allows for per-service instance parameterization of how this pool of components is used.
3.2 The Service Request Specification

Application developers use the service request specification to express their QoD requirements for each service instance. The specification includes the request Id, a unique sequential number generated by the WSN backend middleware. The service Id is a globally unique service identifier defined at service implementation time; each sensing service, e.g., temperature or humidity, has a unique service Id. The temporal resolution required from the specified service is expressed through the sampling frequency. The duration of service is the amount of time one requires the selected sensing service to collect data samples. The spatial resolution is specified by selecting a target location, e.g., <warehouse A> or <node21>. A post-collection data processing service Id is a globally unique identifier for services like averaging or specialized data filters. Finally, a parameter to be passed to the post-collection data processing service may be required; e.g., in the case of the averaging component, one may use the parameter 30 to indicate that the average must be computed over 30-minute intervals. Each service request may be configured with different QoD requirements and may or may not include one or more post-collection processing instructions.

serviceRequest#(requestId, serviceId, samplingFrequency, duration, targetLocation, DataProcessServiceId[], parameter[]);
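The service request specification above can be mirrored by a simple value object. This is an illustrative sketch only: the field names follow the specification, but the types, units (minutes) and the derived-sample computation are assumptions of ours, not part of the paper.

```java
/** Hypothetical value object mirroring the serviceRequest# specification. */
class ServiceRequest {
    final int requestId;            // unique sequential id from the backend
    final String serviceId;         // globally unique sensing service id
    final int samplingFrequencyMin; // temporal resolution (minutes, assumed unit)
    final int durationMin;          // how long to collect data samples
    final String targetLocation;    // spatial resolution, e.g. "warehouseA"
    final String[] dataProcessServiceIds; // post-collection processing chain
    final int[] parameters;         // one parameter per processing service

    ServiceRequest(int requestId, String serviceId, int samplingFrequencyMin,
                   int durationMin, String targetLocation,
                   String[] dataProcessServiceIds, int[] parameters) {
        if (dataProcessServiceIds.length != parameters.length)
            throw new IllegalArgumentException("one parameter per processing service");
        this.requestId = requestId;
        this.serviceId = serviceId;
        this.samplingFrequencyMin = samplingFrequencyMin;
        this.durationMin = durationMin;
        this.targetLocation = targetLocation;
        this.dataProcessServiceIds = dataProcessServiceIds;
        this.parameters = parameters;
    }

    /** Readings the request will produce; useful later for capacity planning. */
    int expectedSamples() {
        return durationMin / samplingFrequencyMin;
    }
}
```

For instance, temperature every 30 minutes for a day with 60-minute averaging and persistence would be `new ServiceRequest(1, "temperature", 30, 1440, "warehouseA", new String[]{"average", "persistence"}, new int[]{60, 0})`.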
3.3 WSN Services

As one may see in Fig. 3A, we typify services based on two primary types: sensing services and post-collection data processing services. These are the meta-types used to implement all services that comprise the pool available to create service compositions. They are implemented as Sensing Service Components (SSCs) and Data Processing Components (DPCs), respectively. Sensing services are components offering typical functionality such as the retrieval of temperature or light readings; they provide access to the various sensors. Data processing services are components implementing typical post-collection data processing functionality such as averaging, data filtering or persistence. A sample composition may be: (i) an SSC reads, timestamps and stores temperature readings, which (ii) an averaging component processes and (iii) a persistence component finally stores to static memory. The SSC and DPC meta-types impose the structure and behavior of implemented services. This predictable structure and behavior allows for higher re-usability of services in compositions because all DPCs can transparently be used with any SSC. SSCs are unaware of the existence of any other SSC or DPC. All SSCs provide and require the same interfaces, and all DPCs provide and require the same interfaces. Elaboration on how the imposed structure and behavior of components helps achieve more efficient reconfiguration and service composition can be found in our previous work [6]. Data reliability requirements may be met with the use of specialized data filters implemented as DPCs. For example, erratic sensor readings may be excluded from a sample on the basis that they may indicate a defective sensor.
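The uniform meta-type interfaces can be sketched as follows, together with an averaging DPC as in the sample composition above. The interface and method signatures are assumptions for illustration; the key point from the text is only that every DPC exposes the same interface and can therefore consume any SSC's output.

```java
/** Hypothetical sketch of the uniform SSC/DPC meta-type interfaces. */
interface SensingService {
    // data collected for a given request (readings as doubles, assumed)
    double[] readings(int requestId);
}

interface DataProcessingService {
    // every DPC takes a request id, input data and one parameter
    double[] process(int requestId, double[] input, int parameter);
}

/** An averaging DPC: groups the input into windows of `parameter`
 *  samples and emits one mean per window (illustrative only). */
class AveragingComponent implements DataProcessingService {
    public double[] process(int requestId, double[] input, int windowSize) {
        int windows = input.length / windowSize;
        double[] out = new double[windows];
        for (int w = 0; w < windows; w++) {
            double sum = 0;
            for (int i = 0; i < windowSize; i++) {
                sum += input[w * windowSize + i];
            }
            out[w] = sum / windowSize;       // mean of the window
        }
        return out;
    }
}
```

Because a filtering or persistence DPC would implement the same `process` signature, chaining the components of a composition is a matter of feeding one `process` output into the next.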
Fig. 3. WSN Services and valid service compositions
3.4 Service Compositions

The submission of a service request starts a service instance, which is fulfilled with an independent service composition. Service compositions can have only one SSC and zero-to-many DPCs (see Fig. 3B). The dash-dotted lines depict a composition that involves only an SSC, the dashed lines depict a composition that uses one SSC and one DPC, and the continuous lines depict a composition that involves an SSC and two DPCs. All components may be used in multiple compositions concurrently and have different QoD parameters for each composition. Due to the imposed structure and behavior, DPCs may be transparently combined with any SSC. Additionally, new services can be introduced or additional requests supported without modifying any of the existing compositions or interrupting any service, resulting in efficient reconfiguration. Elaboration on how our simple composition rules help achieve more efficient service reconfiguration and lower complexity overheads can be found in our work [6].

3.5 Share-Able and Adaptive Components

We consider that introducing a new component for every service requiring different parameterization is not efficient, as exemplified in [6]. We separate the functional code from the meta-data and share the same component instance across multiple service compositions (see Fig. 4A). This meta-data contains the configuration semantics to be used to serve each service request. Each component is associated with a particular service composition through a request Id; this association contains per-instance configuration semantics. Configuration semantics for each service composition are extracted from the specified service request; they include the specified QoD, the services involved in each composition and the related parameterization. The SMC parses the service request and extracts configuration parameters, which it autonomously submits to the corresponding SSC or DPCs.
Clients may also submit these configuration parameters directly to the SSC or DPC using ProcessRequest@SSC or ProcessData@DPC, as one may see in Fig. 5A. This allows a single instance of our components to be used across multiple service compositions, with varying parameters in each composition, and avoids substantial increases in required static and dynamic memory per additional service request. For further details on how each SSC and DPC are configured we refer the reader to [6].
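The separation of functional code from per-request meta-data can be sketched as below: one shared component instance keeps its configuration semantics in a map keyed by request Id (Fig. 4A). The class and field names are assumptions; only the sharing scheme itself comes from the text.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch of a share-able SSC: one instance of functional
 *  code, with per-instance configuration semantics keyed by requestId. */
class SharedSensingComponent {
    /** Per-instance configuration extracted from a service request. */
    static class Config {
        final int samplingFrequency, duration;
        Config(int samplingFrequency, int duration) {
            this.samplingFrequency = samplingFrequency;
            this.duration = duration;
        }
    }

    // requestId -> configuration semantics for that composition
    private final Map<Integer, Config> allocations = new HashMap<>();

    /** Called by the SMC, or directly via ProcessRequest@SSC. */
    void processRequest(int requestId, int samplingFrequency, int duration) {
        allocations.put(requestId, new Config(samplingFrequency, duration));
    }

    /** Introspection: which requests this instance currently serves. */
    Set<Integer> allocatedRequests() {
        return allocations.keySet();
    }

    Config configOf(int requestId) {
        return allocations.get(requestId);
    }
}
```

The `allocatedRequests` accessor also illustrates the introspection discussed in the next paragraphs: the same map that parameterizes each composition doubles as a record of the component's current dependencies.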
Fig. 4. Configuration semantics, annotated attributes and adaptation in components
Furthermore, these configuration parameters are available for introspection in SSCs and DPCs. Introspection provides insight into the currently allocated requests and their parameters; additionally, these inform about current component dependencies.

Annotated Component Attributes: SSCs and DPCs are annotated with attributes that are mandated as part of their structure. They may be static or dynamically modified at run-time, depending on the attribute and intended use. For example, an energy category attribute is used to represent the energy consumption incurred in the invocation of a particular sensor, given that energy use may vary considerably by platform/sensor hardware, as exemplified in [11]. At implementation time, the service developer includes the value of the energy category for the SSCs he implements; these values are specific to the platform/sensor of deployment. These attributes provide semantic information which is used during the evaluation of system strategies, such as adaptation and resource reservation. For example, the evaluation of an adaptation strategy may modify the maxSamplingFrequency attribute in an SSC, modifying the execution of the component's functional code and, in turn, influencing the selection of the service provider during the evaluation of the selection strategy in the service matching process of the SMC (see Fig. 4C). Sykes et al. [12] discuss how the use of quality attribute annotations in components improves decisions about adaptive reconfigurations. These attributes are how service developers may express their concerns regarding relevant usage parameters, provided accuracy or energy consumption of implemented services.

Component Level Adaptation: Component behavior may be modified at run-time, i.e., the parameters used in the execution of its functional code may be dynamically modified, as previously discussed.
These adaptations give the network administrator effective mechanisms that account for current working conditions and help prolong system lifetime. As exemplified in Levels [13], modifying available functionality has proven very efficient in prolonging system lifetime in sensor networks. The adaptation strategy specifies that data resolution is lowered more aggressively on services with higher energy categories when battery levels decrease, thus avoiding excessive use of high-consumption services. This is done independently from any request being processed and transparently for application developers and service developers.
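The adaptation strategy just described can be sketched as a function over the annotated attributes. The scaling rule below (divide the maximum sampling frequency by a factor growing with the energy category once battery falls below a threshold) is invented for illustration; the paper only states that higher energy categories lose resolution more aggressively.

```java
/** Hypothetical sketch of the battery-driven adaptation strategy. */
class AdaptationStrategy {
    /** Returns an adapted maxSamplingFrequency (samples per hour, assumed).
     *  energyCategory ranges from 1 (cheap sensor) to 3 (expensive). */
    static int adapt(int maxSamplingFreq, int energyCategory, int batteryPercent) {
        if (batteryPercent >= 50) {
            return maxSamplingFreq;   // healthy battery: no adaptation
        }
        // Low battery: scale resolution down, more aggressively for
        // higher energy categories (illustrative rule, not the paper's).
        return Math.max(1, maxSamplingFreq / (energyCategory + 1));
    }
}
```

The adapted value would be written back into the SSC's `maxSamplingFrequency` attribute, which in turn influences provider selection in the SMC, as described above.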
3.6 Capacity Planning

Each node has a light-weight Resource Manager (RM) that does run-time capacity planning and reservation of any resource required to effectively support allocated service requests. For further details on the benefits of run-time capacity planning we direct the reader to [6]. Currently we focus on the dynamic and static memory required to support allocated requests. The RM uses a memory reservation strategy specified by the network administrator. This strategy specifies how much memory to reserve, based on estimated requirements, and when and how to release these reservations. This guarantees that every service request will have the resources, e.g., memory, needed to be processed successfully throughout the service duration.

Calculating Required Memory: In order to effectively calculate the amount of static and dynamic memory that will be used by the middleware, the service developer must carry out an off-line process to establish a memory baseline, complemented by an autonomic run-time process to establish run-time memory requirements. During the off-line process, a baseline of memory use is recorded for every implemented service and the corresponding annotated attributes of each service are updated, e.g., requiredStaticMemory, requiredDynamicMemory, memoryManagementOverheads. The on-line process parses each submitted service request and extracts: (i) the services involved in the composition and (ii) the QoD parameters. These are used to calculate the amount of memory needed to successfully process each request. For example, in the SSC the amount of required memory depends on the output dataset size, which is directly proportional to the number of records; this will vary depending on sampling frequency and service duration. Further details can be found in our previous work [6].

Memory Management: The memory management strategy dictates how and when data should be transferred to static memory. The strategy accounts for the frequency of read and write operations.
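The required-memory estimate described under "Calculating Required Memory" can be sketched as follows. The attribute names follow the text, but the per-record size, the unit (minutes) and the arithmetic combining baseline and records are assumptions made for illustration.

```java
/** Hypothetical sketch of the run-time memory estimate: the record
 *  count follows from sampling frequency and duration, combined with
 *  the baseline attributes recorded by the off-line process. */
class MemoryEstimator {
    static final int BYTES_PER_RECORD = 8; // assumed: one timestamped reading

    static int requiredBytes(int samplingFrequencyMin, int durationMin,
                             int requiredStaticMemory,
                             int memoryManagementOverheads) {
        // output dataset size is proportional to the number of records
        int records = durationMin / samplingFrequencyMin;
        return requiredStaticMemory + memoryManagementOverheads
               + records * BYTES_PER_RECORD;
    }
}
```

For example, sampling temperature every 30 minutes for one day yields 48 records, so halving the sampling interval roughly doubles the dynamic part of the reservation while the baseline terms stay fixed.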
Additionally, this strategy may specify memory-related actions to increase reliability, e.g., make all sensed and processed data persistent when battery levels drop under 15%. Both strategies are evaluated in the RM. Making data persistent in static memory is done by a persistence service DPC.

3.7 The Client API

Clients interact with the middleware through a distributed API (see Fig. 5A). We consider middleware clients to be applications, which are developed by application developers. The mediation layer is accessible to the clients at the back-end or in every cluster head and exposes interfaces A and B. The former interface is used to submit service requests and the latter to retrieve processed data. Clients may also access the middleware directly at the sensor nodes, commonly through the use of application-specific components. This interaction happens through interfaces 1 and 2, available on SSCs and DPCs. The former is used to submit configuration parameters, which are used by the component to parameterize each service instance, and the latter is used to retrieve data.

ProcessRequest@SSCorDPC(requestId, samplingFrequency, duration);
ProcessData@SSCorDPC(requestId, serviceIdToCollectData, parameter, timeToExecute);
Fig. 5. Middleware component and interface views
This distributed API gives the application developer access to WSN services directly on each sensor node in a consistent manner. She is able to leverage in-network processing to achieve improved application performance while still undertaking only her concerns for the implementation of business functionality. She can develop application-specific components that take business reasoning into the network and react locally to improve reaction times while still being abstracted away from platform-specific programming, for example, the HVAC components that locally determine if an actuating action needs to be taken, as described in Section 2 (see Fig. 1).

3.8 Network Administrator API

The network administrator interacts with the middleware through the API depicted in Fig. 5B. The SMC exposes interfaces C, D and E. Interface C is used to configure allocation strategies, which are used by the SMC in the first step of the selection process to narrow down the possible providers to a potential sub-network or cluster. Additionally, once the potential cluster is selected, workload distribution or node tasking preferences are accounted for, e.g., nodes at the edges of the network should sense, while intermediate nodes should only transmit and process. Interface D is used to configure selection and composition strategies. Once the potential cluster of providers is selected, the selection strategy selects a node from within said cluster; this is done by comparing the current battery level and sampling frequency provided by each node. Once the provider is selected, composition strategies are used to determine the proper order of composition according to the composition pattern expressed in the strategy. For example, for a request that involves temperature, averaging and data filter services, based on a sequence pattern [14], the resulting composition would first have the temperature service, then the data filter and finally the averaging service.
The sequence pattern (a→b) states that b is carried out after the completion of a. In [14] the authors further elaborate on how composition patterns can be leveraged to achieve significant benefits in the composition process. Interface E is used to introspect the SMC, e.g., the service requests being processed and their respective parameters. The SMC may in turn use the introspection interfaces of relevant SSCs and DPCs to obtain all requests being processed by each one and their
P.J. del Cid et al.
respective annotated attributes. It is important to note that this introspection provides the details of all current component dependencies and can be used to create a component graph. On each node the RM exposes interfaces 1, 2 and 3, and every SSC/DPC exposes interfaces 3 and 4. In the RM, interface 1 provides access to resource reservation and memory management strategies (see section 3.6), interface 2 provides access to adaptation strategies (see section 3.5), and interface 3 provides introspection of current resource allocations, e.g., current static and dynamic memory allocations with all relevant request-related information. In an SSC/DPC, interface 3 provides access to the requests being processed with all their respective configuration parameters and the annotated component attributes, which in essence describe all current component dependencies and when each dependency will expire. Expiration of a dependency depends on the duration of the service that required it, i.e., when the service terminates the dependency expires. This gives the administrator valuable insight into the current system configuration. Interface 4 provides access to the modification of annotated attributes, hence potentially altering how the component's functional code is executed and the outcome of strategy evaluation as described in section 3.5.
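To make the composition-strategy step of section 3.8 concrete, the following is a minimal sketch of a sequence-pattern composition strategy. It is shown in C++ for compactness (the actual prototype, described in section 4, runs on Java ME), and all names are illustrative assumptions rather than the middleware's actual API.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical sketch: a composition strategy orders the services of a
// request according to a sequence pattern (a -> b means b runs after a).
std::vector<std::string> applySequencePattern(
    const std::vector<std::string>& requested,
    const std::vector<std::string>& pattern) {
  std::vector<std::string> ordered;
  for (const auto& svc : pattern)  // keep only the requested services,
    if (std::find(requested.begin(), requested.end(), svc) != requested.end())
      ordered.push_back(svc);      // in the order the pattern dictates
  return ordered;
}
```

For the example above, a request for {temp, average, dataFilter} under the pattern temp→dataFilter→average yields the composition temp, dataFilter, average.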
4 Middleware Implementation

We implemented a prototype to validate our approach; it was implemented in Java ME (CLDC 1.1 configuration) on the SunSPOT platform [10]. The implementation supports the operational scenario described in section 2. To provide the reader with explicit background on our implementation, we provide details of the evaluation conducted in the context of our previous work. This is described in more detail in [6] and is summarized below. We recorded relevant footprint information and run-time memory consumption in our middleware while executing the following use case. The HVAC back-end application requires that for service request 1, temperature is to be sensed every 30 minutes during the next day, averages of samples should be processed every 60 minutes, and the results should be made persistent. The tracking application requires that for service request 2, light readings be taken every 10 minutes for the next 15 days, 120-minute averages be processed and the results be made persistent. These service requests are submitted to the SMC and are depicted in Fig. 6A (as it is not directly relevant to our scenario, we omit the specification of target location). The HVAC specialized component that determines AC-related actions requires temperature readings every 10 minutes for the next year. Since this specialized component is deployed on the node, it submits these parameters directly to the temperature SSC with the ProcessRequest@Temp1. The deployed tracking specialized component that determines product handling and tampering requires light and accelerometer readings every 5 minutes for the next 5 days. This component submits these parameters directly to the corresponding SSC. In Fig. 6B one can see the component configuration implemented and the service composition that resulted from the submitted service requests and process requests. Five service components are used to serve 5 concurrent requests, which are fulfilled
with 5 independent service compositions. As the number of served requests increases, the number of instantiated components remains constant. Each SSC consumes 750 bytes of dynamic memory and 7.8 KB of static memory. Each DPC consumes about 750 bytes of dynamic memory and 11.9 KB of static memory. Each additional service request processed in an SSC consumes about 5 KB of dynamic memory. Each additional request that runs in a DPC consumes about 800 bytes of additional dynamic memory. We then recorded footprint and run-time memory consumption for a comparable approach: we implemented the same use case with the LooCI [15] component model. We selected this approach because its very loosely coupled component interaction and publish-subscribe functionality provide effective mechanisms to implement multi-purpose WSNs. It was necessary to instantiate 9 LooCI micro-components. Each LooCI component requires 3 KB of dynamic memory and 1.7 KB of static memory.
Fig. 6. Implemented scenario
In Fig. 6C we plot the amount of dynamic memory (RAM) and static memory (ROM) required to instantiate the components needed to process concurrent requests with both approaches. We varied the number of concurrent requests processed from n=1 to n=10. Each of these requests is equivalent in functionality to serviceRequest1. In terms of transmission overhead, one service request (on the order of 64 bytes) is needed to support an additional request, whereas in LooCI two new components are needed (each component is on the order of 1.7 KB). In terms of request load, the additional functionality offered in our components comes with some overhead in static memory, which, as one can see from Fig. 6C, is an acceptable trade-off given the improved efficiency under higher request loads.
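As an illustration of the per-instance service requests used in this scenario, the sketch below encodes serviceRequest1 and serviceRequest2 as a plain data structure. It is written in C++ for compactness (the prototype itself is Java ME on SunSPOT), and the field names are assumptions, not the middleware's actual request specification.

```cpp
#include <string>

// Hypothetical encoding of a per-instance service request (section 4).
struct ServiceRequest {
  std::string service;     // e.g. "temperature"
  int samplingPeriodMin;   // temporal resolution: sample every N minutes
  int durationMin;         // how long the service should run
  int averagingWindowMin;  // data-processing step: average every N minutes
  bool persistResults;     // results should be made persistent
};

// serviceRequest1: temperature every 30 min for 1 day, 60-min averages, persisted.
const ServiceRequest serviceRequest1{"temperature", 30, 24 * 60, 60, true};
// serviceRequest2: light every 10 min for 15 days, 120-min averages, persisted.
const ServiceRequest serviceRequest2{"light", 10, 15 * 24 * 60, 120, true};
```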
5 Discussion

In this section we further discuss how a separation of operational concerns allows for simpler WSN configuration while improving the adaptability and customize-ability of the middleware.

5.1 Simpler WSN Configuration

To elaborate on how the proposed middleware simplifies configuration, we look at the manner in which the WSN is to be configured, i.e., the complexity of the abstractions used and how operational concerns are accounted for. We consider that a clear separation of
operational concerns allows for simpler system configuration. We elaborate on the benefits for network administrators, application developers and service developers below. Application developers: in our system they only have to concern themselves with the implementation of business functionality. Our client API uses the service request specification to allow them to express their functional requirements only. They are not burdened with platform-specific commands to implement sensing or processing services, nor with issues like selecting low-consumption services to maximize network lifetime. The service request is easy to use and yet expressive enough to adequately address complex use cases such as the one described in section 4. The service request specification does not require any coding or instantiation of runtime constructs. Furthermore, developers can use application-specific components that implement extended business functionality, deploy them anywhere in the network, and still access a simple and consistent API at node level. Service developers: service implementation is done by following the structure and behavior imposed by the SSC and DPC meta-types. Annotated attributes allow developers to enhance the implementation with semantic information regarding relevant usage parameters, provided accuracy or energy consumption. These attributes can be easily extended to account for new requirements. In this way the service developer is not burdened with operational considerations, e.g., what the offered data resolution should be given a particular battery level. Service developers do not require any domain-specific or business knowledge. Network administrators: the network admin API gives them access to configure allocation, selection and composition strategies in the SMC, and adaptation, resource reservation and memory management strategies in the RM.
These strategies give the administrator the abstractions needed to define specific actions to be taken under changing system conditions that will result in compliance with expected quality levels and system lifetime, as described in section 3.5. In [9] Huygens et al. elaborate on the importance of providing the network administrator access to mechanisms that can influence the ensemble of running applications and thus fine-tune system functionality. The administrator is able to execute his responsibilities without the burden of implementing business functionality or low-level sensor programming. In our implementation, strategies are described with event-condition-action semantics in human-readable form, facilitating comprehension and extension (see the snippet below). In our model the network administrator is not restricted to a specific notation. Using the provided interfaces she is able to evaluate and enact required adaptations.

Event: if battery level > 80
Condition: for energy category = 2;
Action: do maxSamplingFreq = 100;
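In a hedged sketch, evaluating such an event-condition-action strategy amounts to a guarded state update. The sketch is in C++ for illustration (the prototype is implemented in Java ME), and the struct fields and function names are assumptions, not the middleware's actual API.

```cpp
// Illustrative node state inspected and modified by strategies.
struct NodeState {
  int batteryLevel;     // percent
  int energyCategory;
  int maxSamplingFreq;
};

// "If battery level > 80, for energy category 2, set maxSamplingFreq to 100."
void evaluateStrategy(NodeState& node) {
  if (node.batteryLevel > 80 /* event */ &&
      node.energyCategory == 2 /* condition */) {
    node.maxSamplingFreq = 100;  // action
  }
}
```

Because the strategy is plain data plus a guard, it can be replaced or extended (e.g., with monetary-cost conditions) without touching the components' functional code.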
The evaluation of these strategies is orthogonal to the running applications. This implies that network administrators need no a priori knowledge of which applications will be using the WSN. Determining the extent of their efficacy and the optimal strategies that provide the highest benefits in multi-purpose WSNs is the main focus of our future work.
5.2 Adaptability and Customize-Ability of the Middleware

Adaptability is the system's capacity to enact context-aware run-time reconfiguration. Strategies are the means by which one expresses reconfiguration actions that account for contextual conditions. We provide the SMC and RM abstractions, which enforce these strategies to achieve autonomic and lightweight run-time reconfiguration. The reconfiguration can be system-wide, e.g., using allocation and memory management strategies (see sections 3.5 and 3.8), or fine-grained, e.g., using adaptation strategies (see section 3.5). The strategies are currently implemented using event-condition-action notation, which can be easily understood and extended; e.g., component-level adaptation is currently limited to restricting the sampling frequency to save energy, but strategies may easily be extended to account for monetary costs or other factors. The introspection capabilities offered in the network admin API allow our system to provide run-time details on component dependencies and the execution of all services, which is essential information in reconfiguration efforts. Customize-ability is the system's capacity to fine-tune or extend its functionality at run-time without service interruption. As discussed in section 3, our loosely coupled system allows new services or additional clients to share a common pool of components without modification to any current service composition. The pool of components can be easily extended with the implementation of additional SSCs or DPCs with no modifications to existing services (see section 3.3). Application-specific functionality may be implemented and may leverage the node-level client API to achieve significant benefits from in-network processing. Furthermore, a running application may have varying QoD requirements, which can be easily expressed in the per-instance service request and transparently configured by the SMC (see sections 3.4 and 3.5).
6 Related Work

In this section we discuss the previously highlighted benefits in the context of the state of the art [1,4,5,11]. We considered approaches that contribute programming abstractions designed to provide application support in the context of multi-purpose WSNs. We narrowed the landscape further by selecting one approach from each of the more prominent models used in WSNs: service-oriented [1], modularized agent-like abstractions [4], content-based publish-subscribe [5] and database-oriented [11]. These approaches provide a high-level abstraction that allows each application to express its QoD and non-functional concerns, a mediation layer to manage service selection, and low-level abstractions to expose network resources and allow these to be shared. All these approaches are designed to be application-independent, and all offer the possibility of specifying some QoD requirements per query, request or subscription instance. In this context service requests, queries and subscriptions all serve the same functional purpose, expressing application interests, and commonly specify: the required service, the duration of the service, the temporal resolution or sampling frequency, and the spatial resolution or target location. Access to WSN resources is offered as software services using code modularization, mainly implemented with component-like abstractions.
6.1 Configuration in State of the Art

The main issue impacting configuration in these approaches is that they do not clearly separate operational concerns, and they all rely on an all-knowing programmer. This programmer is expected to know low-level platform-specific programming and the high-level implementation of business functionality, which implies domain-specific knowledge. Additionally, they need in-depth knowledge of network monitoring and management. This is only feasible in research-oriented WSN deployments, not in commercially viable ones. We briefly elaborate on the relevant differences between the evaluated approaches. Servilla [4] uses ServillaScript to express queries to be executed by the WSN, which requires the application developer to learn a scripting language similar to JavaScript. Additional complexity arises because there is no notion of time in queries, and application developers are expected to know how many transmission hops the query can execute, which is not easily estimated under changing conditions. Service specification requires the use of yet another programming language, ServillaSpec, which further burdens service developers. TinySOA [1] uses an extended event-condition-action syntax and a service-oriented query model to specify service requests; this is comparable to our service request specification but limited: service composition is not possible, and deploying application-specific components is not discussed. TinyCOPS [5] uses subscriptions to specify service requests and leverages the concept of subscription meta-data to express quality requirements for each subscription. Expressing application requirements in terms of subscriptions is rather straightforward but limited: service composition is not possible, and deploying application-specific components is not possible. TinyDB [11] uses SQL syntax to specify queries and related quality requirements.
It offers a rather static view of the WSN, since it abstracts away all functionality and presents the network strictly as a database. Retrieving sensed information and executing aggregation or filtering processes is possible, but the information is only accessible through a centralized user interface.

6.2 Adaptability and Customize-Ability in State of the Art

The evaluated approaches are all designed strongly toward either macro-programming (e.g., TinySOA, TinyCOPS, TinyDB) or a node-centric approach (e.g., Servilla). The former are usually characterized by higher-level abstractions that focus mainly on the behavior of the entire network. Node-centric programming generally refers to programming abstractions used to express the application processing from the point of view of the individual nodes. Macro-programming approaches usually abstract away node-level interactions, which limits customize-ability by restricting the possibility of deploying application-specific components in the network to leverage in-network processing and extend functionality. On the other hand, node-centric approaches allow for the possibility of deploying application-specific components but lack some higher-level abstractions to unburden application developers from implementing commonly used functionality, e.g., having to realize service composition. Our approach provides both:
high-level abstractions, e.g., the SMC, and node-centric abstractions, e.g., the RM and annotated components. This allows our middleware to provide both high-level functionality, e.g., service composition, and fine-grained control, e.g., component-level adaptation. None of the evaluated approaches [1,4,5,11] provides abstractions that enable modification or extension of reconfiguration actions, or introspection of run-time behavior. Our approach provides system strategies that inform reconfiguration actions both at system level and at node level, as well as introspection of component dependencies and of allocated requests with their respective parameters. We briefly comment on the more relevant differences in each of these approaches. TinyCOPS [5] only offers a notion of control information with soft semantics to inform subscriptions, but it is not clear to what extent this could influence run-time reconfiguration, and it is left up to the application developer to determine the control information to be used. It offers Service Extension Components (SECs), which can be used to extend functionality to meet application-specific requirements. However, these components cannot interact locally with other services; they need to subscribe with a centralized broker to receive events of interest, limiting the possibility of composition. Servilla [4] offers direct access to services on nodes through the implementation of tasks; these tasks implement application-specific functionality. It offers support for discovery, matching and binding, but the application developer is left to realize any needed service compositions. Using ServillaSpec the service developer can enhance service descriptions with semantic information, but this is only used during the service matching process, not to inform system reconfiguration. In TinySOA [1] the extension of functionality is possible by implementing new services. However, no support is offered for run-time reconfiguration or for service composition, e.g.,
sense, aggregate and persist. TinyDB [11] exposes the WSN strictly as a database. It offers support for service composition, which makes it possible to request sensor data and aggregate or filter it. However, the approach does not provide the possibility to add services at runtime or to deploy application-specific components and leverage in-network processing.
7 Conclusion

This paper presented a lightweight, component-based service platform for WSNs. We argue in favor of per-instance run-time configuration of QoD attributes, and we demonstrate how our approach leverages per-instance QoD configuration and a separation of operational concerns to achieve simpler configuration and improve the adaptability and customize-ability of the WSN. In the short term we plan to record a baseline of resource usage under concurrent use and varying system loads. We will identify key QoS attributes that may be representative of global system state. In the long term we plan to further investigate and thoroughly evaluate system strategies for multi-purpose WSNs.
Acknowledgments. The research for this paper was partially funded by IMEC, the Interuniversity Attraction Poles Programme of the Belgian State (Belgian Science Policy), and by the Research Fund K.U. Leuven for IWT-SBO-STADIUM [16].
References

1. Rezgui, A., Eltoweissy, M.: Service-oriented sensor–actuator networks: Promises, challenges, and the road ahead. Computer Communications 30, 2627–2648 (2007)
2. Basaran, C., Kang, K.: Quality of Service in Wireless Sensor Networks. In: Guide to Wireless Sensor Networks, Computer Communications and Networks. Springer, Heidelberg (2009)
3. Murphy, A., Heinzelman, W.: MiLAN: Middleware Linking Applications and Networks. TR-795, University of Rochester, Computer Science (November 2002)
4. Fok, C., Roman, G., Lu, C.: Enhanced Coordination in Sensor Networks through Flexible Service Provisioning. In: Field, J., Vasconcelos, V.T. (eds.) COORDINATION 2009. LNCS, vol. 5521, pp. 66–85. Springer, Heidelberg (2009)
5. Hauer, J., Handziski, V., Kopke, W., Wolisz, A.: A component framework for content-based publish/subscribe in sensor networks. In: Verdone, R. (ed.) EWSN 2008. LNCS, vol. 4913, pp. 369–385. Springer, Heidelberg (2008)
6. del Cid, P.J., et al.: Middleware for Resource Sharing in Multi-purpose Wireless Sensor Networks. In: IEEE Proc. NESEA 2010, China (November 2010) (in press)
7. Yu, Y., Rittle, L.J., Bhandari, V., LeBrun, J.: Supporting concurrent applications in wireless sensor networks. In: ACM Proc. of SenSys 2006, New York, NY, USA, pp. 139–152 (2006)
8. Steffan, J., Fiege, L., Cilia, M., Buchmann, A.: Towards multi-purpose wireless sensor networks. In: International Conference on Sensor Networks (August 2005)
9. Huygens, C., et al.: Streamlining development for Networked Embedded Systems using multiple paradigms. IEEE Software 27(5), 45–52 (2010)
10. SunSPOT, http://www.sunspotworld.com/ (visited July 2010)
11. Madden, S., Hong, W.: TinyDB: An Acquisitional Query Processing System for Sensor Networks. ACM Transactions on Database Systems 30(1), 122–173 (2005)
12. Sykes, D., Heaven, W., Magee, J., Kramer, J.: Exploiting Non-Functional Preferences in Architectural Adaptation for Self-Managed Systems. In: SAC 2010, March 22-26. ACM, New York (2010)
13. Lachenmann, A., Marron, P.J., Minder, D., Rothermel, K.: Meeting Lifetime Goals with Energy Levels. In: ACM Proc. of SenSys (2007)
14. Hu, H., Han, Y., Huang, K., Li, G., Zhao, Z.: A Pattern-Based Approach to Facilitating Service Composition. In: Jin, H., Pan, Y., Xiao, N., Sun, J. (eds.) GCC 2004. LNCS, vol. 3252, pp. 90–98. Springer, Heidelberg (2004)
15. Hughes, D., et al.: LooCI: A Loosely Coupled Component Infrastructure for Embedded Network Eccentric Systems. In: ACM Proc. of MoMM 2009, Kuala Lumpur (2009)
16. IWT Stadium project 80037, http://distrinet.cs.kuleuven.be/projects/stadium
A Lightweight Component-Based Reconfigurable Middleware Architecture and State Ontology for Fault Tolerant Embedded Systems

Jagun Kwon and Stephen Hailes

Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, UK
{j.kwon,s.hailes}@cs.ucl.ac.uk
Abstract. In this paper, we introduce a component-based software architecture that facilitates reconfigurability and state migration in a semantically correct manner for fault tolerant systems. The main focus and contribution of the paper is the ontology framework, based on object-orientation techniques, for coherent reconfiguration of software components in the event of faults at runtime, preserving state consistency and also facilitating state ontology evolution. Our proposed approach is realised in C++ and integrated in the lightweight middleware MIREA.
MIREA is a reconfigurable component-based middleware targeted at embedded systems that may not have abundant resources to utilise, such as sensor systems or embedded control applications. The middleware supports software componentisation, redundancy and diversity with different software designs in order to ensure independence from common operational/development errors. Moreover, any unforeseen errors can be dealt with by dynamically reconfiguring software components and restoring states.

Keywords: Component-based middleware, Reconfigurability, Embedded systems, State, Ontology, Fault tolerance.
1 Introduction

Today, software accounts for up to 40% of the total production cost in many embedded systems due to its complex, inflexible and proprietary nature [2]. This means that the cost of poor or repeat engineering in this area is extremely high, and that flexibility, reusability and reconfigurability are key factors in staying competitive in the market. Complexity can be managed most effectively if the underlying software systems support structured, standardised, high-level abstraction layers that encapsulate unnecessary details behind well-defined interfaces; this has the effect of reducing training effort, development time and cost. Moreover, since the cost of software maintenance is often as high as the cost of initial development, the ease with which it is possible to flexibly deal with faults and reconfigure software components in operational systems is also critical. In this sense, the use of middleware is an attractive solution.

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 107–120, 2011.
© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
J. Kwon and S. Hailes
In this paper, we describe a component-based software architecture that facilitates reconfigurability and state migration in a semantically correct manner for fault tolerant embedded systems. The main focus and contribution is the ontology framework, based on object-orientation techniques, for coherent reconfiguration of software components in the event of faults at runtime, preserving state consistency and also facilitating state ontology evolution. Our proposed approach is realised in C++ and integrated in the component-based middleware MIREA [19]. The middleware has been developed to address some of the important requirements in the development of modern embedded systems, such as support for predictability, reconfigurability, fault tolerance, portability across heterogeneous platforms, object-oriented programming and component-based composition, to name a few. MIREA facilitates software reuse and runtime reconfigurability to reduce development and maintenance cost and to cope with unforeseen faults. States can be migrated in a coherent manner as long as correct transformation logic is specified by following the model proposed in this paper. In the remainder of this paper, we first describe the middleware's component model and the kernel's key features, such as component reconfiguration and rebinding. Section 3 presents our proposed model of state ontology for consistent state migration and state ontology evolution based on object-orientation techniques. The next section briefly reviews related work before drawing a few conclusions.
2 The MIREA Middleware

Middleware is commonly defined as a software layer that lies between the operating system and application software, often in distributed systems. Its importance has been growing not only in enterprise systems, but also in embedded systems, where reuse of legacy code and standard development environments are becoming increasingly important. According to [3], there are four functions of middleware:

• Hiding distribution, i.e., the fact that an application is usually made up of many interconnected parts running in distributed locations should not be apparent;
• Hiding the heterogeneity of various hardware components, operating systems and communication protocols;
• Providing uniform, standard, high-level interfaces to the application developers and integrators, so that applications can be easily composed, reused, ported, and made to interoperate;
• Supplying a set of common services to perform various general purpose functions, in order to avoid duplicating efforts and to facilitate collaboration between applications.

In other words, the use of middleware makes software development easier, more reliable and cost-effective by providing common programming abstractions, hiding low-level software and hardware details, and facilitating reuse of established, proven components. In the context of embedded systems, the use of middleware is also well justified due to the cost of development and the bespoke nature of the systems. However, since most embedded systems are resource-conscious or constrained,
light-weight systems are much to be preferred. Along with reconfigurability, scalability and real-time quality-of-service features, this has been one of the key guiding principles in the development of the MIREA middleware.

2.1 Component Model

MIREA assumes a generic contract-based component model, in which a component can specify functionality that it requires and/or provides using a well-defined interface. The model consists of ComponentTypes, Components, Interfaces, Receptacles, States and Connectors. A Component is a runtime instance of a ComponentType. A ComponentType can export one or more Interfaces, through which a given component provides a set of functionalities to other components (i.e., in the form of a set of C/C++ functions in the middleware). A Component can have any number of Receptacles, through which a set of required functionalities is specified. Figure 1 below illustrates the elements of the component model. A component can also have an associated component-wide state that is only accessible from within the containing component; a state can consist of any set of data types and/or primitive variables. Connectors are a specialised form of component that performs intermediary actions if required, for instance, in order to monitor, log or encrypt data for security reasons. Note that a connector is simply a component and, as such, can be realised as a composite component, with component state and connectors of its own if this makes sense.
Fig. 1. Exemplified Component Model Elements
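The component-model elements above could be declared, in a minimal C++ sketch, as follows. The class names mirror the model; the member details are illustrative assumptions rather than MIREA's actual definitions.

```cpp
#include <string>
#include <vector>

// Illustrative component-model elements (cf. Fig. 1).
struct Interface  { std::string name; };          // provided functionality
struct Receptacle { std::string requiredName; };  // required functionality

struct ComponentType {
  std::string name;
  std::vector<Interface> interfaces;   // one or more exported Interfaces
};

struct Component {                     // runtime instance of a ComponentType
  const ComponentType* type;
  std::vector<Receptacle> receptacles; // any number of Receptacles
  std::vector<int> state;              // component-wide state, internal to the component
};
```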
2.2 Middleware Kernel

The middleware kernel provides a framework for realising the component model in practice, so that component types can be defined with implementation details and a
set of receptacles and interfaces in the framework, and they can be instantiated using the middleware's services at run-time. Underneath the exposed middleware API, there can be certain cross-platform components that make use of operating system services or hardware abstraction layers. User-space interrupt handlers and/or hardware drivers may be situated at this layer or embedded within the operating system. On top of the middleware API lie either generic or specific services for embedded applications.

2.3 Core Services

The vast majority of embedded software is written in C and C++. By supporting C++, a large body of existing software libraries, such as NIST's RCS NML [1], can be reused, integrated seamlessly with other components running on this middleware, and used to build new applications cost-effectively while taking advantage of the object-oriented design and programming methodology. In order to support such flexibility and scalability without incurring excessive runtime overheads, the middleware provides the following categories of core services:

• Loading and unloading of ComponentTypes
• Instantiation and destruction of Components
• Registration and acquisition of Interfaces and Receptacles
• Connection between Interfaces and Receptacles
• Registration and acquisition of Components' States
• Destruction of Connectors
• Check-pointing of States
• Support for State evolution
All of the services above are runtime activities, whereas the process of defining a new ComponentType, Interface, Receptacle, Connector or State is static in nature: it takes place at the application design stage, where the user may consider communication overheads between components and the complexity of the application composition. Connections between components can be reset and reconfigured by first destroying an existing connector instance and then reconnecting the components to a new type of interface and receptacle. After this, any invocations made on the given Receptacle will be redirected to the newly connected Interface instance, hence a new/different Component instance.

2.4 Inter-Component Calls, Interfaces and Receptacles

The middleware allows pure virtual C++ classes to be used as Interfaces and Receptacles. In C++, pure virtual classes only define the prototypes and signatures of functions that must be implemented in an inheriting class if they are to be instantiated as objects. By doing so, we enhance the type-safety of applications by ensuring that the pairing between interfaces and receptacles is type-checkable in C++. Figure 2 illustrates this point, where two components A and B agree on the use of a common interface for their Receptacle and Interface. Component A expects to call operations op1 and op2, while component B implements the relevant operations and provides an interface to them by inheriting the agreed class and instantiating an Interface object. After connection, whenever A calls functions in the Receptacle, those of the matching Interface object will eventually be invoked.
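The interaction just described can be sketched as follows. The pure virtual class is the agreed contract; the operation bodies and the connection mechanics are illustrative assumptions, not MIREA's actual code.

```cpp
// Agreed contract between components A and B (pure virtual class).
class CommonInterface {
 public:
  virtual ~CommonInterface() = default;
  virtual int op1() = 0;
  virtual int op2(int x) = 0;
};

// Component B implements the operations by inheriting the agreed class.
class ComponentB : public CommonInterface {
 public:
  int op1() override { return 42; }
  int op2(int x) override { return x * 2; }
};

// Component A holds a receptacle (a pointer typed by the agreed class); after
// connection, calls through the receptacle reach B's Interface object.
class ComponentA {
 public:
  void connect(CommonInterface* iface) { receptacle_ = iface; }
  int useService() { return receptacle_->op1() + receptacle_->op2(3); }
 private:
  CommonInterface* receptacle_ = nullptr;
};
```

Because both sides are typed by the same pure virtual class, an attempt to connect mismatched interface and receptacle types is rejected by the C++ compiler.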
A Lightweight Component-Based Reconfigurable Middleware Architecture
111
Fig. 2. Component Connection Model
Type-checking of an arbitrary component is achieved by making use of Xrtti [15], an automated reflection mechanism for C++, which ensures that users are not required to provide introspective class information manually.

2.5 Component Implementation and Migration of Legacy Code

There is bound to be reusable legacy code in many software engineering projects, and migration efforts for such code must be kept to a minimum. Creating a MIREA component is a straightforward process: one can either reuse existing code or create a new C/C++ project implementing all the required interfaces and receptacles with the middleware's conventions in mind, then compile and link the code into a dynamic shared library (for instance, .so files in the case of Linux or Unix). The middleware requires every component to provide a pair of construct() and destruct() functions, and to register the interfaces and receptacles it operates on in the construct() function. The destruct() function is used to de-allocate memory reserved for states or any other objects within the component. So long as a piece of code follows these undemanding rules, it can be seen and used as a MIREA component. Because of this simplicity, the middleware has a negligible impact on the performance of the whole system; refer to [19] for overheads and performance figures. Depending on the architecture of the deployed system, components may reside locally or be downloaded to embedded nodes over the network, then loaded and utilised using the middleware's services. Managing the location of a component is up to the system's designer and should be handled strategically, for instance, to reduce network traffic or lower security risks.
3 State Ontology for Check-Pointing and Reconfiguration

3.1 Problem Description

The ability to reconfigure software components dynamically is beneficial in that it provides flexibility in design and implementation while reducing development cost by means of reusing proven artefacts. One can also delay design decisions and opt for alternatives in case of faults at runtime or as a result of changes in requirements.
112
J. Kwon and S. Hailes
State Migration Problem

However, as software components are dynamically reconfigured or replaced in an embedded system, there are situations where previous states must be preserved in a semantically correct manner and transferred to the new, replacing components in order to ensure a correct, smooth transition of operation. There is no standard way of doing this in widely used languages like C/C++ or in any middleware for embedded systems; i.e., there are no schema- or ontology-based approaches to the migration of states. It is also possible for knowledge structures or ontologies (we use the term ontology rather informally and it can be interchanged with schema here) to evolve during the life of the system, and hence there is a need for:

• A means of expressing/representing an ontology for a knowledge base and its instances for the purpose of storing and querying states,
• A means of extending an ontology and reasoning about changes between different versions, and
• A lightweight programming interface for accessing and reasoning about such knowledge base schemas and instances.

Illustration

Imagine that there is a software component called CPathPlanner2009 that deals with path planning for a laser-guided vehicle (LGV) in a factory. CPathPlanner2009 implements algorithm a, which is later found to be inefficient and is to be replaced with a newer one. Hence, a newer component CPathPlanner2010 that implements algorithm b is about to be deployed while the LGV is in operation or recharging its batteries. CPathPlanner2009 currently holds information regarding the destination and source of pallets, the locations of several via-points and charging stations, the number of other LGVs operating at the same time, and the factory floor plan. Along with this static information, it also stores various intermediate values and results of calculation regarding where it is heading, the current speed of the vehicle, various sensor values (e.g., collision sensor values), the distance to the nearest via-point and so on.
Such information may need to be transferred and reused in a semantically correct manner in order to prevent abrupt operation or restart of the LGV’s controller system. However, some of the state information may be unnecessary or have different meanings in different components; for instance, algorithm a requires the number of other LGVs while algorithm b does not because it relies more on the sensory input values than pre-defined world-model parameters.
Fig. 3. State Migration Example
State Ontology Evolution

The structure of the state may also evolve during the course of system development and the lifetime of the components. In other words, the state information that is stored, checkpointed, or required by a new component may differ from that currently used, meaning that it is possible to make changes to the existing state information structure. Hence, it is important to be able to track changes in state structures in order to ensure information compatibility; for instance, an existing component may still use an old state structure to store and transfer state information to a new component. This new component must be able to interpret the previous state structure automatically, and in order to do that, it has to reason about the changes made between the different versions of the state structure. This is also useful in check-pointing for fault-tolerant systems that store states at regular time intervals or on sporadic requests. States must be stored in a format that can be interpreted by other components that may attempt to recover from faults or errors at a later stage using one of the previously stored states.

3.2 Proposed Approach: State Ontology

First, let us consider the fundamental motivation for developing an ontology, i.e., to share a common understanding of the structure of information among people or software agents [17]. This also implies reuse and analysis of domain knowledge. In practice, developing an ontology involves [18]:

• Class definition in the ontology,
• Arrangement of the classes in a taxonomic hierarchy (i.e., subclass–superclass),
• Definition of slots or attributes and description of the allowed values for these slots,
• Instantiation by filling in the values for slots.
It is our observation that this can be captured in object-oriented programming languages, although the focus there is more on operational knowledge.

Proposed Approach

We believe that object-orientation techniques realised in the form of C++ classes can be used as a means of expressing, maintaining and reasoning about an ontology, especially in the context of software development for fault-tolerant embedded systems that must ensure state consistency. Take check-pointing for example: states must be stored constantly in the right form, preserving consistency. That way, if faults develop in one of the current components, a new component, which is unknown at the time of dispatch, can restore the states and operate smoothly after the faults are isolated and the subject components are reconfigured using the middleware's services. In such a process, there can be more than one type of check-pointed state that a component or software agent has to deal with. Thus, states should be coherently organised along with transformation rules or logic.
Fig. 4. State ontology evolution realised in the form of class hierarchy
The reasons for using C++ instead of other, declarative languages are as follows:

• It is easy to adopt in programming terms. One can even embed operational knowledge by means of C++ objects and functions, since certain types of knowledge can only be expressed in programming terms.
• There is less overhead involved in reasoning about and accessing knowledge base instances (i.e., objects), making it suitable for resource- and performance-sensitive applications (especially considering that the middleware itself is written in C++).
• Schema/ontology evolution is possible based on object-orientation techniques, by means of inheritance and encapsulation.

The possible types of ontology evolution in C++ are:

• Addition of new fields and functions: inheritance with new members
• Changes to fields and functions: overriding of the relevant fields and functions
• Removal or change of semantics in functions: nullifying, or redirecting with a wrapper that overrides a function

However, there are some downsides to using a procedural language:

• The expressive power of the language may be limited.
• One may have to depend on a reflection mechanism in order to reason about the ontology/schema at runtime, which could cause extra overheads at runtime or be complex to learn.
• It is difficult to learn or understand for those who are not familiar with C++ or object-orientation concepts.

But, since we are dealing with embedded software engineers who are likely to be familiar with programming in C or C++, the last point is not a real obstacle. C++ has been widely used not just by computer scientists, but also by a wide range of
scientific and engineering communities, and the language is standardised. In addition, it is not necessary to learn the whole language, but only the class-related subset, which is reasonable and comparable to learning a dedicated ontology language such as OWL (with tools such as Protégé). Any tool that can generate code from UML class diagrams could also be employed in constructing an ontology in C++. The expressive power of the language may seem limited at first compared to other purpose-built languages. But, for the purpose of storing readily usable information specific to the embedded systems domain, we believe that C++'s expressive power is adequate. As a bonus, behavioural or operational knowledge in the form of functions can be captured as well. Moreover, reflection mechanisms for C++ are available in a few different flavours, for instance, ones that make use of templates and macros to document extra information about classes within source code, compiler- and debugger-based approaches, and so on. We find that Xrtti [15] is one of the most convenient means of accessing meta-class information at runtime. It provides libraries and APIs that allow users to access meta-class information at runtime: its front-end takes original C++ header files and generates Xrtti-compliant headers that need to be included in the code that invokes Xrtti's APIs to access meta-class information.

Usage of the Proposed Model

The state ontology is of use in:

• State migration in fault-tolerant systems (e.g., when recovering from faults in a reconfigurable control system, if a component fails, its state could be transferred to a new, different component), or
• Check-pointing of component states and restoring them when necessary.

The conceptualisation or specification of an ontology may change over time as we progress with, or become better aware of, the problem domain. However, to preserve the ontology's consistency, one must keep track of changes in the ontology as it is updated.
In particular, when multiple parties (such as multiple threads or even distributed agents) communicate with each other while assuming different knowledge representations and versions, they will eventually conflict with each other unless the representations are consistently mapped and the changes and differences are resolved. This can happen when a component with a state ontology is replaced, due to software faults, with another component that assumes a different version of the state ontology. Interoperability can be supported by keeping track of changes and converting states into different versions when required. A state ontology in C++ can evolve by means of inheriting from a parent class and overriding attributes and functions, or even by nullifying a function (overriding it with an empty one) or redirecting calls to the correct function in order to remove functionality or keep consistency. In other words, according to the inheritance hierarchy, which can be seen as a version history, changes between parent and child classes can be tracked and reasoned about. To illustrate this point, imagine that component CPathPlanner2009 has developed a fault and the most recent state has been checkpointed in state s, an instance of PathPlannerState2009 (see Figure 5 below for an illustration). An alternative
component CPathPlanner2010 exists, but it has been designed to work with a more evolved ontology, PathPlannerState2010, which contains additional members and certain changes in semantics that were not foreseen and are now part of new state instances. Each attribute has an associated getter and setter function, some of which the updated state ontology overrides or nullifies, with additional transformation logic for converting between the different versions, i.e., to the immediate parent state and vice versa. In order to reduce downtime, the old component is replaced dynamically with the new one, taking over the most recent stable checkpointed state. Since the state class from the old component is the parent of the new state ontology class, they are compatible with each other although certain changes are present in the new one. The mechanisms to convert an old state into a new one are embedded in the new, overriding, or nullifying functions. For instance, if a state instance of PathPlannerState2009 is adopted in place of PathPlannerState2010, and the function getDestination is updated by overriding it, the new version of getDestination may contain program logic that redirects calls to its parent (or super) class when it needs to obtain or associate state values with its previous version. In other words, if the migrated state instance is of type PathPlannerState2009 (i.e., the old one), calls to getDestination can be redirected conditionally to the original one, and the result converted into a new state value in a semantically correct manner given correct transformation logic within the new function.
Fig. 5. Illustration of the proposed state ontology framework
Implementation for State Transformation

The idea above can be realised using a copy constructor in C++, as briefly sketched below in a piece of skeleton code. Assume that there are state ontology
classes StateA and StateB for the sake of simplicity. When an object of StateA is passed between two components (where one is replaced by the other) and transformed into the more specialised StateB for use in the new component, more information may need to be deduced along with the original contents of StateA. For instance, components utilising StateB may be driven more by runtime sensor input than by a static world model, and hence there are additional information fields for storing sensor values in StateB. However, a large set of data fields is inherited when converting StateA into StateB.

StateA
    Destination: Position
    Source: Position
    ViaPoints: Position[]
    ChargingStations: Position[]
    OtherLGVs: Position[]
    Orientation: int
    Speed: int
    CollisionSensor: Boolean
    ModeOfOperation: OpMode
    CollisionAvoidanceBehaviour: func *()

StateB
    Destination: Position
    Source: Position
    ViaPoints: Position[]
    ChargingStations: Position[]
    Orientation: int
    Speed: int
    CollisionSensor: Boolean
    ProximitySensorFront: int
    ProximitySensorRear: int
    NearestViaPoint: Position
    ModeOfOperation: OpMode
    CollisionAvoidanceBehaviour: func *()
In other words, when StateA is converted into StateB, the following copy constructor can determine automatically what information is to be kept, while what other information needs to be deduced, and how, depends on the version of the source state object that is passed in as a parameter to the copy constructor. See the case for (i.getVersion() < MAJOR_VERSION), where some of the values are assigned normally while others are deduced by extra calculations (for instance, the representation of locations was based on a static world model, but now incorporates GPS input). However, when such a conversion is performed backwards, i.e., from a more specialised child state into a parent one, it is always the responsibility of the newer child class to convert its type into a previously defined parent type correctly; such information is encoded in a user-provided function convert().

class StateB : public StateA {
  static const int MAJOR_VERSION = 2; // class-wide version of this state type
  static const int MINOR_VERSION = 0;
  …
  // Copy constructor.
  // Check the version of each class when assigning one to
  // the other, and determine how they should be converted.
  StateB(const StateB& i) {
    // First check if the given state i belongs to the same state hierarchy
    …
    if (i.getVersion() == MAJOR_VERSION) {
      // No type conversion is required, just a plain copy of states.
      setDestination(i.getDestination());
      setSource(i.getSource());
      // Same for the other attributes
      …
    }
    else if (i.getVersion() < MAJOR_VERSION) {
      // If the version of i is lower (i.e., one of the parents),
      // convert i into this object's type.
      if (i.getVersion() == MAJOR_VERSION - 1) {
        // Assume the locations are encoded differently in StateB.
        setDestination(i.getDestination() * getCurrentGPSPos() * x);
        setSource(i.getSource() * getCurrentGPSPos() * y);
        setViaPoints(i.getViaPoints());
        // may need further calculations to deduce values…
      }
      else if (…) {} // This conversion logic could vary depending on the versions
    }
    else if (i.getVersion() > MAJOR_VERSION) {
      // If the version of i is higher (i.e., one of the children),
      // delegate the type conversion task to i (thus, i into this).
      // It is always the newer (child) class that knows how to
      // convert its (child) type into a previous (parent) type.
      tmp = i.convert(MAJOR_VERSION);
      // i is converted into a different version (i.e., to this version).
      // If i is of type StateC, StateC's convert() will be called,
      // copying itself into the requested VERSION of the type,
      // and calling its superclasses recursively if necessary.
    }
    // Object i may be freed up after state migration
  }

  ControlState convert(int ver) {
    // If a backward state conversion is requested (into a lower version)
    if (ver < MAJOR_VERSION) {
      return static_cast<const StateA&>(*this).convert(ver);
    }
    else if (ver == MAJOR_VERSION) {
      // Now clone/deep-copy and return the object's reference
      //…
      return *lowerCloned;
    }
  }
};
Middleware's Support for State Ontology

The middleware defines a special class named ControlState. This is the top-level framework class from which every state ontology class must inherit. By overriding its member functions, the user can specify how a specific state instance is serialised, deserialised, cloned, check-pointed, and restored. The middleware also provides a special macro called CHECKPOINT, which can be used to specify where and when the user wants to checkpoint states within the application code. When the middleware encounters a call to CHECKPOINT, it will call the subject state's checkPoint() function to store the state; this function is expected to be provided by the user or inherited from a parent class.
4 Related Work

A large number of middleware systems have been developed for different purposes. However, for resource-conscious embedded systems, one often settles for less dynamic or less flexible options because of predictability and safety concerns. In most cases, it is also logical to rule out heavyweight middleware systems that can cause long, unpredictable delays and are difficult to reason about. Generally speaking, CORBA-based middleware tends to have a large memory footprint and is unfavourable for resource-constrained embedded applications. Compared to RUNES [7], MIREA [19] supports C++ components, type-checking and QoS-related reasoning. OROCOS [10] is specifically designed for robotics applications. It comes with kinematics and dynamics libraries and real-time toolkits. Its component model is comprehensive and provides a rich set of features, including scripting. For lightweight embedded systems like wireless sensor systems, MIREA has a smaller footprint and better flexibility. In [13] and [14], build-level component models are discussed. Although targeted at consumer electronics software, they support variability/reconfigurability at build time and are lightweight in terms of development effort. However, they do not provide a runtime middleware kernel or a state migration framework.
5 Conclusions and Future Work

In this paper, we have introduced a component-based middleware specifically designed for resource-constrained embedded systems, together with its proposed state migration approach for fault-tolerant systems. MIREA has a small memory footprint (34 KB) and low runtime overheads [19], yet is component-based. It facilitates software reuse and runtime reconfigurability to reduce development and maintenance costs and to cope with unforeseen faults. States can be migrated in a coherent manner as long as correct transformation logic is specified following the model proposed in this paper. Currently, we are in the process of applying the proposed model in a real scenario that involves LGVs (laser-guided vehicles) in a factory. As future work, we will evaluate the effectiveness of the approach and the middleware, and report on the findings.

Acknowledgments. This work has been supported by the Seventh Framework ICT Programme of the European Union under contract number ICT-2007-224428 (CHAT).
References [1] Gazi, et al.: The RCS Handbook, Tools for Real-Time Control Systems Software Development. John Wiley & Sons, Inc., Chichester (2001) [2] NGMS IMS (Next Generation Manufacturing Systems-Intelligent Manufacturing System) Research Reports, “Scalable flexible manufacturing”, Advanced Manufacturing (March 2000), http://www.advancedmanufacturing.com/March00/research.htm
[3] Krakowiak, S.: What is Middleware, ObjectWeb. A more complete version is available as "Middleware Architecture with Patterns and Frameworks" (2003), http://middleware.objectweb.org/, http://sardes.inrialpes.fr/~krakowia/MWBook/Chapters/Intro/intro.html
[4] Xenomai Official Website, http://www.xenomai.org/
[5] Morton, Y.T., Troy, D.A., Pizza, G.A.: An Approach to Develop Component-Based Control Software for Flexible Manufacturing Systems. In: Proc. of the American Control Conference, Anchorage, AK, May 8-10 (2002)
[6] Stroustrup, B.: The C++ Programming Language (1997)
[7] RUNES, http://www.ist-runes.org
[8] Costa, P., Coulson, G., Mascolo, C., Picco, G.P., Zachariadis, S.: The RUNES Middleware: A Reconfigurable Component-based Approach to Network Embedded Systems. In: Proc. of the 16th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC 2005). IEEE Press, Los Alamitos (2005)
[9] NIST, 4D/RCS: A Reference Model Architecture for Unmanned Vehicle Systems, Version 2.0, http://www.isd.mel.nist.gov/projects/rcs/ref_model/coverPage3.htm
[10] OROCOS, http://www.orocos.org/
[11] Szyperski, C.: Component Software – Beyond Object-Oriented Programming, 2nd edn. Addison-Wesley, Reading (2002)
[12] Szyperski, C.: Component Technology: What, Where, and How? In: Proc. of the 25th Intl. Conference on Software Engineering (2003)
[13] van Ommering, R., van der Linden, F., Kramer, J., Magee, J.: The Koala Component Model for Consumer Electronics Software. IEEE Computer 33(3) (March 2000)
[14] Park, C., Hong, S., Son, K., Kwon, J.: A Component Model Supporting Decomposition and Composition of Consumer Electronics Software Product Lines. In: Proc. of the 11th IEEE Intl. Software Product Line Conference (2007)
[15] Xrtti: Extended Runtime Type Information for C++, documentation and download, http://www.ischo.com/xrtti/
[16] Gat, E.: On the Role of Stored Internal State in the Control of Autonomous Mobile Robots. AI Magazine 14(1) (1993)
[17] Gruber, T.R.: A Translation Approach to Portable Ontology Specification. Knowledge Acquisition 5, 199–220 (1993)
[18] Stanford University: Ontology Development 101: A Guide to Creating Your First Ontology (Protégé Documentation), http://protege.stanford.edu/publications/ontology_development/ontology101.html
[19] Kwon, J., Hailes, S.: MIREA: Component-based Middleware for Reconfigurable, Embedded Control Applications. In: Proc. of the IEEE International Symposium on Intelligent Control, ISIC (2010)
Distributed Context Models in Support of Ubiquitous Mobile Awareness Services

Jamie Walters, Theo Kanter, and Roger Norling

Department of Information Technology and Media, Mid Sweden University, Sundsvall, Sweden
{jamie.walters,theo.kanter,roger.norling}@miun.se
Abstract. Real-time context-aware applications require dynamic support reflecting the continual changes in context. Architectures that distribute and utilize the supporting sensor information within the constraints of publish-subscribe systems provide sensor information in primitive forms, requiring extensive application-level transformations and limiting the dynamic addition and removal of sources. Elevating sensors to first-class objects in a meta-model addresses these issues by applying ontological dimensions in direct support of context. This paper proposes an extension of such a model into a distributed architecture co-located with context user agents. This arrangement provides clients with a model schema which continually evolves over sensor domains. In addition, the evolving model schema represents an accurate temporal view of a user's context with respect to the available sensors and actuators.

Keywords: Object-Oriented, Context Awareness, Peer-to-Peer, Context Agents.
1 Introduction
The dynamic nature of context information creates support for the deployment of real-time dependent applications and services across stationary as well as mobile networks. These applications, while becoming increasingly ubiquitous, provide users with information and services outside the scope of current stationary computing. Enabled devices such as mobile phones, television sets and IPTV boxes are all aiming towards this service provisioning paradigm of "everywhere computing" [18]. Of importance is their ability to apply computing to social situations; "mediating" by provisioning, delivering or acting upon user-based information [19], [11]. Such information is regarded as user context information [22] and drives the adoption of ubiquitous computing services and applications. Ubiquitous services will enable the realization of seamless media. Service providers need to be able to deliver media solutions across a variety of platforms and scenarios based on user preferences. This would create globally accessible media distribution and provide seamless access and transfer as users navigate around the Internet of "Media" Things [21]. Social media and social networking are enhanced by the ability to manipulate presence profiles and information

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 121–134, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
122
J. Walters, T. Kanter, and R. Norling
in real-time in response to changes in context, creating true real-time online presence and interaction [12]. As these applications become increasingly mobile, the need for models capable of representing and deriving context within real-time limits increases. Architectures such as the IP Multimedia Subsystem (IMS) [3] and the SenseWeb [13] project present opportunities for deriving and persisting information in support of context. However, such centralized approaches are network-centric and require clients to retrieve context information from service portals on the Internet. Mobile broadband access via 3G mobile systems can deliver access to context information from such service portals, but the centralized architectures impose severe limitations on real-time context-aware applications, and also suffer from unreliable connectivity. In response to these challenges, we seek a data-centric approach which allows real-time context-aware applications to leverage distributed access to context information by relocating context information closer to end-points. Improvements in offering seamless connectivity via heterogeneous mobile broadband systems would further strengthen the approach. One solution is to distribute the sensor information in a P2P overlay, as achieved with DCXP [14]. However, such approaches either assume that sensor information is represented as context information, or that applications and services gain access via support which associates sensor information with context. Consequently, access to raw sensor information does not negate the need to access the metadata required for applying this information to some presence or context model in support of applications and services.
Architectures such as DCXP [15] resolve this problem by offering interworking with a centralized platform where the required sensor information can be accessed in real-time. However, in order for applications and services to create intelligent behavior through the use of agents leveraging context information, the support is required to access presence information that is organized in a centralized model. This, in turn, raises issues around the ability of the support to scale well. Therefore, there exists a need to substitute the centralized element of the DCXP hybrid while capitalizing on its distributed properties. This, along with the application of an intelligent context representation model, can create a solution for enabling distributed context in support of the proliferation of ubiquitous computing applications and services.
2 Motivation
There exists a need for real-time distributed access to sensor information. Such information, organized into models representing users and their interactions with the real world, motivates research into supporting systems and technologies. With the Internet of Things moving from a concept to a reality, dynamic yet robust context-centric architectures must be created in direct response to the need to fulfill the demands for information created by the integral user-service relationships.
Fig. 1. Context-Awareness Support Model
Our previous work with the DCXP protocol [14] explored distributed approaches to gathering and sharing sensor information in support of context. This provided the sensor information needed to create simple applications based on user context. However, more advanced applications and services, such as agent-based computing, require context information to be presented and described in an intelligent manner. This adds meaning to the sensor data, the entities they describe and the sensors themselves, and mandated the type of general architecture shown in Figure 1. At its base, such support requires real-time access to sensor information generated by users or their environment. We deem solutions such as IMS [3] or SenseWeb [13] to be inadequate owing to their centralization. They are regarded as non-scalable with respect to the real-time properties mandated by the dynamic behavior of users and their environment. The DCXP approach produced response times adequate to support real-time context-dependent services. Furthermore, it proved that distributed systems were more capable of achieving this than centralized approaches building on mobile or web services. The current DCXP architecture is centered around the dissemination of context information as raw sensor values. While this provided the sensor information needed to create simple applications based on user context, more advanced applications and services such as agent-based computing require context information to be presented and described in an intelligent manner, adding meaning to the sensor data, the entities they describe and the sensors themselves. To this end, we proposed the use of the Context Information Integration model (CII) [8], an object-oriented model supporting context-aware applications.
124
J. Walters, T. Kanter, and R. Norling
There remains, however, the question of scalability within this model, as its present implementation is a centralized solution. It creates a repository over a relational database, with context user agent (CUA) nodes interacting with context information as required. Simply distributing the current model across the DCXP framework would create ubiquitous islands of context information and, subsequently, obstacles to seamless reasoning over the global data or an applicable subset. The simple approach of distributing a relational database, as in [26], and drawing on the advanced research in database distribution is not applicable: such a distribution assumes the communication reliability needed to maintain database integrity across wide area networks, which cannot be guaranteed in heterogeneous mobile scenarios [2], [27]. It is further undermined by the fact that relational databases are highly inefficient at supporting real-time data manipulation, evolution and querying. Another approach would be to use a cloud data store such as BigTable [4], capable of enabling broad access to context information. Such data, however, is not represented in an intelligent manner; the resulting support would simply replace the persistence layer within the architecture while offering no additional knowledge to the agents or applications residing on top. A solution should also consider that the dynamic character of context is expressed in the continual changing of the sensor information to which a user has access. As an entity changes state in real time, it will encounter new sensors or sensor information, for instance on entering a building or vehicle. This requires that a schema of the context information available about a user be maintained and kept current; an evolving meta-model as suggested in [17]. In solutions such as IMS, such user information is centralized, with the scalability issues that entails.
With solutions such as [9], [24], [10], [Kawakami, 2008], this information could be stored within ontologies. This, however, would be subject to the performance issues raised in [28]. Supporting such dynamic scenarios also requires a means of quickly and efficiently locating relevant sensors within the framework, as well as a standardized means of addressing and naming these sensors within disparate infrastructures. Our research in this direction is further motivated by the benefits that arise when meta-data with some added value is given to sensors in a distributed environment. The concept is that, while sensors give rise to context information, they themselves must also possess some context, determining relations with other sensors and entities within the framework. In order to meet the requirements of providing real-time distributed context information, we must leverage the benefits of distribution coupled with the intelligent representation of context within the CII model. Our approach requires that context be represented in an intelligent manner capable of supporting the range of services dependent on context information, but be made available in a manner that is readily accessible across the devices of a distributed architecture, indifferent to network heterogeneity.
3 Approach

3.1 Distributed Context eXchange Protocol
DCXP is an application-level, XML-based context exchange protocol which offers reliable communication among nodes that have joined a peer-to-peer network, as illustrated in [15]. It utilizes a distributed hash table based on Chord [25] for message forwarding and name resolution. Any device on the Internet may implement the DCXP protocol and register with the architecture in order to share or digest context information. The original DCXP protocol was limited to exchanging sensor information within a publish/subscribe pattern. While DCXP works as expected for exchanging simple context information, we need to make several changes in order to enable support for creating context models. We replace the XML format for exchanging DCXP messages with a JSON-based [6] format without making any significant changes to the message structures. We thereby aim to gain speed [20] in constructing, disseminating and digesting messages, with an emphasis on the resource-limited devices that would participate in the overlay. Further to this, we introduce a TRANSFER and a SET primitive to complement the existing protocol set. We reason that in a distributed environment, a sensor will not always be managed closest to the end points requiring its values. As such, a time-critical application could be adversely affected by the overheads incurred even within a minimalistic P2P implementation. Where this occurs, rather than adhering to the established publish-subscribe protocol, a node can use the TRANSFER primitive to request that responsibility for the sensor be relocated locally. One primary difference, however, is that the TRANSFER primitive, unlike the remaining protocol set, can be accessed by the applications and services above the CII model. This allows applications and services to influence such a decision within the DCXP protocol.
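The JSON-based message format is not specified here, so the following sketch is purely illustrative: a hypothetical TRANSFER request whose field names are our own invention, not the actual DCXP message structure, showing the kind of compact message the move from XML to JSON enables.

```python
import json

def make_transfer_request(source_uci, target_uci, sensor_uci):
    """Build a hypothetical DCXP TRANSFER message in a JSON-based
    format.  All field names are illustrative assumptions."""
    return json.dumps({
        "primitive": "TRANSFER",
        "from": source_uci,
        "to": target_uci,
        "payload": {"sensor": sensor_uci},
    })

# A node asks that responsibility for a remote sensor be relocated to it.
msg = make_transfer_request(
    "dcxp://alice@example.org/phone",
    "dcxp://bob@example.org/gateway",
    "dcxp://bob@example.org/home/temperature")
decoded = json.loads(msg)
```

Such a message is both smaller on the wire and cheaper to parse on resource-limited devices than its XML equivalent.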
The SET primitive extends DCXP to accommodate actuators, such that we can now permit applications and services to modify the environment based on context information. This more accurately represents the sensor-actuator end points of underlying Wireless Sensor Actuator Networks and accommodates scenarios such as intelligent homes, where climate control is a combination of both sensing the temperature and adjusting it to suit user preferences. The URI-based Universal Context Identifier (UCI) detailed in [16] remains in use but is extended to accommodate distributed addressing within the object model overlay. This is addressed later in the paper.

3.2 The Meta Model
The CII meta-model, as detailed in [8], describes an entity-predicate-entity triple implemented in an object-oriented framework. Such a model, illustrated in Fig. 2, is similar in concept to semantic web approaches, but remains advantageous with regard to the time taken to traverse and reason over an object-based model. It provides a way to represent the relationships and interactions among the connected things within an Internet of Things. Where
Fig. 2. The Distributed Context Information Integration Model (DCII)
things can range from sensors and actuators to virtual information sources such as social networks, media, people and infrastructure. The CII model can be extended with new sub-concepts of Entity and Information Source. These concepts would be presented as classes following a standard interface. Such integration would be made possible by adaptive software techniques: a combination of computational reflection, automated compilation and dynamic class loading. Agents, applications and services reside above the meta-model and use it as a source of data from which to derive context information. The previous implementation [8] focused on a largely centralized approach, where the model resides on a single end point with the CUAs reporting sensor information and sensor modifications. This introduced a single point of failure for the model. Distributing this model to create the DCII mandated two key departures from the CII model. Firstly, the concept of Information Sources has been superseded by Information Points, extending the model to support the changes reflected in Section 3.1. Actuators now become Information Sinks, with the following reverse properties of Information Sources:
– Comparing input values: a Fahrenheit value can be passed into the end point and compared with the threshold value of the actuator
– Representations and translations: exposing multiple representations for accepting and translating input; an actuator implemented in Centigrade could accept temperature settings in Fahrenheit
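A minimal sketch of the entity-predicate-entity style of model, including an Information Sink that translates input representations as in the bullets above. Class and method names are illustrative assumptions, not the actual CII/DCII API.

```python
class Entity:
    """Node in an entity-predicate-entity graph (CII/DCII style)."""
    def __init__(self, uci):
        self.uci = uci
        self.relations = []          # list of (predicate, Entity) pairs

    def relate(self, predicate, other):
        self.relations.append((predicate, other))

class InformationSink(Entity):
    """Actuator end point: accepts values, translating representations
    as needed (e.g. a Centigrade actuator accepting Fahrenheit)."""
    def __init__(self, uci, unit):
        super().__init__(uci)
        self.unit = unit

    def accept(self, value, unit):
        # Translate an incoming Fahrenheit value for a Centigrade device.
        if unit == "F" and self.unit == "C":
            value = (value - 32) * 5.0 / 9.0
        return value

room = Entity("dcii://john@example.org/home/livingroom")
thermostat = InformationSink("dcii://john@example.org/home/ac", "C")
room.relate("controlledBy", thermostat)   # an entity-predicate-entity triple
```

The triple `(room, "controlledBy", thermostat)` is held directly as object references, which is what makes traversal and reasoning cheaper than over a generic semantic-web store.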
Secondly, we consider presentities, regarded by [5] as "an entity that possesses and is defined by its presence information". We regard presentities as being separate in behavior from other entities such as sensors or actuators. We add a Schema Entity, which is attached to a presentity and describes the current model of sensors and actuators that provide context information supporting the presentity. In this way the watchers, regarded by [5] as the entities interested in a presentity's presence, have access to a defined real-time picture of all the information points related to a presentity. A watcher can then choose which sensors to use in order to deliver its services. Schema entities, however, have one additional property: they expose a publish/subscribe interface. We take this approach in order to avoid having to synchronize large datasets distributed around the architecture. Watchers can therefore subscribe to a schema and be notified as it changes. There is no need to issue queries to nodes or databases, or for watchers to be concerned with checking for updated presence information.
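The publish/subscribe behavior of a schema entity can be sketched as follows; the class, the callback signature and the change encoding are assumptions for illustration, not the paper's interface.

```python
class SchemaEntity:
    """Schema attached to a presentity: tracks its current information
    points and notifies subscribed watchers on every change, so watchers
    never need to poll for updated presence information."""
    def __init__(self, presentity_uci):
        self.presentity = presentity_uci
        self.points = set()
        self.watchers = []

    def subscribe(self, callback):
        self.watchers.append(callback)

    def add_point(self, uci):
        self.points.add(uci)
        self._notify(("added", uci))

    def remove_point(self, uci):
        self.points.discard(uci)
        self._notify(("removed", uci))

    def _notify(self, change):
        for watcher in self.watchers:
            watcher(change)

# A watcher subscribes once and is pushed every subsequent change.
events = []
schema = SchemaEntity("dcii://john@example.org")
schema.subscribe(events.append)
schema.add_point("dcxp://john@example.org/home/temperature")
```

Only the change is pushed, not the whole schema, which is what avoids synchronizing large datasets around the architecture.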
3.3 The Overlay
The CII model details an implementation on top of the existing DCXP framework. DCXP was built as a context information exchange framework utilizing an underlying Chord [25] P2P overlay. Our requirements with respect to real-time constraints, scalability and performance motivate substituting the Chord overlay with the more advanced P-Grid architecture [1]. P-Grid introduces several advantages over the existing Chord overlay, including improved search functionality, more efficient routing, complete decentralization and load balancing. From this we gain two key pieces of functionality required to obtain the real-time qualities needed to support an object-based context model. Firstly, a P2P overlay on which to implement the DCXP protocol, a key requirement for the model. Based on the P-Grid support, the overlay is responsible for routing DCXP messaging primitives, sending messages between nodes, resolving the nodes responsible for a UCI or a range of UCIs, and building and maintaining the overlay itself. The DCXP layer implements the DCXP primitives described in [16] along with the extensions described in Section 3.1. This preserves the original architecture as well as the functionality of existing dependent solutions. It further maximizes interoperability by allowing endpoints incapable of supporting the complete interface to gain access to lower-level user-based context information. Secondly, we benefit from an advanced indexing layer native to P-Grid, which provides a distributed indexing service implemented on top of the overlay. We extend this in order to define how different entities are described, indexed and queried within the index, and refer to it as the Resource Index (RI). All resources such as entities, presentities and schemata are identified by their UCI and stored within the RI. This provides a quick searching mechanism for all entities within the architecture.
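As a toy illustration of the Resource Index contract, the in-memory stand-in below shows only the put/get/prefix-search interface; a real RI would shard this table across P-Grid peers, whose trie-structured key space is what makes such prefix (range) queries efficient. Names are illustrative assumptions.

```python
class ResourceIndex:
    """In-memory stand-in for the distributed Resource Index: maps UCIs
    to resource descriptions and supports prefix search, the kind of
    range query a trie-structured overlay such as P-Grid answers well."""
    def __init__(self):
        self._table = {}

    def put(self, uci, description):
        self._table[uci] = description

    def get(self, uci):
        return self._table.get(uci)

    def search(self, prefix):
        # A distributed RI would route this to the peers owning the range.
        return sorted(u for u in self._table if u.startswith(prefix))

ri = ResourceIndex()
ri.put("dcii://john@example.org/home/temperature", {"type": "sensor"})
ri.put("dcii://john@example.org/home/ac", {"type": "actuator"})
ri.put("dcii://mary@example.org/car/gps", {"type": "sensor"})
hits = ri.search("dcii://john@example.org/")   # all of John's resources
```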
A secondary benefit of using P-Grid is that such an implementation is better suited to highly dynamic and flexible
Fig. 3. Distributing the Meta Model
environments such as mobile networks, which are beset by issues of heterogeneity and reliability.

3.4 The Distributed Meta Model
Section 2 addressed some of the issues surrounding the distribution of context within fixed and mobile computing environments. The CII model as it stands meets its initial requirements with regard to representing context information. However, the current implementation does so as a centralized solution consisting of an object-oriented overlay connected to a relational database. Objects are described, initialized, utilized, destroyed and persisted within this same space, i.e., on a single node. All requests must be sent to the same server for execution, and while such a model creates the intelligence required, the real-time properties are undermined by the limitations of its scalability. Distributing this model, while achieving and sustaining real-time targets, requires a rethinking of the underlying architecture in support of real-time properties. There are three main problems to be addressed by the proposed solution: firstly, the need for a scalable distributed architecture for routing context information among nodes in real time; secondly, a real-time querying and indexing infrastructure for locating entities and information across a distributed architecture; thirdly, a means of constructing and manipulating complex object-oriented context models representing the highly dynamic real-world interactions of presentities. A support for this is illustrated in Fig. 3. The first two are resolved by the DCXP and Resource Index layers respectively. The third is resolved by the Context User Agent (CUA), an extension of the existing CUA from a sensor information dissemination point to an intelligent context
node with the ability to initialize, store and manipulate the context models required to support applications and services being executed at an endpoint. In the original CII implementation, an object layer residing over a relational database was co-located with the CUA. Citing the comparison of relational and object-oriented database systems in [23], we replace the RDBMS with an OODB component, negating the need for an object layer above persistence. In addition, we gain speed by exploiting the advantages of OODBs with respect to real-time performance [7]. The native implementation of the entity-predicate-entity triples further improves performance over the original CII implementation, which is required to fetch and construct the CII models in the object layer. Within a DCII, an object representation of a sensor or actuator is defined. The object is stored locally in an OODB and also indexed in the RI. The same happens for all entities, including presentities. The application or service responsible for creating and maintaining the object further adds relationships to its schema and defines translation rules and access permissions. When a sensor or actuator is added to the DCXP layer it is given a UCI of the format: dcxp://user[:password]@domain[/path[?options]] The original DCXP interactions remain. The local CUA implements an object representation of the sensor as described by the CII model and stores it in the local OODB. Each object is assigned a UCI of the format: dcii://user[:password]@domain[/path[?options]] The object UCI is stored in a DCXP-like distributed hash table to facilitate real-time resolution of UCI-to-object queries. The object is then serialized and indexed by the RI layer to support searching and browsing. An application requiring use of a sensor implements a reference to the sensor object.
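Because both the dcxp:// and dcii:// identifier schemes follow standard URI syntax, they can be parsed with ordinary URI machinery; a sketch (the dictionary keys are our own labeling of the components):

```python
from urllib.parse import urlsplit

def parse_uci(uci):
    """Split a UCI of the form scheme://user[:password]@domain[/path[?options]]
    into its components; dcxp:// and dcii:// share this shape."""
    parts = urlsplit(uci)
    return {
        "scheme": parts.scheme,      # "dcxp" or "dcii"
        "user": parts.username,
        "password": parts.password,
        "domain": parts.hostname,
        "path": parts.path,
        "options": parts.query,
    }

parsed = parse_uci("dcii://john:secret@example.org/home/temperature?unit=C")
```

The scheme then tells a CUA whether the identifier resolves to a raw DCXP end point or to an object in the distributed model.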
The local object layer is then responsible for resolving the UCI, fetching the object description, initializing the object and translating this into the GET or SUBSCRIBE primitives of the underlying sensor. It further maintains this relationship until the sensor is no longer required. This interaction is local to the application and its CUA. The CUA responsible for the sensor object is not required to participate in this relationship. Such object-application relations are straightforward; however, when an application requires the context model for a presentity in order to deliver some service or user experience, the model introduces more complexity. Here, we utilize the schema objects described in Section 3.2. The CUA local to a presentity maintains its schema, adding or removing sensors or other information sources that contribute to its presence profile. Where an application requires the Information Points relating to a person, a subscription is made to its schema, and the schema is initialized on the CUA local to the application. This liberates the CUA hosting the presentity from maintaining resources capable of supporting all the services connected to it. The schema is then resolved to a DCXP
Fig. 4. Distributed Meta Model
PUBLISH/SUBSCRIBE with the concrete sensor/actuator sources. The application's local CUA is then responsible for maintaining and updating the local representation of the schema relative to the application's requirements. With this loose coupling, applications can ignore schema changes that do not impact their performance; e.g., an application that requires location information may ignore schema changes that add new temperature sensors, but would update the context model if a new location sensor is added to a presentity. Such an approach is beneficial on resource-constrained devices, where only a subset of the schema may be implemented.

3.5 Evolving Context Schemata
The schema entity detailed in Section 3.2 introduces support for the dynamic behavior of context information. We need to maintain a model that continually describes the current situation of a presentity. This we regard as schema evolution: the continual adding or removing of the information sources that are available to a presentity and, subsequently, its watchers. Schema evolution takes place progressively, continually deriving the new state from application-level interaction. Applications such as user agents can negotiate the addition of new sensors or actuators to a presentity's schema. The watchers (applications, services, etc.) can SUBSCRIBE to the schema object and receive notifications of any schema changes. The evolution of the schema provides for personal preferences with regard to this problem. Schemata are seen as being infinitely composable and reusable, such that a new schema may be constructed over existing schemata. An example might be the need to express the collaborative context of all the occupants in a room in order to derive an accurate context model for the room itself. This can, however, be limited by the fact that schemata are time constrained and encapsulate the composition of a subset of the sensors attached to a presentity P over time t. The process of evolving a schema is triggered by the presentity itself, establishing a recursive dependency where a schema expresses a new context-model
which is used by the applications. The applications, through defined dynamic interactions, allow for resource discovery, which may in turn trigger the evolution of the schema. Applications dependent on a presentity's context do not need to discover and negotiate with sensors directly. Information sources will be seamlessly added or removed, modifying the schema being used.
4 The Model as a Service
The CII model utilizes information sources from cloud-based services such as social networking sites, messaging end points or any service interested in context information, thereby enabling the missing building block of cloud computing [12]. With a centralized model, such implementations are managed by a single end point, or the user is required to manage all the required interfaces. This reduces the "open" properties of any such framework and reinforces the single-point-of-failure issues described in Section 2. Interworking cloud services within the DCII model may, however, be done with relative ease while exploiting its distributed properties. Figure 5 shows that a computer wishing to participate in the architecture needs only to implement the basic services (P2P, RI, DCII) in order to interface with the model. With this extension, the model provides distributed access to its services for any external application or service. Services such as social networks, messaging platforms or even existing presence systems such as IMS can interoperate with and gain access to the information contained within the model. We can also expose the model in other formats, enabling additional data mining and knowledge discovery. At the core of this, we are proposing a model that permits us to exploit the benefits of distribution while sharing information with existing and legacy context-centric systems.

4.1 Service Scenario
John Smith is actively engaged in social activities and is interested in using services that are capable of providing him with a user experience based on his context. He uses Skype and Facebook and wants to be able to modify his status and profile in response to his context. He subscribes to a profile-updating service on the Internet which is capable of modifying his Facebook profile. The company has a server which joins the DCXP overlay and implements a CUA. It finds John using his UCI and subscribes to his schema, which is maintained by his local CUA, located in his house. His CUA keeps track of his interaction with the connected-things infrastructure and continually maintains and updates his profile as a list of connected objects referenced on a distributed architecture. The real-time qualities of such a solution allow the implemented Facebook module to propagate this context information to his profile. Simultaneously, his Facebook connection acts as an information source providing information about his social interactions, friends connected, messaging status, etc. These are translated by the translation component, attached to all
Fig. 5. Cloud Services Integration
information points in the model, into any representation required by another end point. As such, his Skype connector running on his local computer now displays his status as a combination of his current context and his Facebook profile, e.g., "I am eating ice cream – 27°C, GPS[27.564, 65.4324], Humidity 78.5%". This adds more dimensions to his context profile, which changes quickly as John moves through his daily routine. Applications looking at his context, knowing that John gets a fainting spell above 27°C, can look at his current context profile to see whether he has access to a temperature actuator in order to reduce his local temperature. Such applications are able to gain access to one as soon as John enters his house and adjust the temperature accordingly. While this is achievable on current systems such as IMS, they do not store context profile information explicitly, only a current set of values, and there exists no means of identifying sensors within proximity. Secondly, their centralization points to an inability to scale well, since all services depending on John's profile would be attempting to connect to the same Web Service portal and potentially offloading computational tasks. Within our service-capable model, this can be distributed and managed accordingly. John never has to think about adding computational resources at home to meet the demands of all the applications and services depending on his context. Such services can exist on nodes within the architecture, subscribing to his profile and creating an experience in response.
5 Concluding Remarks and Future Work
In this paper, we presented a further extension to the CII, called the DCII, and an architecture supporting the distributed dissemination, aggregation and reasoning over such a model. This solution creates the support required to implement the real-time dependent context services needed by an Internet of Things. Within this model, we address the issues surrounding the scalability of centralized sensor information access by extending and using DCXP. We added new primitives to complement the existing set and enable support for actuators and resource relocation. This extends our current research from sensing-only scenarios to the more accurate Wireless Sensor Actuator Networks, while maintaining the proven real-time information dissemination unachievable with previous approaches such as SenseWeb. In response to the need for an intelligent layer above DCXP, the DCII approach builds on the previous CII research by adding support for distribution.
It does so by extending the CUA to include an OODB and a DCII API, creating relations within a native database and removing the added layer of abstraction of the CII model. With this distribution we posit that we gain the ability to build and modify complex sensor relationships in real time to reflect the dynamic realities of the Internet of Things. We introduced the Resource Index to provide a means of indexing, searching and browsing context objects in real time. This will benefit applications involved in data mining and resource discovery across the architecture. We also introduced the use of the sensor schema to model the evolving context information related to presentities, which we maintain through a publish/subscribe interface. We further addressed the interoperability of our solution with cloud services through service extensions, allowing connectors and translators to both input and utilize information within the architecture. This creates a model capable of integration with social network services, messaging platforms or scheduling applications. Future work includes implementing the architecture on top of the existing DCXP and subsequently conducting case studies comparing its performance with existing presence-based systems. We envision further research in the areas of linked sensors and, subsequently, the ability to dynamically index and rank sensors in terms of relevance to other sensors, entities and search queries across the system. By being able to usefully connect sensors and entities, as is done with web content, we see the possibility of enabling ranking and searching much akin to modern search engines. Acknowledgement. This research is a part of the MediaSense project, which is partially financed by the EU Regional Fund and the County Administrative Board of Västernorrland.
References

1. Aberer, K., Cudré-Mauroux, P., Datta, A., Despotovic, Z., Hauswirth, M., Punceva, M., Schmidt, R.: P-Grid: a self-organizing structured P2P system. ACM SIGMOD Record 32(3), 29 (2003)
2. Barbara, D.: Mobile computing and databases - a survey. IEEE Transactions on Knowledge and Data Engineering 11(1), 108–117 (1999)
3. Camarillo, G., García-Martín, M.A.: The 3G IP Multimedia Subsystem (IMS): Merging the Internet and the Cellular Worlds. John Wiley & Sons (2004)
4. Chang, F., Dean, J., Ghemawat, S., Hsieh, W.C., Wallach, D.A., Burrows, M., Chandra, T., Fikes, A., Gruber, R.E.: Bigtable: A Distributed Storage System for Structured Data. ACM Transactions on Computer Systems (TOCS) 26(2) (2008)
5. Christein, H., Schulthess, P.: A general purpose model for presence awareness. In: Distributed Communities on the Web, pp. 24–34 (2002)
6. Crockford, D.: Introducing JSON (2010)
7. DiPippo, L.C., Wolfe, V.F.: Object-based Semantic Real-time Concurrency Control (1995)
8. Dobslaw, F., Larsson, A., Kanter, T., Walters, J.: An Object-Oriented Model in Support of Context-Aware Mobile Applications. In: Mobilware 2010, Chicago, pp. 1–16. Springer, Heidelberg (2010)
9. Elettronica, I.: Semantic-based Middleware Solutions to Support Context-Aware Service Provisioning in Pervasive Environments. Informatica (2008)
10. Gu, T., Pung, H.K., Zhang, D.Q.: A service-oriented middleware for building context-aware services. Journal of Network and Computer Applications 28(1), 1–18 (2005)
11. Iqbal, R., Sturm, J., Kulyk, O., Wang, J., Terken, J.: User-centred design and evaluation of ubiquitous services. In: ACM Special Interest Group for Design of Communication, p. 138 (2005)
12. Joly, A., Maret, P., Daigremont, J.: Context-Awareness, The Missing Block of Social Networking. ijwus.net 6(2), 50–65 (2009)
13. Kansal, A., Nath, S., Liu, J., Zhao, F.: SenseWeb: An Infrastructure for Shared Sensing. IEEE Multimedia 14(4), 8–13 (2007)
14. Kanter, T., Österberg, P., Walters, J., Kardeby, V., Forsström, S., Pettersson, S.: The MediaSense Framework. In: 2009 Fourth International Conference on Digital Telecommunications, pp. 144–147 (July 2009)
15. Kanter, T., Pettersson, S., Forsström, S., Kardeby, V., Norling, R., Walters, J., Österberg, P.: Distributed context support for ubiquitous mobile awareness services. In: 2009 Fourth International Conference on Communications and Networking in China, pp. 1–5. IEEE, Los Alamitos (2009)
16. Kanter, T., Pettersson, S., Forsström, S., Kardeby, V., Norling, R., Walters, J., Österberg, P.: Distributed Context Support for Ubiquitous Mobile Awareness Services. In: Context (2009)
17. Kanter, T.G.: Going wireless, enabling an adaptive and extensible environment. Mobile Networks and Applications 8(1), 37 (2003)
18. Lee, J., Song, J., Kim, H., Choi, J., Yun, M.: A user-centered approach for ubiquitous service evaluation: An evaluation metrics focused on human-system interaction capability. In: Lee, S., Choo, H., Ha, S., Shin, I.C. (eds.) APCHI 2008. LNCS, vol. 5068, pp. 21–29. Springer, Heidelberg (2008)
19. Lyytinen, K., Yoo, Y.: Introduction. Communications of the ACM 45(12), 62 (2002)
20. Nurseitov, N., Paulson, M., Reynolds, R., Izurieta, C.: Comparison of JSON and XML Data Interchange Formats: A Case Study. cs.montana.edu (2009)
21. Oliveira, J., Carrapatoso, E.: Using context information for tailoring multimedia services to user's resources. In: Krishnaswamy, D., Pfeifer, T., Raz, D. (eds.) MMNS 2007. LNCS, vol. 4787, pp. 138–148. Springer, Heidelberg (2007)
22. Schmidt, A., Beigl, M., Gellersen, H.-W.: There is more to Context than Location (1998)
23. Smith, K.E., Zdonik, S.B.: Intermedia: A case study of the differences between relational and object-oriented database systems. ACM SIGPLAN Notices 22(12), 452 (1987)
24. Sousa, J.P., Carrapatoso, E., Fonseca, B.: A Service-Oriented Middleware for Composing Context Aware Mobile Services. IEEE, Los Alamitos (2009)
25. Stoica, I., Morris, R., Karger, D., Kaashoek, M.F., Balakrishnan, H.: Chord: A scalable peer-to-peer lookup service for Internet applications. ACM SIGCOMM Computer Communication Review 31(4), 149 (2001)
26. Stonebraker, M., Aoki, P.M., Litwin, W., Pfeffer, A., Sah, A., Sidell, J., Staelin, C., Yu, A.: Mariposa: a wide-area distributed database system. The VLDB Journal 5(1), 48–63 (1996)
27. Ulusoy, O.: Transaction processing in distributed active real-time database systems. Journal of Systems and Software 42(3), 247–262 (1998)
28. Wang, X.H., Zhang, D.Q., Gu, T., Pung, H.K.: Ontology based context modeling and reasoning using OWL. IEEE, Los Alamitos
SeDAP: Secure Data Aggregation Protocol in Privacy Aware Wireless Sensor Networks

Alberto Coen Porisini and Sabrina Sicari
Dipartimento di Informatica e Comunicazione, Università degli Studi dell'Insubria, Via Mazzini 5, 21100 Varese, Italy
{alberto.coenporisini,sabrina.sicari}@uninsubria.it
Abstract. Wireless Sensor Networks are characterized by very tight energy requirements and therefore impose strict constraints on the amount of data that can be transmitted by sensor nodes. In order to achieve such a goal, many data aggregation algorithms have been defined. However, in order to ensure a broad deployment of the innovative services delivered by WSNs, strict requirements on security and privacy must also be satisfied. The aim of this paper is to present an integrated framework, named SeDAP, that deals with end-to-end data aggregation and privacy issues. Keywords: WSN, privacy, end-to-end secure data aggregation.
1 Introduction

Wireless Sensor Network (WSN) technologies support data collection and distributed data processing by means of very small sensing devices [1] with limited computation and energy capabilities. WSNs are used in many contexts, such as telemedicine, surveillance systems, assistance to disabled and elderly people, environmental monitoring, localization of services and users, industrial process control, systems supporting traffic monitoring/control in urban/suburban areas, and military and/or anti-terrorism operations. An important goal when designing WSN systems is the reduction of the need for transmission, since from a power consumption perspective data transmission is a very expensive operation. One solution consists in using proper aggregation algorithms (e.g., see [2], [3], [4], [5]) that can significantly reduce the number of bytes exchanged across the WSN. In fact, in many situations what is needed are aggregated measures, such as the average temperature or the average humidity of a region. Thus, in-network processing capabilities and data aggregation are key features of a WSN, which can greatly improve energy efficiency by reducing the data going through the wireless channel [3], [4].
G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 135–150, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
Another important issue in WSNs is privacy, which may be violated by tampering with sensors and/or traffic, owing to the nature of the wireless channel and its
A.C. Porisini and S. Sicari
deployment in uncontrolled environments. Thus, privacy-aware mechanisms are crucial for several WSN applications, such as localization and telemedicine. Moreover, privacy must also be taken into account in application contexts in which data referring to individuals are not directly handled by the WSN. For example, in home networks, sensor nodes may collect a large amount of data that may reveal the habits of individuals, thereby violating their privacy. Among the different aspects characterizing privacy, anonymity is an important requirement for a privacy-aware system that aims at protecting the identity of the individuals whose data are handled by the system. In this paper we take into account both data aggregation and privacy issues, following the modeling approach proposed in [7], [8] and [9], in order to define an integrated solution that couples a solid privacy management policy with an aggregation algorithm [2]. The aggregation process, which merges spatially correlated data and works on encrypted information, involves only linear operations and allows the sink node to estimate the confidence level of the aggregated data. The model is defined in UML [10], [11] and represents a general schema that can be easily adopted in different contexts. It introduces the concepts, such as nodes, data, and actions, that are needed to define a privacy policy, along with the relationships existing among them. The main objectives fulfilled by our approach are: (i) anonymity management; (ii) data integrity checking; (iii) data aggregation to reduce the network load; and (iv) end-to-end secure data aggregation. The rest of the paper is organized as follows. Section 2 introduces the foundations for modeling privacy in the context of WSN and presents a short overview of the conceptual model. Section 3 describes the reference scenario and the adopted end-to-end secure data aggregation algorithm.
Section 4 introduces SeDAP, the proposed framework integrating privacy management policies and data aggregation. Section 5 presents a performance evaluation of the proposed algorithm. Section 6 discusses related work. Finally, Section 7 draws some conclusions and provides hints for future work.
2 Privacy Model
A privacy policy defines the way in which data referring to individuals can be collected, processed, and diffused, according to the rights that individuals are entitled to. The rest of the paper adopts the terminology introduced by the EU directive [12]. Notice that, since the proposed terms are general, i.e., they are not dedicated to a specific type of network, it is necessary to refine them in order to provide the concepts needed to support the definition of privacy mechanisms in WSN communications. In what follows, a short overview of the conceptual model for privacy policies is provided. The structural aspects are defined using UML classes and their relationships. Figure 1 depicts a class diagram that provides a high-level view of the basic structural elements of the model.
SeDAP: Secure Data Aggregation Protocol in Privacy Aware WSN
Fig. 1. A WSN Privacy Policy
A WSN Privacy Policy is characterized by three types of classes: Node, Data, and Action. Nodes interact with one another inside the network in order to perform some kind of action on data. Thus, an instance of WSN PrivacyPolicy is characterized by specific instances of Node, Data, and Action, and by the relationships among such entities. Now, let us focus on the classes introduced by the diagram. 1) Node. It represents a member of the network either interested in processing data or involved in such processing. Nodes are characterized by functions and roles (see Figure 2).
Fig. 2. UML Model
More specifically: - Role [13] is a key concept of this approach; in fact, nodes are characterized depending on the role they play with respect to privacy. Three distinct classes
represent the different roles: Subject, which is a node that senses the data; Processor, which is a node that processes data by performing some kind of action on them (e.g., transmission, forwarding, aggregation, etc.); and Controller, which is a node that verifies the actions executed by processor nodes. - Function represents the task performed by a Node within the network in which it operates (e.g., data sensing, message transmission, message forwarding, data aggregation, etc.). - Data. It represents the information referring to subjects that can be handled by processors. Data is extended by means of: Identifiable data (e.g., a node identifier), which represent the information that can be used to uniquely identify nodes; Sensitive data (e.g., health-related data), which represent information that deserves particular care and that should not be freely accessible; and Sensed data (e.g., temperature, pressure), which contain information that is sensed by the nodes of the network. In the WSN context, sensitive data may be considered an extension of sensed data, i.e., they are sensed data related to individuals which require particular care. For instance, in telemedicine applications a sensitive datum is the temperature sensed by nodes positioned on the body of a patient. Notice that in the context of WSN even common sensed data deserve particular care. For example, consider a wireless meter reading system used to monitor the temperature and pressure of different rooms of the building where it is installed; such a system comprises several sensor units which communicate information on the current temperature, barometric pressure, and humidity of the rooms where they are positioned. Although the data sensed by the nodes of the system cannot be classified as sensitive, they can be used to reveal information on the personal habits of the people who live in the building concerned.
As an example, slight increments of temperature or humidity may reveal the presence of one or more persons in a room. By analyzing such data, it is possible to infer the periods of the day or of the week during which the building is empty. Data is a complex structure composed of basic information units, named Fields, each of which represents a partial piece of information related to the whole data structure. - Message. It represents the basic communication unit exchanged by the nodes of the network. It contains Identifiable information concerning the nodes involved in the communication and Sensed data. - Action. It represents any operation performed by a Node. It is extended by Obligation, Processing, and Purpose. Moreover, each action can be recursively composed of other actions. Since in a privacy aware scenario a processing is executed under a purpose and an obligation, Processing has an aggregation relationship with Purpose and Obligation. Notice that, while in general several actions may be defined for each function, in the context of WSN each function usually corresponds to one action. To guarantee the confidentiality and integrity of data, as well as to ensure that only authorized nodes are allowed to access such data and execute actions, our model introduces encryption mechanisms. More specifically, two classes representing encryption keys, named DataKey and FunctionRoleKey, are introduced. The former class is used for the definition of encryption mechanisms that protect the data content of messages, whereas the latter is used for defining mechanisms that ensure that message communication and data handling are executed only by authorized nodes. Each node
of the network owns a different DataKey used to encrypt the data content of its messages. Each node also owns multiple FunctionRoleKeys that are used to constrain which nodes can execute specific actions on data. Actions are expressly built to be executed by nodes that belong to a given function-role combination. Since a node may play different functions and roles, it may own multiple function-role keys, one for each function-role pair. The system modeler is free to use the key generation and encryption algorithms that he/she considers most suitable for the application domain.
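As an illustration of how DataKey and FunctionRoleKey constrain behaviour, the following Python sketch shows a node owning one data key plus one key per function-role pair, and a check that an action bound to a given pair can only be executed by a node owning the corresponding key. All class and field names here are our own illustration, not part of the model; the function-role pair labels anticipate those of Section 3.

```python
from dataclasses import dataclass, field
from typing import Dict

# Function-role pairs as introduced in the scenario of Section 3:
# Sensing-Subject, Authenticator-Processor, Transmitter-Processor,
# Notifier-Controller.
FUNCTION_ROLES = ("SS", "AP", "TP", "NC")

@dataclass
class Node:
    """Illustrative node with one DataKey and one FunctionRoleKey per pair."""
    label: int
    data_key: bytes                  # DataKey: protects message data content
    fr_keys: Dict[str, bytes] = field(default_factory=dict)  # FunctionRoleKeys

    def may_execute(self, action_fr: str) -> bool:
        # An action built for a function-role pair can be executed only by
        # nodes owning the corresponding FunctionRoleKey.
        return action_fr in self.fr_keys

node = Node(label=7, data_key=b"dk-7",
            fr_keys={fr: f"k(7,{fr})".encode() for fr in FUNCTION_ROLES})
assert node.may_execute("TP")
assert not node.may_execute("XX")
```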
3 The Scenario
We consider a dense network composed of N nodes; each node senses a given type of data (e.g., temperature, pressure, brightness, position, and so on). Nodes can exchange messages and all sensed data (possibly aggregated) are directed to the sink. Each node directly communicates with its closest neighbors (at one-hop distance). The broadcast nature of wireless channels enables a node to determine, by overhearing the channel, whether its messages are received and forwarded by its neighbors [14]. Each node of the network is characterized by a label n that unambiguously identifies it in the network. Each node owns different types of keys, each of which corresponds to a given Function-Role pair. We identify the following Function-Role pairs: Sensing-Subject (SS); Authenticator-Processor (AP); Transmitter-Processor (TP); Notifier-Controller (NC). Thus, each node owns four keys, one for each Function-Role pair. Keys are denoted by k(n, fr), where n is the node label and fr ∈ {SS, AP, TP, NC} is the Function-Role played by node n. Notice that we assume that keys are pre-shared among the nodes and that each node contains a table in which it stores the last sent messages. The usefulness of this table will be clarified later on. To achieve end-to-end security for the data aggregated inside the WSN, each node adopts the algorithm of Castelluccia et al. [2], which is based on a simple and secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The cipher only uses modular additions and is therefore very well suited for CPU-constrained devices like sensors. Aggregation based on this cipher can be used to efficiently compute statistical values such as the mean, variance, and standard deviation of sensed data, enabling significant bandwidth gains.
Homomorphic encryption schemes are especially useful in scenarios where someone who does not have the decryption keys needs to perform arithmetic operations on a set of ciphertexts. For the reader's convenience, we briefly sketch the additively homomorphic encryption scheme proposed in [2] to show how it works during aggregation. Each node ni represents its data as an integer di; let k(ni, SS) be a randomly generated keystream, Enc and Dec be the encryption and decryption functions, respectively, and M a large enough integer. Then, the encrypted ciphertext ci is given by:

ci = Enc(di; k(ni, SS); M) = (di + k(ni, SS)) mod M    (1)

The sensor then forwards the ciphertext to its parent, which aggregates all the ci received from its z children:

c = ∑_{i=1}^{z} ci mod M    (2)

The cleartext aggregated data d can then be obtained by:

d = Dec(c; k; M) = (c − k) mod M, where k = ∑_{i=1}^{z} k(ni, SS)    (3)

Due to the commutative property of addition, the above encryption scheme is additively homomorphic. In fact, if c1 = Enc(d1; k(n1, SS); M) and c2 = Enc(d2; k(n2, SS); M), then c1 + c2 = Enc(d1 + d2; k(n1, SS) + k(n2, SS); M). Note that if α different ciphertexts ci are added, then M should be larger than ∑_{i=1}^{α} di, otherwise correctness is not guaranteed: if ∑_{i=1}^{α} di is larger than M, decryption will result in a value d* that is smaller than the actual sum. In practice, if p = max{di}, then M should be selected as M = 2^⌈log2(p·α)⌉. The keystream k can be generated using a stream cipher, such as RC4, keyed with a node secret key and a unique message identifier. Finally, each sensor node shares a unique secret key with the sink of the WSN. Such keys are derived from a master secret (known only to the sink) and distributed to the sensor nodes. However, the key distribution protocol is outside the scope of this work.
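The scheme of [2] sketched above can be illustrated with a few lines of Python. This is a toy sketch with hardcoded values, not the authors' implementation: a real deployment would derive the keystreams with a stream cipher such as RC4 keyed per node, rather than with Python's PRNG.

```python
import random

M = 2 ** 16  # modulus; must exceed the sum of all aggregated plaintexts

def enc(d: int, k: int, m: int = M) -> int:
    """Eq. (1): c = (d + k) mod M."""
    return (d + k) % m

def dec(c: int, k: int, m: int = M) -> int:
    """Eq. (3): d = (c - k) mod M."""
    return (c - k) % m

# Three nodes sense data and encrypt with their own keystreams.
data = [21, 22, 23]
keys = [random.randrange(M) for _ in data]
ciphers = [enc(d, k) for d, k in zip(data, keys)]

# The aggregator sums ciphertexts mod M without knowing any key, Eq. (2).
c = sum(ciphers) % M

# The sink, knowing the keystream sum, recovers the aggregate value.
assert dec(c, sum(keys)) == sum(data)  # 66
```

The aggregation step touches only ciphertexts, which is exactly why the aggregator needs no key material; correctness holds as long as the plaintext sum stays below M.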
4 SeDAP: The Proposed Solution
In this section, the novel integrated framework SeDAP, which realizes an end-to-end data aggregation scheme with privacy capabilities in WSN, is described. In particular, we address the following issues: (i) data integrity; (ii) anonymity; (iii) energy-efficient WSN usage; and (iv) end-to-end secure data aggregation.
4.1 Message Structure
To exploit the benefits deriving from the adoption of SeDAP, which satisfies both end-to-end data aggregation and anonymity requirements, network messages have to be suitably structured. More specifically, a message contains data that, according to the conceptual model, may be classified as identifiable and sensed. Identifiable data include the information that can be used to identify a node. Sensed data include all information sensed by the nodes, such as the environmental temperature, pressure, and so on. A message refers to a single transmission hop between adjacent nodes. It is identified by the notation msgn,q, where n identifies the node that generated and transmitted the message, while q identifies the message among those generated by
node n. The pair (n,q) unambiguously identifies the message among those transmitted in the network. Before reaching the sink, a sensed datum passes through different nodes of the network (multi-hop communication) by means of different messages. To guarantee the integrity and confidentiality of the end-to-end communication, we propose a message structure that keeps track of the last two hops of the transmission. In this way it is possible to implement a simple enforcement schema that checks the integrity of the data content of the message. Combining the requirements of both anonymity and data aggregation, a generic SeDAP message msgn,q is a tuple msgn,q = <current, previous, subject, sensing-identifier, mistaking-identifier, error-flag, data, id-list> where:
− current: is a couple <nc, qc>, which unambiguously identifies the current message among the ones transmitted within the network. This field is ciphered.
− previous: is a couple <np, qp>, which includes np, the identifier of the node that performed the second-to-last forwarding of the sensed data contained in the current message, and qp, the identifier used by np to identify such a message. This field is ciphered.
− subject: is a couple <ns, qs>, where ns is the identifier of the subject node which originally sensed the data, or of the aggregator node which aggregated the data, and qs is the message identifier used by such a node (aggregator or subject) for the message that started the communication of the sensed data towards the sink. Notice that in case of error notification (see the Message Reception and Integrity Verification Protocol) this field identifies the node that revealed the error. This field is ciphered.
− sensing-identifier: is a couple <nsi, qsi> that, in case of error notification, contains the identifier of the node that sensed or aggregated the correct data and the identifier of the message transmitted by such a node. This field is ciphered.
− mistaking-identifier: is a couple which contains the identifier of the node that generated the error and the identifier of the message containing the error transmitted by such a node. This field is ciphered.
− error-flag: represents an error code, which indicates whether an anomaly was identified in the message content. This field is sent in the clear.
− data: includes the ciphered data c either sensed or aggregated by the subject node or the aggregator node, respectively.
− identifier-list: is the list of the encrypted node identifiers used by the sink in the decryption process for identifying the nodes, and hence the keys, that handle the data.
Notice that sensing-identifier and mistaking-identifier are used only in case of error notification, i.e., when error-flag is set to 1, as described below.
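The message layout described above might be sketched as a Python dataclass. Field names follow Section 4.1, but the concrete types and the example values are our own illustration; the paper does not prescribe an encoding.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Identifier = Tuple[bytes, bytes]  # a ciphered (node id, message id) couple

@dataclass
class SeDAPMessage:
    """Illustrative layout of a SeDAP message msg(n,q)."""
    current: Identifier                        # ciphered (nc, qc) of this hop
    previous: Optional[Identifier]             # ciphered (np, qp); empty on first hop
    subject: Identifier                        # ciphered (ns, qs) of sensing/aggregator node
    sensing_identifier: Optional[Identifier]   # set only in error notifications
    mistaking_identifier: Optional[Identifier] # set only in error notifications
    error_flag: int                            # 0 = normal, 1 = error notification (in clear)
    data: int                                  # ciphered sensed or aggregated value
    id_list: List[bytes]                       # encrypted node identifiers for the sink

# A first-hop message generated by a subject node: previous and the
# error-related fields are empty, error_flag is 0, subject equals current.
msg = SeDAPMessage(current=(b"enc-n3", b"enc-q0"), previous=None,
                   subject=(b"enc-n3", b"enc-q0"), sensing_identifier=None,
                   mistaking_identifier=None, error_flag=0, data=1259,
                   id_list=[b"enc-n3"])
```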
4.2 System Dynamics
System dynamics are described by means of the following protocols:
− Sensing, which defines the actions that a node of the network executes to sense data and to communicate such data to the other nodes of the network.
− Message Reception and Integrity Verification, which defines the actions that a node should perform both to forward data received from other nodes and to verify the integrity of the messages transmitted across the network.
− Data Aggregation, which defines the actions that a node of the network executes to aggregate encrypted sensed data samples.
4.2.1 The Sensing Protocol
The operations are described below step by step.
1. Data sensing. The node nc senses a datum d from the environment where it is located. The node plays the role of subject and the function of sensing.
2. Data encryption. The node encrypts the sensed datum d by using its sensing-subject key k(nc, SS).¹ The resulting output is c = Enc(d; k(nc, SS)).
3. Message identifier generation. The node generates an identifier qc for the message that it has to transmit to the sink.
4. Identifiable data encryption. The node encrypts the generated identifier by using its personal transmitter-processor key, k(nc, TP). As a result, we have the content Enc(<nc, qc>; k(nc, TP)).
5. Message structuring. A new message msgnc,qc is generated starting from the outputs of the previous steps 2, 3 and 4, with the following structure:
− current is set to Enc(<nc, qc>; k(nc, TP));
− previous is initialized to an empty string, because this is the first transmission and no forwarding has been executed yet;
− subject is set to Enc(<nc, qc>; k(nc, TP)), since the current transmitter is the subject itself;
− sensing-identifier is an empty string, because the message is not an error notification message;
− mistaking-identifier is an empty string, because the message is not an error notification message;
− error-flag is set to 0, because there is no error;
− data is set to c = Enc(d; k(nc, SS));
− identifier-list is updated with the encrypted identifier of the node nc. Notice that this field is composed of only one identifier, equal to the field current, since this is the first transmission and no forwarding has been executed yet.
6. Message storing. The node stores the content of the fields data and subject in its local table. It uses the content of the field current of the message msgnc,qc as the hash key for the sensed data that need to be stored.
7. Message queuing. The message is put in the transmission queue.
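The seven steps above can be sketched in Python as follows. All helper names are hypothetical: enc_data follows Eq. (1), while enc_id merely stands in for Enc(<nc, qc>; k(nc, TP)), since the paper leaves the actual cipher choice to the system modeler (a keyed MAC is used here only to produce a deterministic opaque identifier).

```python
import hashlib
import hmac
from collections import deque

M = 2 ** 16

def enc_data(d: int, k: int) -> int:
    # Data encryption with the Sensing-Subject key, Eq. (1).
    return (d + k) % M

def enc_id(nc: int, qc: int, tp_key: bytes) -> str:
    # Stand-in for Enc(<nc, qc>; k(nc, TP)); hypothetical construction.
    return hmac.new(tp_key, f"{nc},{qc}".encode(), hashlib.sha256).hexdigest()

def sensing_protocol(nc, d, ss_key, tp_key, next_q, local_table, tx_queue):
    qc = next_q                                  # 3. message identifier generation
    c = enc_data(d, ss_key)                      # 2. data encryption
    current = enc_id(nc, qc, tp_key)             # 4. identifiable data encryption
    msg = {"current": current, "previous": "",   # 5. message structuring
           "subject": current, "sensing_id": "", "mistaking_id": "",
           "error_flag": 0, "data": c, "id_list": [current]}
    local_table[current] = (msg["data"], msg["subject"])  # 6. message storing
    tx_queue.append(msg)                         # 7. message queuing
    return msg

table, queue = {}, deque()
msg = sensing_protocol(nc=3, d=25, ss_key=1234, tp_key=b"k(3,TP)",
                       next_q=0, local_table=table, tx_queue=queue)
assert msg["current"] in table and len(queue) == 1
```

Storing the message under the hash key current (step 6) is what later lets the node, while overhearing the channel, look a forwarded copy up again and verify its integrity.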
4.2.2 Message Reception and Integrity Verification Protocol The operations executed by each node when a packet is received and then forwarded are the following:
1. Role check. The node nc analyses the received message msgnp,qp to figure out what type of action it has to execute on the contained data. In particular, it looks for the message among those stored in its local table, by using the content of the previous field of the received message as hash key.

¹ The Sensing-Subject key is equivalent to the DataKey defined in the conceptual model.
If the message is not found, this means that it was not previously transmitted by the node. In this case, the node changes its current function and role, i.e., it has to play the role of processor and the function of transmitter. Therefore the node executes the following steps:
2a. Message identifier generation. The node generates an identifier qc for the message that it has to put in the transmission queue.
3a. Identifiable data encryption. The node encrypts the generated identifier by using its personal transmitter-processor key, k(nc, TP).
4a. Message structure. The new message msgnc,qc has the field current equal to Enc(<nc, qc>; k(nc, TP)). Obviously, the previous field is equal to the current field of the received msgnp,qp. The identifier of the current node is added to the identifier-list. The other fields remain unchanged.
5a. Message storing. The node stores the content of the fields data and subject in its local table. It uses the content of the field current of the message msgnc,qc as the hash key for the sensed data that need to be stored.
6a. Message queuing. The message is put in the transmission queue.
Otherwise, that is, if the message is found, it means that it was originally transmitted by the node itself. In this case, the node changes its current function and role, i.e., it has to play the role of controller and the function of notifier to verify the integrity of the previously transmitted message. Hence, the node compares the content of the field data of the received message with the information retrieved from its table. If the information matches, the controller is sure that the node from which it received the message preserved the integrity of the data content; in this case, no additional action is performed by the node. If the content of the field data differs from the one extracted from the local table, or no entry corresponds to the search key, something wrong has happened. In this case, the node generates a new message as described in what follows, in order to notify the sink that a corrupted message is spreading through the network:
2b. Message identifier generation. The node generates an identifier qc for the message that it has to put in the transmission queue.
3b. Identifiable data encryption. The node encrypts the generated identifier by using its personal Transmitter-Processor key, k(nc, TP).
4b. Message structure. The new message msgnc,qc has the field current equal to Enc(<nc, qc>; k(nc, TP)). The previous field is left empty, to avoid creating loops with the malicious node and spreading different and contradictory error messages. The field subject is equal to the field current, to specify identifiable information of the node that detected the error (since such a node is the current one, the content of subject is equal to current). The field sensing-identifier is set to the value of the field subject of the node that sensed/aggregated the correct data, as stored in the node's local table. The field mistaking-identifier is set to the field current of the received message, in order to provide information about the node that made the mistake. The field error-flag is set to 1, to indicate that the current message is an error message. The field data is set to the content stored in the node's local table and is encrypted with the Notifier-Controller key of the current node, k(nc, NC). The identifier-list is updated with the identifier of the current node.
5b. Message storing. The node stores the content of the fields data and subject in its local table. It uses the content of the field current of the message msgnc,qc as the hash key for the sensed data that need to be stored.
6b. Message queuing. The message is put in the transmission queue.
4.2.3 Data Aggregation Protocol
The data aggregation is periodically triggered by each node. It involves the following steps.
1. Error check. The node checks the field error-flag of the message in the transmission queue. If the field is set to 1, the message is an error notification and no aggregation operation can be performed on the contained data; the message is transmitted as it is. Otherwise, if the error-flag is set to 0, the aggregation procedure starts.
2. Ciphered data aggregation. The data selection procedure operates iteratively on the enqueued messages. The data contained in all the messages in the transmission queue are aggregated into a single message. More specifically, the ciphered data ci received from the children nodes, each encrypted with the Sensing-Subject key of the corresponding child, are aggregated following equation (2) (see Section 3). Notice that the aggregation process is performed by the aggregator node without any knowledge of the keys.
3. Message identifier generation. The node generates an identifier qc for the message that it has to put in the transmission queue.
4. Identifiable data encryption. The node encrypts the generated identifier by using its personal transmitter-processor key, k(nc, TP).
5. Message structure. The new message msgnc,qc has the field current equal to Enc(<nc, qc>; k(nc, TP)). The field previous is initialized to an empty string because this is the first transmission of the aggregated data and no forwarding has been executed yet.
The field subject is set to Enc(<nc, qc>; k(nc, TP)); notice that the field subject is equal to the field current because the aggregator is the generator of the aggregated data. The field data is set to c = ∑_{i=1}^{z} ci mod M, according to equation (2) in Section 3. Finally, the field identifier-list is updated with the identifier of the current node. Then, the node performs the message storing and message queuing procedures described in the other protocols.
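Steps 1 and 2 of the aggregation protocol might be sketched as follows. This is a simplified illustration using hypothetical message dictionaries; it omits steps 3-5 (generation and encryption of the new message identifier and header fields) and keeps only the error check and the key-less ciphertext aggregation.

```python
from collections import deque

M = 2 ** 16

def aggregate(tx_queue: deque) -> deque:
    """Merge enqueued data messages into one, summing ciphertexts mod M (Eq. 2)."""
    out, agg = deque(), None
    while tx_queue:
        msg = tx_queue.popleft()
        if msg["error_flag"] == 1:      # 1. error notifications pass through unchanged
            out.append(msg)
            continue
        if agg is None:                 # first data message seeds the aggregate
            agg = dict(msg)
        else:                           # 2. ciphered aggregation, no keys needed
            agg["data"] = (agg["data"] + msg["data"]) % M
            agg["id_list"] = agg["id_list"] + msg["id_list"]
    if agg is not None:
        out.append(agg)
    return out

queue = deque([
    {"error_flag": 0, "data": 100, "id_list": ["a"]},
    {"error_flag": 1, "data": 7,   "id_list": ["b"]},   # forwarded as-is
    {"error_flag": 0, "data": 200, "id_list": ["c"]},
])
result = aggregate(queue)
assert len(result) == 2
assert result[1]["data"] == 300 and result[1]["id_list"] == ["a", "c"]
```

Note how the identifier-list grows with each merged message: it is what allows the sink to reconstruct which Sensing-Subject keystreams must be summed for decryption via equation (3).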
5 Performance Evaluation
In order to evaluate the efficiency of the proposed solution, some simulations have been conducted by means of OMNeT++ [16]. We consider a wireless sensor network that measures the temperature of a given environment. The tests compare the behaviour of a network that uses SeDAP with the behaviour of a network that adopts only secure end-to-end data aggregation, such as Castelluccia et al. [2]. The simulations confirm the expected results in terms of the number of transmitted messages and transmitted bytes needed to guarantee anonymity and integrity, as shown in Figures 3 and 4, where Gen[] and Aggregator[] represent the nodes that sense single data and the nodes performing aggregation, respectively.
Fig. 3. Average No. of transmitted messages: SeDAP vs Castelluccia et al.
Fig. 4. Average No. of transmitted bytes: SeDAP vs Castelluccia et al.
SeDAP transmits more messages and more bytes than a network that uses only secure end-to-end aggregation [2]. However, SeDAP is able to reveal malicious behaviour. In fact, as shown in Figure 5, which reports the total amount of data received by the sink, 50% of the received data contains an error; with SeDAP, the sink becomes aware of the malicious behaviour thanks to the
error notification messages. Moreover, the delay between a message containing corrupted data and the related error notification message, sent by the controller, is shown in Figure 6. Summarizing, the cost in terms of transmitted messages is balanced by the capability of SeDAP to reveal malicious behaviour, satisfying the anonymity and integrity requirements.

Fig. 5. No. of received messages: SeDAP vs Castelluccia et al. (two panels, SeDAP and Castelluccia et al., each plotting Received Msg, Correct Msg, and Msg with error over time; the SeDAP panel additionally plots Notification Error Msg)
Fig. 6. Evaluation of the delay between the arrival time of messages with errors and the arrival time of error notification messages
6 Related Work
WSN applications require the collection of a huge amount of data and, due to the limited power resources of sensor nodes, it is necessary to aggregate such data in order to reduce the amount of transmitted information. Data may be used to perform attacks against privacy, integrity, and confidentiality. Notice that the risk of violation increases due to both the wireless nature of the communication channel and remote access. Exploiting such vulnerabilities, the following common threats may occur [17], [18]:
− Eavesdropping: malicious users could easily discover the communication content by listening to the data.
− Masking: malicious nodes may mask their real nature behind the identity of nodes that are authorized to take part in the communication, and misroute the messages.
Designing secure WSN is a very mature research field [5] and the literature is very rich in solutions addressing aggregation issues together with security aspects such as confidentiality, integrity, authentication, and availability (an exhaustive and very comprehensive view of this topic can be found in [6]). Nevertheless, to the best of our knowledge, no solution is able to take into account both privacy and data aggregation issues at the same time using end-to-end encryption techniques. As regards WSN privacy, the available solutions, defined starting from the WSN vulnerabilities and related threats, may be classified into two main groups: anonymity mechanisms based on data cloaking [17] and privacy policy based approaches [19]. Data cloaking anonymity mechanisms perturb data following some kind of criterion,
for instance, K-anonymity guarantees that every record is indistinguishable from at least k-1 other records [20]. In [17], [21], [22], [27], four main data cloaking anonymity approaches are proposed:
− Decentralize Sensitive Data: the basic idea of this approach is to distribute the sensed location data through a spanning tree, so that no single node holds a complete view of the original data.
− Secure Communication Channel: the use of secure communication protocols, such as SPINS [22], reduces the risk of eavesdropping and active attacks by means of encryption techniques.
− Change Data Traffic: the traffic pattern is altered with some bogus data that obfuscate the real position of the nodes.
− Node Mobility: the basic idea is to move the sensor nodes in order to dynamically change the localization information, making it difficult to identify a node.
For instance, [17] proposes a solution that guarantees the anonymous usage of location-based information. More specifically, such a solution consists of a cloaking algorithm which regulates the granularity of location information to meet the specified anonymity constraints. This work focuses only on localization services and therefore constrains the middleware architecture required to support the proposed algorithm. Hence, such a solution cannot be considered a general, context-independent anonymity approach. Privacy policy based approaches [19], [24] state who can use individuals' data, which data can be collected, for what purpose the data can be used, and how they can be distributed. A common policy based approach addresses privacy concerns at the database layer, after data have been collected [23]. Other works [24] address access control and authentication issues; for instance, Duri et al. [19] propose a policy-based framework for protecting sensor information.
Our work provides a contribution in the field of privacy policy based approaches by defining a role-based, context-independent solution that guarantees the anonymity of nodes before sensed data are collected into a database. Our solution may be combined with both data cloaking mechanisms and other privacy policy based approaches. As regards secure data aggregation, the approaches proposed so far can be classified into two main families, depending on whether hop-by-hop or end-to-end cryptography is used. Hop-by-hop encryption is usually based on symmetric key schemes, which demand less computing resources than asymmetric key ones. These algorithms, such as [5], [6], [15], [25], [26], require each aggregator to decrypt every message it receives to enable in-network processing, thus causing a confidentiality breach. Furthermore, applying several consecutive encryption/decryption operations can negatively affect latency. Finally, hop-by-hop aggregation requires each node to share secret keys with all its neighbours. To face these problems, the literature has recently proposed aggregation algorithms able to work on ciphered data, using either asymmetric or symmetric keys [26]. The main limitation of such approaches is that they allow only very simple aggregation functions, such as sum and average [6]. Despite this very broad variety of proposals, no single solution has yet been conceived to address confidentiality, integrity, anonymity, and adaptive aggregation at the same time.
SeDAP: Secure Data Aggregation Protocol in Privacy Aware WSN
8 Conclusion The present work proposed a protocol, SeDAP, that provides an integrated framework addressing privacy and end-to-end secure data aggregation at the same time. The definition of SeDAP has been supported by an ad-hoc UML conceptual model for the definition of privacy policies in the context of Wireless Sensor Networks. The model provides the basic concepts involved in the management of privacy-related information in a WSN. The efficiency of the proposed solution has been verified by simulation. Results show that SeDAP is able to guarantee nodes' anonymity and to identify malicious behaviour within a very short time interval. At present we are extending SeDAP with a more powerful aggregation algorithm able to reduce the network load in case of congestion.
References
1. Akyildiz, I.F., Melodia, T., Chowdhury, K.: A survey on wireless multimedia sensor networks. Elsevier Computer Networks Journal (March 2007)
2. Castelluccia, C., Mykletun, E., Tsudik, G.: Efficient aggregation of encrypted data in wireless sensor networks. In: Conference on Mobile and Ubiquitous Systems: Networking and Services (2005)
3. Fasolo, Rossi, M., Widmer, J., Zorzi, M.: In-network aggregation techniques for wireless sensor networks: A survey. IEEE Wireless Communications (April 2007)
4. Mastrocristino, T., Tesoriere, G., Grieco, L.A., Boggia, G., Palattella, M.R., Camarda, P.: Control based on data-aggregation for wireless sensor networks. In: Proc. of IEEE Int. Symp. on Industrial Electronics, ISIE 2010, Bari, Italy (July 2010)
5. Grieco, L.A., Boggia, G., Sicari, S., Colombo, P.: Secure wireless multimedia sensor networks: a survey. In: Proc. of The Third Int. Conf. on Mobile Ubiquitous Computing, Systems, Services and Technologies, UBICOMM, Sliema, Malta (October 2009)
6. Ozdemir, S., Xiao, Y.: Secure data aggregation in wireless sensor networks: a comprehensive overview. Computer Networks 53 (2009)
7. Coen-Porisini, A., Colombo, P., Sicari, S., Trombetta, A.: A conceptual model for privacy policies. In: Proc. of SEA 2007, Cambridge (MS), USA (2007)
8. Coen-Porisini, A., Colombo, P., Sicari, S.: Dealing with anonymity in wireless sensor networks. In: Proceedings of 25th Annual ACM Symposium on Applied Computing (ACM SAC), Sierre, Switzerland (2010)
9. Coen-Porisini, A., Colombo, P., Sicari, S.: Privacy aware systems: from models to patterns. In: Mouratidis, H. (ed.) Software Engineering for Secure Systems: Industrial and Research Perspectives. IGI Global (2010)
10. Unified Modeling Language: Infrastructure, Ver. 2.1.2, OMG, formal/2007-11-02 (November 2007)
11. Unified Modeling Language: Superstructure, Ver. 2.1.2, OMG, formal/2007-11-02 (November 2007)
12. Directive 95/46/EC of the European Parliament. Official Journal of the European Communities (L.281), 31 (November 23, 1995)
A.C. Porisini and S. Sicari
13. Ni, Q., Trombetta, A., Bertino, E., Lobo, J.: Privacy-aware role based access control. In: Proceedings of the 12th ACM Symposium on Access Control Models and Technologies (2007)
14. Zhang, H., Arora, A., Choi, Y., Gouda, M.: Reliable bursty convergecast in wireless sensor networks. Elsevier Computer Communications 30(13), 2560–2576 (2007)
15. Sicari, S., Riggio, R.: Secure aggregation in hybrid mesh/sensor networks. In: Proceedings of the IEEE International Workshop on Scalable Ad Hoc and Sensor Networks (SASN 2009), St. Petersburg, Russia, October 12-13 (2009)
16. OMNeT++ Discrete Event Simulation System (2005), http://www.omnetpp.org/doc/manual/usman.html
17. Gruteser, M., Schelle, G., Jain, A., Han, R., Grunwald, D.: Privacy-aware location sensor networks. In: Proceedings of the 9th USENIX Workshop on Hot Topics in Operating Systems, HotOS IX (2003)
18. Chan, H., Perrig, A.: Security and privacy in sensor networks. IEEE Computer Magazine, 103–105 (March 2003)
19. Duri, M.G.S., Liu, P.M.X., Perez, R., Singh, M., Tang, J.: Framework for security and privacy in automotive telematics. In: Proceedings of the 2nd ACM International Workshop on Mobile Commerce (2000)
20. Samarati, P., Sweeney, L.: Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. Technical Report SRI-CSL-98-04, Computer Science Laboratory, SRI International (1998)
21. Gruteser, M., Grunwald, D.: A methodological assessment of location privacy risks in wireless hotspot networks. In: Hutter, D., Müller, G., Stephan, W., Ullmann, M. (eds.) Security in Pervasive Computing. LNCS, vol. 2802, pp. 10–24. Springer, Heidelberg (2004)
22. Perrig, A., Szewczyk, R., Tygar, J.D., Wen, V., Culler, D.E.: SPINS: security protocols for sensor networks. Wireless Networking 8(5), 521–534 (2002)
23. Snekkenes, E.: Concepts for personal location privacy policies. In: Proceedings of 3rd ACM Conf. on Electronic Commerce (2001)
24. Molnar, D., Wagner, D.: Privacy and security in library RFID: Issues, practices, and architectures. In: Proceedings of ACM CCS (2004)
25. Yang, Y., Wang, X., Zhu, S., Cao, G.: SDAP: a secure hop-by-hop data aggregation protocol for sensor networks. In: ACM MOBIHOC (September 2006)
26. Westhoff, D., Girao, J., Acharya, M.: Concealed data aggregation for reverse multicast traffic in sensor networks: encryption key distribution and routing adaptation. IEEE Trans. Mobile Comput. 5, 1417–1431 (2006)
27. Smailagic, A., Siewiorek, D.P., Anhalt, J., Kogan, Y.W.D.: Location sensing and privacy in a context aware computing environment. In: Proceedings of Pervasive Computing (2001)
A Centralized Approach for Secure Location Verification in Wireless Sensor Networks

Abbas Ghebleh, Maghsoud Abbaspour, and Saman Homayounnejad

Electrical and Computer Department, Shahid Beheshti University, Tehran, Iran
[email protected], [email protected], [email protected]
Abstract. Location information of sensors is crucial in many applications of Wireless Sensor Networks (WSNs). Much work has been done on estimating sensors' locations in the network, but these approaches are mostly designed without considering security, which is critical in WSNs. The locations of sensors should therefore be verified to ensure that the estimated locations are correct. In this paper we propose a novel approach for verifying sensors' locations that imposes minor overhead on the network and does not use any additional hardware. The approach is centralized: the verification process is performed at the base station, which is much more powerful than the sensors and is able to perform more sophisticated tests. Simulation studies confirm that this approach is able to detect different attacks and anomalies in sensors' locations. Keywords: Wireless Sensor Network, Location Verification, Security.
G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 151–163, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011

1 Introduction
Wireless sensor networks (WSNs) have been used in many applications and still have many potential ones. WSNs are composed of a large number of small, low-cost, and low-power nodes that are equipped with one or more sensors and communicate wirelessly with each other [1]. Many applications proposed for WSNs, such as environment monitoring and target tracking, are based on knowledge of the sensors' locations. Therefore sensors should somehow discover their location in the network; this process is called localization. Although the simplest way of providing accurate location information is to equip each sensor with a GPS, this is too expensive for sensors. The other way is to use special nodes that know their location in the network and help sensors estimate their own locations. These nodes are called locators [2]. Recently, much work has been done on localization in WSNs [3-10]. However, these approaches are designed without considering security, while in many applications WSNs are deployed in unattended and even hostile environments, making them vulnerable to malicious attacks such as the Wormhole attack [11]. Security issues should be considered to ensure the operation of the WSN; otherwise, a compromised or malicious node can claim virtually any location and ruin the network. Some works locate sensors securely. They can be categorized into two classes: i) secure localization, in which the localization process is
performed in a secure way, as in [12-15], and ii) location verification, in which sensors somehow estimate their location and then other nodes, called verifiers, verify the claimed location, as in [16-20]. To our knowledge, previous works on localization and location verification are mostly decentralized and sensor-centric, i.e. the localization and location verification process is performed by the sensor nodes, not the base station. Almost all of these works either use special hardware, such as directional antennas, or are too complicated for sensor nodes, which have very limited resources [12, 21]. In this paper we propose a novel approach for location verification that is less complicated and does not use any special hardware. The approach is centralized, which makes it more precise than decentralized approaches [22]; more specifically, the verification process is performed at the base station, which is more powerful than regular sensors and therefore able to perform more complicated tests to verify claimed locations. It is also resilient against common attacks such as the Wormhole attack. We claim that this approach, as the simulation results show, is more practical, more effective and has less overhead in many applications. The rest of this paper is organized as follows. In the next section we describe the network model and the assumptions we made. Section 3 explains some preliminary concepts. Our proposed approach is presented in Section 4 and its resilience against usual attacks is discussed in Section 5. Section 6 presents simulation results and finally we conclude in Section 7.
2 Assumptions and Network Model In this section we describe our network model and the assumptions we made. The WSN architecture is shown in Fig. 1. The WSN consists of one or more base stations, some locators and many wireless sensor nodes. There might also be some malicious nodes that collude and create wormholes in the network; each of them is called a wormhole end. The base station is a powerful node that has almost no limitations in power or computing capability. It is the core of the network and manages all network activities. We assume that the base station is trusted and that attackers cannot compromise it. A locator is a node that somehow (by using a GPS, for example) is aware of its exact location and broadcasts its location information so that sensor nodes can use this information to estimate their own locations. A wireless sensor is a cheap, resource-constrained node that monitors the environment, captures events and sends data to the base station; for the sake of simplicity, in the rest of this paper we call them sensors. Finally, a wormhole end is an adversary node that receives packets transmitted in the network and sends them to the other wormhole end, which broadcasts them. In fact these nodes collude to move traffic from one place to another. In our model none of the nodes uses any special hardware such as directional antennas, ultrasound transceivers or nanosecond-precision timers. We also assume that each sensor shares a private symmetric key with the base station. This key can be assigned before or after network deployment [23-26]; assigning keys before deployment seems more convenient, however. Note that this assumption causes serious problems for in-network aggregation. But in many applications in which the location of the sensors is important, such as environment monitoring, the data messages sent
by the sensors are quite small, so aggregation is not critical in these applications. Moreover, this property allows the base station to verify the correctness of the sensors' alarms, which leads to fewer false alarms. Finally, we do not consider physical-layer attacks such as jamming; approaches such as [27, 28] can be used to defend against these kinds of attacks.
Fig. 1. Considered WSN architecture
3 Preliminaries As mentioned before, we propose a centralized approach for location verification in WSNs. In this section we discuss why centralized approaches can be effective in this application. After that we describe the tree structure used in our approach. 3.1 Centralized vs. Decentralized Approaches As mentioned before, previous works on secure localization and location verification mostly follow a decentralized approach. In other words, they insist that each sensor should estimate its location securely on its own. But these works usually lead to approaches that are too complicated and expensive for sensor nodes, which have very limited resources. Some other works make assumptions that are not feasible for sensor nodes. For example, using unusual and expensive hardware such as directional antennas, or using asymmetric cryptography, is not feasible for sensor nodes, since one of their most important properties is their low cost. In general, centralized approaches are simpler and more precise than decentralized ones; but they introduce a single point of failure, are less scalable, and usually have more communication overhead [22]. We claim that these disadvantages of centralized approaches are not significant in WSNs, and more specifically in localization and location verification, because:
• We assume that the base station is trusted and secure. So if the location verification process is performed at the base station, it is secure too. On the other hand, if the base station fails, the whole network becomes useless and location verification is meaningless anyway.
• WSNs are inherently limited in expansion: as a WSN expands, the average distance between sensors and the base station increases. This results in more communication cost, which leads to more power consumption and a shorter network lifetime. Hence the limited scalability of a centralized approach is not the bottleneck.
• WSNs are mostly static, i.e. sensor nodes are fixed and their locations do not change after the deployment of the network. So the localization and location verification process need be performed only once, and the communication overhead is negligible.
3.2 Tree-Based Message Structure In our approach sensors send their location information in a tree structure, using the following message structure, which imposes a small computation overhead on sensors. Each sensor simply concatenates the received messages and adds some data to the beginning and the end, so there is no need to decrypt or modify received messages. The structure is shown in Fig. 2-a. It is used recursively, i.e. the response message of each sensor is a tree which is a subtree of its parent's tree. Fig. 2-b shows an example.
Fig. 2. a) Tree-based message structure b) An example tree
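A minimal sketch of this recursive layout; the delimiters and byte format are assumptions for illustration, since Fig. 2 does not prescribe a concrete encoding:

```python
def build_response(node_id, own_payload, child_messages):
    # Child messages are concatenated verbatim between this node's
    # header and trailer: no decryption or modification is needed,
    # which keeps the per-sensor computation overhead small.
    header = f"[{node_id}|".encode()
    trailer = b"]"
    return header + b"".join(child_messages) + own_payload + trailer

leaf_a = build_response("A", b"locA", [])
leaf_b = build_response("B", b"locB", [])
root = build_response("R", b"locR", [leaf_a, leaf_b])
print(root)  # b'[R|[A|locA][B|locB]locR]'
```

Each response is thus a subtree of its parent's response, mirroring the recursive structure of Fig. 2-b.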
4 Proposed Approach In this section we describe a centralized approach for location verification in WSNs that does not use any additional hardware and has small memory, computation and communication overhead. The approach is robust against different attacks (internal or external) and detects anomalies in the localization (intended or not) quite well. It consists of two phases: an initial phase and an operation phase. In the first phase, performed just after the deployment of the network, the base station collects the location information of the sensors in a tree-like structure and then verifies it;
of course, some of the sensors may not be verified. The verified ones are called verified sensors, while those not verified are called unverified sensors. The latter have a second chance to be verified in the operation phase. In the following sections we describe these phases in more detail. 4.1 Initial Phase This phase has three steps: location collection, verification, and announcement. First, in the location collection step, the base station collects the location information of the sensors; then, in the verification step, it verifies the sensors' claimed locations according to the information gathered in the first step. Finally, in the announcement step, the base station announces the unverified sensors. We now describe these steps in detail. Location Collection. Location information of sensors is collected as follows:
1. First, the base station broadcasts a location request message (LOC_REQ).
2. Each sensor, after receiving the first LOC_REQ message, sends an acknowledgement message (REQ_ACK) to its sender and sets the sender as its parent. It then broadcasts the LOC_REQ message for its neighbors. Every LOC_REQ message that a sensor receives after the first one is discarded.
3. Each sensor collects its children's location information, concatenates it and appends its own location information, encrypted with its private key. This procedure builds the location response message (LOC_RSP) that the sensor sends to its parent. The LOC_RSP consists of the sensor's ID, its location, the response time and a checksum. The response time is the period between receiving the first LOC_REQ and sending the LOC_RSP message. The checksum ensures the integrity of the message; each sensor calculates it over the whole LOC_RSP message.
4. The process in step 3 continues until the location information of the sensors arrives at the base station.
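Step 3 can be sketched as follows; the HMAC and JSON encoding below stand in for the unspecified encryption and checksum details, so the message format is an assumption, not the paper's actual wire format:

```python
import hashlib, hmac, json

def make_loc_rsp(sensor_id, location, response_time, key, child_rsps):
    # Concatenate the children's responses verbatim, append this
    # sensor's own record authenticated with the per-sensor key shared
    # with the base station, then add a checksum over the whole message.
    own = json.dumps({"id": sensor_id, "loc": location,
                      "rt": response_time}).encode()
    tag = hmac.new(key, own, hashlib.sha256).digest()
    body = b"".join(child_rsps) + own + tag
    checksum = hashlib.sha256(body).digest()[:4]  # 4-byte integrity check
    return body + checksum

rsp = make_loc_rsp("s7", [120.0, 85.5], 7000, b"secret-key-s7", [])
assert hashlib.sha256(rsp[:-4]).digest()[:4] == rsp[-4:]  # checksum verifies
```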
Fig. 3. The network tree after location collection step
When this step is completed, the base station has an approximate topology of the network in the form of a tree, as shown in Fig. 3. The C++-like pseudocode of what runs in each sensor is shown in Fig. 4; it shows in more detail how exactly our protocol works. In the first line the sensor performs the localization process using any localization algorithm (e.g. the centroid algorithm [4]). In the next three lines it waits until a LOC_REQ message is received, then saves its parent's address and its depth in the network tree. To enable sensors to determine their depth, we include a depth parameter in the LOC_REQ message. This parameter starts at zero at the base station and each sensor increments it before broadcasting LOC_REQ; this task is performed in the fifth line. In the sixth and seventh lines the sensor waits a moment and then broadcasts the LOC_REQ message for its neighbors. The reason is that after a LOC_REQ broadcast, all recipients receive it almost at the same time; so if they rebroadcast it as soon as they receive it, the probability of collision would be high. Although it is the MAC layer's responsibility to resolve this problem, it can cause more power consumption. To avoid this, after receiving the LOC_REQ message each sensor waits a random period and then broadcasts the request. We refer to the maximum waiting time before sending the request as REQ_MAX_WT. Now it is time for collecting response messages from the sensor's children. But how can a sensor determine that it is a leaf in the tree? When should it send its response message to its parent? One way is according to the acknowledgements that each sensor receives. If a sensor receives no acknowledgement after broadcasting the request message, it can conclude that it has no children. If it receives acknowledgements, it can determine how many children it has by counting them, and after receiving responses from all of them it sends its response message to its parent.
This method works properly if all messages are received correctly and there is no malicious entity in the network. Otherwise some sensors may not be able to contribute (e.g. if a sensor's acknowledgement message is not received by its parent) or may wait forever (e.g. if the response message of one of its children does not arrive). The main vulnerability of this method is that each sensor relies on other sensors that may not be trusted; sensors should therefore decide independently. For this reason we define a time period, called the response time, during which each sensor, after receiving the request, collects the responses of its children; when it expires, the sensor sends its response to its parent. Obviously the response time cannot be equal for all sensors, otherwise the network tree would have only one level: sensors in the second level receive the request later than first-level sensors, so their timers would expire after the first-level sensors have already sent their responses. Therefore the response time of each sensor should depend on its depth in the network tree. The response time is calculated by the following formula:

RSPWT = A × (B − C × depth) (1)

where A, B, and C are parameters that should be adjusted according to REQ_MAX_WT and network conditions such as the height of the network tree.
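With the simulation parameters of Section 6 (A = 50, B = 200, C = 20; time units as in the simulator), formula (1) gives strictly decreasing waiting times with depth, so children's timers expire before their parents':

```python
def rsp_wt(depth, A=50, B=200, C=20):
    # RSPWT = A * (B - C * depth): deeper sensors wait less, so
    # responses propagate leaf-to-root before parent timers expire.
    return A * (B - C * depth)

for d in range(4):
    print(d, rsp_wt(d))
# 0 10000
# 1 9000
# 2 8000
# 3 7000
```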
Sensor_Activity() {
 1    localize();
 2    receive(LOC_REQ);
 3    parent = LOC_REQ.sender;
 4    depth = LOC_REQ.depth;
 5    LOC_REQ.depth++;
 6    wait(random(REQ_MAX_WT));
 7    broadcast(LOC_REQ);
 8    collectionWT = calculateCWT(depth);
 9    startTimer(collectionWT);
10    while (timer is not off) {
11        receive(LOC_RSP);
12        myLOC_RSP.add(LOC_RSP);
13    }
14    myLOC_RSP.finalize();
15    send(parent, myLOC_RSP);
}

Fig. 4. Pseudocode of sensors' activity
Verification of Location Information. The existence of malicious nodes in the network can completely ruin the network and falsify the location information of the sensors. For example, Fig. 5 shows the same network as Fig. 3 and its corresponding tree in the case of a Wormhole attack. The anomalies in Fig. 5 are obvious, but how can the base station detect them? We describe three properties that must be satisfied if the claimed locations are correct; the base station can use them to verify sensors' locations and detect anomalies in the network. These properties are as follows:
• Communication range: neighboring sensors should be in each other's communication range; more specifically, the distance between each sensor and its parent should not exceed the maximum communication range of sensors.
• Response time: the response time of each sensor should be greater than the response times of its children.
• Uniqueness: since each sensor sends its location information once and has one parent, it should appear in the network tree only once.
If any of these properties is not satisfied for a sensor, the base station detects an anomaly concerning that sensor and marks it as an unverified node.
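The three checks can be sketched as follows; the tree encoding and field names are assumptions about how the base station might store the collected information, with distances taken as Euclidean:

```python
import math

def verify(tree, max_range):
    # tree: list of (sensor_id, parent_id, (x, y), response_time)
    # records reconstructed by the base station; parent_id is None
    # for sensors whose parent is the base station itself.
    nodes = {sid: (pos, rt) for sid, _, pos, rt in tree}
    unverified, seen = set(), set()
    for sid, pid, pos, rt in tree:
        # Uniqueness: a sensor may appear in the tree only once.
        if sid in seen:
            unverified.add(sid)
        seen.add(sid)
        if pid is not None and pid in nodes:
            ppos, prt = nodes[pid]
            # Communication range: parent must be within one hop.
            if math.dist(pos, ppos) > max_range:
                unverified.add(sid)
            # Response time: a parent's timer expires after its children's.
            if rt >= prt:
                unverified.add(sid)
    return unverified

tree = [
    ("a", None, (0, 0), 9000),
    ("b", "a", (30, 0), 8000),
    ("c", "a", (300, 0), 8000),  # far-away claim: range violation
]
print(verify(tree, max_range=50))  # {'c'}
```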
Fig. 5. The network tree in the case of Wormhole attack
After verification, the base station records the trusted sensors and their locations, so these sensors no longer have to send their location information along with their data, and the base station can retrieve their location by their ID. Announcing Unverified Sensors. After detecting unverified sensors, the base station announces them to the network. This announcement has two benefits. First, unverified sensors find out that their location was not verified, so they can refine their location and try again in the operation phase; the base station can provide some hints to help these sensors refine their location, or they can ask their verified neighbors for hints. Second, the other sensors in the network learn which sensors are unverified and discard the messages they send. This saves sensor power, since unverified messages are not forwarded by the other sensors. Note that if an unverified message is nevertheless forwarded towards the base station, it is discarded there, since its sender is not in the verified-sensors list. Also note that this announcement must be performed in a secure way; otherwise attackers could exploit this feature and announce verified sensors as unverified, causing a denial-of-service attack. For this reason the base station should send an encrypted message containing the list of unverified neighbors to each sensor that has at least one unverified neighbor. 4.2 Operation Phase Sensors that were not verified, or that are added to the network after the initial phase, have the opportunity to join the network in this phase. To do so, such a sensor broadcasts its location information; any receiving sensor, which we call an introducer, sends this information to the base station. The base station is then able to perform the verification process based on the location information of the requesting sensor and the verified introducers. For example, suppose that the request of the requesting sensor is received by the base
station through two different verified introducers. If the distance between these introducers exceeds twice the maximum communication range of sensors, the base station detects an anomaly and the requesting sensor is not verified.
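The consistency check on introducers can be sketched as a pairwise test, generalizing the two-introducer example above (an assumption, since the paper only describes the two-introducer case):

```python
import math
from itertools import combinations

def introducers_consistent(introducer_positions, comm_range):
    # Two introducers can both hear the same requester only if they
    # lie within 2 * comm_range of each other (both within comm_range
    # of the requester's true position).
    return all(math.dist(p, q) <= 2 * comm_range
               for p, q in combinations(introducer_positions, 2))

print(introducers_consistent([(0, 0), (60, 0)], 50))   # True  (60 <= 100)
print(introducers_consistent([(0, 0), (250, 0)], 50))  # False (250 > 100)
```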
5 Resilience against Security Attacks In this section we describe why our proposed approach is resilient against different attacks. 5.1 Key Capture Attack Since in this approach each sensor shares a private key with the base station and there is no global key, an attacker who compromises one or more sensor nodes and captures their credentials cannot dig into the whole network or have a significant effect on it. 5.2 Spoofing or Altering Messages
In our approach, since there is no global key in the network and each sensor encrypts its data with its private key, spoofing or altering messages is not possible. 5.3 Sybil Attack In this attack one node presents multiple identities to other sensors [29]. Since there is only one shared key between each sensor and the base station, a malicious node cannot present different IDs, and if it uses the same ID for all the identities it claims, it violates the uniqueness property and is detected by the base station in the verification phase. 5.4 Sinkhole Attack In this attack the adversary tries to attract traffic from a particular area, or from the entire network, to a compromised node so that it can perform other attacks such as spoofing and altering messages, selective forwarding or the Blackhole attack [11]. To counter this attack, each sensor can send its data only to its parent, or the base station can define routes by specifying, for each sensor, the sensors through which it must send its data. Since the base station has a better view of the entire network, it can define routes more securely and more efficiently than decentralized routing algorithms. 5.5 Wormhole Attack In this attack at least two malicious nodes collude: one of them receives messages and somehow (by powerful transmitters, for example) sends them to the other, which broadcasts them [11]. Note that the distance between the two ends of the wormhole (the wormhole length) must be longer than the sensors' communication range; otherwise the attack is not effective. A wormhole can be simplex (one-way) or duplex (two-way). In our approach, since encryption is performed at the application layer, not at the MAC or network layer, we assume that wormhole ends have the ability to change the source and destination addresses of messages. As discussed below, the effect of a wormhole differs according to the type of message tunneled.
• LOC_INFO: Tunneling the locators' messages (LOC_INFO) through the wormhole causes them to be received by sensors that are not in the locators' communication range, so their location estimates would be erroneous. If this error is significant, the base station detects it in the verification process, because the distance between such a sensor and its parent is greater than the communication range of sensors, violating the communication range property.
• LOC_REQ: If a location request is tunneled by the wormhole, sensor nodes that have already received a location request ignore the tunneled message, so it does not affect them. Otherwise they consider the sender node as their parent and send it their location information; note that this is possible only if the wormhole works two-way. In this case the base station detects the anomaly through the distance consistency check in the verification process.
• LOC_RSP: In this case, any node that receives a tunneled message considers the sender as its child, while in fact this node is a child of another node in another part of the network. So the uniqueness property is violated and the base station detects the anomaly in the verification process. If the attacker could somehow (by jamming, for example) prevent the real parent from receiving the response message, the uniqueness property would hold but distance consistency would be violated, and the base station would still detect the anomaly.
• DATA: Since data is sent in encrypted form and the location information of sensors has been stored at the base station, tunneling data messages does not affect the network, and the base station is able to identify the sender of the message and its location.
6 Simulation and Evaluation We have simulated the proposed approach using OMNeT++ [30] and the MiXiM framework [31]; the simulation parameters are shown in Table 1. As the table indicates, the centroid algorithm is used for localization, but other methods can also be used for this purpose. Due to the localization error, the maximum valid distance is considered to be greater than the communication range of the sensors. The results reported here are the average of 1000 runs with different configurations.

Table 1. Simulation parameters

Parameter               Value
Area                    500×500 m2
No. Sensors             1000
No. Locators            100
Communication range     50 m
MAX Valid Distance      70 m
REQ_MAX_WT              100 sec
A                       50
B                       200
C                       20
Localization method     Centroid algorithm
As argued earlier, our approach detects the usual attacks properly. This section shows the approach's performance according to the simulation results. We use two metrics: location accuracy and connectivity factor. Location accuracy, the Euclidean distance between the estimated location and the exact location of a sensor, expresses the localization error. The connectivity factor indicates the percentage of sensors that contribute, and of those that are verified, in the initial phase. Table 2 shows the values of these metrics in two cases: in the normal situation and under a Wormhole attack. Note that the connectivity factor is reduced after the consistency check (CC) even in the normal situation. This is due to the localization error of the centroid algorithm, not to any kind of attack; these errors are discussed in detail in [32]. As the table shows, the location accuracy under a Wormhole attack is almost equal to that in the normal situation.

Table 2. Resulting performance factors

Wormhole Attack    Connectivity Factor          Location Accuracy
                   Before CC     After CC       Mean       Max
No                 0.916         0.892          11.552     29.602
Yes                0.912         0.732          11.450     29.861

Table 3. Resulting overhead factors

                        Mean       Max
Level                   6.138      10
Branching Factor        0.966      23.3
Payload Size (bytes)    37.587     2473.8
To show the overheads of our approach, we use the payload size that each sensor has to transmit, since the computation overhead is quite small. The payload size of each sensor depends on its level in the network tree and the number of its children. As shown in Table 3, the average payload size is quite small, and the maximum payload size is feasible since this transmission is performed only once in the network's lifetime.
7 Conclusion

In this paper we proposed a novel approach for secure location verification in WSNs. The approach is lightweight and requires no additional hardware such as directional antennas, which makes it suitable for WSNs. It is resilient against attacks such as Wormhole and Sybil attacks and properly detects anomalies in the localization. The computational and memory overheads on the sensors are small. Simulation results also show that the average communication overhead per sensor is minor and that even the worst case is feasible in WSNs.
A. Ghebleh, M. Abbaspour, and S. Homayounnejad
References

1. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A Survey on Sensor Networks. IEEE Communications Magazine 40, 102–105 (2002)
2. Niculescu, D., Nath, B.: Ad Hoc Positioning System (APS). In: IEEE GLOBECOM, pp. 2926–2931 (2001)
3. Want, R., Hopper, A., Falcao, V., Gibbons, J.: Active Badge Location System. ACM Transactions on Information Systems 10, 91–102 (1992)
4. Bulusu, N., Heidemann, J., Estrin, D.: GPS-Less Low-Cost Outdoor Localization for Very Small Devices. IEEE Personal Communications 7, 28–34 (2000)
5. Niculescu, D., Nath, B.: DV Based Positioning in Ad Hoc Networks. Telecommunication Systems 22, 267–280 (2003)
6. Harter, A., Hopper, A., Steggles, P., Ward, A., Webster, P.: The Anatomy of a Context-Aware Application. Wireless Networks 8, 187–197 (2002)
7. He, T., Huang, C., Blum, B.M., Stankovic, J.A., Abdelzaher, T.: Range-Free Localization Schemes for Large Scale Sensor Networks. In: ACM MOBICOM, pp. 81–95 (2003)
8. Bahl, P., Padmanabhan, V.N.: RADAR: An In-Building RF-Based User Location and Tracking System. In: IEEE INFOCOM, pp. 775–784 (2000)
9. Fang, L., Du, W., Ning, P.: A Beacon-Less Location Discovery Scheme for Wireless Sensor Networks. In: IEEE INFOCOM, pp. 161–171 (2005)
10. Niculescu, D., Nath, B.: Ad Hoc Positioning System (APS) Using AoA. In: IEEE INFOCOM, pp. 1734–1743 (2003)
11. Karlof, C., Wagner, D.: Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures. Ad Hoc Networks 1, 293–315 (2003)
12. Lazos, L., Poovendran, R.: SeRLoc: Secure Range-Independent Localization for Wireless Sensor Networks. In: ACM Workshop on Wireless Security (WiSe), pp. 21–30 (2004)
13. Li, Z., Trappe, W., Zhang, Y., Nath, B.: Robust Statistical Methods for Securing Wireless Localization in Sensor Networks. In: ACM/IEEE IPSN, pp. 91–98 (2005)
14. Liu, D., Ning, P., Du, W.K.: Attack-Resistant Location Estimation in Sensor Networks. In: ACM/IEEE IPSN, pp. 99–106 (2005)
15. Čapkun, S., Hubaux, J.P.: Secure Positioning of Wireless Devices with Application to Sensor Networks. In: IEEE INFOCOM, pp. 1917–1928 (2005)
16. Vora, A., Nesterenko, M.: Secure Location Verification Using Radio Broadcast. IEEE Transactions on Dependable and Secure Computing 3, 377–385 (2006)
17. Sastry, N., Shankar, U., Wagner, D.: Secure Verification of Location Claims. In: ACM WiSe, pp. 1–10 (2003)
18. Ekici, E., McNair, J., Al-Abri, D.: A Probabilistic Approach to Location Verification in Wireless Sensor Networks. In: IEEE ICC, pp. 3485–3490 (2006)
19. Lazos, L., Poovendran, R., Čapkun, S.: ROPE: Robust Position Estimation in Wireless Sensor Networks. In: IEEE IPSN, pp. 324–331 (2005)
20. Al-Abri, D., McNair, J., Ekici, E.: Location Verification Using Communication Range Variation for Wireless Sensor Networks. In: IEEE MILCOM (2007)
21. Hu, L., Evans, D.: Using Directional Antennas to Prevent Wormhole Attacks. In: 11th Network and Distributed System Security Symposium (NDSS), pp. 131–141 (2004)
22. King, J.L.: Centralized Versus Decentralized Computing: Organizational Considerations and Management Options. ACM Computing Surveys 15, 319–349 (1983)
23. Lee, J., Stinson, D.: Deterministic Key Predistribution Schemes for Distributed Sensor Networks. In: ACM Symposium on Applied Computing, pp. 294–307 (2005)
24. Liu, D., Ning, P.: Location-Based Pairwise Key Establishments for Static Sensor Networks. In: ACM Workshop on Security of Ad Hoc and Sensor Networks, pp. 72–82 (2003)
25. Blom, R.: An Optimal Class of Symmetric Key Generation Systems. In: Pichler, F. (ed.) EUROCRYPT 1985. LNCS, vol. 219, pp. 335–338. Springer, Heidelberg (1986)
26. Chan, H., Perrig, A., Song, D.: Random Key Predistribution Schemes for Sensor Networks. In: IEEE Symposium on Security and Privacy, pp. 197–213 (2003)
27. Pickholtz, R.L., Schilling, D.L., Milstein, L.B.: Theory of Spread-Spectrum Communications – A Tutorial. IEEE Transactions on Communications 30, 855–884 (1982)
28. Wicker, S.B., Bartz, M.J.: Type-II Hybrid-ARQ Protocols Using Punctured MDS Codes. IEEE Transactions on Communications 42, 1431–1440 (1994)
29. Newsome, J., Shi, E., Song, D., Perrig, A.: The Sybil Attack in Sensor Networks: Analysis & Defenses. In: IEEE IPSN, pp. 259–268 (2004)
30. OMNeT++ Community Site, http://www.omnetpp.org
31. MiXiM Project, http://mixim.sourceforge.net
32. Al-Abri, D., McNair, J.: On the Interaction between Localization and Location Verification for Wireless Sensor Networks. Computer Networks 52, 2713–2727 (2008)
How Secure are Secure Localization Protocols in WSNs?

Chérifa Boucetta, Mohamed Ali Kaafar, and Marine Minier
INRIA, France
Abstract. Remote monitoring and information gathering are the main objectives behind deploying Wireless Sensor Networks (WSNs). Besides the WSN issues caused by restricted communication and computation resources (low energy, limited memory, computational speed and bandwidth), securing sensor networks is one of the major challenges these networks have to face. In particular, the security of sensor localization is a fundamental building block for many applications such as efficient routing. In this paper, we introduce a new threat model that combines classical Wormhole attacks (i.e. an attacker receives packets at one location in the network, then tunnels and replays them at another, remote location using a powerful transceiver as an out-of-band channel) with false neighborhood topology information sent by the wormhole endpoints themselves or by colluding compromised nodes. Using intensive simulations, we show how a clever attacker exploiting the neighborhood topology information can easily defeat two representative secure localization schemes. We also present possible countermeasures and the first corresponding results.
1 Introduction
Wireless Sensor Networks (WSNs) are composed of a large number of low-cost, low-power and multi-functional sensor nodes that communicate over short distances through wireless links. The sensor nodes cooperate to collect, transmit and forward data to particular points called base stations. Most of the time, they are deployed in an open and uncontrolled environment where attackers may be present. Due to the lack of infrastructure and the ease of physical-layer exploits, many security threats must be considered in the WSN context. Many of those threats target the accuracy of localization information. In this case, the aim of the attacker is to construct a false topology in order to divert traffic to her own advantage and then to launch traffic analysis, selective-forwarding attacks, Denial of Service (DoS) attacks, etc. Sybil attacks [1], Wormhole attacks [2] and false neighbors information [2] are the major threats to the security of localization in WSNs. In this paper, we consider both Wormhole attacks and false neighbors information:

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 164–178, 2011.
© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
– A Wormhole attack is particularly harmful against routing in sensor networks. An attacker installs a dedicated connection between two distant points (called wormhole endpoints) by a variety of means (e.g. a network cable, a long-range wireless transmission in a different band, etc.). She overhears packets at one extremity and tunnels them through the wormhole link to another point in the network, where the second extremity broadcasts the packets to its neighboring nodes. The tunnel creates the illusion of proximity between the two endpoints.
– In a false neighbors information attack, an attacker lies about the list of her direct neighbors (i.e. the sensor nodes within range). She could include nodes that are close to the base stations (Sinkhole attacks), creating a false topology in the network.

These two attacks have almost the same objective: to control a portion of the network by creating a false topology. The topology's correctness matters from several applications' perspective, efficient routing being one of the most notable. Extensive previous research has proposed several ways to secure localization in WSNs. So far, the paradigm of secure localization protocols has been that once one of these two attacks is defended against, the second is canceled by design. For instance, if one is sure that there is no wormhole attack in the network, one can be fairly confident (by running a simple distance bounding test) that the neighborhood information is safe. Based on this paradigm, two classes of protocols have emerged. On the one hand, some solutions are dedicated to defending against false neighbors information [4,7,9,10]. Most of them, if not all, claim to defend against wormhole attacks. They are essentially based on dedicated hardware or on statistical and geometrical tests coupled with local neighborhood information.
On the other hand, several other solutions aim specifically to defend against wormhole attacks [3,5,6,8], often using the same principles as the protocols securing neighborhood information.

Contributions. Although the neighbor discovery process can be secured to compute accurate neighbor lists, the research literature has so far ignored the co-existence of both threats, i.e. wormhole endpoints that also send biased lists of neighbors, trying to mislead either wormhole detection mechanisms or "secure" neighbor discovery processes. In this paper, we study how potent this danger is for two representative approaches: the first defends against wormhole attacks through local neighborhood information [6], while the second [7] proposes a neighbor-verification protocol designed to secure localization in WSNs. We first introduce a simple yet novel threat model in which wormhole attackers not only relay packets through physical tunnels but also lie consistently with such a tunnel. We then study the impact of the co-existence of both threats in a WSN. We identify different scenarios for performing these attacks and demonstrate that it is easy to hide the wormhole attack. We also show that these clever wormhole attackers lead to high false positive ratios, which prevent honest, uncompromised sensors from functioning properly. We also study how a conspiracy can be formed and how much it could
affect the security protocols. The effectiveness of these attacks on the studied approaches is demonstrated through extensive simulations. Finally, we provide countermeasures that can be implemented in today's solutions to prevent our attacks.

Assumptions and Threat Model. We assume that attackers have access to the same data as legitimate nodes. An adversary is able to send misleading information when probed, send manipulated information after receiving a request, or affect some metrics observed by chosen targets. An adversary can be a wormhole endpoint, a node lying about its neighborhood, or both. Based on these assumptions, we identify four types of attack scenarios, described in section 4.

Paper Organization. The rest of the paper is organized as follows. Section 2 provides a brief overview of secure localization protocols. In section 3, we describe in more detail the workings of the two approaches chosen for this study. We identify and classify our attacks in section 4. We demonstrate and study the effects of these attacks through extensive simulations in section 5. In section 6, we discuss ways to prevent our attacks and provide easy-to-implement countermeasures. Finally, section 7 concludes this paper.
2 Background
In this section, we give a brief survey of recently proposed approaches for securing localization in WSNs. Most of these approaches aim to defend against wormhole attacks. They can be classified into two categories.

2.1 Distance Bounding-Based Approaches
Distance bounding protocols are used to verify that a node u, at distance d_uv from a verifier node v, is not claiming a shorter distance d'_uv than the actual one (i.e. they check whether d'_uv < d_uv) [11]. In [3], Hu et al. use packet leashes to detect wormhole attacks. Packet leashes contain geographical or temporal information that bounds the distance or lifetime of an end-to-end transmitted packet, restricting its travel to the destination. The sender includes a timestamp or location information in the message, and the receiver checks that the packet was received within a legal time or distance. However, this method requires either precise location information obtained via an out-of-band mechanism such as GPS, or loose clock synchronization between the nodes. In [5], the authors propose an authenticated distance bounding technique called the MAD protocol. In this protocol, each node measures distances using timing properties to verify whether a node is a true neighbor. The MAD protocol needs special radio hardware that can switch very quickly between send and receive modes. The authors of [12,13] proposed an anchor-based scheme in which each node estimates its distance to anchor nodes using a hop-counting technique. The
disadvantage of this solution is that it relies mainly on anchor nodes, which requires, besides trusting these anchor nodes, a preliminary manual setup and a priori deployment knowledge.
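For concreteness, the temporal packet-leash idea of [3] can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function name and the clock-offset parameter delta_s are our assumptions.

```python
C = 299_792_458.0  # radio propagation speed in metres per second

def within_leash(t_sent, t_recv, leash_m, delta_s):
    """Temporal leash check: with clocks synchronized to within delta_s
    seconds, the true travel time is at least (t_recv - t_sent) - delta_s,
    so the packet covered at least that many seconds times C metres.
    Accept the packet only if that lower bound fits within the leash."""
    min_travel_s = max(0.0, (t_recv - t_sent) - delta_s)
    return min_travel_s * C <= leash_m
```

A packet tunneled through a wormhole arrives with extra latency, so its minimum possible travel distance exceeds the leash and the check fails, while a genuine one-hop transmission passes.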
2.2 Graph Theoretic and Geometry-Based Approaches
Hu and Evans [4] propose the use of special hardware, directional antennas, to secure localization in WSNs. Each node executes a neighbor discovery process to construct its neighbor list using directional antennas in each direction; a neighboring relationship is confirmed only when the directions observed by both nodes match. However, the need for directional antennas limits the applicability of this protocol, as the sensors would be costly. In [14], the authors presented a centralized wormhole detection technique that builds a virtual network layout using the multi-dimensional scaling (MDS) technique. The proposed algorithm tries to establish a virtual position for every node, respecting the constraints induced by the connectivity and distance estimation data. Unfortunately, this technique is significantly sensitive to distance estimation errors. Additional work in [15] uses dedicated nodes called "guard nodes" in a theoretic framework to prevent wormhole attacks. Guard nodes know their actual locations and have higher transmit power. The use of such special-purpose guard nodes makes this approach inadequate for WSNs, since compromising the guard nodes would lead to the failure of the security protocol. The scheme proposed in [8] detects wormhole attacks using connectivity information. Each node is represented by a disk whose radius is the range of the node itself. The detection algorithm looks for forbidden geometrical substructures in the connectivity graph that would appear under a wormhole attack. More precisely, a wormhole link creates false disk intersections and can thus be detected. The precise form of these structures depends on the connectivity pattern.
3 Approaches for Detecting Wormhole Attacks in Our Study
In our work, we concentrate on two promising and representative approaches: the first uses graph theory and is purely based on local neighborhood information; the second relies on node distance estimation and simple geometric tests. Our choice is motivated by the fact that each approach represents a class of protocols for securing localization, and both achieve a high level of security against wormhole attacks. Finally, the two designs are quite different: one is built to detect wormhole attacks, after which the neighborhood information would be safe, whereas the other first secures the neighborhood discovery process and then detects wormhole attacks.
3.1 LNI: Detecting Wormhole Attacks Using Local Neighborhood Information
In [6], the authors presented a scheme for detecting wormhole attacks in wireless sensor networks using Local Neighborhood Information (LNI). They assume that the network is dense and static and that links are bidirectional. The main idea of the algorithm is that every sensor node can compute a so-called connectivity degree based on a neighborhood verification protocol. The connectivity degrees of a node and of its neighbors are then used to verify the presence or absence of a wormhole in the network. More precisely, the authors of [6] base their wormhole detection algorithm on the following principle: a wormhole endpoint X1 will pretend to have neighbors at one or two hops (i.e. the other wormhole endpoint X2 and its neighbors) that the actual neighbors of X1 cannot see.

To calculate the connectivity degree, LNI uses the edge clustering coefficient (ECC), defined in [16] as the number of triangles to which a given edge belongs, divided by the number of triangles that might potentially include it, given the degrees of the adjacent nodes. This approach is easily generalized to other "cyclic geometrical structures" such as squares or pentagons. Thus, LNI defines the edge-clustering coefficient C^g_{i,j} of order g, where g is the number of points of the cyclic structure:

    C^g_{i,j} = CS^g_{i,j} / S^g_{i,j}

where CS^g_{i,j} is the number of "cyclic structures" of order g the edge (i, j) belongs to, while S^g_{i,j} is the number of all possible cyclic structures of order g that can be built given the degrees of the nodes. The authors modify the coefficient CS^g_{i,j} to exclude a particular node x: they introduce the coefficient CS^g_{i,j\x}, the number of cyclic structures that exclude x. A node j is then detected as a wormhole endpoint if one of its neighbors i checks that ∀x ∈ V1(j), CS^{g=3}_{i,x\j} = 0 and CS^{g=4}_{i,x\j} = 0, where V1(j) is the set of the 1-hop neighbors of j. This means that i can reach the neighbors of j only via j, which is very rare in a dense network. The authors limit their study to the cases g = 3 and g = 4, so each node exchanges its 2-hop neighborhood with its neighbors.

The complete LNI algorithm works in three steps: neighborhood discovery, CS computation and an isolation phase. The algorithm is decentralized, distributed and runs locally at each node of the network. In the first step, network nodes exchange HELLO messages to determine their 1-hop and 2-hop neighborhood lists, V1 and V2. Once these lists are constructed, each node i executes the following steps for each of its 1-hop neighbors j: it computes CS^3_{i,k\j} for every k in V1(j); if the value is null, i then computes CS^4_{i,k\j}; if the second value is also null, i declares j a malicious node, broadcasts an alert message containing j's identity and inserts j in its so-called red list. Finally, each node that hears such a message adds j to its red list or increments the corresponding counter for node j. When j's counter reaches a given threshold, the node sends alert messages to all its direct neighbors to isolate j from the network.

Simulation results show that the probability of detecting a single wormhole is effectively high and that the number of false positives is relatively low as soon as the node degree is sufficiently large. However, as we show in the next sections, simply sending biased lists of neighbors can affect these results.
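The LNI check can be sketched in a few lines. This is a hedged reading, not the authors' code: we interpret CS^{g=3}_{i,x\j} as the number of length-2 paths from i to x that avoid j, and CS^{g=4}_{i,x\j} as the number of length-3 paths avoiding j; the function names and any example graph are illustrative.

```python
def cs3(adj, i, x, j):
    """Length-2 paths i-a-x whose middle node a avoids {i, x, j}."""
    return len((adj[i] & adj[x]) - {i, x, j})

def cs4(adj, i, x, j):
    """Length-3 paths i-a-b-x with distinct internal nodes avoiding j."""
    return sum(1
               for a in adj[i] - {i, x, j}
               for b in adj[x] - {i, x, j, a}
               if b in adj[a])

def flags_as_wormhole_end(adj, i, j):
    """Node i flags its neighbor j when every other neighbor of j is
    reachable from i only through j (no 2- or 3-hop detour exists)."""
    others = adj[j] - {i}
    return bool(others) and all(
        cs3(adj, i, x, j) == 0 and cs4(adj, i, x, j) == 0 for x in others)
```

In a dense network an honest neighbor almost always leaves some triangle or quadrilateral intact, so the test fires only on isolated bridge nodes such as wormhole endpoints.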
3.2 SNV: Secure Neighbor Verification Protocol
This second approach [7] proposes a Secure Neighbor Verification (SNV) protocol that relies on node distance estimation and simple geometric tests. SNV assumes that each sensor node is equipped with two network interfaces, a radio frequency (RF) interface and an ultrasound (US) interface, and that all network nodes can perform symmetric-key cryptographic operations to secure the exchange of messages. Again, the network is assumed to be dense, and the distances between connected nodes are assumed to match a polygon on a plane. The adversary is assumed to control a relay network composed of a set of relay nodes, fully connected by out-of-band relay links.

The proposed scheme contains three phases. In the first step, called ranging, every node runs a neighbor discovery process and attempts to build its neighbor list and to calculate the distance to each neighbor using ultrasound ranging [7]. In the second step, called neighbor table exchanges, every node shares with each of its neighbors, in an authenticated way, its established neighbor table including the distances calculated in the ranging phase, thereby building a 2-hop neighbor table. Finally, in the link verification phase, each sensor runs three consistency tests locally on the 2-hop neighbor table. An attack can thus be detected, and links can be either discarded or confirmed. The three consistency tests are the following:

– Link symmetry test: In the 2-hop neighbor table of i, denoted NT2_i, any link (u, v) for which the distance measured by u differs from the distance measured by v is discarded, as are links with only one measurement. An attack is detected if more than a fraction Θ_sym of such links exists, where Θ_sym is the acceptable fraction of links with asymmetric length [7].
– Maximum range test: Node i deletes links in NT2_i that are longer than the range R. An attack is detected if more than a fraction Θ_range of the links is discarded, where Θ_range is the acceptable fraction of links reported to be longer than R due to distance measurement error [7].
– Quadrilateral test: Node i examines every 4-clique in its 2-hop neighbor table NT2_i for which it knows the link lengths. (In an undirected graph, a clique is a subset of vertices such that every two vertices in the subset are connected by an edge.) If any 4-clique is not a quadrilateral within the error tolerance, an attack is detected. Moreover, if a link is part of a convex quadrilateral, it is declared verified.
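The first two consistency tests can be sketched as follows. This is a hedged illustration only: the table layout, the default thresholds and the tolerance tol are our assumptions, and the quadrilateral test is omitted.

```python
def snv_link_tests(nt2, rng, theta_sym=0.1, theta_range=0.1, tol=1.0):
    """nt2 maps an undirected link (u, v) to the two distances measured by
    its endpoints (None when only one endpoint reported a measurement).
    Returns a pair (attack_detected, surviving_links)."""
    # Link symmetry test: drop asymmetric or half-measured links.
    asym = {l for l, (d1, d2) in nt2.items()
            if d2 is None or abs(d1 - d2) > tol}
    kept = {l: d for l, d in nt2.items() if l not in asym}
    # Maximum range test: drop links longer than the radio range.
    too_long = {l for l, (d1, d2) in kept.items() if max(d1, d2) > rng}
    attack = (len(asym) > theta_sym * len(nt2) or
              len(too_long) > theta_range * max(len(kept), 1))
    return attack, {l: d for l, d in kept.items() if l not in too_long}
```

A wormhole-created link typically appears longer than R or carries only a one-sided measurement, so either test can trip the attack flag.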
Simulation results show that the proposed scheme is efficient at detecting wormhole attacks in the network. The authors also show that the algorithm can successfully detect k-end relay attacks (more than 2 relay nodes in the wormhole attack). Although we did not consider these aggressive wormhole attackers, we will show in section 5 that even simple 1-end wormhole attacks can be successfully hidden by providing biased lists of neighbors that are consistent with the created wormhole.
4 Simple Attacks
In this section, we present the adversary model. Our purpose is to study the impact of attacks consisting of sending biased (falsified) lists of neighbors, and the effectiveness of the proposed approaches when facing such an adversary model. We enumerate four possible attack scenarios:

1. As in previous research, wormhole attacks are randomly established between two nodes separated by more than the sensors' range. The attackers may drop or replay data messages.
2. Nodes may lie about their neighborhood. Rather than physically performing wormhole attacks (by means of special hardware), these malicious nodes include a few distant nodes which are not actually in their 1-hop neighborhood but are neighbors of a similar malicious node acting as the second endpoint of the attack. Both endpoints lie about their neighborhood only when asked, providing the total set, or just a subset, of each other's neighbors. Figure 1 illustrates such an attack scenario. Note that in this case we no longer consider wormhole attacks in the network, but only neighborhood liars. In figure 1a, nodes x1 and x2 represent the wormhole endpoints as in the first scenario. In figure 1b, the dashed links represent new links created when node x1 lies about its neighborhood; in essence, x1 adds a small subset of x2's neighbors to its own one-hop neighbor list. Here, x1 adds only two nodes, namely j and k.
3. An attacker may combine both wormhole and neighborhood-lie attacks in the hope of escaping detection. In this case, a wormhole endpoint, in addition to establishing a physical tunnel to the second endpoint, also sends a falsified list of neighbors to escape the detection tests. That is, scenario 3 is a combination of scenarios 1 and 2.
4. The attacks described above can be carried out either by malicious nodes acting independently or as a conspiracy of colluding nodes. Collusion is likely in a scenario where attack propagation happens through the now well-tested means of worms. Both the wormhole attackers and their neighbors then send biased neighbor lists, each claiming the others as actual neighbors; the neighborhood information is falsified consistently with the wormhole endpoints. In figure 1c, nodes c and b, if compromised, may claim that in addition to x1, nodes j, l and x2 are among their neighbors. Node x1 also adds nodes j and k to its one-hop neighbor list.
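Scenario 2 (and the lying half of scenarios 3 and 4) amounts to a simple transformation of the neighborhood tables. A hedged sketch with illustrative names; the parameter k (size of the claimed subset) is our assumption:

```python
def lie_about_neighbors(adj, x1, x2, k=2):
    """Return a biased copy of the neighborhood tables in which endpoint x1
    claims k of x2's neighbors as its own direct neighbors. The new links
    are made symmetric, as colluding claimants would confirm them."""
    biased = {n: set(s) for n, s in adj.items()}  # copy; leave adj untouched
    for v in sorted(biased[x2] - {x1})[:k]:
        biased[x1].add(v)
        biased[v].add(x1)
    return biased
```

Setting k to the full degree of x2 models the "total set" variant used in our simulations; a smaller k gives the "subset" variant.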
Fig. 1. Adversary model: (a) x1 and x2 represent the wormhole endpoints; (b) nodes lying about their neighborhood: x1 adds a small subset of x2's neighbors to its own one-hop neighbor list; (c) colluding nodes: nodes x1, c and b claim that nodes k, j, l and x2 are among their neighbors.
In summary, when a wormhole endpoint lies about its neighbors, it may include the other wormhole extremity, and probably a few other nearby nodes to hide behind. To study the impact of the different attacks, we mixed wormhole attacks and neighborhood lies so as to allow one or the other to be more successful.
5 Experimental Results
We performed extensive simulations to demonstrate the effectiveness of the attacks in misleading the studied secure localization protocols. In particular, we evaluate the probability of successful detection and mis-detection of attackers in the network.

5.1 Simulation Setup
We performed our simulations on the WSNet simulator [17]. The network consists of 400 static nodes distributed over a square field of 500×500 m. The transmission range of each node is 50 m. In order to run our experiments on dense sensor environments, as assumed by the studied approaches, the average local node degree is set to 12. The duration of each experiment is 250 seconds.
Wormhole attacks are randomly established between two nodes separated by more than 4 hops. Because the network is stationary, malicious nodes are defined at the time of network establishment. The percentage of malicious nodes varies from 0% (a malicious-free system) to 20%. As mentioned in section 4, a malicious node can be a wormhole endpoint, a node lying about its neighborhood, or both.

5.2 Performance Metrics
To characterize the performance of wormhole attack detection, we use the classical false/true positive/negative indicators. Specifically, a negative is a normal node, which should therefore be accepted by the test. A positive is a malicious node (e.g. a wormhole endpoint), which should therefore be rejected by the test and detected as such. The number of negatives (resp. positives) in the population comprising all the network nodes is PN (resp. PP). A false negative is a malicious node that has been wrongly classified by the test as negative and has therefore wrongly completed. A false positive is a normal node that has been wrongly rejected by the test and therefore wrongly aborted. True positives (resp. true negatives) are positives (resp. negatives) that have been correctly reported by the test and have therefore rightly aborted (resp. completed). The number of false negatives (resp. false positives, true negatives and true positives) reported by the test is TFN (resp. TFP, TTN and TTP). We use the false positive rate (FPR), the proportion of all normal nodes wrongly reported as positive by the test, so FPR = TFP/PN. Similarly, the true positive rate (TPR) is the proportion of malicious nodes rightly reported as malicious by the test: TPR = TTP/PP.
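These definitions reduce to simple set arithmetic; a small helper whose names are ours, for illustration:

```python
def fpr_tpr(all_nodes, malicious, flagged):
    """Compute (FPR, TPR) from the set of all nodes, the truly malicious
    nodes (the positives) and the nodes flagged by the detection test."""
    pn = len(all_nodes - malicious)   # PN: normal nodes
    pp = len(malicious)               # PP: malicious nodes
    tfp = len(flagged - malicious)    # TFP: normal nodes wrongly rejected
    ttp = len(flagged & malicious)    # TTP: malicious nodes rightly rejected
    return tfp / pn, ttp / pp
```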
5.3 Simulation Results
LNI, Detecting Wormhole Attacks Using Local Neighborhood Information. Figure 2 shows the variation of the true positive rate as a function of the percentage of malicious nodes. Clearly, the accuracy of detection decreases as the fraction of malicious nodes increases. It is also interesting to note that wormhole attackers who lie about their neighborhood are detected more often than those who only perform wormhole attacks; in other words, the probability of detecting wormhole attackers lying about their neighbors is higher. This is because, if a wormhole endpoint includes a few neighbors of the other endpoint as its own without consistently checking the other endpoint's claims, and more importantly without its claim being confirmed by its own neighbors, alert messages multiply inside the network, which simply increases the detection rate. However, we notice that when wormhole attacks are supported by a very few colluding nodes that all lie about their neighborhood, the detection rate decreases significantly. In particular, when the colluding nodes lie very consistently, i.e. by simply including all
Fig. 2. LNI: True Positive Rate vs attacks intensity
Fig. 3. Example of wormhole ends lying about their neighborhood; X1–X2 is the wormhole link
false neighbors as their actual neighbors, the detection rate drops below 50% when the network comprises only 10% attackers.

We illustrate this in Figure 3: when a wormhole endpoint lies about its neighbors, the number of nodes that detect it as malicious increases. Indeed, when the wormhole endpoints X1 and X2 are lying, and in particular when X1 adds K and H to its neighbor list, C computes CS^3_{C,X2\X1}, CS^4_{C,X2\X1}, CS^3_{C,K\X1}, CS^4_{C,K\X1}, CS^3_{C,H\X1} and CS^4_{C,H\X1}. All these values are null, and hence the LNI approach leads C to declare X1 a suspicious node 3 times, owing to the additional links (X1, K) and (X1, H). This behavior, however, has an impact on the number of false positives reported by the test. Figure 4 shows the variation of the false positive rate as a function of the percentage of malicious nodes. As expected, the more nodes lie about their neighborhoods, the greater the number of false positives (i.e. the more often the test incorrectly classifies honest nodes as behaving abnormally). In particular, when the neighbors of the colluding nodes lie consistently, i.e. in concordance with the claims of the wormhole endpoints themselves (scenario (4) in section 4), the false positive rate increases faster. This is illustrated in figure 4 by the "colluding nodes" curves, indicating a high false positive rate of more than
C. Boucetta, M.A. Kaafar, and M. Minier
[Figure 4: false positive rate (y-axis, 0–0.7) vs. % of malicious nodes (x-axis, 0–20), same five curves as Figure 2.]
Fig. 4. LNI: False Positive Rate vs attack intensity
13% (resp. 20%) when including a subset (resp. the total set) of neighbors, even when the system is under a low attack intensity (10% of malicious nodes). In the case of simple wormhole attackers or wormhole endpoints lying about their neighborhood (scenarios (1) and (2) in Section 4), the false positive rate does not exceed 8%. This confirms the robustness and efficiency of the LNI approach for the detection of simple wormhole attacks. Our results discussed above show, however, that this approach is far from sufficient when wormhole attackers and their colluding nodes behave in a consistent way. SNV, Secure Neighbor Verification Protocol. To evaluate the resilience of the SNV protocol to our identified attacks, we plot in Figure 5 the true positive rates observed for several attack intensities. Again, as expected, the hit rate decreases as the population of malicious nodes increases. We observe that the curve shapes are quite similar, whether the considered malicious nodes
[Figure 5: true positive rate (y-axis, 0–1) vs. % of malicious nodes (x-axis, 4–20), same five curves as Figure 2.]
Fig. 5. SNV: True Positive Rate vs attack intensity
[Figure 6: false positive rate (y-axis, 0–1) vs. % of malicious nodes (x-axis, 0–20), same five curves as Figure 2.]
Fig. 6. SNV: False Positive Rate vs attack intensity
are simple wormhole attackers, nodes only lying about their neighborhood, or colluding nodes. Surprisingly, we observed that when wormhole attackers simply lie about their neighbors (even without colluding with a set of neighbors), the scheme detects on average only 70% of malicious nodes at a moderate attack intensity (10% of malicious nodes). When faced with colluding-node scenarios, and in particular when only a subset of neighbors lies in concordance with the wormhole endpoints, the detection ratio falls to a drastically low value (42% for 20% of malicious nodes in the system). Recall from Section 3 that an attack is detected only if the 4-clique is not a quadrilateral. However, biased neighbor lists in SNV give malicious nodes opportunities to corrupt and distort network distances so that they can form convex quadrilaterals all of whose links are verified by the protocol. This explains the low detection rates observed for SNV in our different scenarios. Figure 6 illustrates the variation of the false positive rate as a function of the average percentage of malicious nodes in the network. We observe that the rate increases faster as the percentage of attackers increases, and the false positive rate is higher in the case of colluding nodes. Again, this is explained by the creation of additional links when liars send false neighbor lists: virtually created links in the exchanged neighbor tables cause the creation of false quadrilaterals that alter the results of the quadrilateral test. The salient result, finally, is that in the presence of colluding nodes the detection scheme achieves worse results than random detection. Indeed, merging Figures 5 and 6, we can evaluate the efficiency of the test as a ROC (Receiver Operating Characteristic) curve observed for several attack intensities.
This shows that for a magnitude as low as 10% of attackers in the system, the SNV protocol behaves as a random detector when dealing with colluding nodes lying consistently. In summary, even though the chosen protocol is specifically designed to secure neighbor discovery, it cannot correctly detect partial or complete collusion of nodes, whereas the first protocol appears to be more resistant to collusion.
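To make the geometric idea behind the quadrilateral test concrete, the following is a minimal sketch of a convexity check on four node positions. It assumes estimated 2D coordinates are available; the actual SNV protocol works on ranging measurements and verified links, so this is only an illustration of why distorted distances let attackers pass or fail the test.

```python
def cross(o, a, b):
    """Z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_convex_quadrilateral(p0, p1, p2, p3):
    """True if the four points, taken in order, form a convex quadrilateral."""
    pts = [p0, p1, p2, p3]
    signs = []
    for i in range(4):
        c = cross(pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4])
        if c != 0:
            signs.append(c > 0)
    # Convex iff every consecutive turn has the same orientation.
    return len(set(signs)) == 1

# A wormhole that distorts distances can pull an honest 4-clique out of
# convexity, or conversely let malicious nodes fake a convex embedding.
honest = [(0, 0), (2, 0), (2, 2), (0, 2)]      # square: convex
warped = [(0, 0), (2, 0), (1, 0.5), (0, 2)]    # one vertex pulled inward
print(is_convex_quadrilateral(*honest))  # True
print(is_convex_quadrilateral(*warped))  # False
```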
6 Possible Countermeasures and Discussion
As shown in the previous section, the second method (SNV), which proposes a dedicated mechanism for secure neighbor discovery, is relatively inefficient against colluding nodes, whereas the first method (LNI), which is only designed for wormhole detection, behaves better. More generally, geometric tests seem to be less efficient than local neighborhood information combined with a voting method. Notably, the attacks identified in the previous sections are generally applicable to other approaches for detecting wormhole attacks or securing neighborhood information. Even though we only tested the impact of the attacks on two representative solutions, it is important to note that our attacks exploit a common vulnerability of all the methods proposed so far: consistently lying about one's neighbors remains the main weakness that our attacks exploit to succeed. A malicious pair of nodes establishing a wormhole attack and exploiting the knowledge of each wormhole endpoint's neighbors can easily mislead any technique that does not consider this a possible threat. A possible countermeasure is to reinforce the first studied algorithm with a mechanism that checks whether the local neighborhood information, even though consistent with other observations made by the testing node, does not deviate from the observations reported by a majority of other nodes. In other words, the LNI algorithm needs to be modified so that the local neighborhood information is verified with the help of a voting technique before the wormhole detection mechanism is activated. Such neighbor-list verification mechanisms have been proposed in the literature for other purposes. In [18], the authors check the consistency of neighborhood tables between neighboring nodes to detect Sybil attacks. A Sybil node (i.e.
a node with several identities) will declare the same set of neighbors several times, leading to identities that appear many more times than others in the intersection of the neighborhoods. The same kind of mechanism could be used to detect inconsistencies, and thus liar nodes, in a neighborhood-table verification mechanism: for a sufficiently dense network, the intersections of the neighborhoods of a node's neighbors must be very close and consistent. This makes it possible to detect liars with higher confidence. Moreover, as the motto of wireless sensor networks could be "unity is strength", a voting mechanism could also be added locally, as done in [6]. Indeed, such voting techniques remain efficient as long as the number of attackers is less than 30% of the total number of nodes, which is the most common case. An alternative solution would be to jointly run both mechanisms in an attempt to secure the neighborhood discovery process and detect potential wormhole attackers. However, two major reasons argue against the adoption of such an intuitive solution. First, the two classes of techniques have conflicting assumptions: the first technique relies on secure neighboring to detect wormholes, whereas the second detects wormhole attacks assuming a secure neighboring state. Clearly, if one were to run both solutions simultaneously, the assumptions of neither would be verified. Second, running both solutions would obviously incur a higher energy
cost and would induce more traffic overhead. Notably, the latter concern applies to any solution that simultaneously deploys two techniques, one from each of the two classes. In summary, a convenient countermeasure is to first verify the consistency of the different neighborhoods before running the wormhole detection test. The exchanged information and the energy cost of this consistency check are roughly the same as for the wormhole detection mechanism. We have tested this approach, which combines a neighborhood consistency test with the LNI wormhole detection mechanism, under the same simulation conditions as before. It largely improves the previous results: when considering 10% of colluding nodes claiming the total set of neighbors, the true positive detection rate of wormholes and of liars is always greater than 70%, whereas the false positive rate stays under 10%. These initial results seem to confirm that adding an initial neighborhood consistency check with a voting mechanism helps the algorithm to correctly detect wormhole endpoints even when they are hidden behind liar nodes. Of course, this initial study needs to be refined, but these results are very promising.
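The consistency check described above can be sketched in a few lines: a claimed link is taken as confirmed only when both endpoints list each other, and a node most of whose claims go unconfirmed is flagged by a simple majority rule. This is only an illustration of the idea; the function name, the data layout and the 0.5 threshold are assumptions, not the paper's actual protocol.

```python
def suspicious_nodes(claimed, vote_threshold=0.5):
    """claimed: dict mapping node -> set of neighbors that node claims.
    A claimed link (u, v) counts as confirmed only if v also lists u.
    Nodes whose claims are mostly unconfirmed are flagged (illustrative
    majority rule; threshold is an assumed parameter)."""
    suspects = set()
    for u, neighbors in claimed.items():
        if not neighbors:
            continue
        unconfirmed = sum(1 for v in neighbors
                          if u not in claimed.get(v, set()))
        if unconfirmed / len(neighbors) > vote_threshold:
            suspects.add(u)
    return suspects

# X1 falsely claims K and H as neighbors, but neither confirms the link;
# its one honest neighbor A does confirm it.
claims = {
    "X1": {"A", "K", "H"},
    "A":  {"X1", "B"},
    "B":  {"A", "K", "H"},
    "K":  {"B"},
    "H":  {"B"},
}
print(suspicious_nodes(claims))  # {'X1'}
```

When the liars collude and also confirm each other's false links, this simple rule degrades, which mirrors the collusion scenarios analysed in the evaluation.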
7 Conclusion
In this paper, we analyzed the impact of simple attacks, based on providing biased neighbor lists, on protocols aiming to detect wormhole attacks and to secure neighbor discovery in WSNs. One of our salient findings is that when wormhole attackers compromise a subset of neighbors so that these lie about their neighborhood consistently with the virtual topology created by the wormhole, the performance of the detection protocols can easily degrade below that of a random detector. We have also described a possible countermeasure and presented the first encouraging results in this direction, keeping in mind the motto of wireless sensor networks: "unity is strength". Acknowledgment. This work has been supported by the French ANR national project ARESA2. It has also been partially supported by the European Commission within the STREP WSAN4CIP project. We would like to thank the anonymous reviewers for their useful comments.
References
1. Douceur, J.R.: The sybil attack. In: Druschel, P., Kaashoek, M.F., Rowstron, A. (eds.) IPTPS 2002. LNCS, vol. 2429, pp. 251–260. Springer, Heidelberg (2002)
2. Karlof, C., Wagner, D.: Secure routing in wireless sensor networks: Attacks and countermeasures. Elsevier's AdHoc Networks Journal, Special Issue on Sensor Network Applications and Protocols 1(2-3), 293–315 (2003)
3. Hu, Y.-C., Perrig, A., Johnson, D.B.: Packet leashes: A defense against wormhole attacks in wireless ad hoc networks. In: Proceedings of the Twenty-Second Annual Conference INFOCOM, IEEE 2003, San Francisco, CA, vol. 3, pp. 1976–1986. IEEE, Los Alamitos (2004)
4. Hu, L., Evans, D.: Using directional antennas to prevent wormhole attacks. In: Proceedings of NDSS 2004, San Diego, California, USA. The Internet Society, San Diego (2004)
5. Čapkun, S., Buttyán, L., Hubaux, J.-P.: Sector: secure tracking of node encounters in multi-hop wireless networks. In: Proceedings of SASN 2003, pp. 21–32. ACM, New York (2003)
6. Znaidi, W., Minier, M., Babau, J.-P.: Detecting wormhole attacks in wireless networks using local neighborhood information. In: IEEE PIMRC, Cannes, French Riviera, France (September 2008)
7. Shokri, R., Poturalski, M., Ravot, G., Papadimitratos, P., Hubaux, J.-P.: A practical secure neighbor verification protocol for wireless sensor networks. In: WiSec 2009, pp. 193–200. ACM, New York (2009)
8. Maheshwari, R., Gao, J., Das, S.R.: Detecting wormhole attacks in wireless networks using connectivity information. In: INFOCOM 2007, Anchorage, Alaska, USA, May 6-12, pp. 107–115. IEEE, Los Alamitos (2007)
9. Khalil, I., Hayajneh, M., Awad, M.: SVNM: Secure verification of neighborhood membership in static multi-hop wireless networks. In: Proceedings of the 14th ISCC 2009, Sousse, Tunisia, July 5-8, pp. 368–373. IEEE, Los Alamitos (2009)
10. Poturalski, M., Papadimitratos, P., Hubaux, J.-P.: Towards provable secure neighbor discovery in wireless networks. In: Proceedings of the 6th FMSE 2008, Alexandria, VA, USA, October 27. ACM, New York (2008)
11. Brands, S., Chaum, D.: Distance bounding protocols. In: Helleseth, T. (ed.) EUROCRYPT 1993. LNCS, vol. 765, pp. 344–359. Springer, Heidelberg (1994)
12. Liu, D., Ning, P., Du, W.: Attack-resistant location estimation in sensor networks. In: Proceedings of the Fourth IPSN 2005, UCLA, Los Angeles, California, USA, April 25-27, pp. 99–106. IEEE, Los Alamitos (2005)
13. Du, W., Fang, L., Ning, P.: LAD: Localization anomaly detection for wireless sensor networks. J. Parallel Distrib. Comput. 66(7), 874–886 (2006)
14. Wang, W., Bhargava, B.K.: Visualization of wormholes in sensor networks. In: Jakobsson, M., Perrig, A. (eds.) Proceedings of the 2004 ACM Workshop on Wireless Security, Philadelphia, PA, USA, October 1, pp. 51–60. ACM, New York (2004)
15. Poovendran, R., Lazos, L.: A graph theoretic framework for preventing the wormhole attack in wireless ad hoc networks. Wireless Networks 13(1), 27–59 (2007)
16. Radicchi, F., Castellano, C., Cecconi, F., Loreto, V., Parisi, D.: Defining and identifying communities in networks. Proceedings of the National Academy of Sciences of the United States of America, PNAS 101(9), 2658–2663 (2004)
17. Hamida, E.B., Chelius, G., Gorce, J.-M.: Scalability versus accuracy in physical layer modeling for wireless network simulations. In: 22nd ACM/IEEE/SCS Workshop PADS 2008, Rome, Italy (June 2008)
18. Ssu, K.-F., Wang, W.-T., Chang, W.-C.: Detecting sybil attacks in wireless sensor networks using neighboring information. Computer Networks 53(18), 3042–3056 (2009)
DANTE: A Video Based Annotation Tool for Smart Environments
Federico Cruciani(1), Mark P. Donnelly(2), Chris D. Nugent(2), Guido Parente(2), Cristiano Paggetti(1), and William Burns(2)
(1) I+ s.r.l., 50144 Florence, Italy
f.cruciani, [email protected]
http://www.i-piu.it/
(2) University of Ulster, Faculty of Computing and Engineering, Shore Road, Newtownabbey, Co. Antrim, BT37 0QB, Northern Ireland
Abstract. This paper presents a novel system which uses a set of video cameras to support the manual annotation of object interaction and person-to-object interaction within smart environments. The DANTE (Dynamic ANnotation system for smart Environments) system uses two stereo-based cameras to monitor and track objects, which are tagged with inexpensive paper-based fiducial markers. Offline, the DANTE annotation module allows users to navigate, frame-by-frame, through recorded sessions and to annotate interesting and relevant events. The generation of annotated datasets has significant utility within the domain of smart environments, supporting the development of data-driven context-aware systems which require large training datasets. The current paper presents the rationale for this work, provides a technical description of the system and highlights its potential opportunities. Early experimental work investigating a comparison with accelerometer-based sensors is also briefly introduced. Keywords: Data Acquisition, Multi-sensor systems, Video Recording, Optical Tracking.
1 Introduction
Worldwide, changing demographics in relation to increased life expectancy and falling birth rates are leading to an ageing society. Coupled closely with these changes are documented rises in chronic illnesses such as cardiovascular disease, cancer and dementia [1], for which age is the single biggest risk factor [1]. Studies have shown that persons with chronic illnesses often prefer to remain living within their own home for as long as possible [1]. Facilitating this can help to alleviate some of the strain faced by health services; the challenge, however, is to provide equal levels of health support from within the home environment. An emerging area of interest lies in the provision of home-based care through the application of assistive technologies and integrated intelligent algorithms to monitor and support independent living. In this paper, we describe a novel
G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 179–188, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
system for recording and interpreting person-object interactions within a home environment, with the aim of providing labelled datasets to promote the development of data-driven context-aware intelligent algorithms. The remainder of the paper is structured as follows. In Section 2, we introduce related work in this domain. Section 3 presents the DANTE system and describes its operation. Section 4 discusses future opportunities for the system and Section 5 formulates our conclusions by presenting some experimental results.
2 Related Work
The integration of technological solutions and services within the home environment offers the potential to support independent living through automated monitoring and support of activities of daily living (ADLs) [2]. Within this domain, a challenge lies in "learning" how people interact with household objects to complete their ADLs. Once the learning process is complete, intelligent algorithms can then be used to detect deviations from normal patterns and, when required, offer some support. This challenge is further complicated by the fact that different people perform ADLs in different ways, leading to the requirement to tailor algorithms so that personalised support can be offered [3]. Indeed, only by referring to a large source of data can we begin to fully understand ADL patterns and develop intelligent algorithms that will be robust when deployed in situ. This has been the view of Intille et al. [4] and other researchers [5][6], who have stated the need for shared sensor-rich data repositories to accelerate the development of real-world context-aware systems to support and monitor ADLs. To date, several publicly available sensor datasets have been published [7][8]; however, a challenge remains in providing sufficiently annotated datasets [5][6]. In part, this issue is attributed to the complexity and error-prone process of annotating data collected from sensorised environments [6], which typically involves hours of manual hand labelling [9] or requesting participants to manually keep detailed ADL logs [10]. In 2009, Cook et al. [3] presented a comprehensive overview outlining four methods for labelling/annotating sensor data: annotation of raw sensor data; annotation of raw sensor data coupled with resident diaries; using a 3D environmental visualisation tool; and the latter coupled with resident diaries.
They employed a naive Bayes classifier to evaluate the labelled data and found that the visualisation tool coupled with resident diaries provided the most rapid platform for annotating data and was the most accurate (>73%) for identifying target ADLs. They acknowledged, however, the invasiveness of this approach in terms of resident burden, in addition to the requirement for custom 3D models to be generated for any new environment. In a similar study, Coyle et al. [6] used video cameras coupled with participant diaries to annotate data within an office environment. They attempted to streamline the annotation process by placing pressure sensors within the environment to infer important periods (e.g. a person entering a room) where the captured video data contained activity-related information.
DANTE: A Video Based Annotation Tool for Smart Environments
181
Other researchers considered semi-automatic approaches to the annotation process. van Kasteren et al. [11] asked participants to state activities verbally. These were recorded via a Bluetooth headset, and voice recognition software was employed to interpret key phrases indicating the beginning and end of an activity. They reported that voice recognition was relatively accurate; however, participants became tired as they had to verbally describe each activity. They concluded that focusing research on the development of software tools for visualising and examining sensor data could lead to a standard data structure being adopted. Indeed, Hong et al. [12] presented the OpenHome concept as a potential method for structuring data within smart environments; however, they did not address the issue of data annotation in their work.
3 System Overview
The rationale for this work stems from the aforementioned literature, which has highlighted the need for large annotated datasets to allow researchers to more efficiently and accurately capture ADL behaviours. The DANTE (Dynamic ANnotation Tool for smart Environments) system (see http://www.orthokey.com/) provides a suite of software tools for recording and annotating ADLs.
Hardware
The system hardware consists of two stereo MicronTracker cameras [14]. The cameras (Fig. 1) are lightweight and portable and can be positioned anywhere within an environment to provide the maximal field of view for the ADL being examined. Each camera can identify objects (cup, door, appliances, people) which are 'tagged' with custom fiducial markers (Fig. 2). Once detected, the system can report on the location and orientation of each detected object. Markers are fully passive and consist of a pattern (or set) of high-contrast regions printed on paper. The number of unique markers which can be designed is practically unlimited, as they can differ in both size and pattern. The DANTE system samples at a frame rate of between 1 and 10 Hz; in its current configuration it has been optimised to concurrently track up to 30 objects. A tolerance parameter is used to set the sensitivity to object movement. The camera range has been optimised to between 0.5 and 5 metres, depending on the size and complexity of the markers being tracked. Using a unique reference point to define an environment-based coordinate system, the two cameras can track the same object in a global coordinate space.
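The tolerance-based sensitivity described above can be sketched as follows. This is illustrative logic only, not DANTE's actual implementation: the function names, the millimetre unit and the default threshold are assumptions.

```python
import math

def should_record(prev_positions, curr_positions, tolerance_mm=10.0):
    """Record a new scene only if a tagged object appeared, disappeared,
    or moved by more than the configurable tolerance (illustrative logic).
    Positions are dicts mapping object label -> (x, y, z)."""
    if set(prev_positions) != set(curr_positions):
        return True  # an object entered or left the field of view
    for obj_id, (x, y, z) in curr_positions.items():
        px, py, pz = prev_positions[obj_id]
        if math.dist((x, y, z), (px, py, pz)) > tolerance_mm:
            return True
    return False

prev = {"Cup": (650.2, 174.8, -1233.9), "Kettle": (546.8, 50.7, -1272.6)}
curr = {"Cup": (650.3, 174.9, -1233.8), "Kettle": (546.8, 50.7, -1272.6)}
print(should_record(prev, curr))   # False: movement within tolerance
curr["Cup"] = (680.0, 174.8, -1233.9)
print(should_record(prev, curr))   # True: cup moved ~30 mm
```

Skipping frames that fail this test is what lets the recorder suspend itself and keep stored data small, as described in the Recording Module below.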
Fig. 1. MicronTracker Camera
Fig. 2. Fiducial markers which are used to tag objects
Recording Module
Markers are 'learned' by the camera system and labelled offline. During recording, each camera captures a number of scenes, where a scene describes a video frame during which tagged objects have appeared, disappeared, or moved more than the configurable tolerance parameter within the field of view. The DANTE system automatically activates/suspends recordings, intelligently ignoring frames where no change in the environment occurs or where only non-tagged objects are moved, thus significantly reducing the amount of stored data and recorded video duration. Fig. 3 presents a recorded scene and the accompanying data describing it. Although video frames are stored to provide assistance during the manual annotation process, the recording module does not use the video frames to track objects. The data stored for each scene include: the ID(s) of the object(s) being tracked; the position of each object with respect to the global coordinate system; the orientation of the object(s); and the timestamp.
...
6 Cup 650.251 174.828 -1233.91 14.48.57.78
9 Kettle 546.818 50.7396 -1272.6 14.48.57.78
11 Coffee 563.525 207.784 -1277.7 14.48.57.78
...
Fig. 3. A frame and the 3D tracking data indicating object IDs, labels and global coordinates with associated timestamps
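The scene records shown in Fig. 3 follow a simple whitespace-separated layout (ID, label, x, y, z, timestamp), so they can be parsed with a few lines. This sketch is an assumption about the file format inferred from the excerpt above, not DANTE's actual parser.

```python
from typing import NamedTuple

class TrackedObject(NamedTuple):
    obj_id: int
    label: str
    x: float
    y: float
    z: float
    timestamp: str

def parse_scene(lines):
    """Parse scene records of the form: ID label x y z timestamp."""
    objects = []
    for line in lines:
        obj_id, label, x, y, z, ts = line.split()
        objects.append(TrackedObject(int(obj_id), label,
                                     float(x), float(y), float(z), ts))
    return objects

scene = parse_scene([
    "6 Cup 650.251 174.828 -1233.91 14.48.57.78",
    "9 Kettle 546.818 50.7396 -1272.6 14.48.57.78",
])
print(scene[0].label, scene[0].z)  # Cup -1233.91
```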
The user interface (UI) of the recording software module shows the last identified scene and the corresponding top and frontal views of the 3D representation, based on the coordinates of the objects in the scene. Objects are colour coded and represented as dots in the 3D representation.
Annotation Software Module
The annotation module supports reviewing and labelling (annotating) of recorded scenes. The annotation module (Fig. 4) presents a simple and user-friendly
interface which allows users to load previously recorded sessions and play, pause, rewind or fast-forward through scenes. The real-time interval between two consecutive scenes varies depending on the presence of object movement within the environment; it is therefore possible to skip seconds or even minutes between consecutive frames. Fig. 4 indicates the range of services offered to users, including top/frontal 3D visualisation of tagged objects (upper left), video frames from both cameras for the present scene (upper right) and a list of personalisable objects and associated actions to support annotation (centre). The lists of objects and actions are easily configurable offline through a plain-text file that enumerates and classifies the set of actions into an activity-subactivity scheme. For each object, two base actions are defined to indicate the active or inactive state of an action. It is possible to replay recorded frames at a frame rate of between 1 and 10 Hz to allow the user to progress efficiently through a session. At any point, the user can pause the playback and fast-forward or rewind, frame-by-frame, until they locate the desired frame. At this point, it is possible to set/unset the object action by selecting the corresponding action's label on the UI. The visualisation of these states is shown in Fig. 4 under 'Active Object State'. Alternatively, the system provides a free-text form to annotate an activity not represented. Following annotation, it is possible to replay the 'annotated session', displaying the video frames and highlighting the currently active actions in the centre of the UI. Moreover, the system allows users to import 'filtered'
[Figure 4 shows the annotation UI: a bird's-eye view of marked objects, the field of view from both cameras, the available and active object states, the tracked objects list, and frame navigation and data export controls.]
Fig. 4. Annotation Module User Interface
sessions: that is, scenes that include only a particular object or subset of objects, further helping to streamline the annotation process. The output file of an annotated session consists of a list of events, each marking the beginning or the end of an action (active or inactive), in addition to an optional free-text annotation. The annotation is saved in a plain-text file. The first line indicates which configuration file has been used to define the set of actions for the annotation. This is followed by a list of annotated events, each described by a timestamp, activity ID (e.g. Phone), action (e.g. Pick-up), and a boolean value to indicate object activity. An excerpt of a typical output file is shown below.
* myActionSet.txt
16.18.10.593 3 0 1
16.18.14.796 3 0 0
16.18.46.93 -1 -1 1 Some text
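The annotation file format just described can be read back as follows. This is a sketch inferred from the excerpt above (the `*` header line naming the action set, and the -1/-1 IDs marking a free-text event are assumptions about the format, not a documented DANTE API).

```python
def parse_annotation(lines):
    """Parse annotated events: timestamp, activity ID, action ID,
    active flag (1/0), and optional trailing free text. A line starting
    with '*' names the action-set configuration file."""
    config, events = None, []
    for line in lines:
        if line.startswith("*"):
            config = line[1:].strip()
            continue
        parts = line.split(None, 4)
        ts, activity, action, active = parts[:4]
        note = parts[4] if len(parts) > 4 else ""
        events.append((ts, int(activity), int(action), active == "1", note))
    return config, events

config, events = parse_annotation([
    "* myActionSet.txt",
    "16.18.10.593 3 0 1",
    "16.18.14.796 3 0 0",
    "16.18.46.93 -1 -1 1 Some text",
])
print(config)        # myActionSet.txt
print(events[2][4])  # Some text
```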
4 DANTE Opportunities
The DANTE system offers a number of additional opportunities with respect to improving the annotation process. We have already begun to examine the possibility of extending the system functionality to support automated annotation of particular scenarios. This is made possible by taking advantage of the global coordinate system, which enables DANTE to recognise person-object and object-object interaction based on their relative positioning and orientation. Focusing an example upon the ADL of drink preparation, DANTE was able to accurately detect the actions pick-up kettle, pour, and pour into cup. Although these are our initial findings, more experimentation is required to validate this concept. Another opportunity DANTE provides is assisting with the identification of interleaved activities, such as the ADL of cooking interrupted by another ADL such as a telephone call. The challenge here lies in ensuring that the original task is returned to and completed following the interruption. Indeed, this issue is particularly relevant in assisting persons with cognitive impairments [15]. Coupled with the aforementioned automatic annotation, the DANTE system could be extended to act upon actions which are started (activated) but interrupted by another activity. Actions could be associated with a guideline duration and used to prompt prospective users if actions are not eventually completed (deactivated). Other environmental sensors, or objects with sensors attached, could be tagged with markers, and DANTE used to indicate important scenes in the recorded data, which, when compared with the timestamps of other sensors, could be used to quickly 'mine' those sensor data. Additionally, DANTE could be used to manage the battery life of other sensing technology. For example, accelerometer-based sensors could receive 'wake' signals through DANTE detecting potential movement of the objects to which the sensors are attached.
Finally, platform-independent APIs could be developed for DANTE to support the rapid development and evaluation of context-aware algorithms, based on data recorded and annotated by the DANTE system.
5 Conclusions
Video-based systems for monitoring and supporting ADLs are not without their challenges. Indeed, occlusion of objects, in addition to privacy considerations, remain key challenges. In an attempt to address the former, we have undertaken some preliminary experiments which examine the potential for coupling DANTE with other sensor platforms. Experiments were conducted at the University of Ulster within their smart lab environment [13]. The smart environment consists of a kitchen and a living room, as shown in Fig. 5. To date, the DANTE system has only been evaluated within the kitchen, which covers an area of approximately 17 m². The kitchen comprises a sensor-rich environment in which behaviour analysis and activity assistance problems are investigated. A set of objects was tagged for use with the DANTE system.
[Figure 5 shows the experimental setup: tagged objects 1–5 (fridge, coffee, kettle, cup, cupboard) and the camera positions in the kitchen.]
Fig. 5. Experiment Setup
The experimental setup included a wireless sensor network platform [16], coupled with DANTE markers, which were attached to a number of objects within the smart kitchen: the fridge, two cupboards, a cup and a kettle. Data were recorded using both SunSPOTs (sampled at 5 Hz) and the DANTE recording module, and compared based on timestamp information. The summary findings presented here indicate that it is possible to detect a range of events through both modalities. As can be observed from Fig. 6, both the SunSPOT and DANTE systems
Fig. 6. (Upper graph) Kettle acceleration measured by the SunSPOT sensor; (lower graph) kettle orientation angle with respect to the floor
were able to independently detect that the kettle was 'moving'. By comparison, the DANTE system was able to accurately detect the orientation of the object and thus infer when the kettle was being 'poured'. In our future work, we propose to use both the SunSPOT and DANTE systems for monitoring ADLs. The DANTE annotation software module can easily be extended to incorporate the annotation of both video and accelerometer data. We believe that accelerometer-based data can be used to overcome potential occlusion scenarios where neither camera can detect an ongoing activity. As previously discussed, one of our main aims is to extend the DANTE system to support an automatic annotation process. Indeed, preliminary experiments indicate that it is possible to automatically annotate scenes: activities can be detected based on the relative position between objects and their orientation. During our experiments, the system was able to detect, for example, interactions with the fridge and the pouring of the kettle by analysing the orientation of tracked objects. In addition to reducing the amount of saved data, an automated process could significantly reduce potential privacy issues, as recorded frames would not be required for manual annotation and could therefore be discarded. As a result, we believe that the DANTE system could offer significant benefits to the research community in terms of facilitating the generation of sensor-rich datasets to support context-aware computing.
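Inferring a 'pour' from orientation data, as described above, can be sketched as a simple tilt test on the object's tracked 'up' axis. The 45-degree threshold, the axis convention and the function names are illustrative assumptions, not DANTE's actual detection logic.

```python
import math

def tilt_angle_deg(up_vector):
    """Angle between the object's 'up' axis and the vertical (0, 0, 1)."""
    ux, uy, uz = up_vector
    norm = math.sqrt(ux * ux + uy * uy + uz * uz)
    return math.degrees(math.acos(uz / norm))

def detect_pour(orientations, threshold_deg=45.0):
    """Flag timestamps where the tilt exceeds the threshold (assumed value).
    orientations: list of (timestamp, up_vector) pairs."""
    return [ts for ts, up in orientations
            if tilt_angle_deg(up) > threshold_deg]

track = [
    ("14.48.57", (0.0, 0.0, 1.0)),   # kettle upright
    ("14.49.02", (0.2, 0.0, 0.98)),  # slight wobble while carried
    ("14.49.10", (0.9, 0.0, 0.3)),   # tilted well past 45 degrees
]
print(detect_pour(track))  # ['14.49.10']
```

An accelerometer alone can report the kettle 'moving' in all three frames; only the orientation track distinguishes the pour, which mirrors the comparison shown in Fig. 6.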
References
1. The Academy of Medical Sciences: Rejuvenating Ageing Research (September 2009), http://www.acmedsci.ac.uk/p99puid161.html/ (accessed July 2, 2010)
2. Nugent, C.D., Fiorini, P., Finlay, D.D., Tsumaki, Y., Prassler, E.: Home automation as a means of independent living. IEEE Transactions on Automation Science and Engineering 5(1), 1–9 (2008)
3. Szewcyzk, S., Dwan, K., Minor, B., Swedlove, B., Cook, D.: Annotating smart environment sensor data for activity learning. Technol. Health Care 17(3), 161–169 (2009)
4. Intille, S., Nawyn, J., Logan, B., Abowd, G.: Developing shared home behaviour datasets to advance HCI and ubiquitous computing research. In: Proceedings of the 27th Annual Conference Extended Abstracts on Human Factors in Computing Systems, CHI 2009, Boston, MA, USA, April 04-09, pp. 4763–4766. ACM, New York (2009)
5. Cook, D., Schmitter-Edgecombe, M., Crandall, A., Sanders, C., Thomas, B.: Collecting and disseminating smart home sensor data in the CASAS project. In: Proceedings of the CHI Workshop on Developing Shared Home Behavior Datasets to Advance HCI and Ubiquitous Computing Research (2009)
6. Coyle, L., Ye, J., McKeever, S., Knox, S., Stabeler, M., Dobson, S., Nixon, P.: Gathering datasets for activity identification. In: Workshop on Developing Shared Home Behaviour Datasets to Advance HCI and Ubiquitous Computing Research at CHI 2009 (2009)
7. Logan, B., Healey, J., Philipose, M., Tapia, E.M., Intille, S.S.: A long-term evaluation of sensing modalities for activity recognition. In: Krumm, J., Abowd, G.D., Seneviratne, A., Strang, T. (eds.) UbiComp 2007. LNCS, vol. 4717, pp. 483–500. Springer, Heidelberg (2007)
8. van Kasteren, T., Noulas, A., Englebienne, G., Krose, B.: Accurate activity recognition in a home setting. In: UbiComp 2008: Proceedings of the 10th International Conference on Ubiquitous Computing, pp. 1–9. ACM, New York (2008)
9. Wren, C., Munguia-Tapia, E.: Toward scalable activity recognition for sensor networks. In: Hazas, M., Krumm, J., Strang, T. (eds.) LoCA 2006. LNCS, vol. 3987, pp. 168–185. Springer, Heidelberg (2006)
10. Tapia, E.M., Intille, S.S., Larson, K.: Activity recognition in the home using simple and ubiquitous sensors. In: Ferscha, A., Mattern, F. (eds.) PERVASIVE 2004. LNCS, vol. 3001, pp. 158–175. Springer, Heidelberg (2004)
11. van Kasteren, T.L.M., Krose, B.J.A.: A sensing and annotation system for recording datasets from multiple homes. In: Proceedings of the 27th Annual Conference Extended Abstracts on Human Factors in Computing Systems, CHI 2009, Boston, MA, USA, April 04-09, pp. 4763–4766. ACM, New York (2009)
12. Hong, X., Nugent, C., Finlay, D., Chen, L., Davies, R., Wang, H., Donnelly, M., Zheng, H., Mulvenna, M.: OpenHome: Approaches to constructing sharable datasets within smart homes. In: Proceedings of the 27th Annual Conference Extended Abstracts on Human Factors in Computing Systems, CHI 2009, Boston, MA, USA, April 04-09, pp. 4763–4766. ACM, New York (2009)
13. Nugent, C.D., Mulvenna, M.D., Hong, X., Devlin, S.: Experiences in the development of a smart lab. International Journal of Biomedical Engineering and Technology 2(4), 319–331 (2009)
188
F. Cruciani et al.
14. Claron Technology Inc., “Micron Tracker” (2009), http://www.clarontech.com/measurement.php (accessed July 14, 2010) 15. Magherini, T., Parente, G., Nugent, C.D., Donnelly, M.P., Vicario, E., Cruciani, F., Paggetti, C.: Temporal logic Bounded Model-Checking for Recognition of Activities of Daily Living. Transaction on Biomedical Engineering Letters (July 2010) (submitted) 16. Sun Microsystem Laboratories, “Sun SPOT World”, http://sunspotworld.com/ (accessed: July 14, 2010)
Pro-active Strategies for the Frugal Feeding Problem in Wireless Sensor Networks
Elio Velazquez, Nicola Santoro, and Mark Lanthier
School of Computer Science, Carleton University, Canada
evelazqu,santoro,[email protected]
Abstract. This paper proposes a pro-active solution to the Frugal Feeding Problem (FFP) in Wireless Sensor Networks. The FFP attempts to find energy-efficient routes for a mobile service entity to rendezvous with each member of a team of mobile robots. Although the complexity of the FFP is similar to that of the Traveling Salesman Problem (TSP), we propose an efficient, completely distributed and localized solution for the case of a fixed rendezvous location (i.e., a service facility with a limited number of docking ports) and mobility-capable entities (sensors). Our pro-active solution reduces the FFP to finding energy-efficient routes in a dynamic Compass Directed unit Graph (CDG). The proposed CDG incorporates ideas from forward progress routing and the directionality of compass routing in an energy-aware unit sub-graph. Navigating the CDG guarantees that each sensor reaches the rendezvous location in a finite number of steps. The ultimate goal of our solution is to achieve energy equilibrium (i.e., no further sensor losses due to energy starvation) by optimizing the use of the shared resource (the recharge station). We also examine the impact of critical parameters such as transmission range, cost of mobility and sensor knowledge on overall performance.
1 Introduction
The problem of achieving continuous operation in a robotic environment by refueling or recharging mobile robots has been the focus of attention in recent research. In particular, [12,13] present this problem as the Frugal Feeding Problem (FFP), named for its analogy with occurrences in the animal kingdom. The FFP attempts to find energy-efficient routes for a mobile service entity, also called a “tanker”, to rendezvous with every member of a team of mobile robots. The FFP has several variants depending on where the “feeding” or refueling of the robots takes place: at each robot’s location, at a predefined location (e.g., at the tanker’s location), or anywhere. Regardless of which variant is chosen, the problem is to ensure that the robots reach the rendezvous location without “dying” of energy starvation during the process. In a Wireless Sensor Network (WSN) deployment, the sensors will eventually deplete their batteries and loss of coverage will occur. Some approaches that cope with an eventual loss of coverage attempt to extract energy from the
environment to extend network lifetime [19,20]. Others explore the use of mobile entities (e.g., robots, actuators, service stations) in conjunction with clustering techniques [16,24,11,22]. In general, energy management strategies can be categorized into two groups: cluster-based approaches (e.g., [18,14,28,9]) or mobility-based approaches (e.g., [17,27,15,26,11]) with some degree of overlap. In this paper we study the FFP in a wireless sensor network scenario where mobility capabilities are added to the sensors and static recharge facilities are deployed throughout the sensing area. In this variant of the FFP, the responsibility for maintaining the overall health of the network is shifted to the sensor side, whereas the service facilities play a more passive role. The rendezvous between sensors and facilities should take place at the closest facility’s original position, which is static. The maximum number of sensors that can rendezvous with a facility at any given time is determined by the number of docking ports or recharge sockets available at the facility. According to the FFP terminology introduced in [12], our problem can be seen as the “tanker absorbed” version of the FFP. The rendezvous between the service facility (i.e., tanker robot) and the mobile robots (i.e., mobile sensors in our case) takes place at the current location of the service facility. The location of the service facility is known a priori and the problem is reduced to finding energy efficient routes to reach the facility. Another characteristic of our scenario is that the sensors are static in nature. That is, they have been deployed and have been assigned specific tasks. Therefore, their movement to the rendezvous location will create coverage holes that should be kept minimal. The sensors need to communicate and coordinate their actions in order to achieve a common goal (i.e., continuous sensing operation without losses due to energy starvation). 
Furthermore, sensors should coordinate their moves in a loop-free manner so that the intended destination (i.e., the recharge station) is reached in a finite number of moves or steps. The ultimate goal of the FFP is to reach a state of energy equilibrium where there are no further sensor losses. This work also examines some underlying topologies that guarantee a loop-free mobility strategy, as well as the network parameters needed to achieve the state of equilibrium.
1.1 Related Work
In the FFP, as described in [12], specialized robots (called tankers) have to rendezvous with mobile robots to refuel or recharge them. The main goal is to minimize the amount of fuel (energy) required to move the robots and tankers to the rendezvous locations. The problem has several variants: 1) the rendezvous takes place at the robot’s location; the robots in need of energy do not move but instead wait for the refueling tanker to come to their rescue (the robot-absorbed case); 2) the rendezvous takes place at the tanker’s location and the robots must move to the tanker’s original location (the tanker-absorbed case); 3) the rendezvous takes place at locations that coincide with neither the initial robot nor tanker locations. The FFP also has a combinatorial component pertaining to the order in which the robots should
be recharged. Finding a solution to the FFP that guarantees that no robots die of energy starvation is an NP-hard problem (as shown in [12]). The problem of determining where to place a docking station or recharger is examined in [4]. In this case, a team of mobile robots has the specific task of transporting certain items from a pick-up location to a drop-off location. To be able to work for a prolonged period of time, the robots must interrupt their work and visit the recharge station periodically (i.e., the tanker-absorbed FFP). The solution is to place the charging station close enough to the path followed by the robots without interfering with the robots’ movements. Examples of the robot-absorbed FFP can be found in [3,5,21]. In these cases, a charger robot is responsible for delivering energy to a swarm of robots. The recharging strategy is completely reactive (i.e., robots are only recharged when they become out of service and cannot move). In the scenario described in [3], the charger robot is equipped with several docking ports. However, the charger robot can travel to recharge a needy robot only if none of the docking ports are occupied, which assumes that several depleted robots are close by so they can be recharged simultaneously. The simulation results presented in [5] showed that in a network with 64 robots and one charger station with only one docking port, a large number of robots are either abandoned or dead due to battery depletion. Increasing the number of docking ports to two, however, improves performance dramatically by decreasing the number of robot deaths and improving the exploring/dead time ratios. The solution presented in [21] creates clusters based on the number of available chargers. The experimental results with this approach show that a network with 76 sensors deployed in an area of 1000x1000m2 requires at least 3 chargers to keep the network alive. The network is considered dead when more than 50% of the sensors die due to battery depletion. All the aforementioned scenarios satisfy the necessary conditions, presented in [3], for mobile robots to be able to recharge themselves. First, the robots should be able to monitor their energy levels and detect when it is time to recharge. Second, they should be able to locate and move towards a charging station. Finally, there should be a mechanism for energy transfer, either by docking or plugging in to the charging station or via wireless recharging at short distances (e.g., [1,2,17]).

1.2 Contributions
This paper proposes a pro-active solution to the Frugal Feeding Problem (FFP) in Wireless Sensor Networks. We propose an efficient, completely distributed and localized solution for the case of a fixed rendezvous location (i.e., a service facility with a limited number of docking ports) and mobile sensors. In particular, we reduce the tanker-absorbed FFP with a fixed rendezvous location in a sensor network of arbitrary topology to finding energy-efficient routes in a dynamic Compass Directed unit Graph (CDG). We prove that energy-aware mobility strategies using the CDG are loop-free, guaranteeing that the sensors reach the recharge station within a finite number of moves. The experimental
analysis of our solution confirms that energy equilibrium (i.e., no further losses due to energy starvation) can be achieved in a network with a 100:1 sensor/station ratio, with one station containing two docking ports. Our experiments also examine the impact of critical parameters such as transmission range, cost of mobility and station role. The main differences between our proposed solution to the FFP and the existing literature on autonomous robot recharging are: 1) Our solution is completely distributed and localized; there is no need for an entity with global knowledge. Sensors are only aware of their immediate neighbors and the location of the closest facility. 2) Our approach is completely pro-active. The sensors act before their batteries reach a critical level and minimize coverage holes by making the shortest possible trip to the recharge station. 3) The algorithms for route selection and the logical topologies used are dynamic and adaptive.
2 Pro-active Solution to the Facility-Absorbed FFP
Our pro-active solution to the FFP in the sensor network scenario is built from two main components: mobility-capable sensors and static recharging facilities. The general requirement of our theoretical model is to maximize the network operating life through the autonomous recharging of low-energy sensors. However, the ultimate goal is to achieve a state of equilibrium where no further losses are reported, and to accomplish this with the minimum amount of resources. In general, the model includes the following key components: 1) A set of N sensors, S = {s1, ..., sN}, randomly distributed in an area of unspecified shape. 2) A randomly located static recharge facility F (i.e., the rendezvous location). The facility is equipped with a fixed number of recharging ports or sockets; this represents the maximum number of sensors that can be at the rendezvous location simultaneously. It is assumed that sensors can determine their own positions by using GPS or some other localization method. Sensors can communicate with other sensors within their transmission range R, and they all move at the same speed. The distance to the closest facility should be within each sensor’s mobility range to guarantee a successful round trip to the station on one battery charge. All communications are asynchronous; there is no global clock or centralized entity to coordinate communications or actions. We consider the sensors to be static in terms of their sensing requirements. In other words, from the point of view of the application (i.e., the functional requirements), the sensors are static and placed at a specific set of coordinates. However, they all have the capability of moving if they decide to go to the service station to recharge their batteries. Consequently, pro-active behavior implies that the sensors act before their batteries reach a critical level.
The general idea is that sensors will try to get closer to the rendezvous location by swapping positions with other sensors that are closer to the station and eventually make the shortest possible trip when their batteries reach a critical level. Every time a sensor visits the recharge station, a coverage hole is created. The duration of the hole depends on the recharging time plus the length of the
round-trip. In order to minimize coverage holes, sensors attempt a gradual approach towards the rendezvous location by swapping positions with higher-energy sensors. The operating life of a sensor is divided into three stages depending on its battery status: 1) BATTERY OK, or normal operation; 2) BATTERY LOW, or energy-aware operation; and 3) BATTERY CRITICAL, or recharge-required operation. A sensor in the BATTERY OK state performs its regular sensing functions and accepts any swapping proposal from sensors with less energy. When the battery level falls below a first fixed threshold, the sensor switches to the more active BATTERY LOW state. In this state, the sensor starts its migration towards the service station, proposing swapping operations to sensors with higher energy levels. Finally, a sensor in the BATTERY CRITICAL state contacts the service station and waits until a socket or docking port has been secured; it then travels to the station and recharges (see Figure 1).
Fig. 1. A sensor’s life cycle (states: SENSING, MIGRATE, WAIT, RECHARGING; transitions: BATTERY LOW, BATTERY CRITICAL, SOCKET AVAILABLE, CHARGING COMPLETE, BATTERY OK)
In this life cycle, it is the migration behavior that is of interest. The objective of the sensor during migration is to reach the recharge facility in an effective and timely manner, while relying solely on local information. This can be done by allowing the sensor to explore energy-aware routes leading to the recharge facility. The chosen routes are based on a logical Compass Directed unit Graph (CDG).

Definition 1. A graph G = (V, E) with vertices V = {v1, ..., vN} and edges E = {(vi, vj)} with 1 ≤ i < j ≤ N is called a Unit Disk Graph (or Unit Graph) if d(vi, vj) ≤ R, where d is the Euclidean distance between the sensors and R is the transmission range.

Definition 2. A graph G′ = (V′ ∪ {F}, E′) with V′ ⊆ V and E′ ⊆ E is called a Compass Directed unit Graph (CDG) if for every pair of sensors Si, Sj ∈ V′ and recharge facility F the following conditions are satisfied:

Unit graph criterion: d(Si, Sj) ≤ R, (1)

where d denotes the Euclidean distance and R is the transmission range.

Proximity criterion: d(Sj, F) < d(Si, F) and d(Si, Sj) < d(Si, F). (2)

Directionality criterion: for every pair Si, Sj, there exists a projection Sjp of Sj onto the line through Si and F such that the vector SjSjp is orthogonal to the vector SiF (i.e., SjSjp · SiF = 0) and d(Si, Sjp) + d(Sjp, F) = d(Si, F). (3)
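As an illustration, the three criteria above, together with a neighbor ranking that penalizes angular deviation in the spirit of the function f of Sect. 2.1, might be checked as follows. This is a sketch: the helper names and coordinates are ours, not from the paper:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_cdg_neighbor(si, sj, f, r):
    """Test whether sj qualifies as a CDG neighbor of si (Definition 2)."""
    # Unit graph criterion (1): sj must be within transmission range.
    if dist(si, sj) > r:
        return False
    # Proximity criterion (2): sj is closer to F than si, and closer to
    # si than F is.
    if not (dist(sj, f) < dist(si, f) and dist(si, sj) < dist(si, f)):
        return False
    # Directionality criterion (3): the orthogonal projection of sj onto
    # the line through si and F must fall on the segment si-F.
    fx, fy = f[0] - si[0], f[1] - si[1]
    jx, jy = sj[0] - si[0], sj[1] - si[1]
    t = (jx * fx + jy * fy) / (fx * fx + fy * fy)  # projection parameter
    return 0.0 <= t <= 1.0

def rank(si, sj, f):
    """Ranking f(Si, Sj) = d(Si, Sj) + d(Sj, Sjp)/d(Si, Sjp): hop length
    plus a penalty for deviating from the direction of F."""
    fx, fy = f[0] - si[0], f[1] - si[1]
    jx, jy = sj[0] - si[0], sj[1] - si[1]
    t = (jx * fx + jy * fy) / (fx * fx + fy * fy)
    sjp = (si[0] + t * fx, si[1] + t * fy)  # projection of sj onto si-F
    return dist(si, sj) + dist(sj, sjp) / dist(si, sjp)
```

For example, with si = (0, 0), f = (10, 0) and r = 5, a sensor at (3, 1) satisfies all three criteria, while one at (−3, 1) fails the proximity criterion.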
Routing algorithms typically use the hop count as the metric to measure effectiveness; in our case, the hop count is equivalent to the number of swapping operations between sensors in the CDG. Our solution to the FFP can be divided into two main stages: 1) the construction of the CDG and 2) the incremental swapping approach (i.e., migration) towards the rendezvous location.

2.1 Creating the CDG
Figure 2 shows an example of the proposed CDG for three sensors A, B, C and a facility F. In the first stage of the algorithm, it is assumed that all sensors have the required levels of energy to construct the CDG. The process is rather simple and can be summarized by the following actions:
1. Sensors position themselves at some initial fixed location that depends on the task at hand.
2. Sensor A sends a NEIGHBOR REQUEST broadcast message inviting other sensors to participate.
3. Upon receiving a NEIGHBOR REQUEST message from sensor A, immediate neighbors verify the neighboring criteria according to the following rules: a) Proximity: d(A, F) > d(B, F) and d(A, B) < d(A, F). b) Directionality: for example, B and C are neighbors of A if the corresponding projections Bp and Cp on the line AF intersect the line segment AF.
4. If conditions a) and b) are met, sensors B and C send a NEIGHBOR ACCEPT message; otherwise they send a NEIGHBOR DENY message.
In order to save energy, sensor A will then try to deviate as little as possible from the direction of the recharge station F. That is, sensor A will try to minimize the angle BABp. Therefore, all the sensors that satisfy conditions a) and b) are
Fig. 2. Compass Directed unit Graph (sensors A, B, C with projections Bp, Cp on the segment AF; R: transmission range)
ranked according to the following function: f(Si, Sj) = d(Si, Sj) + d(Sj, Sjp)/d(Si, Sjp), where Si, Sj are the neighboring sensors, d is the Euclidean distance, F is the recharge station and Sjp is the projection of Sj on the line segment SiF. At the end of this phase each sensor has two routing tables: one containing its children (i.e., sensors from which NEIGHBOR ACCEPT messages were received) with their corresponding ranking, and a second containing its parents (i.e., sensors to which NEIGHBOR ACCEPT messages were sent). The routing tables are just partial maps of the network indicating the positions of the children and parents.

2.2 Migration Strategy
The second stage of the algorithm starts when sensors change their state from BATTERY OK to BATTERY LOW as a result of their battery levels falling below the first threshold. Once a sensor enters this state, it tries to get closer to the facility by making a series of one-hop swaps with its graph neighbors. The most relevant sensor interactions in this stage are summarized in Algorithm 1. In an ideal system, all sensors reach the BATTERY CRITICAL state when they are exactly at a one-hop distance from the rendezvous location. When the trip to the recharge station is made from a one-hop position (i.e., there are no graph neighbors), we call this a “one-hop run” or “optimal run”. Conversely, if the final trip is made from any other location, it is called a “panic run”.

2.3 Properties of the CDG
There are two important properties of the CDG (i.e., dynamic and self-correcting) that can be explained by the following scenarios. Both scenarios may cause situations where the information in the neighboring tables is obsolete. – Scenario 1: Simultaneous swapping. As part of the swapping process, the participating sensors exchange their neighboring information, that is, their corresponding children and parent tables. However, since multiple swapping operations may occur at the same time, when a sensor finally arrives at the position occupied by its swapping partner, the information in its neighboring tables may be out-of-date. – Scenario 2: Sensor recharging. While this process takes place, other sensors may be swapping positions. Once the recharging process is finished, the sensor returns to its last known position. However, the structure of the network around it has changed. This situation is even more evident when trips to the facility are made from distances of more than one hop as a result of “panic runs”. The solution to these problems is to define the neighboring information as position-based tables, where the important factor is the relative position of the neighbors and not their corresponding IDs. The information of the actual sensors
Algorithm 1. Excerpt of the migration algorithm for sensor S to facility F
(* In State BATTERY OK: *)
if BATTERY LEVEL < BATTERY LOW THRESHOLD then
    rank = 1
    become BATTERY LOW
end if
(* In State BATTERY LOW: *)
if BATTERY LEVEL < BATTERY CRITICAL THRESHOLD then
    send RECHARGE REQUEST message to Facility
    become BATTERY CRITICAL
else
    while rank ≤ numberOfNeighbors do
        send SWAP REQUEST to sensor with rank: rank
        become WAIT FOR SWAP REPLY
    end while
end if
(* In State WAIT FOR SWAP REPLY: *)
if receiving SWAP ACCEPT from Si then
    move to Si
    send SWAP COMPLETE
    rank = 1
    become BATTERY LOW
end if
if receiving SWAP DENY from Si then
    rank = rank + 1
    become BATTERY LOW
end if
(* In State BATTERY CRITICAL: *)
if receiving RECHARGE ACCEPT then
    lastPosition = currentPosition
    move to Facility
    recharge
    move to lastPosition
    send SENSOR RECHARGED message
    become BATTERY OK
end if
occupying the positions is dynamic. In other words, a sensor knows that at any given point in time it has n children at positions (x1, y1), ..., (xn, yn) and p parents at positions (x′1, y′1), ..., (x′p, y′p). This information is static and will not be modified. However, the identity of the sensors occupying the positions is dynamic and is updated every time a swapping operation occurs. The mechanism to detect changes in the routing tables is triggered by sending a SWAP COMPLETE message. When two neighboring sensors successfully complete a swapping operation, they announce their new positions by sending SWAP COMPLETE messages. Sensors within transmission range that hear this message will
verify whether any of the positions involved in the exchange belongs to their routing tables and update the appropriate entry with the new occupant of that position. On the other hand, a sensor returning from the service station (e.g., Scenario 2) needs to re-discover the new occupants of its routing tables. This process is initiated by a SENSOR RECHARGED message sent by the newly recharged sensor as soon as it reaches its last known position in the network. Potential children and parents, upon receiving this message, reply with CHILD UPDATE and PARENT UPDATE messages accordingly. This process also lets parents update their information about the energy level of the newly recharged sensor. These two important properties, along with a neighboring criterion that incorporates ideas from forward progress and compass routing [23,10,8] in an energy-aware unit graph, ensure the following lemma:

Lemma 1. The swapping-based pro-active solution to the FFP guarantees that all sensors reach the rendezvous location within a finite number of swapping operations.

Proof. Let G = (V, E) be a CDG with a set of vertices V = {S1, ..., SN, F}, where Si, 1 ≤ i ≤ N, represent sensors and F denotes the rendezvous location. Let E be a set of edges of the form Si → Sj where Sj is a neighbor of Si. By definition, G satisfies the conditions of proximity (2) and directionality (3). Without loss of generality, we can assume that for any path Pi = <Si, ..., SK, F> leading to the recharge station F, with 1 ≤ i < K ≤ N, the sub-path containing the sensors <Si, ..., SK> does not contain any cycles. This claim can be proved by contradiction. Assume that the rendezvous location cannot be reached. This means that at some point during the execution of the algorithm a given sensor finds itself in a loop, i.e., a cycle C of arbitrary length L is found. Let C = <Si, S(i+1), ..., S(L−1), SL, Si> with 1 ≤ i < L ≤ N. If such a cycle C exists, sensor Si must be a neighbor of sensor SL, which means that d(Si, F) < d(SL, F). But following the cycle from Si to SL, the proximity criterion (2) implies d(SL, F) < d(Si, F), a contradiction. Hence, the lemma holds.
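The heart of the argument is that the proximity criterion makes the distance to F strictly decrease along every CDG edge, so no walk along edges can revisit a sensor. A toy illustration (hypothetical positions; greedy choice of the neighbor closest to F) shows this monotone descent:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_walk(start, sensors, f, r):
    """Follow CDG edges greedily toward F. The proximity criterion
    guarantees each step is strictly closer to F, hence no cycles."""
    current, hops = start, [start]
    while True:
        # Candidate neighbors per the proximity part of Definition 2.
        cands = [s for s in sensors
                 if dist(current, s) <= r
                 and dist(s, f) < dist(current, f)
                 and dist(current, s) < dist(current, f)]
        if not cands:
            break  # a one-hop position: travel directly to F
        current = min(cands, key=lambda s: dist(s, f))
        hops.append(current)
    return hops

# Hypothetical deployment: four sensors roughly on a line toward F.
sensors = [(8.0, 0.0), (6.0, 1.0), (4.0, 0.5), (2.0, 0.0)]
path = greedy_walk((8.0, 0.0), sensors, f=(0.0, 0.0), r=3.0)
# Distances to F along the path are strictly decreasing, so the walk
# must terminate after at most |sensors| steps.
```

This mirrors the swapping process only in the aspect the lemma needs: each move makes strict progress toward F.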
During the algorithm, the facility plays a rather passive role. The facility’s responsibilities are limited to keeping a queue of waiting sensors, ranked by their energy levels, and notifying sensors when a socket or docking port becomes available. In a passive scenario, a socket becomes available when the sensor has reached 100% of its battery level and sends a RECHARGE DONE message to the facility. In an active scenario, the facility does not have to wait for the sensor’s battery to be 100% recharged. In this case, a sensor notifies the facility when its battery has reached an operational level (e.g., 75%). Consequently, the facility can halt the charging process by sending a TERMINATE RECHARGE message if there are other sensors waiting in line.
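The facility's queueing behavior in the passive scenario might be sketched as follows. The class and method names are illustrative, and the RECHARGE ACCEPT notification is modeled simply as the list of admitted sensor ids:

```python
import heapq

class Facility:
    """Passive recharge facility: a fixed number of docking ports and
    an energy-ranked waiting queue (lowest-energy sensor served first).
    Names are illustrative, not from the paper's protocol spec."""

    def __init__(self, ports=2):
        self.free_ports = ports
        self.waiting = []  # min-heap of (energy, sensor_id)

    def recharge_request(self, sensor_id, energy):
        """Handle a RECHARGE REQUEST; return ids admitted right away."""
        heapq.heappush(self.waiting, (energy, sensor_id))
        return self._admit()

    def recharge_done(self, _sensor_id):
        """Handle a RECHARGE DONE; the freed port admits the next sensor."""
        self.free_ports += 1
        return self._admit()

    def _admit(self):
        admitted = []
        while self.free_ports > 0 and self.waiting:
            _, sid = heapq.heappop(self.waiting)
            self.free_ports -= 1
            admitted.append(sid)  # i.e., send RECHARGE ACCEPT to sid
        return admitted
```

In the active scenario, `recharge_done` would instead be triggered by the sensor reaching an operational level, or by the facility sending TERMINATE RECHARGE when the queue is non-empty.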
3 Experimental Results
Previous work on the energy consumption of wireless sensor networks and protocols such as 802.11 has found that the energy required to initiate communication is not negligible. In particular, the loss of energy due to retransmissions, collisions and acknowledgments is significant [6,7]. Therefore, protocols that rely on periodic probe messages and acknowledgments are considered high cost. For these reasons, the design of our algorithm and the related coordination had to be flexible enough to avoid the use of probe messages and complicated stateful protocols. It is also noted in the literature that the energy consumption of sensors in the idle state can be as large as the energy used when receiving data [7]. On the other hand, the energy used in transmitting data is 30-50% more than the energy needed to receive a packet. These differences, between the energy needed to perform the basic operations and the percentage of battery usage, vary depending on the communication protocols, hardware and type of battery used. A common problem faced by any solution involving mobile entities is that of accurately representing the cost of the energy spent when moving from one location to another. Locomotion cost depends on many factors such as the weight of the electronic components, irregularities in the terrain, obstacles, etc. For simplicity, in [12], the weighted Euclidean distance between the origin and destination is used as the cost of relocating a robot. In this paper we consider an experimental setting based on real robots deployed in a controlled environment. The goal of the experiments was to identify the impact of basic operations (i.e., communication, sensing, locomotion, idle operation) on overall battery life. The experiments were based on the PropBot 2.0 mobile robot (Figure 3(a)) developed in the School of Computer Science Robotics Lab at Carleton University. The robot’s hardware specification includes a Parallax Propeller microprocessor (with 8 processors), Parallax continuous rotation servos, a CMUcam camera, three Sharp GP2Y0D810Z0F digital distance sensors and Nubotics WW-01 encoders. Communications use the Parallax EasyBluetooth module, and the batteries are custom-made 6V battery packs using 2600mAh NiMH AA cells (Figure 3(b)). A single mobile robot was deployed in an area of 2m x 1.5m and tests were performed to determine battery drain under the following conditions: 1) idle state, 2) continuous movement, 3) communication, 4) sensor usage. Our preliminary results show that the energy spent in communications (i.e., send/receive) is 25% more than the battery drain in the idle state. Battery drain under perpetual movement (i.e., locomotion cost) is almost twice the communication cost. Sensing with the CMUcam was among the most costly operations, and the battery recharge was 14x faster than battery drain in the idle state. These findings were incorporated into our simulation scenarios to study the impact of critical variables on the overall solution. The simulation scenarios are implemented in OMNeT++ along with the Mobility Framework extension [25]. For all experiments, the sensors and charging facilities were randomly placed in an area of 1000x1000m2. The analysis of our simulated results centers on three important aspects of the solution:
Fig. 3. Experimental equipment: (a) PropBot 2.0, (b) battery pack
1) whether or not a state of equilibrium is achieved, and the number of sensor losses until that condition is met; 2) the impact of several variables such as transmission range and mobility cost; 3) the role played by the charging station: passive vs. active. In all cases the quality of the strategy is measured in terms of optimal runs vs. panic runs. Constant cost values are assigned to each basic operation (i.e., send, receive, idle and move). The initial values for these operations are based on the observations with the PropBot robot as well as experiences reported in the literature [6,7].
3.1 Sensor Losses over Time
Our first test attempted to determine whether our pro-active solution to the FFP reaches a state of equilibrium, and to measure the cumulative number of losses until this condition is met. In other words, it measured the number of sensor losses over time until the system reached a state where no more losses were reported. Figure 4(a) shows the result of a simulation involving 100 sensors and one service facility. The facility is equipped with two sockets, which allows only two sensors to be recharged at the same time. A series of 30 experiments with different random deployments were run for 10^6 simulation seconds. The sensor transmission range is fixed at 100m and the energy ratio for sending/receiving a packet is set to a constant (E : E/2). Locomotion costs were based on the weighted Euclidean distance with a weight factor of 1/5E per meter traveled. Confirming our expectations, our algorithm reached the state of equilibrium for all the random deployments. In comparison, the approaches of [5] and [21] required two and three stations or actors, respectively, to maintain a live network (50% or more of the sensors remain after equilibrium is reached). In our case, equilibrium was achieved with one facility with two docking ports for a similar network size, and with over 80% network survivability.
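The cost model just described can be summarized by a small sketch; the constants mirror the simulation parameters above, while the function names are ours:

```python
import math

# Energy model used in the simulation: sending a packet costs E,
# receiving costs E/2, and locomotion costs (1/5)E per meter of
# weighted Euclidean distance traveled.
E = 1.0

def send_cost(packets=1):
    return packets * E

def receive_cost(packets=1):
    return packets * E / 2

def move_cost(src, dst, weight=1.0 / 5.0):
    """Weighted Euclidean locomotion cost between two (x, y) points."""
    return weight * E * math.hypot(dst[0] - src[0], dst[1] - src[1])

# Example: a 100 m trip to the facility and back costs about 40 E,
# which is why sensors migrate to shorten the final trip.
round_trip = 2 * move_cost((0.0, 0.0), (100.0, 0.0))
```

Under this model a one-hop run (short final trip) is clearly cheaper than a panic run from a distant position, which is what the experiments below measure.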
Fig. 4. Experimental results: (a) cumulative sensor losses over time; (b) sensor losses until equilibrium for various transmission ranges; (c) recharge trips for variable transmission ranges (panic runs vs. one-hop runs); (d) sensor losses until equilibrium for variable mobility costs; (e) recharge trips for variable mobility costs; (f) passive vs. active recharge station
3.2 Transmission Range and Mobility Cost
FFP in Wireless Sensor Networks

The second experiment was designed to verify the impact of the sensor's transmission range on the overall performance. The characteristics of the network were the same as in the previous test, and the experiments were run for the same length of time. The only difference is that the transmission range was varied over 50m, 75m, 100m, 200m, 300m and 400m. Figure 4(b) shows the cumulative number of sensor losses until equilibrium for each range value. In a deployment of 1000m x 1000m, a transmission range of 50m was too restrictive: most of the sensors were isolated, and the number of immediate neighbors in the CDG was too small to guarantee a gradual approach towards the recharge location. Another interesting observation is that increasing the transmission range decreased the number of losses dramatically. However, for larger ranges (300m and 400m) there was a decline in overall performance, since many neighbors were discovered, adding overhead to maintain more information per sensor as well as additional interactions due to update messages resulting from successful swapping and recharging operations. Figure 4(c) shows the quality of the solution in terms of one-hop runs vs. panic runs. In an ideal system, our solution would reach the state of equilibrium using one-hop runs only. As expected, for a transmission range of 50m, most of the trips can be considered panic runs, since there is almost no migration due to the lack of one-hop neighbors. The best breakdown between one-hop and panic runs occurs at the 100m range. However, there are more visits to the recharge location than in the 200m, 300m and 400m cases. Although there is no clear explanation for this phenomenon, one can argue that there is a trade-off between the total number of recharge trips and the breakdown between one-hop and panic runs. In a panic run, a sensor travels from a more distant location and, after having been recharged, needs to travel further in order to return to its initial location. This creates a coverage hole that lasts for a longer period of time than in a one-hop run.
However, more one-hop recharge trips also mean more coverage holes, albeit for shorter periods of time. The next experiment explored the impact of locomotion cost on the number of losses until equilibrium was reached, as well as on the distribution and number of recharge trips. The network setup remained the same, with the transmission range fixed at 100m. Figure 4(d) shows the number of losses until equilibrium for several mobility costs. The cost function is based on the weighted distance traveled by the sensors, with the weight constant w defined as a function of the energy spent to send a packet. For example, if E is the energy spent to send a packet over the 100m range, then for each meter traveled the sensor spends wE units of energy, where w ∈ {1/10, 1/5, 1/2, 1, 2, 3, 5}. In other words, the energy spent to move the robot 100m ranges from 10x to 500x the energy required to send a packet over the same distance. In particular, the values observed for the PropBot robot fluctuated around 54x the communication energy. The simulation results show that as the locomotion cost increases (in relation to the transmission cost), so does the number of sensor losses until equilibrium. The trend is close to a step function, with clear discrete increments at some values. Another observation is that despite the increase in the number of sensor losses, the network survivability is still over 70%, even in the worst case.
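The 10x-500x range quoted above follows directly from the weight values; a quick illustrative check (assuming E = 1 for simplicity):

```python
# For each weight w, the energy to move 100 m is 100*w*E, so the
# locomotion/communication ratio for a packet sent over 100 m is 100*w.
E = 1.0
weights = [1 / 10, 1 / 5, 1 / 2, 1, 2, 3, 5]
ratios = [100 * w * E / E for w in weights]
# ratios span approximately 10, 20, 50, 100, 200, 300, 500 (10x to 500x)
```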
In terms of the quality of the solution in the variable mobility cost scenario, Figure 4(e) shows the same step-function behavior for the total number of visits to the station. However, there is a significant degradation in the number of one-hop trips as the locomotion cost increases, because a larger number of sensors fall into the BATTERY CRITICAL state before completing their migration.

3.3 Passive vs. Active Charging Station
The last experiment examined the case where the recharge station was given a more active role in order to minimize the losses of sensors waiting for an available socket. Figure 4(f) shows the comparison between passive and active charging stations, using the same sensor deployment as the previous tests. These particular tests used a fixed transmission range of 100m, a mobility cost factor of 1/5E per meter traveled, and a recharge rate 10x faster than the battery drain in the idle state. In this experiment, the sensors notified the station when their batteries reached 100%, 75% and 50% of charge. By giving the facility a more active role, the network survivability at the state of equilibrium improved from 80% to close to 90% in the 75% recharge case. The number of one-hop trips also improved, from 37% to 41%, in the 75% recharge case. However, the number of recharge trips increased by 30%. Recharging the batteries to 50% of their capacity almost doubled the number of recharge visits when compared to the 75% case. The number of one-hop trips also decreased, but the network survivability remained close to 90%. Once again there is a trade-off, between the temporary coverage holes produced by additional recharge trips and the permanent coverage holes produced by sensor losses due to battery depletion.
4 Conclusions and Future Work
In this work we have presented an efficient, completely distributed and localized solution for the facility absorbed Frugal Feeding Problem (FFP). Our solution takes a pro-active approach to energy restoration based on the construction of a Compass Directed Graph (CDG) and a swapping-based incremental approach towards the rendezvous location. In summary, our pro-active solution has the following properties:

1) The proposed CDG guarantees that sensors will reach the rendezvous location within a finite number of swapping operations. The trajectory is loop-free.
2) All decisions made by the sensors regarding the next swapping operation are based on local knowledge (i.e., the algorithms are completely distributed and localized).
3) The proposed CDG and the incremental swapping algorithm are dynamic and self-correcting: neighboring information is updated any time a successful swapping or recharge operation takes place.

The experimental analysis of our pro-active solution to the FFP shows that for networks with a 100:1 sensor/facility ratio, a state of energy equilibrium can be reached with over 80% network survivability. The simulations also expose several trade-offs between key variables (i.e., transmission range, locomotion cost) and the quality of the overall solution in terms of optimal and panic visits to the facility. They also show that when the facility is given a more active role during the recharging process, the state of equilibrium can be reached with close to 90% network survivability; the breakdown between optimal and panic runs also improves, but at the cost of an increase in the overall number of recharge trips.

Future enhancements to this work may explore the impact of mobility in more detail, since locomotion costs depend heavily on physical conditions, hardware specifications, battery technology, etc. This may involve the use of the PropBot mobile robots in larger-scale implementations of our pro-active solution. Another possibility is the study of other underlying topologies based on a different neighbor selection process, as well as a new threshold selection mechanism based on the number of hops needed to reach the recharge station, as opposed to the current distance-based approximation.
References

1. Afzar, M.I., Mahmood, W., Akbar, A.H.: A battery recharge model for WSNs using free-space optics (FSO). In: Proceedings of the 12th IEEE International Multitopic Conference, pp. 272–277 (2008)
2. Afzar, M.I., Mahmood, W., Sajid, S.M., Seoyong, S.: Optical wireless communication and recharging mechanism of wireless sensor network by using CCRs. International Journal of Advanced Science and Technology 13, 59–68 (2009)
3. Arwin, F., Samsudin, K., Ramli, A.R.: Swarm robots long term autonomy using moveable charger. In: Qi, L. (ed.) FCC 2009. Communications in Computer and Information Science, vol. 34. Springer, Heidelberg (2009)
4. Couture-Beil, A., Vaughan, R.: Adaptive mobile charging stations for multi-robot systems. In: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1363–1368 (2009)
5. Drenner, A., Papanikolopoulos, N.: Docking station relocation for maximizing longevity of distributed robotic teams. In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, pp. 2436–2441 (2006)
6. Feeney, L.: An energy consumption model for performance analysis of routing protocols for mobile ad hoc networks. Mobile Networks and Applications 6, 239–249 (2001)
7. Feeney, L., Nilsson, M.: Investigating the energy consumption of a wireless network interface in an ad hoc networking environment. In: IEEE INFOCOM (2001)
8. Frey, H., Ruhrup, S., Stojmenovic, I.: Routing in wireless sensor networks. In: Guide to Wireless Sensor Networks. Springer, London (2009)
9. Heinzelman, W., Chandrakasan, A., Balakrishnan, H.: An application-specific protocol architecture for wireless microsensor networks. IEEE Transactions on Wireless Communications 1, 1276–1536 (2002)
10. Kranakis, E., Singh, H., Urrutia, J.: Compass routing on geometric networks. In: Proceedings of the 11th Canadian Conference on Computational Geometry, pp. 51–54 (1999)
11. Li, X., Nayak, A., Stojmenovic, I.: Exploiting actuator mobility for energy-efficient data collection in delay-tolerant wireless sensor networks. In: Fifth International Conference on Networking and Services (ICNS), pp. 216–221 (2009)
12. Litus, Y., Vaughan, R., Zebrowski, P.: The frugal feeding problem: energy-efficient, multi-robot, multi-place rendezvous. In: Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pp. 27–32 (2007)
13. Litus, Y., Zebrowski, P., Vaughan, R.T.: A distributed heuristic for energy-efficient multirobot multiplace rendezvous. IEEE Transactions on Robotics 25, 130–135 (2009)
14. Lung, C.-H., Zhou, C., Yang, Y.: Applying hierarchical agglomerative clustering to wireless sensor networks. In: Proceedings of the International Workshop on Theoretical and Algorithmic Aspects of Sensor and Ad-hoc Networks (WTASA) (2007)
15. Luo, J., Hubaux, J.-P.: Joint mobility and routing for lifetime elongation in wireless sensor networks. In: Proceedings of IEEE INFOCOM, pp. 1735–1746 (2005)
16. Mei, Y., Xian, C., Das, S., Hu, Y., Lu, Y.: Sensor replacement using mobile robots. Computer Communications 30, 2615–2626 (2007)
17. Michaud, F., Robichaud, E.: Sharing charging stations for long-term activity of autonomous robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 2746–2751 (2002)
18. Perillo, M., Zhao, C., Heinzelman, W.: On the problem of unbalanced load distribution in wireless sensor networks. In: Global Telecommunications Conference Workshops (GlobeCom 2004), pp. 74–79 (2004)
19. Rahimi, M., Shah, H., Sukhatme, G., Heidemann, J., Estrin, D.: Studying the feasibility of energy harvesting in a mobile sensor network. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 19–24 (2003)
20. Roundy, S., Otis, P., Chee, Y., Rabaey, J., Wright, P.: A 1.9GHz RF transmit beacon using environmentally scavenged energy. In: IEEE International Symposium on Low Power Electronics and Design (2003)
21. Sharif, M., Sedighian, S., Kamali, M.: Recharging sensor nodes using implicit actor coordination in wireless sensor actor networks. Wireless Sensor Networks 2, 123–128 (2010)
22. Simplot-Ryl, D., Stojmenovic, I., Wu, J.: Energy efficient backbone construction, broadcasting, and area coverage in sensor networks. In: Handbook of Sensor Networks: Algorithms and Architectures, pp. 343–379 (2005)
23. Stojmenovic, I., Lin, X.: Power-aware localized routing in wireless networks. IEEE Transactions on Parallel and Distributed Systems 12, 1122–1133 (2001)
24. Tirta, T., Lau, B., Malhotra, N., Bagchi, S., Li, Z., Lu, Y.: Controlled mobility for efficient data gathering in sensor networks with passively mobile robots. In: IEEE Monograph on Sensor Network Operations (2005)
25. Varga, A.: The OMNeT++ discrete event simulation system. In: Proceedings of the European Simulation Multi-Conference (ESM 2001), pp. 319–324 (2001)
26. Wang, W., Srinivasan, V., Vikram, K.: Extending the lifetime of wireless sensor networks through mobile relays. IEEE/ACM Transactions on Networking 16, 1108–1120 (2008)
27. Wawerla, J., Vaughan, R.: Near-optimal mobile robot recharging with the rate-maximizing forager. In: Proceedings of the 9th European Conference on Artificial Life, pp. 776–785 (2007)
28. Younis, O., Fahmy, S.: HEED: a hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks. IEEE Transactions on Mobile Computing 3, 366–379 (2004)
Integrating WSN Simulation into Workflow Testing and Execution

Duarte Vieira and Francisco Martins

Faculdade de Ciências da Universidade de Lisboa, LaSIGE & Departamento de Informática, Edifício C6 Piso 3, Campo Grande, 1749-016 Lisboa, Portugal
[email protected], [email protected]
Abstract. Sensor networks are gaining momentum in various fields, notably in industrial and environmental monitoring, and more recently in logistics. The information gathered from the environment (by sensor networks) may influence the execution of workflows, making it difficult to test these systems as a whole. Generally, tests carried out on such systems make use of information recorded in earlier workflow executions. Alternatively, we propose testing such workflows by incorporating results obtained from the simulation of sensor network applications, allowing the testing of new workflows, as well as of the changes made to a given workflow by events in the environment. This paper describes a means of integrating existing platforms with the aim of introducing the simulation of sensor networks into workflow testing and execution.
1 Introduction
A wireless sensor network (WSN) consists of a collection of tiny devices that can measure a given scalar or vector field and communicate wirelessly. In a WSN there can be nodes with additional capabilities (besides sensing the environment), such as acting on the environment (actuators), collecting and processing data from sensors, or (re)configuring the network behavior (base stations). The topic of sensor networks has attracted the attention of both companies and research groups. The challenges raised in terms of hardware, such as device miniaturization, battery capacity improvement, and communication range increase, are as important as those raised at the software level, particularly concerning the operating systems and programming languages for these devices. In both areas there have been important advances [2, 18, 27].

The applications of sensor networks are many and include, for example, reading our body's vital signs (body sensor networks) or monitoring environmental conditions (e.g., management of air quality) [2]. Another application area is the Internet of Things and Services (ITS), which aims at integrating the state of the world, as seen through the eyes of sensors, into high-level applications available on the Internet. An ITS application may benefit from environment observations and adapt its behavior accordingly. For example, a home automation system may extend its features and, in addition to traditional task scheduling (e.g., turning a given device on and off according to a predetermined plan), react to environmental conditions or to behavioral rules of the inhabitants. Another application area is logistics, where, for example, we may want to tailor a delivery process according to the actual conditions of the goods being transported and to the traffic information on the route to the destination. In this case, the information obtained from the sensors can lead to modifications in the delivery process, such as a change in the order in which the goods are delivered.

Applications that encode workflows are difficult to test because their behavior depends on external events (the world) that are, in the general case, non-deterministic. The most common approach to testing these applications is to replay the execution of saved workflow traces. Although simple, this approach suffers from several problems, namely: (a) it only allows the testing of workflows that retain the same execution trace as the saved ones; (b) it disallows the testing of brand new workflows; and (c) it prevents the testing of new variants of a given workflow. The approach we propose for testing these applications is to obtain environmental information by simulating the sensor networks, enabling the creation of virtual environments where we can study the behavior of the sensors per se, the sensor network as a whole, and the application. However, this approach comes with a price: (i) it requires the construction of a simulation model of the WSN; (ii) as the scale of sensor networks widens, it is necessary to use simulators, which is by itself a challenge, because simulating a sensor network with non-trivial behavior is far from being a simple, fast task; and (iii) the results of these simulations need to be integrated into the workflow management systems (WFMS).

G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 205–218, 2011.
© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
In [26] we describe a process for automating the creation of simulation models for a WSN from high-level specifications, which addresses (i) and (ii). In this paper, we focus on integrating these WSN simulations into workflow management systems, covering (iii). We identify two main challenges in such an integration: i) the communication between the WSN simulator and the WFMS, and ii) the transparent changeover between using a WSN simulation and a real WSN. As for i), the communication needs to occur in both directions, i.e., from the WSN simulator to the WFMS when acquiring sensor readings, and in the reverse direction, for example, to allow the workflow to reprogram the network when necessary. In both directions we use techniques that push data between systems, instead of periodically querying the target systems. As for ii), we want to minimize the changes made to the workflow definition, and therefore reduce the testing impact of the changes caused by changeovers between real and simulated WSN.

In the remainder of this paper, Section 2 presents related work, specifically aimed at integrating real WSN data into workflow execution; Section 3 discusses the simulation of WSN, while presenting a programming language and a simulator; Section 4 presents some workflow management systems and elaborates on the integration of WSN simulations with the execution of workflows; Section 5 presents a scenario in logistics that illustrates the use of workflow management systems, integrating workflows with sensor networks to assess the conditions under which a supply of temperature-sensitive materials takes place; and, finally, Section 6 concludes the paper and presents some directions for future work.
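The push-based communication style mentioned in the introduction (data is pushed between the simulator and the WFMS rather than polled) can be illustrated with a generic observer/callback sketch; the class and method names below are invented for illustration and are not part of the actual integration:

```python
# Generic push-style bridge between a WSN simulator and a WFMS:
# the simulator pushes readings to registered callbacks, and the
# workflow side pushes reprogramming commands back, so neither
# side polls the other periodically. All names are hypothetical.
class SimulatorBridge:
    def __init__(self):
        self._subscribers = []   # WFMS-side callbacks for sensor readings
        self.commands = []       # commands pushed from the WFMS

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def push_reading(self, node_id, value):
        for cb in self._subscribers:
            cb(node_id, value)   # data is pushed, not polled

    def reprogram(self, bytecode):
        self.commands.append(bytecode)  # WFMS -> simulator direction

received = []
bridge = SimulatorBridge()
bridge.subscribe(lambda node, value: received.append((node, value)))
bridge.push_reading("n1", 23.5)      # simulator -> WFMS
bridge.reprogram("new-behavior")     # WFMS -> simulator
```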
2 Related Work
The integrations between WSN and workflow execution present in the literature apply only to real sensor networks. Typically, the integration is performed when the data collected by the WSN is made available through web services. An example of such an approach is the Graphical Workflow Execution Language for Sensor Networks [9] (GWELS), a language and toolkit for integrating WSN applications into workflow execution. The toolkit adapts WSN data coming from XML web services as data sources, and provides an environment where it is possible to define a system that tailors its behavior according to the received data. Another example of this approach is presented in [10], where the authors propose to achieve integration in a fully standard-compliant way, using standard web service technologies for data acquisition and communication, and a Business Process Execution Language [15] (BPEL) engine for the workflow execution. Similar in nature are sensor grids [17], which aim at integrating WSN data (provided by web services) into grid computing environments, such as GridKit [11]. In [21], the authors use grid workflow management systems to integrate a WSN available on the grid into a workflow. The workflow code itself is generated from a graphical modeling environment. In both previous approaches, the data is provided by real WSN. We argue that real WSN data is not suitable for testing workflows that encompass WSN applications, because it is generally not possible to control what the WSN senses. Data from WSN simulation, on the other hand, varies with the simulation itself and, therefore, gives us more control over the testing process. In addition, if the workflow requires the WSN to behave differently (to be reprogrammed), the previous approaches do not provide the means to test the reprogramming before deployment.
3 Simulation of Wireless Sensor Networks
WSN simulation provides a test bed that allows for quick deployments of WSN applications and gives the opportunity to test these applications in networks with a variable (potentially large) number of sensor nodes. When simulating WSN, we not only test low-level issues, such as signal transmission or the measurement of physical quantities, but also high-level aspects such as communication protocols and application execution. Simulation is therefore of great interest in any activity that involves sensor networks.

In [26] we propose a simulation model generator for sensor network applications and present encouraging results, both in terms of simulation duration and in terms of memory usage, when simulating applications with several hundred sensors. In this section, we present Callas, a language for programming WSN, as well as the simulation model generator.

3.1 Callas
Callas [20] is a programming language aiming at establishing a formal basis for the development of languages and run-time systems for WSN. The language can be used directly as a programming language for sensor networks or as an intermediate language to which higher-level WSN languages can be compiled. The Callas programming language is type-safe. This property ensures that well-typed programs do not produce errors at run-time, a property of extreme importance in the context of sensor networks, in which testing and debugging are difficult or even impossible to perform after sensors are deployed in a real environment. Another feature of the language permits the reprogramming of sensors remotely. This allows for bug corrections or application upgrades without having to physically redeploy the sensors. In a Callas network, all network nodes implement the same interface (enforced at compile time), but they may behave differently depending on the actual implementation of the interface. A Callas application consists of several files: an interface file defining the functions available to the sensor network (all nodes in the network share the same type); a program file per sensor type describing its behavior; and, finally, a network description file that details the network configuration in terms of, e.g., the number, type, position, and behavior of each sensor in the network. We omit Callas code in this paper, but the interested reader may refer to [19, 20] for several examples of programming with Callas.

The execution of Callas programs is performed using a virtual machine. This allows for abstracting the sensor hardware (which is extremely heterogeneous) and for supporting features of the language, such as sensor reprogramming, across multiple platforms. Currently, we have implementations of the Callas virtual machine (CVM) for the Sun SPOTs, for sensors running TinyOS [12], and for the VisualSense [4] WSN simulator. The architecture of the CVM is depicted in Figure 1.
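The queue-based structure of such a virtual machine, an interpreter thread decoupled from the network by input and output message queues, can be approximated with standard thread-safe queues; the sketch below is schematic and is not the actual CVM implementation:

```python
import queue
import threading

# Schematic CVM-like structure: a main interpreter thread reads messages
# from an input queue and writes results to an output queue; in the real
# design, separate receiver/sender threads fill and drain these queues
# from the wireless network. The "interpretation" here is a stand-in.
inbox, outbox = queue.Queue(), queue.Queue()

def interpreter():
    while True:
        msg = inbox.get()
        if msg is None:            # shutdown marker
            outbox.put(None)
            return
        outbox.put(("ack", msg))   # stand-in for evaluating Callas byte-code

main = threading.Thread(target=interpreter)
main.start()
inbox.put("reading:42")            # as if pushed by the receiver thread
inbox.put(None)
main.join()
```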
The architecture includes three threads: one that runs the interpreter, one that receives messages from the network, and one that sends messages to the network. The communication model of the virtual machine is very akin to middleware systems, except that calls are asynchronous. The main thread executes the interpreter, which evaluates the Callas programs and reads messages from the input queue and places messages on the output queue. There are two additional threads for interfacing between the CVM and the sensor's low-level communication devices, adapting messages (byte-code) accordingly. The Callas language allows for low-level function calls to the sensor operating system through the use of a special language construct (extern). The CVM must implement the interface with the corresponding operations in the operating system. In fact, the CVM is parametric on an object, identified in the figure as Ext Op, that must be instantiated for the particular operating system where it is running.

Fig. 1. The Callas virtual machine architecture

For executing the CVM embedded as a VisualSense actor, we must provide this Ext Op object adapted to interact with VisualSense, in particular, with network-related code and with the external operations of the run-time system.

3.2 Simulating Callas Applications
Sensor network simulators can be categorized as architecture-specific, when only one node architecture is supported, or as generic, when the simulator provides the means to model the sensor nodes. VisualSense [4] is a generic, open-source WSN simulator based on the modeling framework Ptolemy II [8], developed at UC Berkeley. It allows (a) simulating all the WSN aspects mentioned at the beginning of Section 3, (b) simulating networks whose nodes run different code, a rare feature [7], and (c) modeling and simulating using a GUI. In Ptolemy II, modeling is accomplished using components called actors that interact solely by message passing, following the Actor Model [1]. The simulation model generator presented in [26] allows the end user to parameterize the number and distribution of the nodes, the program that runs on each sensor (or set of sensors), and the network and node models to be used. A simulation model for the application is generated from a Callas network description file. It contains the information needed to compile the application (interface and program code for each sensor type), and information specific to constructing the simulation, given in the form of key = value pairs. Section 5 provides a concrete example of a network description file used to create a simulation model.

The performance and scalability evaluation of the resulting WSN models, depicted in Figure 2, was obtained with VisualSense 7.01 on a Linux-based PC with an Intel QuadCore 2.66GHz CPU and 3.4GB of RAM. Our experiments show that the simulation duration grows polynomially while the memory footprint grows linearly. We believe that simulation duration is not a critical factor, as one would expect to wait a few hours before having results for a 5000-node network.
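As an illustration of the key = value portion of such a description file, the sketch below parses a hypothetical fragment; the key names are invented for illustration and may differ from the actual Callas keys:

```python
# Hypothetical example of the key = value portion of a network
# description file. The keys (num_nodes, node_model, deployment)
# are illustrative only, not the actual Callas vocabulary.
description = """
num_nodes = 100
node_model = simple_battery
deployment = uniform_random
"""

def parse_pairs(text):
    """Parse 'key = value' lines into a dictionary, ignoring blank lines."""
    pairs = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            pairs[key.strip()] = value.strip()
    return pairs

config = parse_pairs(description)
```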
Fig. 2. Duration of simulation (hours) and memory usage (GB) on the vertical axis, given the number of sensors in the network (100 to 5000) on the horizontal axis
4 Workflow Execution with WSN Simulation Integration
Workflow management systems are specialized computer systems in which it is possible to model/program, test, and execute workflows. These systems are mainly used to analyze business processes, but they can also be used for design-time business process validation [22]. Most workflow management systems are bound to a particular language; for instance, the YAWL System [25] executes workflows written in Yet Another Workflow Language [24] (YAWL). There are also a number of workflow management systems that execute the Business Process Execution Language [15] (BPEL), such as jBPM [14]. Typically, the integration of WSN into workflow execution is done with real WSN exposed as web services, or by integrating sensor networks into grid computing environments that can execute workflows. We claim that if real WSN data is to be used for testing workflows that integrate with WSN applications, then an additional WSN test bed is needed, since it is neither possible to control what the WSN senses, nor to use the same real WSN as both a testing and a production environment.
In this section, we devise a means of integrating WSN simulation into a workflow management system, as a proof of concept of integrating WSN simulation in workflow management systems with both-way communication. Section 4.1 explains how a WSN simulation can be integrated in a specific workflow management system, and Section 4.2 elaborates on the integration difficulties that stem from the lack of interoperability among workflow languages and execution engines.

4.1 Integrating WSN Simulation in the Kepler Workflow Management System
Kepler [3] is a scientific workflow management system, i.e., a system equipped with a structured set of operations over data sets. Other scientific workflow management systems include Taverna [13] and Triana [23]. Kepler supports GUI modeling and execution, workflow composition, distributed computation, and access to data repositories and web services. Like VisualSense, Kepler is a Ptolemy II specialization. In Kepler, the workflow execution is determined by a computation domain. For example, in the Synchronous Dataflow domain, the execution is synchronous and occurs in a pre-calculated sequence; in the Process Networks domain, the computation is parallel, meaning that one or more components may run simultaneously; and, in the Discrete Event domain, workflow execution is triggered by events and takes time into account. One may use the same components interchangeably in Kepler and VisualSense. Therefore, it should be possible to include simulation models from VisualSense in Kepler workflows, obtaining a means of acquiring data from the sensor network simulation into the workflow execution. Furthermore, since we are able to generate VisualSense simulation models from Callas applications, we can ease the simulation-related work for the user. We use this configuration to prove the feasibility of the integration of WSN simulation in workflow management systems. In practical terms, the integration is as follows: the VisualSense WSN model is wrapped in a workflow component. The wrapper component provides an interface that allows both acquiring data from the WSN and parameterizing it. More complex WSN-to-workflow interfaces may be defined using the GUI. The communication between the workflow and the WSN simulation is mediated by the model wrapper. It converts messages from the workflow to the network, and vice versa, allowing for both-way communication. The wrapper may be reconfigured, even at execution time, in order to allow a different interaction.
The WSN integrated in the workflow (whether simulated or real) may be switched for another one, even at execution time. This allows WSN application changes made by workflows to be tried in a testing environment (WSN simulation) before deployment (real WSN). Other difficulties in integrating VisualSense's sensor network models in Kepler may lie in the (possible) computation domain heterogeneity. WSN simulation is performed in the Wireless domain, an extension of the Discrete Event domain that is not suited to all types of workflows. However, there should be no difficulties with the types of business processes that we are interested in simulating, because they are usually event-oriented.
D. Vieira and F. Martins

4.2 Workflow Interoperability
A way to mitigate integration difficulties, and to use WSN simulation models in more workflow management systems, is to create robust mechanisms for workflow interoperability, i.e., the ability to execute workflows in a distributed way, using two or more distinct workflow management systems. The available variety of workflow management system engines and description languages hinders workflow interoperability. For example, Taverna [13] interprets the Simple Conceptual Unified Flow Language (SCUFL) [13], Kepler uses the Modeling Markup Language (MoML) [5], the YAWL system [25] uses Yet Another Workflow Language (YAWL) [24], and Triana [23] interprets, in addition to its proprietary format, the Business Process Execution Language (BPEL) [15]. Adding to the aforementioned difficulties (the variety of description languages and of execution platforms), workflow specification languages usually have different degrees of expressiveness, making translation among them not always possible, which compromises interoperability by language translation.

Workflow interoperability could be achieved by standardizing the specification/execution language. There have been such attempts: for example, the Workflow Management Coalition created XPDL [15], and Microsoft and IBM created BPEL, both aiming to become the standard. Another way to achieve interoperability of workflows is by integrating workflow engines, so that each workflow runs in its own execution environment while being able to interact with other workflows running in other environments. Such an approach is proposed by Kukla et al. [16]. The authors see workflow management systems as (legacy) applications embedded in a Grid Computing environment, in this case GEMLCA [6] (Grid Execution Management for Legacy Code Applications).
5 Use Case: The Vaccine Delivery Process
The following scenario describes a workflow process integrated with environmental readings obtained from a WSN. A company distributes vaccines to several of its customers (e.g., pharmacies, hospitals). The vaccines are very sensitive to the ambient temperature and are compromised if exposed to a temperature above their accepted range. To cater for this specificity, vaccines are transported in small containers equipped with their own cooling and monitoring systems. The vehicle itself is supplied with a general cooling system that maintains the overall temperature according to the total cargo. The vehicle control system consists of a sensor network, comprising the temperature sensors of each container, and a base station connected to the GSM system of the vehicle, which is used for communications with the head office. Each sensor is programmed to fire an alert message should the temperature of a container reach a given threshold (which may be different for each container). The base station is responsible for communicating the alert messages to the head office, for defining the GPS delivery routing, and for managing the overall cooling system. The delivery workflow process is controlled centrally at the company's head office and is in contact with the vehicle's control system. In case the vehicle
Integrating WSN Simulation into Workflow Testing and Execution
Fig. 3. The vaccine delivery configuration. The head office hosts a GSM unit and the workflow execution engine; the vehicle control system comprises a GSM unit and a wireless sensor network organized as a base station and several sensors.
reports an alert indicating that some vaccine container is in danger of deteriorating, it triggers alternative workflows, like changing the delivery order of the goods, which, in turn, may trigger changes in the vehicle's control system. The changes could be, for instance, the communication of new destination coordinates to the GPS system, or the transmission of a new algorithm for adjusting the vehicle's overall temperature, prompted by the unloading of cargo from the vehicle. Figure 3 depicts the scenario configuration, with the head office workflow execution system and the vehicle's control system communicating via GSM. It also illustrates the local communications between the GSM units and, respectively, the workflow execution engine and the vehicle sensor network, which is organized as a base station and several sensors.

Testing this workflow using the traditional approach requires a method to replay the communications with the truck. However, this is not required if the simulation of sensor networks is integrated with the workflow execution. Furthermore, the integration may allow the use of simulated sensor network data in richer scenarios than the one just described. From the WSN application (defined in the network file, network.calnet, depicted in Figure 4), a simulation model, depicted in Figure 5, is automatically generated. The model can be further edited in VisualSense. The network description file allows one to specify the number of nodes (size) and their positions (position), the network and node models (template; the persistent format of Ptolemy II has the .moml extension), and the communication range (range). The two temperature sensors, Node1 and Node2, in blue, execute the code in node.callas. Periodically, they send the temperature in each container to the base station, represented in green. The base station executes the code in sink.callas. It is possible to identify the node communication ranges, as well as their relative positions.
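The per-container alert rule of the scenario (each sensor fires when its container's temperature reaches its own threshold) can be sketched as follows; the field names and threshold values are invented for illustration and are not taken from the actual Callas node code:

```python
# Sketch of the per-container alert logic: each sensor fires an alert when
# its container's temperature reaches that container's own threshold.
def check_container(container_id, temperature, thresholds):
    threshold = thresholds[container_id]
    if temperature >= threshold:
        return {"alert": True, "containerID": container_id,
                "temperature": temperature}
    return {"alert": False, "containerID": container_id}

# Each container may have a different accepted limit (values are made up).
thresholds = {"C1": 8.0, "C2": 5.0}
msg = check_container("C2", 6.3, thresholds)
```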
# file: network.calnet
interface = iface.caltype
sensor:
  code = node.callas               # code for sensor in container
  size = 2                         # two sensors of this type
  range = 50                       # sensor range
  position = random 0,0 to 10,10   # distributed randomly
  template = containerSensor.moml  # sensor simulation model
sensor:
  code = sink.callas               # code for the base station
  size = 1
  range = 50
  position = explicit 0,0          # positioned at a particular place
  template = gsmSink.moml          # base station simulation model
template = containerNetwork.moml   # network model

Fig. 4. A Callas network description file with simulation parameters
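A minimal reader for a description file in this style might look as follows. This is a toy sketch, not the actual Callas tooling, and it assumes a simplified grammar in which "sensor:" opens a node group and "key = value" lines set attributes:

```python
# Toy parser for a network.calnet-style description (simplified grammar):
# "sensor:" opens a new node group; "key = value" lines set attributes on
# the current group, or globally before any group. '#' starts a comment.
def parse_calnet(text):
    globals_, sensors, current = {}, [], None
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        if line == "sensor:":
            current = {}
            sensors.append(current)
        else:
            key, value = (part.strip() for part in line.split("=", 1))
            (current if current is not None else globals_)[key] = value
    return globals_, sensors

example = """
interface = iface.caltype
sensor:
  code = node.callas
  size = 2
sensor:
  code = sink.callas  # base station
"""
globals_, sensors = parse_calnet(example)
```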
Fig. 5. Sensor network as defined in VisualSense. The model contains a Wireless Director, the CallasPowerLossChannel, the two nodes (Node1 and Node2), and the Sink, each with in/out ports.
Fig. 6. VisualSense WSN simulation model to be integrated in the Kepler workflow. The model uses a Wireless Director and two channels: the CallasPowerLossChannel carries the sensor network (base station + nodes) communication, while the GSMChannel allows communication with the control center. The WirelessToWired actor listens on the GSMChannel, forwarding all communications to the outPort of this model. In addition to the out port, the Sink (base station) has a gsmOut port, linked to the GSMChannel.
Fig. 7. Workflow in Kepler. The TruckNetwork actor encapsulates the network model in Figure 6. The CalculateBestPath actor encapsulates the path recalculation workflow.
The communication between the sensor nodes and the base station is made through a channel (CallasPowerLossChannel). This simulates the wireless communication between the sensors in the containers and the base station, taking into account signal properties such as power loss, and avoids the transmission of repeated messages. The in/out ports of all the represented nodes receive/send messages through the CallasPowerLossChannel. The communication of the base station with the control center is simulated by the GSMChannel. Notice that the base station is the only node that can send messages on the GSMChannel, making use of its gsmOut port. The workflow described in Figure 7 calculates the best path for the deliveries, taking into account the container temperatures and the truck location. The
sensor network integration is made by encapsulating the network model in a Kepler actor (TruckNetwork), which acts as a data source. The workflow execution is, in this case, triggered by a message received from TruckNetwork. In a different workflow, the execution could be started by another event and possibly be altered by an incoming message from the sensor network. In the top right corner of Figure 6 is the WirelessToWired actor, which receives the messages sent through the GSMChannel and dispatches them through the outPort of TruckNetwork, depicted in Figure 7. In the workflow integration, this message is routed to a MessageDisassembler actor that extracts the containerID and truckPosition values needed for the CalculateBestPath workflow. It should be mentioned that the WirelessToWired actor (which connects the sensor network to the workflow) is independent of both the network and the workflow; it is only a message broker.
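The routing step from TruckNetwork through the MessageDisassembler can be sketched as follows. The field names containerID and truckPosition come from the text; the function names and the toy path-recalculation rule are assumptions made for the sketch:

```python
# Sketch of the broker step: a message emitted by the network model's
# outPort is disassembled into the fields the CalculateBestPath workflow
# needs. Both functions are illustrative stand-ins for Kepler actors.
def disassemble(message):
    """Stand-in for the MessageDisassembler actor."""
    return message["containerID"], message["truckPosition"]

def calculate_best_path(container_id, truck_position, stops):
    """Toy stand-in for CalculateBestPath: visit the endangered container's
    destination first, then the remaining stops in their original order."""
    reordered = [s for s in stops if s["container"] == container_id]
    reordered += [s for s in stops if s["container"] != container_id]
    return reordered

alert = {"containerID": "C2", "truckPosition": (38.7, -9.1)}
cid, pos = disassemble(alert)
stops = [{"container": "C1", "dest": "pharmacy A"},
         {"container": "C2", "dest": "hospital B"}]
route = calculate_best_path(cid, pos, stops)
```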
6 Conclusions and Future Work
This paper presents an approach to integrating sensor network simulations into the execution of workflows, which allows the testing of higher-level applications based on information from things in the world. Our proposal integrates the simulation of sensor networks in VisualSense [4] with the Kepler workflow management system [3], exploiting the interoperability of components (actors) between the two systems. This approach is feasible because both systems are extensions of the Ptolemy II modeling and simulation platform [8]. For the creation of WSN simulation models we make use of the simulation-model generator that we presented previously [26].

We believe that our approach is a valuable contribution to testing applications (not only those based on workflows) that depend on environmental values. It enables the execution of new workflows using simulated data, and eases experimentation with new scenarios. In contrast, testing workflows based on information from previous executions does not seem as flexible or complete.

As for future work, our initial focus will be on the validation of the proposed integration model, as well as on obtaining results that allow us to evaluate the proposed solution. An interesting point that deserves further attention is workflow interoperability, i.e., the ability to execute workflows in a distributed way, using two or more distinct workflow management systems. Although we have not addressed the availability of sensor information via the web, this does seem viable, and we envision how it could be achieved using web services: the base stations would have to be identified (and named) in the network description file, so that they could be presented as web services and clients could register themselves in order to be notified by the base stations.

Acknowledgments. The authors are partially supported by project CALLAS from Fundação para a Ciência e a Tecnologia (PTDC/EIA/71462/2006).
References

1. Agha, G.: Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press, Cambridge (1986)
2. Akyildiz, I., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A survey on sensor networks. IEEE Communications Magazine 40(8), 102–114 (2002)
3. Altintas, I., Berkley, C., Jaeger, E., Jones, M., Ludäscher, B., Mock, S.: Kepler: An extensible system for design and execution of scientific workflows. In: Proceedings of the 16th International Conference on Scientific and Statistical Database Management (SSDBM 2004), pp. 423–424. IEEE Computer Society, Los Alamitos (2004)
4. Baldwin, P., Kohli, S., Lee, E.A., Liu, X., Zhao, Y.: Modelling of sensor nets in Ptolemy II. In: Proceedings of IPSN 2004, pp. 359–368. ACM Press, New York (2004)
5. Brooks, C., Lee, E.A., Liu, X., Neuendorffer, S., Zhao, Y., Zheng, H.: Heterogeneous Concurrent Modeling and Design in Java (Volume 1: Introduction to Ptolemy II). Technical Report UCB/EECS-2008-28, Electrical Engineering and Computer Sciences, University of California at Berkeley (2008)
6. Delaitre, T., Goyeneche, A., Kacsuk, P., Kiss, T., Terstyanszky, G., Winter, S.: GEMLCA: Grid execution management for legacy code architecture design. In: Proceedings of the 30th EUROMICRO Conference, pp. 477–483. IEEE Computer Society, Los Alamitos (2004)
7. Egea-Lopez, E., Vales-Alonso, J., Martinez-Sala, A., Pavón-Mariño, P., García-Haro, J.: Simulation scalability issues in wireless sensor networks. IEEE Communications Magazine 44(7), 64–73 (2006)
8. Eker, J., Janneck, J.W., Lee, E.A., Liu, J., Liu, X., Ludvig, J., Neuendorffer, S., Sachs, S., Xiong, Y.: Taming heterogeneity - the Ptolemy approach. Proceedings of the IEEE 91(2), 127–144 (2003)
9. Glombitza, N., Lipphardt, M., Werner, C., Fischer, S.: Using graphical process modeling for realizing SOA programming paradigms in sensor networks. In: Proceedings of the 6th Annual IEEE/IFIP Conference on Wireless On-demand Network Systems and Services, pp. 61–68 (2009)
10. Glombitza, N., Pfisterer, D., Fischer, S.: Integrating wireless sensor networks into web service-based business processes. In: MidSens 2009: Proceedings of the 4th International Workshop on Middleware Tools, Services and Run-Time Support for Sensor Networks, pp. 25–30. ACM, New York (2009)
11. Grace, P., Coulson, G., Blair, G., Mathy, L., Yeung, W., Cai, W., Duce, D., Cooper, C.: GRIDKIT: Pluggable overlay networks for grid computing. In: Chung, S. (ed.) OTM 2004. LNCS, vol. 3291, pp. 1463–1481. Springer, Heidelberg (2004)
12. Hill, J., Szewczyk, R., Woo, A., Hollar, S., Culler, D., Pister, K.: System architecture directions for networked sensors. In: Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 93–104. ACM Press, New York (2000)
13. Hull, D., Wolstencroft, K., Stevens, R., Goble, C., Pocock, M., Li, P., Oinn, T.: Taverna: a tool for building and running workflows of services. Nucleic Acids Research 34(Web Server issue), 729–732 (2006)
14. JBoss: jBPM website, http://www.jboss.org/jbpm/ (last accessed October 20, 2010)
15. Ko, R., Lee, S., Lee, E.: Business process management (BPM) standards: a survey. Business Process Management Journal 15(5) (2009)
16. Kukla, T., Kiss, T., Terstyanszky, G., Kacsuk, P.: A general and scalable solution for heterogeneous workflow invocation and nesting. In: WORKS 2008: Proceedings of the Workflows in Support of Large-Scale Science, Third Workshop, pp. 1–8. Springer, Heidelberg (2008)
17. Lim, H., Teo, Y., Mukherjee, P., Lam, T., Wong, W., See, S.: Sensor grid: integration of wireless sensor networks and the grid. In: Proceedings of the IEEE Conference on Local Computer Networks, 30th Anniversary, pp. 91–99 (2005)
18. Lopes, L., Martins, F., Barros, J.: Middleware for Network Eccentric and Mobile Applications, ch. 2, pp. 25–41. Springer, Heidelberg (2009)
19. Lopes, L., Martins, F.: A semantically robust framework for programming wireless sensor networks. TR 2010-01, DCC/FCUP (March 2010), http://www.dcc.fc.up.pt/dcc/Pubs/TReports/TR10/dcc-2010-01.pdf
20. Martins, F., Lopes, L., Barros, J.: Towards the safe programming of wireless sensor networks. In: Proceedings of ETAPS 2009 (2009)
21. Reddy, A., Kumar, A., Janakiram, D.: Workflow process model integrating wireless sensor networks with grids. Technical report, Distributed and Object Systems Lab, Dept. of CSE (2007)
22. Rozinat, A., Wynn, M.T., van der Aalst, W.M.P., ter Hofstede, A.H.M., Fidge, C.J.: Workflow simulation for operational decision support. Data & Knowledge Engineering 68(9), 834–850 (2009)
23. Taylor, I.J., Schutz, B.F.: Triana - a quicklook data analysis system for gravitational wave detectors. In: Second Workshop on Gravitational Wave Data Analysis, pp. 229–237 (1998)
24. van der Aalst, W.: YAWL: yet another workflow language. Information Systems 30(4), 245–275 (2005)
25. van der Aalst, W.M.P., Aldred, L., Dumas, M., ter Hofstede, A.H.M.: Design and implementation of the YAWL system. In: Persson, A., Stirna, J. (eds.) CAiSE 2004. LNCS, vol. 3084, pp. 142–159. Springer, Heidelberg (2004)
26. Vieira, D., Martins, F.: Automatic generation of WSN simulations: from Callas applications to VisualSense models. In: Proceedings of SENSORCOMM (2010) (to appear)
27. Yoneki, E., Bacon, J.: A survey of wireless sensor network technologies: research trends and middleware's role. Technical Report UCAM-CL-TR-646, University of Cambridge (2005)
Revisiting Human Activity Frameworks*

Eunju Kim and Sumi Helal
Mobile and Pervasive Computing Laboratory
Department of Computer and Information Science and Engineering
University of Florida
{ejkim,helal}@cise.ufl.edu
Abstract. Classical activity theory has long been established as a de-facto framework on which many human activity models and recognition algorithms are based. We observed several limitations of classical activity theory, mostly attributed to its inability to discern among seemingly similar concepts and artifacts. We discuss these limitations and introduce a generic activity framework that addresses these issues. We provide a comparative analysis to quantify the effect of our proposed framework on the accuracy of activity recognition, using a benchmark of activities of daily living, and report a significant improvement in recognition accuracy.

Keywords: Human Activity, Human Activity Model, Human Activity Recognition, Activity Theory, Generic Activity Framework.
1 Introduction

Activity recognition (AR) is an important sensing science in pervasive computing because it can be applied to many ubiquitous applications, including health care and elder care [1][2][3][4]. A significant body of research has been established in the past decade in areas such as activity recognition algorithms, activity recognition systems, and activity datasets [5][6][15][16][17][18][19]. Creating activity frameworks is a prerequisite for enabling AR algorithms and systems. To explain: if activities are recognized through the artifacts that a person uses, an artifact-usage-based activity framework must first be constructed so that the AR system can recognize activities based on their relationships with the artifacts. For example, if an activity system detects that a person uses toothpaste and a toothbrush, then the system interprets that the person is doing the activity "brushing teeth". Since an activity framework forms the basis for effective activity recognition, it is important to ensure that such frameworks can represent many real-world human activities. So far, several sensor-based activity recognition projects have been based on and inspired by a well-known activity theory [13][14][15]. However, we encountered some limitations of this theory while attempting to represent specific real-world activities. To illustrate, for the eating activity there are many similar activities, such
* This research is being funded by NIH grant number 5R21DA024294-02.
G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 219–234, 2011. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011
E. Kim and S. Helal
as having lunch, chewing food, or having a meal. Even though they are similar, they are not the same. Some AR systems may classify all these activities separately, while other AR systems may regard them as equal. We found that the traditional activity theory is not sufficient to support the recognition of similar activities.

Another limitation is that activity theory does not distinguish between the tool used for an activity and the target object of an activity. This can cause incorrect recognition of activities. For example, it is difficult to distinguish eating from washing dishes because both activities involve similar artifacts such as dishes, forks, or spoons. Some research projects have tried to overcome this limitation by combining motion and artifact [16][17]. Motion is useful for determining the relationship between an artifact and an activity. For instance, the scooping action of a spoon determines the eating activity more definitively than a spoon whose action is not specified. However, this approach is not sufficient either, because it does not distinguish tool from object. For example, without a connection between the scooping spoon and the food, it is still unclear whether the activity is eating.

Besides tool, motion, and object, there are several other elements to consider in recognizing activities (e.g., subject, action order, time, location, or context). These elements should be included in any activity framework. Therefore, we propose a generic activity framework, which addresses the aforementioned problems.

The rest of this paper is organized as follows. In Section 2, we discuss the traditional activity theory and its limitations. The proposed approach is explained in Section 3. A comparison and analysis are presented in Section 4. Finally, Section 5 concludes the paper.
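The ambiguity discussed above can be made concrete with a toy artifact-usage recognizer; the artifact-to-activity mappings are invented for illustration:

```python
# Naive artifact-usage recognizer: maps observed artifacts to candidate
# activities by intersecting their activity sets. With artifacts alone,
# "eating" and "washing dishes" cannot be told apart, which is exactly
# the limitation described in the text.
ARTIFACT_ACTIVITIES = {
    "toothpaste": {"brushing teeth"},
    "toothbrush": {"brushing teeth"},
    "spoon": {"eating", "washing dishes"},
    "dish": {"eating", "washing dishes"},
}

def candidates(observed_artifacts):
    result = None
    for artifact in observed_artifacts:
        acts = ARTIFACT_ACTIVITIES.get(artifact, set())
        result = acts if result is None else result & acts
    return result or set()

unambiguous = candidates(["toothpaste", "toothbrush"])  # one candidate left
ambiguous = candidates(["spoon", "dish"])               # two candidates remain
```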
2 Background

In this section, we describe the well-known activity theory and analyze its limitations for recognizing human activities.

2.1 Activity Theory

Activity theory was founded by L. S. Vygotsky at the Cultural-Historical School of Psychology during the 1920s and 1930s. It was further developed by A. N. Leontjev and A. R. Lurija, who coined the term "activity" [7][8]. Activity theory was first applied to human-computer interaction (HCI) in the early 1980s [7]. Today, it is applied implicitly or explicitly in much of the vast body of activity recognition research. Activity theory is defined in [7] as: "Activity Theory is a philosophical and cross-disciplinary framework for studying different forms of human practices as development processes, both individual and social levels interlinked at the same time."

Activity theory utilizes four components: subject, tool, objective, and outcome. In [7][8], the authors originally used object instead of objective; but since object has multiple meanings (something material, or a goal), we chose objective to avoid conflict with the target object of an activity. A subject is a participant of an activity. An objective is a plan or common idea which can be shared for manipulation and transformation by the participants of the activity. A tool is an artifact a subject uses to
fulfill an objective. An outcome is another artifact or activity. Transforming the objective into an outcome motivates the existence of an activity. For example, having one's own house is an objective, and the purchased house is the outcome. Transforming the objective into the outcome requires various tools. These relationships between components are presented as lines in Fig. 1. A bold line indicates a direct relationship between components, while a gray line represents a mediated relationship. In Fig. 1, subject and tool have a direct relationship because a subject uses a tool in person. The relationship between objective and subject is mediated by a tool because subjects achieve their objectives using tools.
Fig. 1. Basic structure of activity theory [7]
Activity theory has a hierarchical structure. Because the objectives of an activity are transformed into outcomes through several steps, activities are regarded as long-term formations. As shown in Table 1, an activity is composed of actions, and an action consists of operations [7].

Table 1. Hierarchical levels of an activity, and an example of an activity, action, and operation
Levels      Related purpose   Example of purpose
Activity    Motive            Completing a software project
Action      Goal              Programming a module
Operation   Conditions        Using an operating system
Activities are composed of cooperative actions or chains of actions. These actions are all related to the motive of the activity. Each action has a goal and consists of operations to reach that goal. An operation is a unit component, and it depends on the conditions under which it is performed. The detailed description of each level is as follows:

Activities. Activities are realized as chains and networks of individual and cooperative actions. These actions are related to each other by the same objective and motive.
Actions. Actions participate in an activity. An action has a goal. However, without the frame of reference created by the corresponding activity, the actions cannot really be understood.

Operations. Actions consist of chains of operations, which are well-defined routines used subconsciously as answers to conditions faced during the performance of the action. In the example in Table 1, the operating system and the computer are conditions.

2.2 Limitations of Models Based on Activity Theory

Even though activity theory is well known and often used in activity recognition research, it is not sufficient for real-world activity recognition. Activity theory was created and developed by psychologists who wanted to understand human activity. On the other hand, modern activity recognition techniques are employed so that intelligent systems can understand human activities automatically. Since AR systems are less flexible than humans, it is necessary to improve the theory in order to use it in machine systems. The following are the limitations of activity theory for intelligent systems.

Firstly, the border between hierarchical layers is blurred. As described in [7], an activity can lose its motive and become an action, and an action can become an operation when the goal changes. For a less flexible computer-based system, this unclear border makes automated activity recognition difficult, because changes in the motive of an activity or the goal of an action are not easy to detect. Therefore, it is necessary to find clearer ways to determine each layer.

Secondly, tool and object need to be distinguished. The same item may be used as a tool or as an object, and it is important to distinguish such items in order to recognize and interpret human activities accurately. For example, consider cough syrup in a bottle. The bottle is a tool for containing the medicine. Therefore, if a subject operates the bottle, it may be for taking cough syrup.
But this is not true if the bottle is the target object of some activity such as moving a bottle. Combining motion and tool is helpful for recognizing activities in this case, but it is not sufficient: for the same tool and motion, different target objects may yield different activities. For example, suppose the motion pouring is combined with the medicine bottle. Pouring the medicine bottle is not enough to determine the taking medicine activity, because the target object of the activity is not clear. If a cup is the target object, the subject might be taking the medicine; but if the target object is the sink, or there is no object, it might mean the medicine was thrown away because of a sour taste. An activity framework should support the representation of these cases.

Last but not least, many human activities are too complicated to be represented by a single activity name. For example, eating has several similar activities, such as having food or a meal; having breakfast, lunch, or dinner; or having a snack. Since the top layer in activity theory is the activity, the activity layer includes everything. This makes AR system design cumbersome and difficult to conceptualize, and this difference in granularity is not conducive to sharing or modularizing AR systems. To address these limitations, we propose a generic activity framework in the next section.
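The pouring example can be sketched as a lookup over (tool, motion, target object) triples; the rule table is invented for illustration:

```python
# Distinguishing activities by (tool, motion, target object): the same
# tool+motion pair maps to different activities depending on the target,
# which an artifact-only or tool+motion scheme cannot express.
RULES = {
    ("medicine bottle", "pouring", "cup"): "taking medicine",
    ("medicine bottle", "pouring", "sink"): "discarding medicine",
    ("medicine bottle", "moving", None): "moving a bottle",
}

def recognize(tool, motion, target=None):
    return RULES.get((tool, motion, target), "unknown")

a = recognize("medicine bottle", "pouring", "cup")
b = recognize("medicine bottle", "pouring", "sink")
```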
3 Proposed Generic Activity Framework

In this section, we explain the proposed generic activity framework.

3.1 Generic Activity Framework Composition Design

The generic activity framework has a hierarchical structure, which has several advantages. Firstly, it makes the activity recognition system more tolerant to changes in the sensor environment. For instance, even if more sensors are added to the AR system, the upper layers in the hierarchy will not be seriously influenced: the additional sensors will cause changes to the motion and tool components, but the operation layer will not be affected directly by the sensor change. Secondly, activity recognition using a hierarchical structure is analogous to the way people recognize activities, so it is easier to design natural and intuitive AR algorithms. To illustrate, when people observe an event, they accumulate and compose unit observations until they find out what it is.

Each layer of the structure consists of activity components. In total, there are eight components in the generic activity framework. We chose the eight components according to the 5W1H framework, and we also found these eight to be the most influential. 5W1H is a well-known method of representing knowledge for many applications, including the web and newspapers, because of its capacity to describe knowledge; it is associated with Rudyard Kipling, the 1907 Nobel Laureate in Literature [9]. The 5W1H framework deals with six keywords ('who', 'what', 'where', 'when', 'why', and 'how'). We split 'how' into two components (tool and order) for more detailed information. We also add context, because it is important for understanding activities. It is not necessary for every activity to contain all eight components, as long as the activity is recognized clearly. For example, the walking activity does not require any object; also, some components, such as motive, are difficult to know. The detailed description of the eight components is given below:

Subject.
A subject is an actor of the activity. The subject plays an important role as an activity classifier, especially when there are multiple people: a different subject means a different activity.

Time. This is the time when an activity is performed. It consists of a start time and an end time, from which we can also calculate the duration of the activity. Time and duration are useful for recognizing more detailed activities. For example, if an eating activity is performed around noon, it is having lunch rather than just eating. Also, depending on the duration of the eating, we can classify it as having a meal or having a snack.

Location. Location is the place where an activity is performed. If an activity is performed in several places, location will have multiple values. Location is also a crucial classifier of activities: since a location typically serves a particular function, such as sleeping or cooking, it gives important information for recognizing activities.

Motive. Motive is the reason why a subject performs a specific activity; it corresponds to the objective in activity theory, as shown in Fig. 1. Determining the motive may require artificial intelligence reasoning techniques.
Tool. A tool is an artifact that a subject uses to perform an activity. The tool provides essential information for classifying activities. For example, a spoon or a fork is a tool for eating or cooking; therefore, an AR system can expect those activities when it detects that a user uses a spoon or fork.

Object (Target). An object can also be any artifact, like a tool. However, the object is the target of an activity, whereas a tool is used by a subject. The distinction between tool and object is important for accurate activity recognition, because some artifacts are both tools and objects depending on the activity.

Order. Order is the sequence in which the actions of an activity are performed. For many activities order does not matter, but for some it is important. For example, to have food, we should serve the food first and then cut, pick, or scoop it.

Context. Context is information used to determine the situation in which an activity is performed. Some contexts, such as temperature or humidity, are directly sensed by installed sensors. On the other hand, some contexts, like the motive of an activity, need artificial intelligence techniques such as reasoning or pattern recognition to elicit them.
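A minimal record type covering the eight components might look as follows; this is a sketch, and the field types and example values are assumptions rather than part of the framework's specification:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ActivityRecord:
    """One observation in the eight-component generic activity framework.
    Any component may be absent, as long as the activity is still
    recognized clearly (e.g., walking has no target object)."""
    subject: Optional[str] = None     # who performs the activity
    start_time: Optional[str] = None  # when it starts
    end_time: Optional[str] = None    # when it ends
    location: Optional[str] = None    # where it is performed
    motive: Optional[str] = None      # why (may require reasoning to infer)
    tool: Optional[str] = None        # artifact used by the subject
    target: Optional[str] = None      # artifact the activity acts upon
    order: list = field(default_factory=list)   # sequence of actions
    context: dict = field(default_factory=dict) # e.g., temperature, humidity

# Example values are invented: an eating activity around noon, i.e. lunch.
lunch = ActivityRecord(subject="resident", start_time="12:05",
                       end_time="12:30", location="kitchen",
                       tool="spoon", target="soup")
```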
Fig. 2. Composition diagram of generic activity framework. It is composed of several hierarchies and each hierarchical layer contains classifier components.
Revisiting Human Activity Frameworks
Fig. 2 shows a composition diagram of the generic activity framework, in which rectangles are layers and ellipses are components. According to the composition of components, the activity framework has a hierarchical structure similar to activity theory. Activity theory has three layers: operation, action, and activity. Our generic activity framework adds the meta activity and classified meta activity layers, and it clearly defines the components of each layer. The detailed description of each layer is given below:
Sensors. Sensors are installed in the pervasive space (e.g., a smart home) to collect event information from the space. Based on the source of the data, sensors are classified into four types: motion, tool, object, and context sensors. Motion sensor data describe people's movements, such as raising an arm or turning the body; gyroscopes and accelerometers are commonly used to sense human motion. Tool sensor data come from sensors attached to equipment used by people. Object sensor data come from sensors installed on passive objects such as groceries or frozen food packets [1]. RFID readers and tags are representative sensors for recognizing tools and objects. Sound, vibration, and pressure sensors are also used to recognize activities.
Operation. An operation is a composition of tool and motion: the user operates a tool with a specific motion. For example, if a computer is the tool, hand or finger motions are performed to type on the keyboard.
Action. An action is determined by the combination of an operation and an object. For example, if a user types a command to open a file, typing on the keyboard is the operation, the file is the object, and the combination is the open-file action. The object is important for recognizing actions; however, some actions do not have an object. For instance, in the sleeping activity, the action is lying down on the bed: a bed is a tool for sleeping, but no object is involved.
Moreover, some activities, like walking, do not require tools. Therefore, although tool and object are important, they are not mandatory components.
Activity. An activity is a collection of actions, which may need to occur in a certain order. For example, for the laundry activity, we should put clothes into the washer first; otherwise, it is difficult to call it laundry even though the washer runs. For many activities, however, the order of actions varies greatly from user to user. For example, eating involves several actions such as scooping, picking, and cutting food, and the order of these actions is entirely up to the person. Therefore, we disregard the order of actions unless it is critical for recognizing the activity.
Meta activity. A meta activity is a collection of activities. A complicated activity is composed of several simple activities. For example, to prepare a meal, people obtain food material from the refrigerator and wash, cut, or chop it; sometimes a microwave is used, and people may also wash dishes to serve the food. In this example, preparing a meal is a meta activity, while preparing food material, washing, cooking, and microwaving are simple activities.
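To make the layer composition concrete, the hierarchy described above can be sketched in code. This is our illustrative sketch, not an implementation from the paper; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Operation:
    """Operation layer: a tool used with a specific motion."""
    motion: str
    tool: Optional[str] = None      # tools are not mandatory (e.g., walking)

@dataclass
class Action:
    """Action layer: an operation applied to an (optional) object."""
    operation: Operation
    target: Optional[str] = None    # objects are not mandatory (e.g., sleeping)

@dataclass
class Activity:
    """Activity layer: a collection of actions, optionally ordered."""
    name: str
    actions: List[Action] = field(default_factory=list)
    order_matters: bool = False     # order is considered only when critical

@dataclass
class MetaActivity:
    """Meta activity layer: a collection of simpler activities."""
    name: str
    activities: List[Activity] = field(default_factory=list)

# The open-file example from the text: typing (operation) + file (object) = action
open_file = Action(Operation(motion="typing", tool="keyboard"), target="file")

# Laundry: order matters (clothes must go into the washer first)
laundry = Activity("laundry", order_matters=True)
```

Optional fields for tool and target mirror the point above that neither is a mandatory component.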
Classified meta activity. When a meta activity is combined with context information such as time or location, it can be specialized further. A classified meta activity is a specific meta activity that carries this additional context. For example, the having a meal meta activity is classified into having breakfast, lunch, or dinner according to the time at which the activity is performed.
3.2 UML Diagram of Generic Activity Framework
In this section, a Unified Modeling Language (UML) diagram shows the activity components and their relationships. Each activity component and layer is an entity in Fig. 3.
Fig. 3. UML diagram of generic activity framework
These entities are connected to each other by two relationships: Is-Composed-Of and IS-A. The detailed descriptions of the two relationships are given below:
Composition (whole-part) relationship. The composition relationship is a whole-part relationship between an activity and its components. In Fig. 3, terminal entities
represent activity components such as subject, tool, motion, object, time, or location. Internal entities are operation, action, activity, and context. The root entity is meta activity.
IS-A (inheritance, or general-special) relationship. The IS-A relationship connects a general entity to a special entity. It is also called an inheritance relationship because special entities inherit the characteristics of the general entity. To illustrate, having a meal is a general meta activity that indicates having food. When the having a meal meta activity is combined with time, location, etc., it is specialized. For example, if having a meal is performed at lunchtime, it is having lunch; if having lunch is performed at a restaurant, it is eating out. Since having lunch and eating out are having a meal meta activities, they inherit the features of that meta activity (e.g., its components).
3.3 Example of Generic Activity Framework
In this section, we show a nested diagram of our generic activity framework together with a simple example.
Fig. 4. Nested diagram of our generic activity framework. Shaded ellipses indicate new components in the generic activity framework.
Fig. 4 shows the composition hierarchy of the classification, which proceeds from the motion layer up to the meta activity layer. To illustrate, an action is composed of several operations, and an activity is composed of several actions. Components are additional classifiers for each classification layer; for instance, an action is classified based on the objects involved and the related operations, while the components of an activity are its actions and their order. Examples of each component are shown in the diagram.
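The IS-A specialization by context described in Section 3.2 can be sketched as a simple rule: a having a meal meta activity is specialized by the time and location at which it occurs. The time windows and location labels below are our own illustrative assumptions, not values from the paper.

```python
def classify_meal(hour: int, location: str = "home") -> str:
    """Specialize the 'having a meal' meta activity by context (time, location)."""
    if location == "restaurant":
        return "eating out"
    if 6 <= hour < 11:
        return "having breakfast"
    if 11 <= hour < 15:
        return "having lunch"
    if 17 <= hour < 21:
        return "having dinner"
    return "having a snack"

print(classify_meal(12))                 # having lunch
print(classify_meal(13, "restaurant"))   # eating out
```

Every classified meta activity returned here still "is a" having a meal activity and inherits its components.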
4 Comparison and Analysis
In this section, our generic activity framework is compared with activity theory. We used an activity scenario for the care of the elderly from [4]. From the scenario, we retrieved meta activities, activities, actions, operations, motions, tools, objects, and related components such as time and location. Table 2 shows the retrieved components. In the generic activity framework, there are 11 meta activities, 13 classified meta activities, and 33 activities, whereas there are 32 activities when activity theory is applied. This result implies that our generic activity framework can recognize more activities than activity theory.
Table 2. Meta activity and activity list of the eldercare scenario. It shows meta activities, activities, actions, and their corresponding tools and objects.
| Meta Activities | Activities (Location) | Actions / Operations | Tool | Object (Target) |
|---|---|---|---|---|
| Rest | Sleeping (Bedroom) | Lying down, Getting up | Bed | Body |
| | Relaxing (Living room) | Sitting down, Getting up | Sofa | Body |
| Hygiene | Taking a bath (Bathroom) | Turning water faucet on/off | Water faucet | Body |
| | Brushing teeth (Bathroom) | Moving brush | Toothbrush, Toothpaste | Teeth |
| | Toileting (Restroom) | Pushing toilet flush | Toilet flush | |
| Grooming | Combing hair (Bathroom) | Moving comb | Comb | Hair |
| | Getting dressed (Bedroom) | Opening/Closing dresser, closet | Dresser, Closet | Clothes |
| Preparing a meal (Breakfast, Lunch, Dinner, Snack) | Cooking (Kitchen) | Cutting, Chopping, Stirring, Serving, Boiling, Baking | Knife, Utensils, Plate, Range or Oven | Food |
| | Microwave (Kitchen) | Starting program | Microwave, Dish | Food |
| | Refrigerator (Kitchen) | Open/Close | Refrigerator | Food |
| | Freezer (Kitchen) | Open/Close | Freezer | Frozen food |
| | Washing dishes by hand (Kitchen) | Scrubbing | Sink, Water faucet | Plates, Utensils, Spoon, Fork, Knife |
Table 2. (continued)

| Meta Activities | Activities (Location) | Actions / Operations | Tool | Object (Target) |
|---|---|---|---|---|
| Having a meal (Ingesting) (Breakfast, Lunch, Dinner, Snack) | Eating (Dining room) | Cutting, Picking, Scooping, Serving | Plates, Spoon, Fork, Knife, Dining mat | Food |
| | Drinking | Holding a cup, Drinking | Cup | Water, Juice, Tea |
| Medication | Taking medicine (Anywhere) | Taking medicine, Drinking water | Medicine bottle, Cup | Medicine |
| Laundry | Washing clothes (Laundry room) | Turning on/off, Choosing an option | Washer | Clothes |
| | Drying clothes (Laundry room) | Turning on/off, Choosing an option | Dryer | Clothes |
| | Keeping clothes in dresser or closet (Bedroom) | Opening dresser, Closing dresser | Dresser knob | Clothes |
| Entertainment & Information | Watching TV (Living room) | Turning on/off, Choosing program | TV remote control | TV |
| | Listening to music (Living room) | Turning on/off, Choosing program | Audio player | CD |
| | Accessing Internet (Living room) | Opening browser | Computer | |
| | Reading a book (Anywhere) | Reading | | Book |
| | Reading newspaper (Anywhere) | Reading | | Newspaper |
| Cleaning (Bedroom, Kitchen, Bathroom, Living room, Dining room, Laundry room) | Vacuuming (Anywhere) | Turning on/off, Moving arm, Moving hand | Vacuum cleaner | Bedroom, Kitchen, Bathroom, Living room, Dining room, Laundry room |
| | Mopping (Anywhere) | Moving arm, Moving hand | Mop | Bathroom floor |
| | Sweeping (Anywhere) | Moving arm, Moving hand | Broomstick | Bedroom, Kitchen, Bathroom, Living room, Dining room, Laundry room |
Table 2. (continued)

| Meta Activities | Activities (Location) | Actions / Operations | Tool | Object (Target) |
|---|---|---|---|---|
| Cleaning (continued) | Wiping (Bedroom, Kitchen, Bathroom, Living room, Dining room, Laundry room) | Moving arm, Moving hand | | Dresser, Sofa, Desk, Front door |
| | Washing dishes (Kitchen) | Starting program | Dish washer | Plates, Utensils, Spoon, Fork, Knife |
| | Ordering (Anywhere) | Moving arm, Moving hand | | Most artifacts: Bed, Desk, Chair, Plates, Refrigerator, Toothbrush, Toothpaste, Front door |
| Getting Out | Leaving home (Entrance) | Opening front door, Closing front door | | Front door |
| | Arriving home (Entrance) | Opening front door, Closing front door | | Front door |
| Moving | Walking (Anywhere) | Stepping | | Body |
| | Motion (Anywhere) | Moving | | Body |
Table 2 shows that many artifacts are used as tools or objects in activities. When an artifact is used in several activities, it is difficult to classify those activities accurately using activity theory, because activity theory regards artifacts as tools only. For example, in Table 2, a dish appears in the cooking, eating, ordering, and washing dishes activities: it is a tool for cooking and eating, and an object for ordering and washing dishes. Since activity theory considers a dish to be a tool only, dish information alone is not sufficient to determine which activity is performed. In contrast, the generic activity framework determines activities by considering both tool and object. Most artifacts have their own function, and when an artifact is used for that function, it is a tool of the corresponding activity. On the other hand, artifacts are also maintained by the user; in management activities such as ordering or cleaning, the artifacts are objects. Table 2 also shows the relationships between meta activities, activities, actions, and operations. Some meta activities, such as preparing a meal, having a meal, or cleaning, are classified into several specialized meta activities according to context such as time or location. For example, preparing a meal may involve preparing breakfast, lunch, or dinner according to the time at which it is performed, and cleaning is specialized into activities like cleaning the bedroom or cleaning the kitchen depending on the location. Since these meta activities are clearly different from each other, they should be distinguished, as shown in Table 2.
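The tool/object distinction above can be expressed as a lookup from (artifact, role) pairs to candidate activities. The small mapping below only encodes the dish example from the text; it is an illustrative sketch, not part of the paper's system.

```python
# Candidate activities by artifact role (the dish example from Table 2).
ARTIFACT_ROLES = {
    ("dish", "tool"): {"cooking", "eating"},
    ("dish", "object"): {"washing dishes", "ordering"},
}

def candidate_activities(artifact: str, role: str) -> set:
    """Return the activities consistent with an artifact playing a given role."""
    return ARTIFACT_ROLES.get((artifact, role), set())

print(sorted(candidate_activities("dish", "tool")))    # ['cooking', 'eating']
print(sorted(candidate_activities("dish", "object")))  # ['ordering', 'washing dishes']
```

Under activity theory only the "tool" key would exist, so the object-side activities could never be distinguished from dish data.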
The generic activity framework can recognize more activities than activity theory because it classifies activities from sensor data using more classifiers: motion, tool, object, order, subject, and context. For example, a bed is a tool for sleeping; in activity theory, where artifacts are considered as tools only, sleeping is all that can be retrieved from the artifact bed. In the generic activity framework, however, a bed is also the target object of the ordering bed activity. Moreover, even if a subject is lying down on a bed, the activity may not be sleeping if it lasts only a short time. By considering all eight components, the generic activity framework can classify more activities. Fig. 5 shows the number of related activities for each artifact, inferred from Table 2. For example, a plate is a tool for eating and cooking, and an object of the washing dishes and ordering plates activities; therefore, the number of related activities is two under activity theory and four under the generic activity framework.
Fig. 5. This graph shows the number of correctly recognizable activities for each artifact
Since a practical activity recognition system must account for the uncertainty of real-world situations, we need to manage the uncertainty of the system. Theories such as Bayesian probability can address the problem of uncertainty. Although Bayesian probability is a well-established technique, it has several difficulties, such as requiring large volumes of data or the enumeration of all possibilities, and it is cumbersome for practical applications [11]. To overcome these disadvantages, the certainty factor was developed, and it performs well in several areas such as diagnostics and medicine [11]. The value of the certainty factor ranges from -1 (very uncertain) to +1 (very certain) through zero (neutral). The certainty factor CF(H, E) for hypothesis H influenced by evidence E is obtained from MB (measure of belief) and MD (measure of disbelief) using equation (1):
CF(H, E) = MB(H, E) − MD(H, E)    (1)
MB(H, E) is the measure of increased belief in hypothesis H influenced by evidence E. p(H) and 1 − p(H) are the probabilities of the hypothesis being true or false, respectively. The evidence E reduces the uncertainty that exists in its absence; in other words, E increases p(H) and decreases 1 − p(H). If the evidence is very weak, then p(H|E) − p(H) is almost zero and the uncertainty remains about the same. On the other hand, if the evidence E is very strong, p(H|E) − p(H) approaches 1 − p(H), MB approaches 1, and no uncertainty is left. The max function keeps the MB value positive (between 0 and 1).
MB(H, E) = 1 if p(H) = 1; otherwise MB(H, E) = (max(p(H|E), p(H)) − p(H)) / (1 − p(H))    (2)
MD(H, E) is the measure of increased disbelief in hypothesis H influenced by evidence E. If the evidence strongly disconfirms H, then p(H|E) is almost zero, p(H) − p(H|E) is almost p(H), and MD approaches 1. On the other hand, if the evidence supports H, p(H) − min(p(H|E), p(H)) equals 0 and MD is 0. The min function keeps the MD value positive.
MD(H, E) = 1 if p(H) = 0; otherwise MD(H, E) = (p(H) − min(p(H|E), p(H))) / p(H)    (3)
The prior probabilities p(H) are assigned according to Table 2. Since we consider all 33 activities to be equally probable, we assign the same probability (1/33) to each activity. The conditional probability of H given evidence E, p(H|E), is also calculated from Table 2 by counting the activities consistent with each item of evidence. For the time evidence we used the duration of each activity. To find reasonable durations, we used a real-world dataset provided by the University of Amsterdam [20], which records the activities of daily living performed by a man living in a three-bedroom apartment over 28 days [19]. Where no dataset was available for an activity, we assigned a duration based on common sense. After finding every conditional probability, we combined them according to the addition rule of probability. Table 3 shows the example of the sleeping activity; the probabilities of the other activities are calculated in the same way.
Table 3. The probabilities of the Sleeping activity for each evidence
| Evidence (E) | p(Sleep and E) | p(E) | p(Sleep|E) | Sum of p(Sleep|E) |
|---|---|---|---|---|
| tool | 1/33 | 2/33 | 0.50 | 0.50 |
| motion | 1/33 | 2/33 | 0.50 | 0.75 |
| object | 2/33 | 5/33 | 0.40 | 0.85 |
| order | 1/33 | 0.25 | 0.12 | 0.87 |
| subject | 1/33 | 1 | 0.03 | 0.87 |
| time | 11/33 | 0.5 | 0.68 | 0.96 |
| location | 1/33 | 10/33 | 0.10 | 0.96 |
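The cumulative column of Table 3 follows from combining the conditional probabilities with the addition rule, P(A or B) = P(A) + P(B) − P(A)P(B). The short check below recomputes that column from the p(Sleep and E) and p(E) values in the table, using exact fractions; the printed values match the table after rounding.

```python
from fractions import Fraction as F

# (evidence, p(Sleep and E), p(E)) taken from Table 3
rows = [
    ("tool",     F(1, 33),  F(2, 33)),
    ("motion",   F(1, 33),  F(2, 33)),
    ("object",   F(2, 33),  F(5, 33)),
    ("order",    F(1, 33),  F(1, 4)),
    ("subject",  F(1, 33),  F(1)),
    ("time",     F(11, 33), F(1, 2)),
    ("location", F(1, 33),  F(10, 33)),
]

running, sums = F(0), []
for _, p_joint, p_e in rows:
    p_cond = p_joint / p_e               # p(Sleep | E)
    running += p_cond * (1 - running)    # addition rule of probability
    sums.append(round(float(running), 2))

print(sums)   # [0.5, 0.75, 0.85, 0.87, 0.87, 0.96, 0.96]
```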
Using the estimated probabilities, we computed the certainty factor of each activity, shown in Fig. 6. The generic activity framework shows higher certainty than activity theory for every activity. The main reason is that, unlike activity theory, which considers artifacts as tools only, the generic activity framework accounts for eight components, including both tools and objects. For instance, many activities are related to a bed; if the AR system recognizes sleeping whenever it detects the bed, the certainty of the activity is low, because there is a high probability that some activity other than sleeping is being performed.
Fig. 6. Certainty factor of activities. For every activity, generic activity framework shows higher certainty factor value.
5 Conclusion
Despite the many achievements and much progress in activity recognition research, accurate recognition remains a very challenging problem due to the complexity and diversity of human activities. To address some of these challenges, we propose a generic activity framework, which is a refinement of classical activity theory. The generic activity framework allows researchers to create an activity model for complex and diverse human activities: it defines eight components of human activity and, based on these components, creates a hierarchical activity structure. A major advantage of the proposed approach is that it represents real-world activities better by using the eight components and by distinguishing tools from objects; this reduces false recognition of activities caused by misinterpretation of object use. Another important advantage is that it can classify activities according to context. Together, these allow a better representation of human activity.
References 1. Helal, S., Kim, E.J., Hossain, S.: Scalable Approaches to Activity Recognition Research. In: 8th International Conference Pervasive Workshop (2010)
2. Helal, S., Cook, D., Schmalz, M.: Smart Home-based Health Platform for Behavioral Monitoring and Alteration of Diabetes Patients. Journal of Diabetes Science and Technology 3(1) (January 2009) 3. Helal, A., King, J., Zabadani, H., Kaddourah, Y.: The Gator Tech Smart House: An Assistive Environment for Successful Aging. In: Hagrass, H. (ed.) Advanced Intelligent Environments. Springer, Heidelberg (2008) 4. Mann, W., Helal, S.: Smart Technology: A Bright Future for Independent Living. The Society for Certified Senior Advisors Journal 21, 15–20 (2003) 5. Kim, E.J., Helal, S., Cook, D.: Human Activity Recognition and Pattern Discovery. IEEE Pervasive Computing 9(1), 48–52 (2010) 6. Kim, E.J., Helal, S., Lee, J., Hossain, S.: Making an activity dataset. In: submitted to 7th International Conference on Ubiquitous Intelligence and Computing (2010) 7. Kuutti, K.: Activity theory as a potential framework for human-computer interaction research. In: Nardi, B.A. (ed.) Context and Consciousness: Activity Theory and Human-Computer Interaction. MIT Press, Cambridge (1996) 8. Davydov, V.V., Zinchenko, V.P., Talyzina, N.F.: The Problem of Activity. In: Leontjev, A.N. (ed.) Soviet Psychology 21(4), 31–42 (1983) 9. Rozan, A., Zaidi, M., Yoshiki, M.: The Presence of Beneficial Knowledge in Web Forum: Analysis by Kipling's Framework. In: Knowledge Management International Conference & Exhibition (KMICE 2006), Kuala Lumpur, Malaysia (2006) 10. Roventa, E., Spircu, T.: Studies in Fuzziness and Soft Computing, pp. 153–160. Springer, Heidelberg (2007) 11. Irwin, J.D.: The Industrial Electronics Handbook, p. 829. IEEE Press, Los Alamitos (1997) 12. Kusrin, K.: Question Quantification to Obtain User Certainty Factor in Expert System Application for Disease Diagnosis. In: Proceedings of the International Conference on Electrical Engineering and Informatics, pp. 765–768 (2007) 13.
Philipose, M., Fishkin, K.P., Perkowitz, M., Patterson, D.J., Hahnel, D., Fox, D., Kautz, H.: Inferring Activities from Interactions with Objects. IEEE Pervasive Computing 3(4), 50–57 (2004) 14. Surie, D., Pederson, T., Lagriffoul, F., Janlert, L.-E., Sjölie, D.: Activity Recognition Using an Egocentric Perspective of Everyday Objects. In: Indulska, J., Ma, J., Yang, L.T., Ungerer, T., Cao, J. (eds.) UIC 2007. LNCS, vol. 4611, pp. 246–257. Springer, Heidelberg (2007) 15. Lefrere, P.: Activity-based scenarios for and approaches to ubiquitous e-Learning. Personal and Ubiquitous Computing 13(3), 219–227 (2009) 16. Pentney, W., et al.: Sensor-Based Understanding of Daily Life via Large-Scale Use of Common Sense. In: Proceedings of AAAI 2006, Boston, MA, USA (2006) 17. Wang, S., Pentney, W., Popescu, A.-M., Choudhury, T., Philipose, M.: Common sense based joint training of human activity recognizers. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (2007) 18. Intille, S.S., Larson, K., Tapia, E.M., Beaudin, J.S., Kaushik, P., Nawyn, J., Rockinson, R.: Using a live-in laboratory for ubiquitous computing research. In: Fishkin, K.P., Schiele, B., Nixon, P., Quigley, A. (eds.) PERVASIVE 2006. LNCS, vol. 3968, pp. 349–365. Springer, Heidelberg (2006) 19. van Kasteren, T., Noulas, A., Englebienne, G., Krose, B.: Accurate Activity Recognition in a Home Setting. In: Proceedings of the Tenth International Conference on Ubiquitous Computing (Ubicomp), Korea, pp. 1–9 (2008) 20. University of Amsterdam Activity Dataset, http://www.science.uva.nl/~tlmkaste/
Deriving Relationships between Physiological Change and Activities of Daily Living Using Wearable Sensors

Shuai Zhang 1, Leo Galway 2, Sally McClean 1, Bryan Scotney 1, Dewar Finlay 2, and Chris D. Nugent 2

1 School of Computing and Information Engineering, University of Ulster, Coleraine campus, Cromore Road, Coleraine, Co. Londonderry, BT52 1SA, Northern Ireland
2 School of Computing and Mathematics, University of Ulster, Jordanstown campus, Shore Road, Newtownabbey, Co. Antrim, BT37 0QB, Northern Ireland
{s.zhang,l.galway,si.mcclean,bw.scotney,d.finlay,cd.nugent}@ulster.ac.uk
Abstract. The increased prevalence of chronic disease in elderly people is placing requirements on new approaches to support efficient health status monitoring and reporting. Advances in sensor technologies have provided an opportunity to perform continuous point-of-care physiological and activity-related measurement and data capture. Context-aware physiological pattern analysis with regard to activity performance has great potential for health monitoring in addition to the detection of abnormal lifestyle patterns. In this paper, the successful capture of the relationships between physiological and activity profile information is presented. Experiments have been designed to collect ECG data during the completion of five predefined everyday activities using wearable wireless sensors. The impact of these activities on heart rate has been captured through the analysis of changes in heart rate patterns. This has been achieved using CUSUM, with change points corresponding to the transitions between activities. From this initial analysis, a future mechanism for context-aware, sensor-based health status monitoring is proposed.

Keywords: Wireless Sensors, Physiological Profile, Activities of Daily Living, Health Status, Wellbeing.
G. Parr and P. Morrow (Eds.): S-Cube 2010, LNICST 57, pp. 235–250, 2011.
© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2011

1 Introduction

Over the next 40 years, the worldwide demographic for the elderly population is projected to increase significantly, with the number of people aged over 60 rising from approximately 700 million in the year 2009 to 2 billion by the year 2050 [1]. Due to the prevalent onset and advancement of chronic disease with age, the overall number of people suffering from some form of chronic condition will potentially increase in correspondence with the estimated growth in the size of the elderly population. For a large number of elderly sufferers, the progression of a chronic condition can make it even more difficult to live independently in terms of maintaining and managing health, wellbeing and lifestyle. This, in turn, places pressure on the resources of health and social
S. Zhang et al.
care services, as the traditional reactive model of healthcare undertakes the provision of support for monitoring and management of chronic diseases [2, 3]. Therefore, in order for healthcare services to retain the ability to effectively monitor and manage the health of an ever increasing, ageing population, a more sustainable form of care provision is required. Subsequently, the challenge for a pro-active healthcare model is to provide an improved quality of service at a lower cost, in terms of both financial resources and the workload required from healthcare professionals and members of the patient’s family. Additionally, support should also be provided for elderly patients and their families that facilitates independent living, thus permitting chronic disease monitoring and management within a familiar, comfortable environment [4]. Such approaches also have the potential to extend the period of time living within the home environment. Connected Health is an overarching concept that utilizes Assistive Technologies in order to provide remote healthcare and wellness solutions for the purpose of maximising healthcare resources, patient self-management and patient engagement with healthcare professionals [3]. Characteristically, Point of Care devices are employed to acquire and monitor a patient’s physiological information. Such devices are typically deployed within the patient’s living environment and provide both local and remote wireless communications, thereby facilitating remote monitoring and management of the patient’s vital sign information within the convenience of the home environment. By supporting timely access to a patient’s physiological record in addition to information relating to how they have interacted within their environment, a more pro-active, efficient and cost effective approach to patient assessment and chronic disease management may be realized. 
Moreover, immediate access and analysis of the current health status of a patient also makes it potentially possible to detect and act upon the early stage symptoms of chronic conditions [5], [6]. The aim of this paper is to capture and investigate the underlying relationships between an individual’s physiological profile and their activity profile. Data have been acquired from a number of healthy participants using wireless sensor technologies during a predefined set of daily activities. With the future goal of providing context aware monitoring of health status based on information derived from wearable sensors, an indication of an individual’s health and wellbeing may be inferred from the relationships between the health status and performance observed when a selection of Activities of Daily Living (ADLs) are carried out [5], [6], [7], [8], [9]. It is anticipated that the discovery of such relationships will provide valuable information that may be further utilized to carry out interventions in an individual’s lifestyle in order to maintain or improve overall health status, e.g. the elimination of potentially dangerous activities for cardiovascular conditions based on key physiological patterns observed. In addition, the physiological information may also be used to assist in the inference of ADLs, thus providing support for the completion of these activities. By utilizing healthy participants within the trial study, a baseline set of relationships has been established, which may subsequently be used to inform further research, and provide an initial contribution to the goal of context aware health monitoring and analysis. This paper is organized as follows: Section 2 provides a brief synopsis of related research, followed by a description of the methodology employed in Section 3, including an overview of the wireless sensor technologies, predefined activity scenarios, pre-processing and data analysis techniques used. 
Results are subsequently presented and discussed in Section 4. The paper then
concludes in Section 5 by proposing future research that may be conducted in order to continue and extend the current work.
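As a rough illustration of the CUSUM-based change detection mentioned above, a tabular CUSUM can flag the sample at which the heart-rate mean shifts, e.g. at a transition between activities. The reference mean, slack value k, threshold h, and heart-rate samples below are illustrative assumptions, not parameters or data from this study.

```python
def first_change(samples, mean, k=2.0, h=8.0):
    """Two-sided tabular CUSUM; return the index of the first alarm, or None."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(samples):
        s_hi = max(0.0, s_hi + (x - mean - k))   # accumulates upward shifts
        s_lo = max(0.0, s_lo + (mean - x - k))   # accumulates downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

resting = [70, 72, 69, 71, 70, 68]   # simulated heart rate at rest (bpm)
walking = [88, 90, 92, 89, 91, 93]   # simulated heart rate while walking

print(first_change(resting + walking, mean=70.0))   # 6: the first walking sample
```

The slack k absorbs small fluctuations around the reference mean, so only a sustained or large shift (here, the rest-to-walking transition) drives the statistic over the threshold.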
2 Related Research

Throughout the research literature, physiological profile data have been utilized exclusively [10], [11], or in conjunction with activity profile data [7], [8], and environmental information [6], [9], [12], for a variety of Connected Health purposes. These include the automated observation and analysis of physiological function and health and wellbeing status [6], [12], the analysis of relationships between physiological and activity profiles [7], [9], and the classification and prediction of vital signs, ADLs, and health and wellbeing status [6], [7], [8], [9], [10], [11]. In [10], Pawar et al. conducted research into the use of physiological profile data for the classification of activities. A single-lead wearable ECG sensor device, configured for Lead II, was employed in order to obtain ECG signal data from a number of healthy participants over a range of ADLs, including ascending and descending stairs, sitting in a chair, and upper-limb motion. Motion artifacts inherent in the ECG signal were analyzed using a supervised learning approach based on Principal Component Analysis (PCA) in order to classify the ADLs. Subsequently, the authors demonstrated the potential use of physiological data for activity classification and suggested the use of class-specific PCA-filtering for the attenuation of motion artifacts within ECG signals [10]. Jakkula et al. also used physiological profile data exclusively for the classification and prediction of health trends in the investigations reported in [11]. Employing Support Vector Machines in conjunction with vital sign data, heartbeat and blood pressure were individually classified using a binary taxonomy indicating whether a vital sign was likely to increase or decrease. The authors suggested that the use of physiological information with such a technique may provide a useful measure of deterioration in health status [11].
Extending the research in [8], Jakkula discussed the use of both physiological and activity profile data for the classification and prediction of health status. A classifier was developed using the K-Nearest Neighbor (K-NN) algorithm, trained with input vectors comprising physiological data, including heartbeat, blood pressure, weight and temperature, and activity data, including the type of ADLs performed. By classifying labeled training examples as either healthy or unhealthy, the resulting model can be used to predict health status from unseen input vectors [8]. In [7], Yuchi and Jo investigated the relationship between an individual's physiological and activity profiles in order to perform heart rate prediction. Using a single-channel ECG monitor with an integrated 3-axis accelerometer, heart rate and accelerometer data were captured in real time from a single healthy participant over a range of ADLs for a period of 90 minutes. The relationship between the physiological and activity profiles was modeled with a feed-forward Neural Network trained using examples from the captured, preprocessed data, comprising inputs corresponding to the heart rate and acceleration for the current time step and an output corresponding to the heart rate for the next time step. The authors found that the predicted heart rate was close to the actual heart rate and suggested that modeling the relationship between an individual's physiological and activity profile in such a manner may be potentially useful as an indicator of cardiac problems [7].
238
S. Zhang et al.
Both physiological and activity profile data were also used in conjunction with environmental information in the research conducted by Ogawa et al. In [12], investigations were carried out into the continuous and automated monitoring of physiological data within the home in order to provide support for the long-term management of physiological function. By employing sensor devices within areas of the home associated with daily activities, including an ECG monitor in the bathroom and a temperature sensor in the bedroom, information pertaining to the participant’s physiology and activity levels was monitored along with environmental information when daily activities such as bathing and sleeping were performed. Although the authors did not conduct further analysis on the derived data, the research suggests that the focus of future efforts would be the selection and development of algorithms for the analysis and interpretation of such profile data [12]. The relationship between an individual’s health status and ability to perform ADLs, based on physiological, activity and environmental information, was investigated in research conducted by Celler et al. In [6], it was suggested that changes in everyday activities, such as bathing and sleeping patterns, could potentially indicate changes in health status. By monitoring information related to both the activities being performed and the state of the environment, in conjunction with an individual’s physiological profile, inferences made on the status of health and wellbeing may be improved. Such measures were used to automatically monitor and assess the health status of elderly people for the prediction of health and wellbeing status, and to provide further insight into the behaviours observed during ADLs [6]. The relationship between an individual’s lifestyle and health and emotional wellbeing was also investigated by Jakkula et al. 
based on the use of physiological data, together with activity and environmental information [9]. The lifestyle of a single participant was monitored over a period of 40 days and comprised physiological profile information, such as heartbeat, activity profile information, such as upper and lower-body movement, and environmental information, including distances travelled and room usage statistics. By explicitly labeling the captured data in terms of a discrete set of classes associated with levels of wellbeing, a model of the relationship was developed using the K-NN algorithm in order to perform classification and prediction of the future status of health and wellbeing of the participant [9]. Although a variety of approaches to the collection and use of ADL-based information have been reported within the literature, this brief review reveals that the predominant application of such data is in the classification and prediction of health status. In the majority of cases, both physiological and activity profile information has been utilized during classification, and a relatively straightforward taxonomy has been employed. Where physiological information has been used exclusively for the classification of ADLs, research efforts have focused on the use of implicit features derived from the sensor data. Correspondingly, within the preliminary research presented herein, the heart rate has been obtained from sensor data; however, such physiological information has been utilized alongside activity profile data in order to examine the effect on heart rate caused by the transitions that occur during various commonplace activities. Before classification and prediction of such activities can be performed based on the explicit use of physiological information, an improved understanding of the relationship between the physiological and activity profiles is required.
Deriving Relationships between Physiological Change and Activities
239
3 Methodology

In order to investigate the relationship between an individual's physiological and activity profiles over a number of tasks, a methodology for the real-time collection and subsequent analysis of activity-related data was initially specified. The following subsections outline the methodology adopted, including an overview of the wireless sensor devices used during data capture, a description of the activity scenarios performed by participants and details of the statistical techniques employed for data pre-processing and analysis.

3.1 Shimmer Wireless Sensor Platform

To facilitate data acquisition during the activity scenarios, the Shimmer (Sensing Health with Intelligence, Modularity, Mobility and Experimental Reusability) wireless sensor platform developed by Shimmer Research was utilized. Employing two Shimmer devices, upper-body and lower-body motion during the activities was captured using the devices' integrated 3-axis MEMS accelerometers, with a sample rate of 50 Hz and accelerometer sensitivity specified within the range [-1.5, 1.5] g. Physiological data were simultaneously acquired using the 3-lead electrocardiograph expansion module in conjunction with the accelerometer. The expansion module employs three recording electrodes to record bipolar leads in the Einthoven limb lead configuration. Due to restrictive cable length, the recording electrodes were placed in the left pectoral region to record variants of Lead II and Lead III. Both channels of ECG data were sampled simultaneously at 100 Hz [13]. In order to effectively detect changes in anterior-posterior and lateral movement, the upper-body device was placed in the middle of the participant's left pectoral region, while the lower-body device was placed at the mid-point between the thigh and knee on the anterior of the participant's right leg [14]. Accelerometer data from both devices, together with the ECG signal from the upper-body device, were transmitted wirelessly to a receiving computer via the IEEE 802.15.1 Bluetooth communications protocol.
A custom Windows-based application, developed using the BioMOBIUS application development platform [15], was used for the real-time reception and storage of the transmitted data, in addition to the overall configuration of the sensor devices.

3.2 Activity Scenarios

ADLs consist of a wide range of possible actions, and interactions with objects, within a variety of environments [16]. For the investigations presented in this paper, two high-level activity scenarios were defined, which provide a rudimentary simulation of some basic activities a person may perform within the home environment. The first activity scenario, entitled Arrive Home, comprises the subset of activities {Ascend Stairs, Walk, Sit Down}. Likewise, the second activity scenario, entitled Leave Home, is composed of the subset of activities {Stand Up, Walk, Descend Stairs}. Although the scenarios have been specified within a home
environment, the activities involved are also commonplace within a hospital environment or workplace. The entire subset of activities for each high-level activity scenario must be performed in order for the high-level activity to be considered complete [2]. Accordingly, for Scenario I the participant must ascend three flights of stairs, walk along a corridor for a given distance and then sit down, moving from a standing straight position to a sitting straight position, to accomplish the high-level activity Arrive Home. Similarly, for Scenario II the participant must stand up, moving from a sitting straight position to a standing straight position, return along the corridor and finally descend the three flights of stairs to complete the high-level activity Leave Home. For the purposes of generating data for use in the preliminary investigations, a number of healthy participants were asked to repeatedly perform both activity scenarios using the sensor configuration outlined in Section 3.1. As each scenario comprises a number of activities, data from both accelerometers and the ECG signal were continuously captured before, during and after the participant's performance of each activity. In order to obtain baseline physiological and motion values for each activity, data capture started approximately 30 seconds prior to commencement of the activity, with the participant adopting a standing straight or sitting straight posture, depending on the activity. Similarly, once the activity had been performed, data capture continued for approximately 30 seconds, with the participant remaining in a standing straight or sitting straight posture, in order to record any ECG recovery that may have occurred. A single occurrence of either Scenario I or Scenario II was considered complete after the participant performed the associated set of activities.
During the data capture phase of the preliminary investigations, two participants performed both activity scenarios five times each, resulting in physiological and activity information being captured from a total of 60 activities.

3.3 Pre-Processing Techniques

Prior to data analysis, pre-processing was performed on the raw data values acquired from the accelerometer and ECG sensors in order to reduce the noise inherent in the devices, along with the noise generated by motion artefacts. For the accelerometer data, a Finite Impulse Response (FIR) filter was employed as a low-pass filter to produce a set of acceleration values for each activity. The FIR filter was applied independently to the raw data values obtained from each axis of the accelerometer and used a Bartlett window to attenuate frequencies above 20 Hz. The resulting sets of values were subsequently normalized within the range [-1, 1] in order to give a direct mapping from individual accelerometer values to gravitational acceleration values within the range [-1, 1] g. The values for gravitational acceleration were used to represent the activity profile of the participant for a given activity, thereby providing a contextual reference during analysis of the ECG signal. For the ECG signal values, a Fast Fourier Transform was applied during pre-processing to attenuate low-frequency noise and waveform irregularities within the recorded signal. In order to obtain the heart rate from the ECG signal, automated R-peak detection was performed on the resulting ECG signal, using an adaptive filter with a
threshold of 70% of the maximum level of the signal, and the average heart rate for each activity was calculated from successive R waves after two passes of the filter had been applied.

3.4 CUSUM Data Analysis Technique

In order to reveal the significance of any underlying relationships discovered between the ECG data and the ADLs, analysis of the heart rate values during each activity was performed using the statistical technique of the Cumulative Sum Control Chart (CUSUM). As a tool for detecting shifts in the mean of a process, the application of CUSUM analysis to the heart rate values permits the discovery of the changing patterns normally associated with performance of an activity [17]. Such patterns can then be used to monitor and identify any abnormal cases that may arise during the activity. The CUSUM technique [18], as applied to the set of heart rate values for a given activity, is given in Figure 1. As may be observed, the analysis comprises a two-step process. In the first step, the cumulative sum of the difference between the value of each data point and the process mean is calculated over time, where the process mean is an estimate of the control mean. In the case of the set of heart rate values, the process mean is the average heart rate value during the activity. The difference, S_diff, between the maximum and minimum CUSUM values, S_max and S_min respectively, is then calculated and utilized within a bootstrapping process in order to assess the significance of S_diff, thus identifying the change in the trend associated with the heart rate values. During the bootstrapping process, the data points are randomly re-ordered, the corresponding difference in CUSUM values determined and compared with the difference from the original data. The confidence for the change in trend is then given as the percentage of bootstrapping cases that have smaller values for the CUSUM difference than the difference from the original data points.
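As a concrete illustration of the pre-processing described in Section 3.3, the following sketch applies a Bartlett-windowed FIR low-pass filter to a synthetic accelerometer axis, normalizes the result into [-1, 1], and estimates heart rate from a synthetic ECG trace via threshold-based R-peak detection. The windowed-sinc design, filter length and synthetic signals are assumptions for illustration; the study's adaptive two-pass filter is not reproduced here.

```python
import numpy as np

# Sketch of the pre-processing pipeline: Bartlett-windowed FIR low-pass
# for the accelerometer axes, normalisation to [-1, 1], and a simple
# threshold-based R-peak detector (70% of the signal maximum).

def bartlett_lowpass(x, cutoff_hz, fs, numtaps=51):
    """Windowed-sinc FIR low-pass using a Bartlett window (assumed design)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * n) * np.bartlett(numtaps)
    h /= h.sum()                          # unity gain at DC
    return np.convolve(x, h, mode="same")

def normalise(x):
    """Map filtered samples into [-1, 1] for a direct mapping to g."""
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

def heart_rate(ecg, fs, thresh_frac=0.7):
    """Mean heart rate (bpm) from R peaks above 70% of the signal maximum."""
    above = ecg >= thresh_frac * ecg.max()
    onsets = np.flatnonzero(above & ~np.roll(above, 1))  # rising edges
    rr = np.diff(onsets) / fs             # R-R intervals in seconds
    return 60.0 / rr.mean()

# Demo on synthetic data: 50 Hz accelerometer, 100 Hz ECG at 72 bpm.
fs_acc, fs_ecg = 50, 100
t = np.arange(0, 10, 1 / fs_acc)
acc = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 24 * t)
smooth = normalise(bartlett_lowpass(acc, cutoff_hz=20, fs=fs_acc))

te = np.arange(0, 10, 1 / fs_ecg)
ecg = np.exp(-(((te % (60 / 72)) * 40) ** 2))  # one narrow peak per beat
print(f"estimated heart rate: {heart_rate(ecg, fs_ecg):.1f} bpm")
```

On the synthetic ECG, the detector recovers a rate close to the 72 bpm used to generate the trace.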
Once a change in the trend is evaluated as significant, a change point can be identified in order to split the set of data points into two segments with individual characteristics. If the confidence value is greater than a pre-specified threshold, then Step 2 of the CUSUM algorithm is performed in order to identify the point of significant change within the dataset. The change point is the point in the dataset at which the combined variance of the two separate segments is minimised. For the investigations presented herein, the bootstrapping sample size, L, was set to 1000 to cover a sizeable distribution of S_diff during the re-ordering of the data. Additionally, a value of 1 was utilized for the significance in the change of trend. During the analysis conducted for the investigations, once a change in the pattern of the heart rate had been identified, information from the activity profile of the participant, contained within the acceleration data, was utilized in order to provide further contextual information regarding the possible reason for the change. Correspondingly, the points of transition within the activity profiles were compared with the change points determined for the associated heart rate in order to reveal the underlying relationship between the physiological and activity profiles for an activity.
Step 1: Calculate the difference between the maximum and minimum of the cumulative sum.
Calculate the process mean:

    x̄ = (1/N) Σ_{i=1}^{N} x_i

where N is the number of data points (i.e. heart rate values). Calculate the cumulative sums

    S_0 = 0,  S_i = S_{i−1} + (x_i − x̄),  i = 1, …, N

and

    S_diff = S_max − S_min

Use bootstrapping to estimate the significance of the change in trend:

    Repeat (i = 1, …, L):
        Randomly re-order the data points (bootstrap sample)
        Calculate the CUSUM difference for the bootstrap sample:
            S_diff^i = S_max^i − S_min^i
    End (i = L)

Let M be the number of bootstraps with index j such that S_diff^j < S_diff. The trend confidence value is Conf = M/L. If the confidence value is greater than 95%, proceed to Step 2; otherwise the change in trend is not significant.

Step 2: Repeat for each data point m (2 ≤ m ≤ N − 1), calculating the mean square error:

    MSE(m) = Σ_{i=1}^{m−1} (x_i − x̄_1)² + Σ_{j=m}^{N} (x_j − x̄_2)²

where

    x̄_1 = (1/(m−1)) Σ_{i=1}^{m−1} x_i   and   x̄_2 = (1/(N−m+1)) Σ_{i=m}^{N} x_i

End (m = N − 1). The change point is the data point that minimises the MSE value:

    β = argmin_k MSE(k)
Fig. 1. CUSUM: Two-Step Process Applied to Heart Rate Values from a Single Activity
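The two-step process of Figure 1 translates directly into code. The following sketch implements the CUSUM difference, the bootstrap significance estimate and the MSE-based change point, demonstrated on synthetic heart-rate values (the step location and noise levels are illustrative assumptions, not the study's data).

```python
import numpy as np

# Direct implementation of the two-step CUSUM procedure of Figure 1:
# Step 1 computes S_diff and bootstraps its significance; Step 2 locates
# the change point by minimising the two-segment mean square error.

def cusum_change_point(x, n_boot=1000, conf_threshold=0.95, seed=0):
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)

    def s_diff(v):
        s = np.cumsum(v - v.mean())       # S_i = S_{i-1} + (x_i - xbar)
        return s.max() - s.min()          # S_diff = S_max - S_min

    d = s_diff(x)
    # Bootstrap: fraction of random re-orderings with a smaller S_diff.
    conf = np.mean([s_diff(rng.permutation(x)) < d for _ in range(n_boot)])
    if conf <= conf_threshold:
        return None, conf                 # change in trend not significant

    n = len(x)
    mse = np.full(n, np.inf)
    for m in range(2, n):                 # MSE(m), segments x_1..x_{m-1}, x_m..x_N
        x1, x2 = x[: m - 1], x[m - 1 :]
        mse[m] = ((x1 - x1.mean()) ** 2).sum() + ((x2 - x2.mean()) ** 2).sum()
    return int(np.argmin(mse)), conf      # returns the 1-based m of Figure 1

# Synthetic example: resting heart rate stepping up as an activity begins
# (true boundary: first 40 points low, last 40 high).
rng = np.random.default_rng(1)
hr = np.concatenate([70 + rng.normal(0, 2, 40), 95 + rng.normal(0, 3, 40)])
cp, conf = cusum_change_point(hr)
print(f"change point at index {cp}, confidence {conf:.3f}")
```

With a step this pronounced, every bootstrap reordering yields a smaller CUSUM difference, so the confidence is essentially 1 and the change point falls at the boundary between the two regimes.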
4 Results and Discussion

For the investigations, physiological and activity profile data were captured from two healthy participants over five repetitions of both activity scenarios, where each activity scenario consisted of three activities. Although data from a total of 60 activities were recorded, only 83.33% of the datasets contained complete information, due to issues with the Bluetooth communications between the wearable sensor devices and the receiving computer. Initially, detailed results from one participant for a single performance of one activity will be given, along with the corresponding analysis. This will be followed by selected results and analysis from a single participant on the remaining set of activities performed. Figure 2 illustrates the set of values for the heart rate, as determined using the R-peak detection method previously described in Section 3.3, alongside the corresponding set of values obtained for the Mean Square Error (MSE) during CUSUM analysis. The activity profile information captured for the activity, in terms of the sets of normalized acceleration values from both the upper-body and lower-body accelerometers, is subsequently presented in Figure 3.
Fig. 2. Activity Sit Down: Heart Rate & Mean Square Error (MSE). CUSUM result is given in both graphs (dashed vertical line).
From the CUSUM analysis, it has been discovered that there is a significant change in the trend associated with the heart rate values during the activity. The corresponding change point of the trend is identified during CUSUM analysis by minimizing the total individual variances for the data points within the two segments separated by the change point. In Figure 2 the position of the change point is illustrated with a dashed vertical line
at 35.04 seconds in both the graph depicting the set of heart rate values and the graph showing the corresponding set of values for the MSE. As can be seen in the graph of MSE values, the change point corresponds to the data point for which the smallest MSE value has been determined. Based on the CUSUM analysis, the data points before the change point represent a pattern of values that are distinct from the pattern formed by the data points after the change point. Consequently, the range of values for the heart rate before the change point is [72.28, 91.70], whereas the range of heart rate values after the change point is [56.89, 71.44]. In order to determine the potential cause of the change in heart rate pattern, the activity profile obtained during the activity over the same period may be examined.
Fig. 3. Activity Sit Down: Upper & Lower-Body Acceleration Values. Anterior-posterior acceleration (solid line) and vertical acceleration (dotted line) are shown on both graphs. CUSUM result is also given in both graphs (dashed vertical line).
Figure 3 provides an illustration of the corresponding activity profile, with the set of acceleration values from the upper-body and lower-body accelerometers being displayed in the top and bottom graphs respectively. Although a 3-axis accelerometer was used during the activities, only the two most dominant axes are shown in order to increase the clarity of the figures. In Figure 3 it can be seen that both of the accelerometers show a consistent change in values during the transition from the initial standing position to the seated position, prior to the change in the pattern of the heart rate. For the upper-body accelerometer, the chest is recorded as moving briefly from an anterior position to a posterior position, then back to a more anterior position as the participant moves through the activity of sitting down. Simultaneously, there is
a rapid increase in vertical acceleration of the chest, which is followed by a quick decrease as the participant settles into a seated position. In a similar manner, the lower-body acceleration values show a rapid transition from an anterior position to a posterior position, coupled with a rapid increase in vertical acceleration, as the sensor device rotates by 90° around the lateral axis during the activity. Although the activity begins before the change point, at approximately 30.53 seconds, a comparison with the graph of the heart rate in Figure 2 shows a brief fluctuation in heart rate around the time of the activity. The change in the pattern of the heart rate therefore reflects the fact that the participant has just performed the activity. Results from the set of values for the heart rate, together with the corresponding set of values from the upper-body accelerometer, obtained during the Stand Up activity, are presented in Figure 4.
Fig. 4. Activity Stand Up: Heart Rate & Upper-Body Acceleration. Anterior-posterior acceleration (solid line) and vertical acceleration (dotted line) are shown on the Upper-body Acceleration graph. CUSUM result is given on both graphs (dashed vertical line).
In Figure 4 it can be seen that the activity commences at approximately 30.49 seconds, immediately prior to the change in heart rate pattern at 31.59 seconds. Similar to the transition from standing to sitting, previously shown in Figure 3, the acceleration values indicate that the chest initially moves in the posterior direction, followed quickly by a move in the anterior direction as the participant moves from a sitting to a standing position. Likewise, there is an initial increase in vertical acceleration, with a subsequent decrease, before the acceleration during the activity. Although a comparison of the upper-body acceleration values from both the Sit Down and Stand Up activities shows that they have similar patterns, the corresponding values before and after the activities are significantly different. Moreover, an overall
increase in the value of the heart rate for the Stand Up activity can be seen in Figure 4, with values for the heart rate before and after the change point occurring within the ranges [59.65, 72.28] and [73.14, 93.09] respectively. By contrast, the heart rate illustrated in Figure 2 for the Sit Down activity shows an overall decrease in value. Figure 5 illustrates the results for the heart rate and corresponding lower-body acceleration for the activity Walk. In the lower-body acceleration graph in Figure 5, the regular, repeated movement of the leg that occurs during the Walk activity is clearly depicted by the acceleration values. As with the previous results from the Sit Down and Stand Up activities, the activity begins at approximately 31.56 seconds, immediately prior to the change point at 33.96 seconds. From the graph of the heart rate presented in Figure 5, a general fluctuation in the heart rate may be observed, with an overall rise occurring after the change point as the participant continues to perform the activity. The values for the heart rate before the change point are within the range [66.06, 83.03], whereas after the change point the values are in the range [84.16, 102.40].
Fig. 5. Activity Walk: Heart Rate & Lower-Body Acceleration. Anterior-posterior acceleration (dotted line) is shown on the Lower-body Acceleration graph. CUSUM result is given on both graphs (dashed vertical line).
A similar correlation between the heart rate values and the acceleration values can also be discerned in the results from the activities Ascend Stairs and Descend Stairs, which are presented in Figure 6 and Figure 7 respectively. Similar to the results presented in Figure 5, in both Figure 6 and Figure 7 the regular, repeated movements of the leg may again be seen in the lower-body acceleration graphs, as the participant performed the activities of Ascend Stairs and Descend Stairs respectively. As the
stairwell used during the activities contained 3 flights of stairs with a small mezzanine level in-between each flight, the increased acceleration during the flights can be recognized in both of the acceleration graphs.
Fig. 6. Activity Ascend Stairs: Heart Rate & Lower-Body Acceleration. Vertical acceleration (dotted line) is shown on the Lower-body Acceleration graph. CUSUM result is given on both graphs (dashed vertical line).
In Figure 6 and Figure 7 it may also be observed that the activities start at approximately 30.00 seconds and 32.09 seconds respectively, yet continue after the corresponding changes occur in the heart rate patterns at 37.43 seconds and 35.50 seconds. Comparing the graph of the heart rate in Figure 6 with the equivalent graph in Figure 7, it is readily apparent that considerably more effort is required by the heart when performing the activity Ascend Stairs. Subsequently, the heart rate during Ascend Stairs continues to increase for a short period after the activity is performed before it begins to recover to a resting rate. For the Ascend Stairs activity, as illustrated in Figure 6, the heart rate has a value within the range [63.34, 87.77] before the change point and a value within the range [89.04, 122.88] after the change point. Likewise, during the Descend Stairs activity, shown in Figure 7, before the change point the values for the heart rate occur within the range [65.36, 76.80], whereas after the change point the values for the heart rate occur within the range [77.77, 97.52]. Again, the heart rate graphs in Figure 6 and Figure 7 show distinct patterns of the heart rate before and after the respective change points, thus reflecting the response by the heart to the activities performed.
Fig. 7. Activity Descend Stairs: Heart Rate & Lower-Body Acceleration. Vertical acceleration (dotted line) is shown on the Lower-body Acceleration graph. CUSUM result is given on both graphs (dashed vertical line).
From the investigations, it has been observed that there is a high correlation between the time when an activity takes place and the time when the change point occurs, with the change point occurring shortly after the activity commences. It has thus been shown that a transition to an activity results in a corresponding change in the pattern of the heart rate, which is potentially due to the increased effort required of the heart in order to perform the activity. Within the results, it has also been shown that even the least strenuous activities, such as Sit Down and Stand Up, have an obvious effect on the patterns of the heart rate. Thus the relationship between the physiological and activity profiles provides crucial information that may be further used to develop algorithms that determine whether physiological profile information derived from sensors is abnormal or alarming, given the context from the corresponding activity profile information. Ranges of values for the heart rate during different ADLs can potentially be determined, as different activities have different effects on the change in the pattern of the heart rate. Consequently, threshold values for heart rate alarms for individual activities may be determined, based on derived features of the heart rate, in order to detect abnormalities during ADLs. Given the context of the current activity, it is expected that the values of the heart rate after the activity will occur within a range defined by the associated threshold value. Any heart rate values that are detected outside the threshold for an activity can be reported in order to prompt further investigation by a
clinician. Mechanisms that provide context-aware monitoring of health status, based on physiological information derived from wireless sensor technology, may subsequently be developed further. As previously stated, during the data acquisition phase of this study there was a periodic problem with packet loss in the Bluetooth transmission between the sensor devices and the receiving computer. Potential causes for such packet loss include faulty wireless sensors, interference from other wireless transmission signals and the implementation of the Bluetooth stack on the receiving computer. The problem of packet loss is therefore potentially significant for the practical deployment of wireless sensors for health monitoring and analysis. In order to address this problem, a number of approaches may be adopted, such as the use of a more robust communications protocol, or the use of duplicate sensor devices to facilitate signal validation and fault-tolerant communications. Likewise, adaptive data analysis techniques, which permit effective analysis from incomplete information, may be applied to aid in the resolution of this problem.
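The per-activity threshold scheme suggested above can be sketched as a simple look-up of expected heart-rate ranges. The ranges below are the post-change-point ranges reported in Section 4; the margin parameter and the alert logic are hypothetical illustrations, not part of the study.

```python
# Hypothetical sketch of context-aware heart-rate alarms: expected
# post-activity heart-rate ranges (bpm), taken from the post-change-point
# ranges reported in Section 4, and a check that flags readings outside
# them. The margin and alert format are assumptions for illustration.

EXPECTED_HR = {
    "sit_down":       (56.89, 71.44),
    "stand_up":       (73.14, 93.09),
    "walk":           (84.16, 102.40),
    "ascend_stairs":  (89.04, 122.88),
    "descend_stairs": (77.77, 97.52),
}

def check_heart_rate(activity, hr, margin=5.0):
    """Return None if hr is plausible for the activity, else an alert string."""
    lo, hi = EXPECTED_HR[activity]
    if lo - margin <= hr <= hi + margin:
        return None
    return f"alert: {hr:.1f} bpm outside expected range for {activity}"

print(check_heart_rate("walk", 95.0))    # within the Walk range
print(check_heart_rate("walk", 130.0))   # well outside, would be flagged
```

In a deployed system such ranges would be learned per individual rather than fixed, but the look-up structure would be the same.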
5 Conclusion and Future Work

The relationship between physiological profile and activity profile information, obtained from the performance of ADLs, has considerable potential for use in the inference of health and wellbeing status. Consequently, such relationships may be used by healthcare services in order to provide the facility for elderly patients to age in place. In this paper, the first steps towards this goal have been made through preliminary investigations that verify the ability to capture changes in the patterns associated with heart rate, which occur as a direct result of performing daily activities. Using five distinct activities, the relationship between physiological and activity profile information for each activity has been successfully determined through application of the CUSUM analysis technique. Based on this work and the relationships discovered, the research should initially be extended to incorporate intelligent data analysis techniques for the modeling and classification of the activities based on the physiological profile information. Although participants within this preliminary work performed an exclusive set of predetermined activities, the research should also be extended to permit the continuous monitoring and analysis of the physiological and activity profiles of participants freely conducting activities within a sensor-based, smart environment. Furthermore, the problem of packet loss during data acquisition has a potentially significant impact on the deployment and utilization of wireless sensors for the purpose of health monitoring. Although a number of approaches, from both communications protocol and data analysis points of view, have been suggested to help resolve the problem, these present further challenges: the effective management of network traffic within sensor-based environments, and the development of data analysis methods that operate effectively on incomplete datasets.
Both of these aspects therefore provide additional areas in which to extend the current research.
References

1. United Nations, Department of Economic and Social Affairs, Population Division: World Population Prospects: The 2008 Revision, Volume I: Comprehensive Tables. Technical Report ST/ESA/SER.A/287 (2009)
2. Zhang, S., McClean, S., Scotney, B., Hong, X., Nugent, C., Mulvenna, M.: An intervention mechanism for assistive living in smart homes. J. Ambient Intell. Smart Environ. 2(3), 233–252 (2010)
3. Bogan, D., Spence, J., Donnelly, P.: Connected Health in Ireland: An All-Island Review. Technical Report 1109CH-REP (2010)
4. Hong, X., Nugent, C., Mulvenna, M., McClean, S., Scotney, B., Devlin, S.: Evidential fusion of sensor data for activity recognition in smart homes. Pervasive Mob. Comput. 5, 236–252 (2009)
5. Noury, N., Hadidi, T., Laila, M., Fleury, A., Villemazet, C., Rialle, V., Franco, A.: Level of Activity, Night and Day Alternation, and Well Being Measured in a Smart Hospital Suite. In: IEEE Engineering in Medicine and Biology Society, pp. 3328–3331. IEEE Press, Los Alamitos (2008)
6. Celler, B.G., Earnshaw, W., Ilsar, E.D., Betbeder-Matibet, L., Harris, M.F., Clark, R., Hesketh, T., Lovell, N.H.: Remote monitoring of health status of the elderly at home. A multidisciplinary project on aging at the University of New South Wales. Int. J. Bio-Med. Comput. 40(2), 147–155 (1995)
7. Yuchi, M., Jo, J.: Heart Rate Prediction Based on Physical Activity Using Feedforward Neural Network. In: International Conference on Convergence and Hybrid Information Technology, pp. 344–350. IEEE Computer Society, Los Alamitos (2008)
8. Jakkula, V.: Predictive Data Mining to Learn Health Vitals of a Resident in a Smart Home. In: Proceedings of the 7th IEEE International Conference on Data Mining Workshops, pp. 163–168. IEEE Computer Society, Los Alamitos (2007)
9. Jakkula, V., Youngblood, G., Cook, D.: Identification of Lifestyle Behavior Patterns with Prediction of the Happiness of an Inhabitant in a Smart Home.
In: AAAI Workshop on Computational Aesthetics: Artificial Intelligence Approaches to Beauty and Happiness, pp. 23–29. AAAI Press, Menlo Park (2006)
10. Pawar, T., Chaudhuri, S., Duttagupta, S.: Body Movement Activity Recognition for Ambulatory Cardiac Monitoring. IEEE Trans. Biomed. Eng. 54(5), 874–882 (2007)
11. Jakkula, V., Cook, D., Jain, G.: Prediction Models for a Smart Home Based Health Care System. In: Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops, pp. 761–765. IEEE Computer Society, Los Alamitos (2007)
12. Ogawa, M., Tamura, T., Togawa, T.: Fully Automated Biosignal Acquisition in Daily Routine Through 1 Month. In: Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1947–1950. IEEE Press, Los Alamitos (1998)
13. Shimmer Research: Shimmer Biophysical Expansion User Guide I: ECG, EMG, GSR. User Guide, Shimmer Research (2010)
14. Trost, S.G., McIver, K.L., Pate, R.R.: Conducting Accelerometer-Based Activity Assessments in Field-Based Research. Med. Sci. Sports Exerc. 37(11), 531–543 (2005)
15. Shimmer Research: SHIMMER: Sensing Health with Intelligence, Modularity, Mobility and Experimental Reusability. User Manual, Revision 2.0b, Shimmer Research (2008)
16. Philipose, M., Fishkin, K., Patterson, M., Fox, D., Kautz, H., Hahnel, D.: Inferring activities from interactions with objects. IEEE Pervasive Comput. 3(4), 50–58 (2004)
17. Hinkley, D.: Inference about the change-point from cumulative sum tests. Biometrika 58, 509–522 (1971)
18. Pettitt, A.: A simple cumulative sum type statistic for the change-point problem with zero-one observations. Biometrika 67, 79–83 (1980)