FEATURE INTERACTIONS IN SOFTWARE AND COMMUNICATION SYSTEMS IX
Proceedings of the International Workshop on Feature Interactions previously published by IOS Press:
Feature Interactions in Telecommunications and Software Systems VIII, edited by S. Reiff-Marganiec and M.D. Ryan
Feature Interactions in Telecommunications and Software Systems VII, edited by D. Amyot and L. Logrippo
Feature Interactions in Telecommunications and Software Systems VI, edited by M. Calder and E. Magill
Feature Interactions in Telecommunications and Software Systems V, edited by K. Kimbler and L.G. Bouma
Feature Interactions in Telecommunication Networks IV, edited by P. Dini, R. Boutaba and L. Logrippo
Feature Interactions in Telecommunications III, edited by K.E. Cheng and T. Ohta
Feature Interactions in Telecommunications Systems, edited by L.G. Bouma and H. Velthuijsen
Feature Interactions in Software and Communication Systems IX
Edited by
Lydie du Bousquet Laboratoire d’Informatique de Grenoble (LIG), Université Joseph Fourier, France
and
Jean-Luc Richier Laboratoire d’Informatique de Grenoble (LIG), CNRS, France
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2008 The authors and IOS Press. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher. ISBN 978-1-58603-845-8 Library of Congress Control Number: 2008922182 Publisher IOS Press Nieuwe Hemweg 6B 1013 BG Amsterdam Netherlands fax: +31 20 687 0019 e-mail:
[email protected] Distributor in the UK and Ireland Gazelle Books Services Ltd. White Cross Mills Hightown Lancaster LA1 4XS United Kingdom fax: +44 1524 63232 e-mail:
[email protected]
Distributor in the USA and Canada IOS Press, Inc. 4502 Rachael Manor Drive Fairfax, VA 22032 USA fax: +1 703 323 3668 e-mail:
[email protected]
LEGAL NOTICE The publisher is not responsible for the use which might be made of the following information. PRINTED IN THE NETHERLANDS
Organised and hosted by Laboratoire d’Informatique de Grenoble, France
Sponsored by Institut IMAG
Université Joseph Fourier – Grenoble I
Institut National de Recherche en Informatique et en Automatique (INRIA)
Institut Polytechnique de Grenoble
Centre National de la Recherche Scientifique (CNRS)
Grenoble Alpes Métropole
Ville de Grenoble
Conseil Général de l’Isère
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Preface

These proceedings record the papers presented at the ninth International Conference on Feature Interactions in Software and Communication Systems (ICFI 2007), held in Grenoble, France. This conference builds on the success of the previous conferences in this series. The first edition, known then as the Feature Interaction Workshop (FIW), was held in St. Petersburg, Florida, USA, in 1992. It was then held in Amsterdam, The Netherlands (1994), Kyoto, Japan (1995), Montreal, Canada (1997), Lund, Sweden (1998), Glasgow, UK (2000), Ottawa, Canada (2003), and Leicester, UK (2005). FIW became ICFI in 2005.

The Feature Interaction Workshop was originally created for discussion and reporting on the feature interaction problem in telecommunication systems. In this domain, an interaction occurs when one telecommunications feature or service modifies or subverts the operation of another. Undesired interactions can both lower the quality of service and delay service provisioning. The problem of feature interactions in telecommunications is therefore of great importance, and in the past decade a lot of attention has been devoted to the development of methods for the detection and resolution of feature interactions. However, the feature interaction phenomenon is not unique to the domain of telecommunications: it can also occur in any large software system that is subject to continuous change.

For this edition, the conference has a range of contributions by distinguished speakers drawn from both telecommunications and other software systems. Besides its formal sessions, the conference included a doctoral symposium, a panel and two invited talks. All the submitted papers in these proceedings were peer reviewed by at least two reviewers drawn from industry or academia. Reviewing and selection were undertaken electronically.
ICFI 2007 was sponsored by IMAG, Université Joseph Fourier, INPG, INRIA, the City of Grenoble, Grenoble Alpes Métropole (Communauté d'agglomération de Grenoble), the Conseil Général de l'Isère and the Région Rhône-Alpes. Université Joseph Fourier (Grenoble I) and the Laboratoire d'Informatique de Grenoble (LIG) provided all local organization and financial backing for the conference. We would like to thank Jean-Luc Richier, Didier Bert, Pascale Poulet and Frédérique Chrétiennot for their help in organizing this event. Online information concerning the conference is available at: http://www-lsr.imag.fr/ICFI2007/

Lydie du Bousquet, Farid Ouabdesselam
Message from the Doctoral Symposium Co-Chairs

The following pages contain the proceedings of the Doctoral Symposium that was held in conjunction with the International Conference on Feature Interactions in Software and Communication Systems in Grenoble in September 2007. Five papers were presented at the symposium, discussing emerging research on different aspects of feature interaction.

Gavin Campbell of the University of Stirling shows how feature interactions can occur in sensor networks, in the form of conflicts between policies. Resolution possibilities are discussed with respect to several examples.

Ben Yan of the Nara Institute of Science and Technology studies feature interactions in home networks from the point of view of different types of system safety. He demonstrates how these concepts can be formalized and validated, leading to safety assurance.

Andreas Classen of the University of Namur studies feature interactions for systems that are closely integrated in their environment. For such systems, the environment can be the source of interactions. A formalization of the concepts in the event calculus leads to the possibility of automated feature interaction detection.

Lionel Touseau of the University of Grenoble addresses the problem of service cooperation in inter-organizational service-oriented computing. In the resulting dynamically changing environments, service availability must be guaranteed. This can be achieved by the use of appropriate service-level agreements and related arrangements.

Romain Delamare of IRISA/INRIA Rennes considers Aspect-Oriented Programming, where new features or aspects can be added to programs, thus requiring changes in existing test cases. He outlines a method for determining which test cases are impacted, and thus need to be rewritten, because of new aspects.
These interesting presentations on current and future advances in the feature interaction problem promise significant developments in our research area. We look forward to seeing full papers from these authors at the next Feature Interaction Conference. Luigi Logrippo, Université du Québec en Outaouais
David Marples Technolution BV
Lydie du Bousquet Laboratoire d'Informatique de Grenoble (LIG)
Programme Committee

The following people were members of the ICFI 2007 programme committee and reviewed papers for the conference:

Conference Co-chairs:
Lydie du Bousquet, Université Joseph Fourier, Grenoble, France
Farid Ouabdesselam, Université Joseph Fourier, Grenoble, France

Daniel Amyot, University of Ottawa, Canada
Lynne Blair, University of Lancaster, UK
Muffy Calder, University of Glasgow, UK
Krzysztof Czarnecki, University of Waterloo, Canada
Michael Fisher, University of Liverpool, UK
Tom Gray, PineTel, Canada
Jean-Charles Grégoire, INRS-Telecommunications, Canada
Dimitar Guelev, Bulgarian Academy of Sciences, Bulgaria
Robert J. Hall, AT&T Labs Research, USA
Mario Kolberg, University of Stirling, UK
Pascale Le Gall, LaMI, Université d'Evry Val d'Essonne, France
Yves Le Traon, IRISA, France
Fuchun Joseph Lin, Telcordia Technologies, USA
Luigi Logrippo, Université du Québec en Outaouais, Canada
Evan Magill, University of Stirling, UK
Dave Marples, Global Inventures, USA
Alice Miller, University of Glasgow, UK
Masahide Nakamura, Nara Institute of Science and Technology, Japan
Tadashi Ohta, Soka University, Tokyo, Japan
Klaus Pohl, LERO, University of Limerick, Ireland and Software Systems Engineering, Univ. Duisburg-Essen, Germany
Stephan Reiff-Marganiec, University of Leicester, UK
Jean-Luc Richier, CNRS, LIG, France
Mark Ryan, School of Computer Science, University of Birmingham, UK
Pierre-Yves Schobbens, University of Namur, Belgium
Henning Schulzrinne, Columbia University, USA
Ken Turner, University of Stirling, UK
Pamela Zave, AT&T, USA
External Referees

We are grateful to the following people who aided the programme committee in the reviewing of papers, providing additional specialist expertise:

Erwan Brottier, IRISA, France
Kim Lauenroth, Software Systems Engineering, Univ. of Duisburg-Essen, Germany
Clémentine Nebut, IRISA, France
Thorsten Weyer, Software Systems Engineering, Univ. of Duisburg-Essen, Germany
Contents

Preface
  Lydie du Bousquet and Farid Ouabdesselam  vii
Message from the Doctoral Symposium Co-Chairs
  Luigi Logrippo, David Marples and Lydie du Bousquet  viii
Programme Committee  ix
Quality Issues in Software Product Lines: Feature Interactions and Beyond (Invited Talk)
  Andreas Metzger  1
Service Broker for Next Generation Networks
  Fuchun Joseph Lin and Kong Eng Cheng  13
A Feature Interaction View of License Conflicts
  G.R. Gangadharan, Michael Weiss, Babak Esfandiari and Vincenzo D’Andrea  21
Managing Feature Interaction by Documenting and Enforcing Dependencies in Software Product Lines
  Roberto Silveira Silva Filho and David F. Redmiles  33
Towards Automated Resolution of Undesired Interactions Induced by Data Dependency
  Teng Teng, Gang Huang, Xingrun Chen and Hong Mei  49
Policy Conflicts in Home Care Systems
  Feng Wang and Kenneth J. Turner  54
Conflict Detection in Call Control Using First-Order Logic Model Checking
  Ahmed F. Layouni, Luigi Logrippo and Kenneth J. Turner  66
Policy Conflict Filtering for Call Control
  Gavin A. Campbell and Kenneth J. Turner  83
Towards Feature Interactions in Business Processes
  Stephen Gorton and Stephan Reiff-Marganiec  99
Resolving Feature Interaction with Precedence Lists in the Feature Language Extensions
  L. Yang, A. Chavan, K. Ramachandran and W.H. Leung  114
Composing Features by Managing Inconsistent Requirements
  Robin Laney, Thein Than Tun, Michael Jackson and Bashar Nuseibeh  129
Artificial Immune-Based Feature Interaction Detection and Resolution for Next Generation Networks
  Hua Liu, Zhihan Liu, Fangchun Yang and Jianyin Zhang  145
Model Inference Approach for Detecting Feature Interactions in Integrated Systems
  Muzammil Shahbaz, Benoît Parreaux and Francis Klay  161
Considering Side Effects in Service Interactions in Home Automation – An Online Approach
  Michael Wilson, Mario Kolberg and Evan H. Magill  172
Detecting and Resolving Undesired Component Interactions by Runtime Software Architecture
  Gang Huang  188

Doctoral Symposium

Sensor Network Policy Conflicts
  Gavin A. Campbell  195
Considering Safety and Feature Interactions for Integrated Services of Home Network System
  Ben Yan  199
Problem-Oriented Feature Interaction Detection in Software Product Lines
  Andreas Classen  203
How to Guarantee Service Cooperation in Dynamic Environments?
  Lionel Touseau  207
Impact of Aspect-Oriented Software Development on Test Cases
  Romain Delamare  211

Subject Index  215
Author Index  217
Quality Issues in Software Product Lines: Feature Interactions and Beyond (Invited Talk)

Andreas METZGER 1
Software Systems Engineering, University of Duisburg-Essen
Schützenbahn 70, 45117 Essen, Germany

Abstract. In software product line engineering, reusable artifacts are pro-actively created so that they can be reused efficiently to build customer-specific software products. To support this reuse, variability is explicitly defined and introduced into the reusable artifacts. This variability implies that the reusable artifacts do not define a single software product but a set of such products. In particular, the reusable artifacts do not constitute an executable system which could be tested. Thus, in order to check the reusable artifacts of a software product line for defects, the variability in those artifacts has to be handled. This invited talk elaborates on different strategies for handling the variability in the reusable artifacts, and on how existing quality assurance techniques for software product lines, including feature interaction analysis, address the specific challenges that are posed by those strategies.

Keywords. Software product line engineering, testing, feature interactions, formal reasoning
1. Motivation

Software product line engineering (SPLE [1][2][3]) has proven to be a very successful paradigm for developing a diversity of similar software products at low cost, in short time, and with high quality. Numerous success stories report on the significant achievements of introducing software product lines in industry (see [3]). There are two essential differences between SPLE and the development of single software products:

• Variability is explicitly defined and managed: Product line variability describes the variation between the products that belong to a software product line in terms of properties and qualities, such as the features that are provided or the requirements that are fulfilled [4]. The central concepts for defining and documenting the variability of a software product line are the variation point and the variant. A variation point describes what varies between the products of a software product line; e.g., the products of

1 Corresponding Author: Andreas Metzger, Software Systems Engineering, University of Duisburg-Essen, Schützenbahn 70, 45117 Essen, Germany, E-mail: [email protected].
an on-line store product line can vary in terms of the payment options that are offered. A variant describes a concrete instance of a variation point; e.g., an on-line store can offer payment by credit card or by debit card.

• The development process of a software product line is divided into two interrelated sub-processes:
∗ In domain engineering, the commonalities and the variability of the software product line are defined and reusable artifacts are created. Commonalities are properties and qualities that are shared by all products of the software product line [5]. Reusable artifacts include requirements, design models, components, code, test cases, and documentation.
∗ In application engineering, customer-specific software products are derived from the reusable artifacts by binding the variability, i.e., by selecting the desired variants for the variation points.

As in the development of single software products, quality assurance activities are essential in SPLE to guarantee the desired quality of the derived software products. These quality assurance activities can include, besides many others, inspection, formal verification, static analysis, as well as code- and model-based testing. One key aim of those quality assurance activities is to uncover defects in the development artifacts.

In SPLE, a defect in a reusable artifact can affect all software products that are derived from this artifact. As an example, the ‘place a call’ feature is a commonality of a mobile phone product line. Thus, a defect in the components which realize this feature can lead to failures in all mobile phones of the product line. As a further example, let us assume an undesired feature interaction between the features ‘silent mode’ and ‘sound alarm when battery low’, i.e., let us assume that those features interact in such a way that the mobile phone will make a sound even when in silent mode. Such a feature interaction will occur in all mobile phones that provide both features.

As in the development of single software products, defects should be uncovered as early as possible in the SPLE process, because uncovering a fault late in the development process can lead to very high correction costs. Uncovering a defect late in the SPLE process is especially costly when several products of the software product line have already been developed and deployed, because all those products might have to be corrected. The earliest phase in SPLE is domain engineering, during which the reusable artifacts are constructed. However, existing quality assurance techniques from the development of single software products cannot be applied directly to the reusable artifacts, because those artifacts contain variability. This means that those artifacts do not define a single software product but a set of such products. In particular, no executable system exists in domain engineering that could be tested. Quality assurance techniques which consider the specifics of SPLE are thus needed. This talk will elaborate on different strategies and techniques for checking the reusable artifacts in domain engineering while handling the variability in those artifacts.
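The binding of variability described above can be sketched in a few lines. The following toy model is illustrative only; the variation point, variant, and feature names are taken from the on-line store example, and the data structures are invented for this sketch, not part of any SPLE tool.

```python
# Toy model of variation points and product derivation (all names illustrative).
COMMONALITIES = {"browse_catalogue", "place_order"}      # shared by all products

VARIATION_POINTS = {
    # variation point -> its variants (payment example from the text)
    "payment": {"credit_card", "debit_card"},
}

def derive_product(bindings):
    """Application engineering: derive a product by binding the variability,
    i.e. selecting the desired variants for each variation point."""
    features = set(COMMONALITIES)
    for vp, chosen in bindings.items():
        allowed = VARIATION_POINTS[vp]
        if not chosen or not chosen <= allowed:
            raise ValueError(f"invalid binding for variation point {vp!r}")
        features |= chosen
    return features

store = derive_product({"payment": {"credit_card"}})
```

Note how the reusable artifacts (`COMMONALITIES` plus `VARIATION_POINTS`) describe a set of products; only `derive_product` yields one concrete, checkable product.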
2. Strategies and Techniques for Quality Assurance in Domain Engineering

Existing techniques for quality assurance in domain engineering generally follow three different strategies for handling the variability in the reusable artifacts:

• Commonality strategy: only the common parts, which are shared by all products of the software product line, are covered by the quality assurance technique (cf. [6][3]).
• Sample strategy: sample products (a subset of all products of the software product line) are checked (cf. [6][3]). This implies that the common parts are checked as well as the variants which have been bound in the sample products.
• Comprehensive strategy: all products of the software product line are checked for defects. This implies that the common parts as well as all the variants of the software product line are covered by the quality assurance technique.

In the following sub-sections, those strategies and the challenges for realizing them in concrete techniques are elaborated. Examples of concrete techniques that have been developed within our research group and in collaboration with other researchers illustrate how those challenges can be addressed.

2.1. Commonality Strategy

Quality assurance techniques that follow the commonality strategy aim at checking only the common parts of a software product line. Typically, the variants are either ignored during the checking of the reusable artifacts or replaced by placeholders that abstract from the variants or simulate them. As an example of the first case, an inspection of a reusable requirements specification for a software product line could focus on common requirements only. As an example of the second case, variable code fragments could be replaced by a single code fragment that implements some basic behavior or at least guarantees that the code will compile.

2.1.1. Benefits

The benefits of the commonality strategy are that testing is enabled early in domain engineering and that quality assurance activities can be performed even if no variants, or only a few, have been realized.

2.1.2. Challenges

Techniques that follow the commonality strategy must at least address the following challenges:
1. How to keep the effort for creating the placeholders to a minimum? Creating placeholders usually requires development effort. Thus, the number of placeholders should be kept as small as possible.
2. How to guarantee an adequate coverage of the domain artifacts? Variants are not checked when following the commonality strategy. Thus, quality assurance activities should be planned that complement the commonality strategy.
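The "second case" (a placeholder that simulates a variant) can be sketched as follows. All class and function names here are invented for illustration: a payment variant is not yet bound in domain engineering, so a placeholder stands in for it, which lets the common checkout code be compiled and exercised early.

```python
# Sketch of the placeholder idea in the commonality strategy (invented names).
class PaymentPlaceholder:
    """Abstracts from the concrete payment variants; implements some basic
    behaviour so that the common code runs, without being a real variant."""
    def pay(self, amount):
        return "accepted"          # minimal stand-in behaviour

def checkout(cart_total, payment):
    """Common part, shared by all products of the product line."""
    if cart_total <= 0:
        raise ValueError("nothing to pay")
    return payment.pay(cart_total)

# Domain engineering: the common checkout flow is testable before any
# concrete payment variant has been realized.
result = checkout(42.0, PaymentPlaceholder())
```

One placeholder covers the whole variation point here, which is exactly the point of Challenge 1: the fewer placeholders a test run needs, the cheaper early testing becomes.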
2.1.3. Example Technique: “Testing” An example of a technique which implements the commonality strategy is the model-based testing technique ScenTED (see [6][7][8]).
Figure 1. Model-based Testing with ScenTED ((1) variability placeholders are developed for the variants of a variation point; (2) test cases are generated against these placeholders)
In ScenTED, placeholders for the variants are developed (see (1) in Fig. 1) and test cases are generated while considering these placeholders (see (2) in Fig. 1). The result of the test case generation process is a set of test cases which guarantees that the common functionality of the software product line is covered and that the number of placeholders needed to execute the test cases is minimized (cf. Challenge 1). Besides testing in domain engineering, ScenTED supports the reuse of test cases for testing in application engineering. Thereby, ScenTED can complement the test of the commonalities in domain engineering with product-specific tests in application engineering (cf. Challenge 2).

2.2. Sample Strategy

Quality assurance techniques that follow the sample strategy aim at checking the common parts as well as selected variants. The basic steps of this strategy are typically as follows:
1. Determine the sample products (defined in terms of the variants that are bound).
2. For each of the sample products:
(a) Derive product-specific artifacts by binding the variability in the domain artifacts.
(b) Apply quality assurance techniques from the development of single software products to the derived artifacts.
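The two steps above can be sketched in a short harness. This is a hedged illustration, not ScenTED itself: the variant names, the usage likelihoods used for ranking (which anticipate the selection heuristic discussed for ScenTED below), and the stubbed check are all assumptions.

```python
from itertools import combinations

# Assumed likelihood that each variant is used in many products (invented data).
USAGE_LIKELIHOOD = {"v1": 0.9, "v2": 0.8, "v3": 0.1}

def determine_sample_products(size=2, max_products=2):
    """Step 1: choose sample products, preferring variants that are likely
    to be bound in many products of the product line."""
    candidates = combinations(sorted(USAGE_LIKELIHOOD), size)
    ranked = sorted(candidates,
                    key=lambda prod: sum(USAGE_LIKELIHOOD[v] for v in prod),
                    reverse=True)
    return ranked[:max_products]

def check_product(product):
    """Steps 2(a)+(b): derive the product-specific artifacts and apply a
    single-system check. Stubbed here; a real implementation would bind the
    variability and run the actual tests."""
    return {"product": product, "passed": True}

samples = determine_sample_products()
results = [check_product(p) for p in samples]
```

The point of the sketch is the division of labour: only step 1 is product-line specific; step 2 reuses single-system techniques unchanged.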
2.2.1. Benefit

The benefit of this approach is that existing techniques from the development of single software products can be used as they are.

2.2.2. Challenges

In order to implement the sample strategy, the following challenges have to be faced:
1. How to determine representative sample products? The sample products should be chosen in such a way that the results of checking them allow drawing conclusions about the overall quality of the software product line.
2. How to keep the number of selected sample products manageable? The number of sample products should be kept as small as possible while guaranteeing a representative coverage of the software product line. Otherwise, the effort for checking the sample products becomes infeasible.

2.2.3. Example Technique: “Testing”

As mentioned in Section 2.1, the ScenTED technique supports testing individual software products in application engineering. Thus, ScenTED can also be used to test sample software products in domain engineering. To select representative sample products for testing, products should be chosen which include variants that are likely to be used in many software products (cf. Challenges 1 and 2). The rationale behind this selection is that if a variant is used in most of the products of the software product line, an undiscovered defect in this variant can have an almost as severe effect on the quality of the software product line as a defect in a commonality (also see [6]).

2.2.4. Example Technique: “Feature Interaction Analysis”

A more refined approach for selecting sample products has been implemented in the RAFINA technique (see [9][10]). RAFINA has been developed to analyze a software product line with respect to (undesired) feature interactions. It builds on a previous technique for detecting feature interactions in single software products (cf. [11]). To determine the sample products in RAFINA, we assume that if an interaction between the features F = {f1, ..., fr} is observed, there will also be interactions between all features f ∈ F′ where F′ ⊂ F with 1 < |F′| < r. Stated differently, this assumption means that there will be no m-way feature interactions (with m > 2) in the products. In general, an m-way feature interaction is a feature interaction that does not occur between 1 < i < m features but occurs among m features [12]. We presuppose that each feature relates to a variant in the software product line. Thereupon, in order to keep the number of sample products to a minimum, we select the products which provide the maximum number of variants and thus features. If a feature interaction is uncovered in such a maximal sample product, this implies that feature interactions will be present in all smaller products that provide a subset of the interacting features of the sample product (cf. Challenge 1). It should be noted that even if an m-way interaction (with m > 2) exists in one of the sample products, RAFINA will detect that m-way interaction. However, in addition to the true m-way feature interaction, RAFINA will falsely uncover interactions between
1 < i < m features. Yet, as m-way interactions have been shown to be very rare, the number of those false positives is negligible.

To illustrate how the number of sample products relates to the number of potential products of the software product line (cf. Challenge 2), let us examine a single variation point. Let n be the number of variants of that variation point, of which at most k ≤ n and at least j ≤ k variants may be bound (for a further discussion of the potential constraints on variability see [3] or [9]). This variation point allows the derivation of Σ_{i=j}^{k} C(n, i) products, where C(n, i) denotes the binomial coefficient. Following the RAFINA approach for selecting the sample products, it would suffice to consider only the products with the maximum number of variants bound, which leads to C(n, k) sample products in total. The extent to which the number of sample products can be reduced depends on k, i.e., the maximum number of variants that can be chosen for a variation point. The closer k is to the number of variants per variation point (n), the smaller the number of sample products that have to be considered will be. The same holds when k comes closer to 1. However, in the latter case even a brute-force approach, which checks all the possible products (cf. Section 2.3), would be feasible. Figure 2 shows the numbers of sample products that need to be checked for varying values of k for a variation point with 13 variants (n = 13).
Figure 2. Comparison of the Number of Sample Products With the Number of All Products (n = 13 variants per variation point; x-axis: maximum number of variants that can be chosen per variation point, k; y-axis: number of products, comparing all products with the RAFINA sample products)
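The counts behind Figure 2 follow directly from the formulas in the text. The sketch below reproduces them for n = 13 and j = 1; the function names are invented, and the enumeration of maximal products is only a sanity check of the closed form C(n, k).

```python
from itertools import combinations
from math import comb

def number_of_all_products(n, j, k):
    """All derivable variant combinations for one variation point:
    sum over i = j..k of C(n, i)."""
    return sum(comb(n, i) for i in range(j, k + 1))

def number_of_rafina_samples(n, k):
    """RAFINA considers only the maximal products, i.e. those with the
    maximum number k of variants bound: C(n, k)."""
    return comb(n, k)

def rafina_sample_products(variants, k):
    """Enumerate the maximal sample products themselves."""
    return list(combinations(variants, k))

n, j = 13, 1
all_vs_rafina = {k: (number_of_all_products(n, j, k),
                     number_of_rafina_samples(n, k))
                 for k in range(1, n + 1)}
```

Evaluating `all_vs_rafina` shows the shape of Figure 2: both counts peak around k ≈ n/2 and collapse to 1 sample product at k = n.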
The number of sample products can become very high if k is around n/2 (gray area in the figure). Thus, this can pose a scalability problem, especially if a software product line has many variation points with k ≈ n/2. As a solution, the value of k could be modified for the purpose of feature interaction detection as follows:
• k → 1: The value of k should only be reduced to k = 2, as otherwise feature interactions between two variants of the same variation point would go unnoticed. This results in C(n, 2) = n·(n−1)/2 variant combinations per variation point.
• k → n: Increasing k to n promises the largest reduction of the number of sample products, because all variants per variation point could be selected. This leads to C(n, n) = 1 variant combination per variation point. However, the violation of the constraint on the maximum number of variants per variation point can lead
to the identification of feature interactions that would never exist in any of the software product line’s products. In order to eliminate those feature interactions, a subsequent step is performed: RAFINA checks whether an actual product of the software product line can offer the features that are involved in the interaction. Modifying k requires that the reusable artifacts allow for binding an unplanned number of variant combinations. Thus, if the modification of k is not possible, an alternative approach for selecting the sample products can be followed: if one relaxes the requirement that all kinds of feature interactions (including m-way interactions with m > 2) have to be detected, it suffices to check all pair-wise feature combinations. If a software product line has l variants (resp. features) in total, this results in C(l, 2) = l·(l−1)/2 sample products, which can be considerably smaller than the number of sample products that need to be considered with the initial RAFINA approach for k ≈ n/2.

2.3. Comprehensive Strategy

The comprehensive strategy aims at checking all potential products of the software product line for defects. A ‘brute-force’ realization (cf. [3]) of the comprehensive strategy could be as follows:
1. Bind the variability in the reusable artifacts for each of the potential products of the software product line.
2. Apply techniques from the development of single systems to the derived artifacts of each of those products.

2.3.1. Benefit

The comprehensive strategy is the strategy that leads to the best coverage of the domain artifacts. Although the sample strategy (see the previous section) allows checking all variants of the software product line by determining representative sample products, those variants are not checked in all potential reuse contexts, i.e., they are not checked for all products of the software product line.

2.3.2.
Challenge

The number of potential products in a software product line of industry-relevant size prevents any ‘brute-force’ approach from being used for realizing the comprehensive strategy in practice. To illustrate: if the reusable artifacts contain 15 variation points with 2 variants each, and there are no further constraints for combining the variants, approximately 1 billion possible software products can be derived from those artifacts. For each variation point, Σ_{i=0}^{2} C(2, i) = 1 + 2 + 1 = 4 variant combinations are possible, leading to 4^15 ≈ 10^9 potential products of the software product line. Industrial software product lines have been reported to contain up to tens of thousands of variation points and variants (see [13][14]). A significant challenge for realizing the comprehensive strategy thus is how to deal with the complexity that is involved in checking all potential products.
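The back-of-the-envelope count above is easy to verify; the sketch below simply evaluates the same formula (function name invented).

```python
from math import comb

# 15 variation points, 2 variants each, any subset of a variation point's
# variants may be bound, and no cross-tree constraints.
def combinations_per_variation_point(n):
    """Number of variant combinations for one variation point with n variants:
    sum over i = 0..n of C(n, i), which equals 2**n."""
    return sum(comb(n, i) for i in range(n + 1))

per_vp = combinations_per_variation_point(2)   # 1 + 2 + 1 = 4
total_products = per_vp ** 15                  # 4**15, on the order of 1e9
```

With tens of thousands of variation points, as reported for industrial product lines, this exponent makes explicit enumeration hopeless, which motivates the symbolic approach that follows.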
2.3.3. Example Technique: “Formal Reasoning”

The AVIP technique (see [4][15]) allows one to reason formally about the reusable artifacts in domain engineering in order to identify inconsistencies (a specific kind of defect) in those artifacts. Inspired by the work of Czarnecki and Pietroszek [16] and Thaker et al. [17], the AVIP technique follows the comprehensive strategy while addressing the complexity challenge involved with this strategy. The key to handling the complexity of checking each potential product of the software product line is to exploit the power of state-of-the-art verification tools, like SAT Solvers [18], Constraint Programming Systems [19], or Model Checkers [20]. Those tools have reached a level of efficiency that allows them to be applied to problems of industry-relevant size. In AVIP, the domain artifacts to be checked, as well as the consistency constraints that the artifacts must satisfy, are expressed as inputs to a SAT Solver. The SAT Solver then efficiently computes whether the artifacts violate the consistency constraints for any valid product of the software product line. The valid products of the software product line are defined by an Orthogonal Variability Model (OVM [3]). An OVM is a dedicated model that documents the variation points and the variants of a software product line together with potential constraints on selecting the variants. The variants in the OVM are related to variable elements in the reusable artifacts via cross-links (x-links [4]). Whenever a variant is selected for a concrete product, the x-linked elements in the reusable artifacts will be included in the derived artifacts. Figure 3 shows an example of a simple OVM which is x-linked to a component diagram as a reusable artifact.
[Figure content: an Orthogonal Variability Model (OVM) with variation point VP1: Media and variants V1: Audio, V2: Text, V3: Image under a 1..* variability constraint, giving O = (V1 ∨ V2 ∨ V3); the variants are x-linked to a component diagram (the reusable artifact) in which the UserConsole component’s :UserInterface requires exactly one component instance (multiplicity 1..1) at its PlayMedia port, pluggable with the MediaPlayer components W1:MP3Player or W2:PDFViewer, giving A = (W1 ∧ ¬W2) ∨ (¬W1 ∧ W2).]
Figure 3. Example of OVM, Reusable Artifact, X-Links and Propositional Formulae
In AVIP the semantics of the OVM, the reusable artifacts as well as the x-links are formalized (for further details on the formal semantics see [4]). This formalization
is used to map the artifacts and the consistency constraints to inputs for a SAT Solver, whereby the consistency checks are automated. A SAT Solver requires a propositional formula as input and, if the formula is satisfiable, delivers an assignment for the Boolean variables such that the input formula evaluates to true. Many off-the-shelf SAT Solvers require the input formula to be in Conjunctive Normal Form (CNF). However, CNF is not a very natural representation for our problem. Therefore, we have started to use the non-clausal solver NoClause [21], whereby we can avoid the complex translation of a propositional formula into CNF.

In AVIP, an OVM O is mapped to a propositional formula O such that O evaluates to true for each valid product of the software product line. Each Boolean variable v ∈ Var(O) corresponds to a variant of the software product line. Figure 3 shows the result of such a mapping for an exemplary OVM: O = V1 ∨ V2 ∨ V3 evaluates to true only when at least one variant has been chosen for the variation point.

A reusable artifact A together with its consistency constraints is mapped to the propositional formula A. Each Boolean variable w ∈ Var(A) represents a variable element in the reusable artifact. If the Boolean variable w is set to true, this means that the variable element will be contained in the artifact that is derived from the reusable artifact. A is defined in such a way that it will only evaluate to true if the combination of variable elements creates an artifact which satisfies the consistency constraint, i.e., if the artifact that is derived from A is free from inconsistencies. The result of such a mapping is shown for the component diagram in Figure 3: the multiplicity of 1..1 at the PlayMedia port of the UserInterface component requires that exactly one component instance is plugged in at this port. This leads to the propositional formula A = (W1 ∧ ¬W2) ∨ (¬W1 ∧ W2).
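Before turning to the SAT-based check, the toy mapping of Figure 3 is small enough to verify by exhaustive enumeration (an illustrative sketch in Python; the x-links follow the figure: V1 includes W1, while V2 and V3 include W2):

```python
from itertools import product

def O(v1, v2, v3):
    # OVM formula: at least one media variant must be chosen.
    return v1 or v2 or v3

def A(w1, w2):
    # Artifact constraint: exactly one player at the PlayMedia port (1..1).
    return (w1 and not w2) or (not w1 and w2)

violations = []
for v1, v2, v3 in product([False, True], repeat=3):
    if O(v1, v2, v3):               # a valid product of the product line
        w1 = v1                     # x-link: V1 -> W1 (MP3Player)
        w2 = v2 or v3               # x-link: V2, V3 -> W2 (PDFViewer)
        if not A(w1, w2):           # derived artifact is inconsistent
            violations.append((v1, v2, v3))

print(violations)   # includes (True, True, True), the example from the text
```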
To determine any consistency violations, the satisfiability of G = ¬(O ⇒ A′) is checked. A′ is the propositional formula A in which the Boolean variables Var(A) have been replaced by propositional formulae over Boolean variables in Var(O). More specifically, if the variants represented by the Boolean variables v1, ..., vn are x-linked to the variable elements represented by w1, ..., wm, each of those Boolean variables wi is replaced by (v1 ∨ ... ∨ vn). In the example of Figure 3 this results in the propositional formula A′ = (V1 ∧ ¬(V2 ∨ V3)) ∨ (¬V1 ∧ (V2 ∨ V3)). Whenever the SAT Solver finds a solution for the formula G, this points to a consistency violation in the reusable artifacts: when G evaluates to true, O ⇒ A′ must have evaluated to false. Due to the implication (⇒), this requires that O has evaluated to true while A′ has evaluated to false. This means that for a valid product of the software product line (defined by the assignment of Boolean variables which made O evaluate to true) an inconsistent artifact can be derived. In the example shown in Figure 3, the overall formula to be checked by the SAT Solver is: ¬((V1 ∨ V2 ∨ V3) ⇒ ((V1 ∧ ¬(V2 ∨ V3)) ∨ (¬V1 ∧ (V2 ∨ V3)))). This formula has at least one solution: it will evaluate to true for V1 = true, V2 = true and V3 = true, as an example. This points to an inconsistency in the reusable
artifact: When those variants are bound, two component instances are bound in the component diagram, where only one instance is allowed at a time. Our first experiments have shown the efficiency of the AVIP approach (cf. [4]) and we are confident that it will scale to very complex models as well.

3. Conclusion and Perspectives

Quality assurance for software product lines is an important field of research. This talk has reviewed some of the challenges that need to be addressed by quality assurance techniques for software product lines and it has presented how existing techniques address those challenges. This talk has focused on analytical quality assurance techniques, i.e., on techniques which check the artifacts after they have been built. However, there also exists a wide range of principles and techniques that can be applied during the construction of the artifacts such that they will be built without certain kinds of defects. Specifically, code generators or configurators (e.g., see [22][23]) can be used for model-driven product line development.

The software product line community has achieved impressive results for quality assurance in software product lines. Still, quite a few research issues remain open. Some of those issues, which can be potential topics for future research, are presented below:

‘Debugging’: The presented quality assurance techniques for software product lines that comprehensively check the reusable artifacts (comprehensive strategy) determine whether the reusable artifacts comply with some pre-defined quality constraint. If this constraint is violated, they list the variants for which this violation will occur in an actual software product. However, in order to correct the reusable artifacts, the reasons for the violation of the quality constraint, i.e., the actual defects, have to be located.
As an example, a quality constraint might be violated because the constraints on variability have been defined too loosely, thus allowing the derivation of unwanted products. Currently, the product line engineers have to find such defects manually. This can be a very challenging task when the models become large and complex. Thus, automated techniques that support the product line engineers in ‘debugging’ the reusable artifacts need to be developed (cf. [24]).

Empirical Evaluation: The presented quality assurance techniques are promising. They efficiently address the problem of complexity when checking the reusable artifacts in domain engineering. Yet, the effectiveness of those techniques, i.e., their ability to uncover defects, needs further investigation. We expect that a significant number of defects can be uncovered in domain engineering, which – if they remained undetected – would imply huge correction costs in application engineering.

Applying product line techniques to other paradigms: First publications report on similarities between software product line engineering and service-based systems engineering (e.g., [25]). Quality assurance of a service-based system, for instance, faces a similar – if not worse – complexity problem. Due to the loose coupling of services, they can be composed into a potentially unbounded number of different service-based systems. It will be interesting to see how far the solutions for handling the complexity in checking the reusable artifacts of a software product line can be applied to the complexity problem of checking the potential service compositions in service-based systems engineering.

Acknowledgments

Parts of this work have been sponsored by the German Research Foundation (DFG) under grants Po 607/1-1 PRIME and Po 607/2-1 IST-SPL. I cordially thank Kim Lauenroth and Ernst Sikora for fruitful discussions on formal reasoning in software product line engineering, Klaus Pohl for the joint research on variability management in software product line engineering, and Heiko Stallbaum for helpful comments on earlier drafts of this contribution.
References

[1] Weiss, D.M., Lai, C.T.: Software Product Line Engineering – A Family-Based Software Development Process. Addison-Wesley, Reading, Mass. (1999)
[2] Clements, P., Northrop, L.: Software Product Lines: Practices and Patterns. Addison-Wesley Professional, Reading, Mass. (2001)
[3] Pohl, K., Böckle, G., van der Linden, F.: Software Product Line Engineering: Foundations, Principles and Techniques. Springer, Heidelberg (2005)
[4] Metzger, A., Heymans, P., Pohl, K., Schobbens, P.Y., Saval, G.: Disambiguating the documentation of variability in software product lines: A separation of concerns, formalization and automated analysis. In Sutcliffe, A., ed.: 15th IEEE International Conference on Requirements Engineering (RE 2007), 15-19 October 2007, New Delhi, India, Proceedings, IEEE Computer Society (2007)
[5] Coplien, J., Hoffman, D., Weiss, D.: Commonality and variability in software engineering. IEEE Softw. 15(6) (1998) 37–45
[6] Pohl, K., Metzger, A.: Software product line testing. Commun. ACM 49(12) (2006) 78–81
[7] Reis, S., Metzger, A., Pohl, K.: Integration testing in software product line engineering: A model-based technique. In Dwyer, M.B., Lopes, A., eds.: Fundamental Approaches to Software Engineering (FASE), 26-30 March 2007, Braga, Portugal, Proceedings. Volume 4422 of LNCS, Springer (2007) 321–335
[8] Reuys, A., Kamsties, E., Pohl, K., Reis, S.: Model-based system testing of software product families. In Pastor, O., e Cunha, J.F., eds.: Advanced Information Systems Engineering, 17th International Conference (CAiSE 2005), 13-17 June 2005, Porto, Portugal, Proceedings. Volume 3520 of LNCS, Springer (2005) 519–534
[9] Metzger, A., Bühne, S., Lauenroth, K., Pohl, K.: Considering feature interactions in product lines: Towards the automatic derivation of dependencies between product variants. In Reiff-Marganiec, S., Ryan, M., eds.: Feature Interactions in Telecommunications and Software Systems VIII (ICFI’05), 28-30 June 2005, Leicester, UK, IOS Press (2005) 198–216
[10] Metzger, A., Pohl, K.: Anforderungsbasierte Erkennung von Feature-Interaktionen in der Produktlinienentwicklung. In Biel, B., Book, M., Gruhn, V., eds.: German Conference on Software Engineering (SE 2006), 28-31 March 2006, Leipzig, Germany, Proceedings. Volume P-79 of LNI, Köllen Druck und Verlag GmbH, Bonn (2006) 53–58
[11] Metzger, A.: Feature interactions in embedded control systems. Computer Networks 45(5) (2004) 625–644
[12] Kawauchi, S., Ohta, T.: Mechanism for 3-way feature interactions occurrence and a detection system based on the mechanism. In Amyot, D., Logrippo, L., eds.: Feature Interactions in Telecommunications and Software Systems VII (FIW 2003), 11-13 June 2003, Ottawa, Canada, Proceedings, IOS Press (2003) 313–328
[13] Deelstra, S., Sinnema, M., Bosch, J.: Product derivation in software product families: a case study. Journal of Systems and Software 74(2) (2005) 173–194
[14] Maccari, A., Heie, A.: Managing infinite variability in mobile terminal software. Softw., Pract. Exper. 35(6) (2005) 513–537
[15] Lauenroth, K., Pohl, K.: Towards automated consistency checks of product line requirements specifications. In Egyed, A., Fischer, B., eds.: 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE), 5-9 November 2007, Atlanta, GA, USA, Proceedings (2007)
[16] Czarnecki, K., Pietroszek, K.: Verifying feature-based model templates against well-formedness OCL constraints. In Jarzabek, S., Schmidt, D.C., Veldhuizen, T.L., eds.: Generative Programming and Component Engineering, 5th International Conference (GPCE 2006), 22-26 October 2006, Portland, Oregon, USA, Proceedings, ACM (2006) 211–220
[17] Thaker, S., Batory, D., Kitchin, D., Cook, W.: Safe composition of product lines. In: Generative Programming and Component Engineering, 6th International Conference (GPCE 2007), 1-3 October 2007, Salzburg, Austria, Proceedings, ACM (2007)
[18] Zhang, L., Malik, S.: The quest for efficient Boolean satisfiability solvers. In Brinksma, E., Larsen, K.G., eds.: Computer Aided Verification, 14th International Conference (CAV 2002), 27-31 July 2002, Copenhagen, Denmark, Proceedings. Volume 2404 of LNCS, Springer (2002) 17–36
[19] Dechter, R.: Constraint Processing. Elsevier, Oxford, UK (2003)
[20] Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. MIT Press, Cambridge, Mass. (2000)
[21] Thiffault, C., Bacchus, F., Walsh, T.: Solving non-clausal formulas with DPLL search. In Wallace, M., ed.: Principles and Practice of Constraint Programming, 10th International Conference (CP 2004), 27 September - 1 October 2004, Toronto, Canada, Proceedings. Volume 3258 of LNCS, Springer (2004) 663–678
[22] Muthig, D., Atkinson, C.: Model-driven product line architectures. In Chastek, G.J., ed.: Software Product Lines, Second International Conference (SPLC 2), 19-22 August 2002, San Diego, CA, USA, Proceedings. Volume 2379 of LNCS, Springer (2002) 110–129
[23] Czarnecki, K., Eisenecker, U.: Generative Programming: Methods, Tools, and Applications. ACM Press/Addison-Wesley, New York, NY, USA (2000)
[24] Benavides, D., Ruiz-Cortes, A., Trinidad, P., Segura, S.: A survey on the automated analyses of feature models. XV Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2006) (2006)
[25] Helferich, A., Jesse, S., Mikusz, M.: Software product lines, service-oriented architecture and frameworks: Worlds apart or ideal partners? In Draheim, D., Weber, G., eds.: 2nd International Conference on Trends in Enterprise Application Architecture, 29 November - 1 December 2006, Berlin, Proceedings (2006) 143–157
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Service Broker for Next Generation Networks Fuchun Joseph Lin and Kong Eng Cheng Telcordia Technologies 1 Telcordia Drive Piscataway, NJ 08854, U.S.A. {fjlin, kcheng}@research.telcordia.com
Abstract. This paper describes the emerging need for feature interaction managers in Next Generation Networks, which are based on a convergent IP architecture to support voice, data, and multimedia services. Though this need has been addressed by the telecom industry with various architectural components under different names, such as the Service Capability Interaction Manager (SCIM) in the 3GPP (3rd Generation Partnership Project) IP Multimedia Subsystem (IMS), the consensus of the industry is to call such a network function Service Brokering and the network component fulfilling this function the Service Broker. This paper reports the current industrial status in architecting the Service Broker, discusses the limits of the Service Brokering functions defined in 3GPP, and points out open issues for further research. Keywords. Feature interaction management, Next Generation Networks, Service Capability Interaction Manager (SCIM), Service Broker, 3GPP IP Multimedia Subsystem (IMS)
1. Feature Interactions Management in Next Generation Networks
Next Generation Networks (NGN) based on a convergent IP architecture revolutionize the traditional approach of building special purpose networks for specific vertical services (e.g. PSTN [Public Switched Telephone Network] for voice services, cable networks for video delivery, and Internet for data services). The idea is to use one network (i.e. IP network) to offer all services that span across voice, data, and multimedia communications. In such next generation networks, access networks can take any of the following forms: DSL, Cable, fixed wireless or mobile wireless, while there is only one IP core network shared by all access networks. This is a total integration of all the existing networks with a high speed IP core network and various access gateways on the edge to interface with different access networks. This architecture allows independent evolution of core and access networks and also shields the changes of one from impacting the other. Moreover, service or application technologies developed above the IP transport layer can also be
F.J. Lin and K.E. Cheng / Service Broker for Next Generation Networks
made independent of the network technologies below the IP layer. This makes services and applications shielded from the constant change and evolution of underlying network technologies. With applications all built on top of an IP transport layer, there is no need to maintain multiple service networks. This greatly reduces the capital and operational expense associated with maintaining service development and operations support for multiple networks.

Feature interactions occur in Next Generation Networks for the following reasons:

1. All NGN services compete for the underlying shared network resources below the IP transport layer in core and access networks. As a result, an NGN service may inhibit another NGN service due to the constraints of the underlying transport resources.

2. Above the IP transport layer, NGN services are mostly based on SIP (Session Initiation Protocol) [5] as the signaling protocol and IMS (IP Multimedia Subsystem) [1][2][3][4] as the session control. As a result, NGN services may interact with one another either via SIP signaling or via IMS sessions. For example, two NGN services may be simultaneously triggered by the same SIP method in an IMS session. Thus there is a need to manage which service will have precedence over the other.

3. Moreover, it is also possible for IMS-based NGN services to interact with non-IMS based NGN services. For example, a non-IMS service such as a calendar service can be used to decide whether an IMS service such as call forwarding needs to be triggered.
2. Service Broker as Feature Interactions Manager in NGN
In this section, we survey the 3GPP effort in defining an architectural framework for service brokering in the IMS. The 3GPP is currently conducting a feasibility study of IMS Service Brokering in Release 8 in order to deal with feature interaction problems. The 3GPP IMS [1][2][3][4][5][6][7] already supports some selected Service Brokering functions via two IMS functional components and the interactions between them:

• Serving Call Session Control Function (S-CSCF) and its Filter Criteria
• Application Server (AS) and its Service Capability Interaction Manager (SCIM)

In 3GPP, the S-CSCF provides call session control while services can be provisioned on three types of Application Servers [4] as depicted in Figure 1. The S-CSCF communicates with an Application Server via the IP multimedia Service Control (ISC) interface that is based on SIP. The three types of Application Servers [4] are:

1. SIP Application Servers
2. The IM-SSF (IMS Service Switching Function) Application Server for hosting the CAMEL (Customized Applications for Mobile Enhanced Logic) network features [8].
3. The OSA (Open Service Access) Service Capability Server (SCS) that interfaces to the OSA Application Server [9] for third party service creation.
Figure 1. Service Provision for 3GPP IMS (From 3GPP TS 23.218)
Additionally, there is a specialized type of SIP Application Server, the Service Capability Interaction Manager (SCIM), that performs feature interaction management between application servers. In summary, the Service Brokering functions in 3GPP exist in either the S-CSCF or the SCIM. Below we give further details on each of these functions.

2.1. S-CSCF and its Filtering Criteria
Figure 2 below shows how S-CSCF utilizes Filter Criteria to mediate the execution of service logic in the Application Server.
Figure 2. Filter Criteria in S-CSCF (From 3GPP TS 23.218)
Filter Criteria (FC) are defined as the information which the S-CSCF receives from the HSS (Home Subscriber Server) or the AS (Application Server) that defines the relevant SPTs (Service Point Triggers) for a particular application. They define the subset of SIP requests received by the S-CSCF that should be sent or forwarded to a particular application in the Application Server. The SPTs are the points in the SIP signaling that may cause the S-CSCF to send/proxy the SIP message to a SIP AS/OSA SCS/IM-SSF. The subsets of all possible SPTs which are relevant to a particular application are defined by means of Filter Criteria. SPTs may potentially include:

• any initial known or unknown SIP method (e.g., REGISTER, INVITE, SUBSCRIBE, MESSAGE)
• presence or absence of any header
• content of any header
• direction of the request with respect to the served user
• session description information (i.e. SDP)

Multiple SPTs can be linked via logical expressions (e.g., AND, OR, NOT). Initial Filter Criteria (iFC) are the filter criteria that are stored in the HSS as part of the user profile and are downloaded to the S-CSCF upon user registration. They represent a provisioned subscription of a user to an application. Subsequent Filter Criteria (sFC) are the filter criteria that are signaled from the SIP AS/OSA SCS/IM-SSF to the S-CSCF. They allow for dynamic definition of the relevant SPTs at application execution time.
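As an illustration of how priority-ordered Filter Criteria select application servers for a SIP request, the following sketch models SPTs as predicates combined with logical expressions. The AS URIs and trigger conditions are invented for the example and do not follow the 3GPP iFC XML syntax:

```python
# Illustrative sketch only: SPTs as predicates over a SIP request,
# combined with logical expressions, and Filter Criteria ordered by priority.
def method_is(m):
    return lambda req: req["method"] == m

def has_header(name):
    return lambda req: name in req["headers"]

def all_of(*spts):  # logical AND over several SPTs
    return lambda req: all(spt(req) for spt in spts)

filter_criteria = [
    # (priority, service point trigger, application server)
    (1, method_is("INVITE"), "sip:prepaid-as.example.net"),
    (2, all_of(method_is("INVITE"), has_header("P-Asserted-Identity")),
        "sip:ptt-as.example.net"),
    (3, method_is("MESSAGE"), "sip:im-as.example.net"),
]

def matching_servers(request):
    """Application servers to visit, in priority order."""
    return [as_uri
            for priority, spt, as_uri in sorted(filter_criteria, key=lambda fc: fc[0])
            if spt(request)]

invite = {"method": "INVITE",
          "headers": {"P-Asserted-Identity": "sip:[email protected]"}}
print(matching_servers(invite))   # both INVITE criteria match, in priority order
```

A request matching several criteria is thus routed through the matching application servers one after another, which mirrors the sequential triggering described for Figure 3.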
[Figure content: an INVITE from the calling UE (1) reaches the P-CSCF (2) and then the S-CSCF (3), which applies Filter Criteria downloaded from the HSS to route the request sequentially through the IM-SSF (Application Server-A, giving access to existing IN (WIN/CAMEL) applications such as a Prepaid App. on an IN SCP via WIN/CAP), a SIP Application Server-B (PTT App.) (4), and the OSA SCS (Application Server-C, giving access to external 3rd party applications such as a Call Restriction App. on an OSA App Server via the OSA API) (5), before forwarding it to the destination (6).]
Figure 3. S-CSCF as Service Broker across Different IMS Application Servers
On the Application Server, Service Platform Trigger Points (STPs) are the points in the SIP signaling that instruct the SIP AS, OSA SCS and IM-SSF to execute the service logic.
The S-CSCF may receive a set of Filter Criteria in the iFC or sFC. In order to allow the S-CSCF to handle different Filter Criteria in the right sequence, a priority shall be assigned to each of them. The S-CSCF will then sequence the handling of these Filter Criteria based on this priority. The mechanism of Filter Criteria thus enables the S-CSCF to perform brokering functions as depicted in Figure 3. However, the actual interaction logic for managing the interactions still needs to be developed. Figure 3 shows that a SIP INVITE sequentially triggers the Prepaid, Push to Talk (PTT), and Call Restriction applications residing in the IN SCP, SIP, and OSA Application Servers, respectively, before it is routed to its destination.

2.2. Application Server and its SCIM
In the 3GPP IMS service provision architecture in Figure 4, the SIP AS contains a SCIM (Service Capability Interaction Manager) to manage feature interactions and do ‘work flow management’ between SIP Application Servers. The SCIM thus can provide service brokering functions for the services on the 3GPP SIP Application Server as it will arbitrate the execution of service logic across multiple SIP applications. This brings up the possibility of combining the Filter Criteria in the S-CSCF and the SCIM in the Application Server to create multiple levels of service brokering as indicated in Figure 4. Figure 4 shows that three services, PTT (Push to Talk), GLM (Geographical Location Manager), and Presence, residing on the SIP AS (Application Server-B) are managed by the SCIM, while the services residing on the three Application Servers are managed by the Filtering Criteria in the S-CSCF. This in essence creates a new challenge of managing the interaction logic of “distributed service brokering functions”.

[Figure content: an INVITE from the calling UE (1) passes through the P-CSCF (2) to the S-CSCF (3), which uses Filter Criteria from the HSS to arbitrate among different features across Application Server-A (Prepaid App.), Application Server-B, whose SCIM internally coordinates the separate PTT, GLM, and Presence Apps (4), and Application Server-C (Call Rest. App.) (5), before the INVITE is routed to the destination (6).]
Figure 4. Combined Use of Filtering Criteria and SCIM for Service Brokering
3. Limitation of Existing Service Brokering Functions
This section points out the limitations of the existing service brokering functions defined in the 3GPP IMS.

1. Brokering only at the SIP Protocol Level. As the analysis in Section 2 indicates, the service brokering functions currently in the 3GPP IMS operate strictly at the SIP protocol level. This implies a very severe limitation on the types of services that can be managed by the Service Broker. For example, all non-SIP applications such as HTTP web browsing, LDAP directory, and SOAP web services will be excluded from consideration.

2. Limits on Filter Criteria. The Filter Criteria currently defined by the 3GPP IMS are conditions based on the SIP REQUEST-URI, method, and header, the direction of the request (incoming or outgoing call), and the content of the SDP, as well as logical expressions of these conditions. Thus their expressive power is very limited. For example, if an application is triggered based on comparing the contents of two SIP headers, this won’t be supported by the current Filter Criteria.

3. Limits and Lack of Requirements on SCIM. The SCIM as currently defined cannot arbitrate service logic across the SIP AS, OSA SCS, and IM-SSF as it is embedded in the SIP AS. Furthermore, the requirements for the SCIM are not currently specified at all by 3GPP; as indicated in 3GPP TS 23.003, Section 5.5, “the internal structure of the application server is outside the standards.” Basically, the only service brokering function specified by 3GPP now is the Initial Filter Criteria of the S-CSCF.

4. Weak Support of Service Broker as a Stand-Alone Component. The 3GPP has yet to define a stand-alone Service Broker functional component for the integration of SIP services since its SCIM is embedded in the SIP Application Server.

5.
Little Support for Dynamic Interactions Management. Though the 3GPP IMS defines Subsequent Filter Criteria to enable dynamic feature interactions management (Section 2), the Filter Criteria in active use now are mostly Initial Filter Criteria, which define only a static priority order among multiple services. As a result, dynamic feature interaction management at runtime, such as modifying the priority sequence of services or inserting new services, is still not well understood.

6. No Support for Interactions across Multiple Users or Multiple Sessions. The 3GPP has not addressed feature interaction management issues across multiple users or across multiple sessions of a user.
4. Open Issues for Further Research
It is clear that the current service interaction management architecture in the 3GPP IMS is not sufficient to manage interactions between NGN application servers. The
open problem is what functional architecture enhancement is required to better support service interactions management based on suitable extensions of the existing IMS/NGN protocols and procedures. The current draft of 3GPP TR 23.810, “Architecture Impacts of Service Brokering”, defines what is required for Service Brokering in IMS/NGN: “The service brokering functions are to provide an end user a coherent and consistent IP multimedia service experience when multiple IP multimedia applications are invoked in a session. Such support involves identifying which applications are invoked per subscriber, understanding the appropriate order of the set of applications, and resolving application interactions during the session [TS 22.228]. The applications can reside in any type of IMS Application Servers including an IM-SSF, SIP AS, OSA SCS or other (e.g. OMA enabler) or any combination of the above.” [10]

Based on the limitations of the existing NGN service brokering functions described in Section 3, we summarize the open issues that the industry faces right now:

1. What service brokering functions can be standardized? We believe service brokering functions can be divided into two categories: on-line and off-line. Off-line functions include the following tasks:
- Identify all applications subscribed to by a user
- Understand how many ways these applications may work together by resolving their potential interactions
- Decide one or more service behaviors of the combined applications (based on the user’s expectation) for provisioning

On-line functions then are to ensure that in a live session, when these multiple applications are invoked by the user, they will work as the user expects them to. We believe the only service brokering functions that can be standardized are those on-line functional architecture elements that provide architectural support in enforcing the appropriate order of interacting applications.

2.
How much impact to the IMS core network and AS when introducing more capable service brokering functions? We believe the architecture introduced should produce as little impact as possible but, on the other hand, should provide as much flexibility as possible in order to accommodate any new applications. As these two are competing tradeoffs, the architecture needs to be carefully designed to meet both requirements with maximum benefit.

3. How to accommodate all applications deployed over the three types of IMS application servers, including integration with existing IN services such as CAMEL? Note that these IN services are not SIP-based and need to be mapped to the corresponding SIP SPTs (Section 2).

4. How to accommodate service integration across different access networks such as UMTS, WLAN, WiMAX, and cable? Ideally, there should be no issues due to the fact that all services are developed on top of a common IP layer. But in reality, each access network has its own specific QoS, security, and charging methods and also interacts differently with the core network. As a result, service integration across these various networks will need to consider integration of the heterogeneous QoS, security, and charging brought in by each different network.
5. How to support service integration between SIP and non-SIP applications and accommodate both in the IMS/NGN service architecture? Many IP services are not SIP-based, and service integration between SIP and non-SIP applications seems to provide the most fertile field for new NGN applications. For example, many emerging IPTV services are not SIP-based; however, integration of IPTV and SIP-based communication services can enable many attractive triple play services.

6. How to support service integration across multiple providers? One type of IMS Application Server is the OSA Application Server, which via the OSA SCS interface provides an open platform for any third party to become an IMS service provider through a secure interface. Ideally, the Service Broker should allow service integration over application servers of different providers without requiring each provider to expose the internal details of their services.

7. How to deal with distributed interaction management between multiple service brokers within the same or across different administrative domains, with both security and charging considerations? Such a distributed service brokering function is essential when multiple service providers are involved in IMS services.
References

[1] 3GPP TS 22.228, 3GPP Technical Specification Group (TSG) Services and System Aspects (SA); Service requirements for the Internet Protocol (IP) multimedia core network subsystem; Stage 1
[2] 3GPP TS 23.002, 3GPP TSG SA; Network architecture
[3] 3GPP TS 23.228, 3GPP TSG SA; IP Multimedia Subsystem (IMS); Stage 2
[4] 3GPP TS 23.218, 3GPP TSG CN; IP Multimedia (IM) session handling; IM call model; Stage 2
[5] 3GPP TS 24.229, 3GPP TSG CN; IP Multimedia Call Control Protocol based on Session Initiation Protocol (SIP) and Session Description Protocol (SDP); Stage 3
[6] 3GPP TS 29.228, IP Multimedia (IM) Subsystem Cx and Dx Interfaces; Signaling flows and message contents
[7] 3GPP TS 29.229, Cx and Dx Interfaces based on the Diameter protocol; Protocol details
[8] 3GPP TS 29.078, Customized Applications for Mobile network Enhanced Logic (CAMEL) Phase X; CAMEL Application Part (CAP) specification
[9] 3GPP TS 29.198, Open Service Access (OSA); Application Programming Interface (API); Part 1: Overview
[10] 3GPP TR 23.810 Draft, V0.5.0 (2007-05), “Architecture Impacts of Service Brokering”
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
A Feature Interaction View of License Conflicts

Gangadharan G.R. a, Michael WEISS b,1, Babak ESFANDIARI b, and Vincenzo D’ANDREA a
a Department of Information and Communication Technology, University of Trento, Via Sommarive, 14, Trento, 38050 Italy
b Systems and Computer Engineering, Carleton University, 1125 Colonel By Drive, Ottawa, K1S 5B6, Canada

Abstract. In this paper, we introduce the problem of license conflicts, which occurs when information assets (such as software, data, or multimedia files) are composed, derived or versioned. A license specifies a set of permissions granted by an asset owner to an asset consumer (as expressed in the form of licensing clauses), effectively waiving what would otherwise be an infringement of the owner’s intellectual rights. Thus, a license allows producers to control how consumers may use and extend the asset. New assets can be produced by composing multiple assets or deriving an asset from an existing asset, as governed by their licenses. Licenses interact with each other either directly or indirectly during the composition or derivation of assets. Licenses can also interact with other versions of the same license during the evolution of an asset. We view interactions of licenses as feature interactions, especially if those interactions result in conflicts. Here, features correspond to licensing clauses. In this paper, we identify and analyze feature interactions of licenses during the composition, derivation, and evolution of assets.

Keywords. Feature Interactions, Information Assets, License Conflicts
1. Introduction

Information assets (referred to simply as assets in this paper) are described as information that is of value to an organization. An asset can be software, a component, service, process or content that holds intellectual value. It can be combined with other assets. A new asset can also be derived from an existing asset. New versions of an asset can, furthermore, be released as a representation of enhancements to its functional or non-functional specification. However a new asset is produced, its distribution involves a license that must represent the unified view of the licenses of the composed assets, or the parent asset. During the formation of a new asset, the licensing clauses of one asset may conflict with the licensing clauses of other assets. These conflicts are similar to the conflicts observed in feature interactions. Hence, licensing clauses are modeled as features in this

1 Corresponding Author: Department of Systems and Computer Engineering, Carleton University, 1125 Colonel By Drive, Ottawa, K1S 5B6, Canada; E-mail: [email protected].
G.R. Gangadharan et al. / A Feature Interaction View of License Conflicts
paper. Understanding these interactions, and developing techniques for their detection and resolution will be critical for the legally authorized use of assets on any significant scale. Furthermore, the detection of these feature interactions will be the cornerstone for developing a framework for the semi-automatic composition of licenses. In this paper, we take first steps towards such a framework by providing a conceptualization of license conflicts as feature interactions, and a classification of license feature interactions. This paper introduces license conflicts as feature interactions, which may occur when information assets are composed, derived or versioned. It is organized as follows. Section 2 introduces the fundamentals of asset licensing and licensing clauses. Section 3 frames the problem of feature interactions for licenses. We classify licensing conflicts that can arise from these interactions in Section 4. In Section 5, we illustrate the various scenarios of feature interactions in the context of licenses specific to services, music or software assets. Section 6 discusses related work in this field, followed by our conclusions in Section 7.
2. Basics of Asset Licensing

The distribution of an asset is always accompanied by a license, which describes the terms and conditions imposed by its producer. A license reflects the overall business value of the asset to its producers and consumers. Licensing is often used to protect the intellectual rights of asset producers, thereby turning the assets into a source of revenue, and licenses into a tool for business strategy. Also, licenses give developers control over how consumers can use the licensed assets. Consequently, asset producers rely on licenses to protect their assets from unauthorized consumption. An asset producer (the licensor) never transfers ownership of the asset to the consumer. Instead, the consumer (the licensee) merely obtains the right to use/extend the asset subject to the restrictions imposed by the license [1]. Thus, asset licensing is considered to include all transactions between a licensor and a licensee, in which the licensor agrees to grant the licensee the right to use and/or extend (by deriving from it) the asset under predefined terms. More broadly, an asset license is expected to have these elements [2]:

1. Subject of the License: The subject of the license relates to the definition of the asset being licensed, such as a unique identification code for the asset, a name for the asset, and other additional information.

2. Scope of Rights: The scope of rights reflects what the licensee can do with the licensed asset [3]. This defines the extent to which the asset can be used, accessed, and value added to it (composition or derivation). Several different grants of rights are described, including the right to reproduce, display, access, modify, make derivative works, sell or distribute, import, and sub-license to another party, who can do any of the above. The Scope of Rights falls into four types: Usage, Reuse, Manage, and Transfer.
• Usage: Usage pertains to the end use of the asset. Usage rights are generally rendering actions like execute, play, display or print.
• Reuse: Set of rights pertaining to the reuse of an asset by modifying, excerpting, or aggregating. Reuse can be in full or in part.
• Manage: Rights pertaining to the digital management of an asset. This includes housekeeping actions such as back up, install, or uninstall.
• Transfer: Transfer rights apply to the actions that allow a person or agent to transfer some specific rights to another person or agent. In general, transfer rights include the right to sell, lend, or lease. Transfer rights may involve ownership transfer, and may allow the asset to be used in perpetuity with or without exchange of value.

3. Financial Terms: Describe how the licensee will pay for the use of an asset. Consumers make payments either through royalties or a lump-sum payment. Generally, royalties are based on per-unit sales. Lump-sum payments are an alternative to royalties. Sometimes, lump-sum payments are also used in addition to royalties. A lump-sum payment can be paid by the consumer in advance of using the service (pre-paid) or at a later stage (post-paid). Alternatively, the producer can make the asset available free of charge.

4. Warranties, Indemnification, and Limitations: Address issues of who bears the financial risk of asset defects or the legal risk of a third party claiming that the asset infringes on or violates their intellectual rights.
• Warranties: A warranty is a promise regarding the description of the assets and their quality, stated by the producer.
• Indemnification: Provision of defense by the licensor for the licensee if a third party sues the licensee, alleging that the licensee’s use of the licensed asset infringes on or violates their intellectual rights [4].
• Limitation of liability: Limitation of liability deals with the liability of each of the parties under the license agreement.

5. Evolution: Pertains to the rights over future releases or versions of an asset.

Furthermore, there are licensing clauses that provide moral support to consumers and providers:
• Attribution: An asset may expect attribution for its use in any form by another asset. Thus, attribution is ascribing an asset to its creator.
• Non-Commercial Use: An asset can allow or deny other assets to use it either for non-commercial purposes or for commercial purposes.
• Sharealike: An asset may expect another asset to reflect the same terms and conditions (similar to the Copyleft of GNU2 or the Sharealike of Creative Commons [5]).

As rights for assets vary based on the nature and context of the assets involved, the expression of rights for a particular asset will be more specific. For example, one of the rights for a multimedia asset can be to play it. The concept of playing cannot be directly applied as a right to web service assets. Similarly, the rights for a web service differentiate between the levels of interface and implementation, which are not separate for multimedia or software assets.
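As a concrete illustration of the five license elements above (together with the moral-support clauses), a license for a music asset might be recorded roughly as follows. The field names and values are our own invention for illustration, not a standard rights-expression language:

```python
# Hypothetical license record for a music asset, organized by the five
# elements discussed above plus the moral-support clauses.
music_license = {
    "subject": {"id": "asset-0001", "name": "Example Song"},
    "scope_of_rights": {
        "usage": ["play"],       # rendering actions
        "reuse": ["excerpt"],    # reuse in part is allowed
        "manage": ["back_up"],   # housekeeping actions
        "transfer": [],          # no sell/lend/lease rights granted
    },
    "financial_terms": {"model": "lump_sum", "timing": "prepaid"},
    "warranties": {"warranty": True, "indemnification": False},
    "evolution": {"rights_over_future_versions": False},
    # Moral-support clauses:
    "attribution": True,
    "non_commercial_use": True,
    "sharealike": False,
}
```

Note how the Scope of Rights entry is specific to the asset type: a multimedia asset lists `play` under usage, whereas a web service license would instead distinguish interface-level from implementation-level rights.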
2 http://www.gnu.org/copyleft/

3. Feature Interaction Problem for Asset Licenses

In software, a feature is a component of additional functionality, i.e., it extends the core functionality of the software [6]. Features are added incrementally. This can happen at different stages in the lifecycle of the software, and changes are usually made by different
developers. Features are often developed and tested independently, or within a particular context. However, when several features are combined, there may be interactions between the features. In the context of software, interactions are behavioural modifications in which one feature affects the behavior of another. Such effects can be benign and even required, or adverse. Thus, the feature interaction problem concerns the coordination of features such that they cooperate towards a desired result at the application level. Applied to asset licenses, licenses can be thought of as having several features (usually referred to as licensing clauses), such as allowing users to create derivative works from the asset. When licenses are used together (in some way) within the same context (we intentionally avoid the term “combined” here since, as we shall see later, composition has a specific meaning for licenses), there can be conflicts between the licenses. Such conflicts take the form of clauses (i.e. features) of one license affecting clauses of another license in an adverse way. We can conceptualize the relationship between assets and their associated licenses and the interactions of licenses as shown in Figure 1. Assets are represented as circles, and their associated licenses using earmarked rectangles. Solid lines between assets show relationships between assets. Associations between assets and their licenses are shown as directed lines, pointing from licenses to assets. Interactions between licenses are shown as bidirectional dashed lines.
Figure 1. Assets, Licenses, and Interactions
Licensing clauses can be classified into the following three categories based on how the interactions among them affect one another:
1. Independent clauses: These licensing clauses do not affect the resulting license. For example, a software component with a license clause similar to No-Attribution of a Creative Commons (CC) license will not affect the resulting license. A No-Attribution clause leaves the choice of Attribution clause open to the resulting license, and thus has no impact on it.
2. Compelling clauses: The presence of certain clauses in a license may restrict the clauses of a resulting license, and forces the resulting license to abide by the compelling clauses. For example, the copyleft clause of the GNU GPL makes the resulting license viral. Licensees must distribute the asset to other parties under the same terms as the GPL.
3. Repelling clauses: Certain clauses will not allow the combination with certain other licensing clauses. For example, a component with a licensing clause similar
to the Non-Commercial clause of a CC license will not allow the component to interact with another component under a licensing clause that allows commercial use. The Non-Commercial clause states that the licensee may not use the component for commercial purposes.

There are certain license clauses which are broader in scope of operation than certain other clauses. Assume two assets with different license clauses, e.g. composition and derivation. If one asset allows composition, a license allowing derivation can also be used, because derivation subsumes composition. We say that derivation and composition are compatible, or derivation can be redefined as composition. The concept of redefinition (at the license clause level) is similar to the concept of redefinition of a method in a subclass [7]. Redefinition implies that two license clauses are compatible if the given license clause is more permissive (accepts more) than the corresponding clause in the other license.3

License conflicts occur when licenses with incompatible clauses are combined. In certain cases, the absence of one or several of these clauses will not cause conflicts. Table 1 lists rules to determine the compatibility of license clauses with unspecified (“don’t care”) license clauses. Together with redefinition, Table 1 allows us to determine when different types of license clauses are, in effect, compatible with one another. The details of checking license compatibility are, however, out of scope for this paper, and are described elsewhere.

Specified Clause    | Compatible | Rationale
Composition         | NO         | A license denying composition cannot be compatible with a license allowing composition.
Derivation          | NO         | Derivation specifies the creation of an asset based on one or more existing assets.
Attribution         | YES        | The requirement to specify attribution will not affect the compatibility when unspecified.
Sharealike          | YES        | The composite license must be similar to the license with the Sharealike clause.
Non-commercial use  | NO         | Commercial Use is denied by Non-commercial use.
Payment             | YES        | Payment clauses do not affect compatibility directly, if unspecified.

Table 1. Rules for determining compatibility with unspecified licensing clauses
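Read as executable rules, Table 1 amounts to a lookup from clause type to a yes/no compatibility verdict. The sketch below is our own encoding of that table (the clause names and function are illustrative, not part of the paper's formalism):

```python
# Table 1 as a lookup: for each clause type, is a license that specifies
# the clause compatible with a license that leaves it unspecified?
# (Illustrative encoding; clause names are ours.)
TABLE_1 = {
    "composition": False,         # denying vs. allowing composition conflicts
    "derivation": False,          # derivation creates a new asset from existing ones
    "attribution": True,          # attribution does not affect compatibility
    "sharealike": True,           # composite license must mirror Sharealike terms
    "non_commercial_use": False,  # commercial use is denied by non-commercial use
    "payment": True,              # payment does not affect compatibility directly
}

def compatible_when_unspecified(clause):
    """Return True if a license specifying `clause` can be combined with a
    license that says nothing ("don't care") about the same clause."""
    return TABLE_1[clause]
```

For instance, `compatible_when_unspecified("attribution")` is True, while `compatible_when_unspecified("composition")` is False, mirroring the first and third rows of the table.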
3 Two license clauses are trivially compatible if they are identical.

4. Types of Feature Interactions in Asset Licenses

Licenses associated with assets interact in three cases:
1. When an asset is combined with other assets, their associated licenses also need to be combined, and these licenses interact.
2. When new assets are derived from an existing asset, the license of the existing asset interacts with that of the derived one.
3. When a new version or release of an asset is published, the license of the evolved asset interacts with the license of the previous version.
We describe these feature interactions in more detail below, and provide a general template for each type. Examples of using them are given in Section 5.

4.1. Feature Interactions During the Composition of Asset Licenses

Composition is a form of integrating assets in a way that adds value to the assets taken individually. When assets are composed, their licenses are also composed. If composition results in incompatibilities, then the corresponding assets cannot be composed. The composition of licenses can produce a set of compatible licenses for the composite asset.4 The composite license can contain licensing clauses which need not be present in the licenses of the assets that are composed. Assume P and Q are assets composed to form a composite asset R, as shown in Figure 2. In the figure, I_XY represents the interaction between the assets X and Y. It is expected that the licenses L(P) and L(Q) are compatible, which, in turn, requires their license clauses to be compatible. A detailed algorithm for checking license compatibility is provided in another paper [8].
Figure 2. License Interactions during Composition
This composition can be represented as:

LC(R) ⊃ (LC(P) ∩ LC(Q)) ∪ LC_NEW

where LC_NEW is a set of licensing clauses exclusive to the composite asset. We can distinguish two types of interactions between the licenses:
1. Interactions between the licenses being composed (I_PQ)

4 That is, there are multiple licenses to choose from for the composite asset that are all compatible with the composed licenses. This aspect is further explored elsewhere.
2. Interactions between the composite license and each of the licenses being composed (I_RP and I_RQ)
In addition to those direct interactions, there can also be indirect license interactions. Figure 3 provides an example. The indirect interaction IND_Rγ is shown by a dot-dashed line. Consider an asset Y that incorporates another asset X. For example, Y is a service that provides weather information, such as Yahoo! Weather; in turn, it gets its weather data from another service X. Y has a licensing agreement, L(X), with X for receiving the weather data. However, the copyright over the data that Y is offering as a service remains with X.
Figure 3. Indirect Licensing Clauses Interactions
An end user γ that wants to use the service Y is bound to the terms of a license, L(Y). For example, the clauses in this license could include:
• Not to reproduce, (re)sell or exploit the service for commercial purposes.
• Not to modify, distribute or create derivative works based on the service.
• Not to access the service by any other means than through the interface provided by Y for accessing the service.
These terms restrict γ’s use of Y. γ cannot use the service for any commercial purposes, nor can it derive or distribute the service. However, assume that the license of X allows any party to use the service and to create derivative works based on X. The license terms imposed by Y thus restrict γ from doing something that L(X) permits. For example, if γ were to create a derivative work of X, its license L(γ) would now be in conflict with L(Y).

4.2. Feature Interactions During the Derivation of Asset Licenses

For a new asset, a new license is given by its developer. This new license might be an existing standard license like the GNU GPL. The new license can also be derived from an existing license. The concept of a derivative license is similar to that of derivative software in Free/Open Source Software (FOSS) [9].
A derived license must include all licensing clauses from its parent license, but can add new clauses. The template for license derivation is shown in Figure 4. The derivation of a license L(T) from L(S) can be represented as:

LC(T) ⊇ LC(S) ∪ LC_NEW

where LC_NEW is a set of licensing clauses exclusive to the derivative asset.
Figure 4. License Interactions during Derivation
There can be interactions between the newly added clauses LC_NEW and the existing clauses of L(S). We expect LC_NEW to be compatible with L(S).

4.3. Feature Interactions During the Evolution of Asset Licenses

Over time, an asset can evolve in the following ways:
• Modifications by the producer of functional or non-functional properties of the asset, represented by new releases or new versions.
• Termination of the currently running asset and substitution by a new asset with different behavior.
• The same asset, but switching to a different asset license.
When an asset is released in several versions, each version of the asset can have a different license. However, the license of a particular version must not contradict that of a previous version. Here, the licensor is not creating a new license based on an existing one (different from a derivative license). The versions of licenses interact as shown in Figure 5 as the asset evolves over time.
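If a license is modeled as a set of clauses, the composition and derivation templates above reduce to simple set relations. The sketch below is an illustration under that assumption (the paper writes ⊃ for composition, which we relax to ⊇ here; clause names are invented):

```python
def valid_composition(lc_r, lc_p, lc_q, lc_new=frozenset()):
    """Composition template: LC(R) must contain the clauses common to the
    composed licenses LC(P) and LC(Q), plus any clauses LC_NEW exclusive
    to the composite asset."""
    return lc_r >= ((lc_p & lc_q) | lc_new)

def valid_derivation(lc_t, lc_s, lc_new=frozenset()):
    """Derivation template: LC(T) must include all clauses of the parent
    license LC(S), plus any new clauses LC_NEW."""
    return lc_t >= (lc_s | lc_new)

# Illustrative clause sets (names are ours):
parent = {"attribution", "sharealike"}
derived = {"attribution", "sharealike", "non_commercial_use"}
print(valid_derivation(derived, parent, {"non_commercial_use"}))  # True
print(valid_derivation({"attribution"}, parent))  # False: drops Sharealike
```

The second call fails because the candidate derived license drops a clause of its parent, which the derivation template forbids; evolution has no such set-inclusion template, since a new version's license is constrained only not to contradict its predecessor's.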
5. License Conflict Scenarios as Feature Interactions

As assets are composed, derived or evolved, license conflicts can arise as a result of feature interactions of licensing clauses. Below, we describe three scenarios of licensing interactions for different types of assets. We analyze each scenario to determine the cause of the license conflict in terms of the type of feature interaction it represents, and instantiate the corresponding template.
Figure 5. License Interactions during Evolution
5.1. License Conflicts of Web Services

Example 1. (Web service composition) Consider a restaurant service R that composes a map service M and a resource allocation service I. Assume that M allows composition and permits the service to be used for any purpose, and that I allows derivation (which subsumes composition, as per Table 1), but can be used only for non-commercial purposes. When M and I interact during composition, a license conflict occurs, because I denies commercial use. Based on Table 1, these license interactions cause a conflict, resulting in incompatible licenses. The Non-Commercial Use feature in the license of I conflicts with the unspecified Non-Commercial Use feature in the license of M. If Non-Commercial Use is not specified, Commercial Use is deemed to be incompatible. Figure 6 instantiates the template for Composition of Asset Licenses.
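Example 1 can be replayed mechanically. In this toy encoding (ours, not the paper's formal model), each license maps clause names to allowed/denied, absent keys are unspecified, and a conflict is any clause both licenses specify but disagree on:

```python
# Toy licenses for Example 1 (clause names are illustrative).
map_service = {"composition": True, "commercial_use": True}         # M
allocation_service = {"derivation": True, "commercial_use": False}  # I

def conflicts(lic_a, lic_b):
    """Return the clauses both licenses specify but disagree on."""
    return sorted(c for c in lic_a.keys() & lic_b.keys()
                  if lic_a[c] != lic_b[c])

print(conflicts(map_service, allocation_service))  # ['commercial_use']
```

A fuller check would also apply the redefinition rule from Section 3, e.g. treating a license that allows derivation as also allowing composition; this sketch only reports direct disagreements.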
Figure 6. Web Service Composition and License Interactions
5.2. License Conflicts in Multimedia Files

Example 2. (Modification and re-licensing of a music file) Consider a music file S with a license L_Play,Derive that allows users to play the file (render the asset in audio
form), and to create derivative works based on it. Assume that another music file T is derived from this file. If the owner of the derived music file issues the file under a license L_Play,Save that allows users to save the file (save a copy, including any changes, to permanent storage), this license conflicts with the license of the parent music file S. The play feature in the license of the parent music file does not, by itself, grant the right to store the file in modified form, although the derive feature allows modification. Thus, there is a conflict with the save feature in the license of the derived file. The interaction is the result of a Derivation of Asset Licenses. Unlike in Evolution of Asset Licenses, the modifications to the file and its license are not made by their original creator. Figure 7 instantiates the template for Derivation of Asset Licenses interactions as applied to this scenario.
Figure 7. Modification and re-licensing of a music file as license derivation
5.3. License Conflicts in Software Assets

Example 3. (Re-releasing an asset under a new license) Consider a software component S1 released under the GNU General Public License (GPL)5. At some point in the future, the licensor may decide to release a new version S2 under two different licenses, say, the GNU GPL and the Affero GPL6. However, the Affero GPL is incompatible with GNU GPL version 2 because of its section 2(d), which covers the distribution of application programs via web services or computer networks. Thus, the release of S2 under the Affero GPL conflicts with the license of the previous version S1. Software released under the GNU GPL cannot be re-released under a GPL-incompatible license. This conflict is due to changes made to the license of a component. It does not fall under Derivation of Asset Licenses, however, as the licensor did not create a new license based on an existing one. Instead, the licensor re-released a new version S2 of component S1 under a license that was incompatible with the existing license. This situation is shown in Figure 8, which instantiates the template for Evolution of Asset Licenses interactions. Here, the existing license was the GNU GPL, which requires that software released under it cannot be re-released under an incompatible license such as the Affero GPL.

5 http://www.gnu.org/copyleft/gpl.html
6 http://www.affero.org/oagpl.html
Figure 8. Re-releasing an asset under a new license as license evolution
6. Related Work and Discussions

An asset is generally distributed with a license that governs what asset licensees can do. A license [1] includes all transactions between the licensor and the licensee, in which the licensor agrees to grant the licensee the right to use and access the asset under predefined terms and conditions. Asset licenses interact with one another during the course of the composition, derivation and evolution of assets, and conflicts may arise due to incompatibilities between license clauses. In our own work, we have studied licensing of services [10,11], and formalized the licensing clauses for services [12]. There has been related work on policy conflicts [13,14,15,16]. Policies and licenses are similar in that they govern what an asset does, but they are not the same. Policies are commonly used for access control, quality of service, or other management tasks [16]. They capture high-level goals that can be enforced automatically. Policies are meant to be defined by users, allowing them to customize the behavior of a system. Policies provide the means for specifying and modulating the behavior of a feature to align its capabilities and constraints with the requirements of its users [15]. Licenses primarily focus on usage terms and access methods to assets, thus governing what users can do with an asset. They similarly modulate the use (not the behavior) of the asset. There are important similarities and differences between policy conflicts and license conflicts. A policy conflict occurs if there are policies (for example, authentication or privacy policies) specified on two features that refer to their corresponding operations, and the policies are not compatible [13]. Policy conflicts are particularly prone to cause user confusion, as policies are often specified by the users as part of customizing a feature [14].
License conflicts occur in the following scenarios:
• The license of a composite asset (an aggregation of two or more assets) should be compatible with the licenses of all the assets being composed.
• As assets evolve over time, the changes introduced in the licensing clauses (addition of a new clause or modification of an existing clause) should be compatible with the other existing licensing clauses.
• The license of a derivative asset (as it is inherited from a parent asset license) should be compatible with the parent license.
License conflicts directly preclude the making of composite, modified, or derivative assets, thus causing business losses.
7. Concluding Remarks

When assets are combined with other assets, whether or not they can be combined is determined by the compatibility of their associated licenses. New assets cannot be derived from existing assets unless the license of the existing asset is compatible with that of the derived one. The evolution of assets should be consistent with the corresponding licenses. In this paper, we have modeled interactions of licenses as feature interactions, especially where those interactions result in conflicts. Using this feature interaction view, we have detected license conflicts during composition, derivation, and evolution. We are continuing our work on resolving license conflicts through feature interaction approaches.
References
[1] Classen, W.: Fundamentals of Software Licensing. IDEA: The Journal of Law and Technology 37(1) (1996)
[2] World Intellectual Property Organization: Successful Technology Licensing. WIPO Publishers, Geneva, Switzerland (2004)
[3] Garcia, R., Gil, R., Delgado, J.: A Web Ontologies Framework for Digital Rights Management. Journal of Artificial Intelligence and Law, Online First (http://springerlink.metapress.com/content/03732x05200u7h27) (2007)
[4] Chavez, A., Tornabene, C., Wiederhold, G.: Software Component Licensing: A Primer. IEEE Software 15(5) (1998) 47–53
[5] Fitzgerald, B., Oi, I.: Free Culture: Cultivating the Creative Commons. Media and Arts Law Review (2004)
[6] Calder, M., Kolberg, M., Magill, E., Reiff-Marganiec, S.: Feature Interaction: A Critical Review and Considered Forecast. Computer Networks 41(1) (2003) 115–141
[7] Jezequel, J.M., Train, M., Mingins, C.: Design Patterns and Contracts. Addison-Wesley (1999)
[8] Gangadharan, G.R., Weiss, M., D’Andrea, V., Iannella, R.: Service License Composition and Compatibility Analysis. In: Proceedings of the International Conference on Service Oriented Computing (ICSOC’07) (2007)
[9] Feller, J., Fitzgerald, B.: A Framework Analysis of the Open Source Software Development Paradigm. In: Proceedings of the 21st Annual International Conference on Information Systems (2000) 58–69
[10] D’Andrea, V., Gangadharan, G.R.: Licensing Services: The Rising. In: Proceedings of the IEEE Web Services Based Systems and Applications (ICIW’06), Guadeloupe, French Caribbean (2006) 142–147
[11] Gangadharan, G.R., D’Andrea, V., Weiss, M.: Free/Open Services: Conceptualization, Classification, and Commercialization. In: Proceedings of the Third IFIP International Conference on Open Source Systems (OSS), Limerick, Ireland (2007)
[12] Gangadharan, G.R., D’Andrea, V.: Licensing Services: Formal Analysis and Implementation. In: Proceedings of the Fourth International Conference on Service Oriented Computing (ICSOC’06), Chicago, USA (2006) 365–377
[13] Sahai, A., Thompson, C., Vambenepe, W.: Specifying and Constraining Web Services Behaviour through Policies. In: Proceedings of the W3C Workshop on Constraints and Capabilities for Web Services (2004)
[14] Reiff-Marganiec, S., Turner, K.: Feature Interaction in Policies. Computer Networks 45(5) (2004) 569–584
[15] Kamoda, H., Yamaoka, M., Matsuda, S., Broda, K., Sloman, M.: Policy Conflict Analysis Using Free Variable Tableaux for Access Control in Web Services Environments. In: Proceedings of the 14th Intl. World Wide Web Conference (WWW) (2005)
[16] Turner, K., Blair, L.: Policies and Conflicts in Call Control. Computer Networks 51 (2007) 496–514
Managing Feature Interaction by Documenting and Enforcing Dependencies in Software Product Lines

Roberto Silveira SILVA FILHO and David F. REDMILES
Department of Informatics, Bren School of Information and Computer Sciences
5029 Donald Bren Hall, Irvine, CA 92697-3440
{rsilvafi, redmiles}@ics.uci.edu
Abstract. Software product line engineering provides a systematic approach for the reuse of software assets in the production of similar software systems. To this end, it employs different variability modeling and realization approaches in the development of common assets that are extended and configured with different features. The result is usually generalized and complex implementations that may hide important dependencies and design decisions. Therefore, whenever software engineers need to extend the software product line assets, there may be dependencies in the code that, if not made explicit and adequately managed, can lead to feature interference. Feature interference happens when a combined set of features that extend a shared piece of code fail to behave as expected. Our experience in the development of YANCEES, a highly extensible and configurable publish/subscribe infrastructure product line, shows that the main sources of feature interference in this domain are the inadequate documentation and management of software dependencies. In this paper, we discuss those issues in detail, presenting the strategies adopted to manage them. Our approach employs a contextual plug-in framework that, through the explicit annotation and management of dependencies in the software product line assets, better supports software engineers in their extension and configuration.

Keywords: Feature interaction, software product lines, product line documentation, contextual component frameworks, software dependencies, and publish/subscribe infrastructures.
Introduction The need for faster software development cycles that meet the constantly evolving requirements of a problem domain has driven industrial and academic research in the area of Software Product Lines (SPL for short). The goal of SPL engineering is “to capitalize on commonality and manage variability in order to reduce the time, effort, cost and complexity of creating and maintaining a product line of similar software
R.S. Silva Filho and D.F. Redmiles / Documenting and Enforcing Dependencies in SPL
systems” [1]. In SPLs, reuse of commonality allows the reduction of the costs of producing similar software systems, while variability permits the customization of software assets to fit different requirements of the problem domain [2]. SPLs are usually designed using the concepts of features and variation points [3]. Variation points represent the locations in the software that enable choices, while features represent user-observable units of variability associated with one or more of those points. In many industrial settings, commonality is implemented in the form of large pieces of software such as object-oriented frameworks, whereas features implement new behavior by direct extension and source code configuration [4]. In such approaches, features can interact in a positive way, by the combined extension of the common code at different variation points. They can also interact in a negative way, by defining behaviors that are incompatible with other features installed in the same infrastructure. In the latter case, the interaction is also called interference. A feature interference occurs when the addition of a new feature affects or modifies the operation of the system in an unpredicted way [5]. In fact, many nontrivial feature interferences in software are a result of conflicting assumptions about service operations and system capabilities that are not explicitly documented or exposed to the programmers of those features [6]. SPLs are no exception. The dimensions of extensibility in SPLs are not always orthogonal, and their dependencies are not always explicit. As a consequence, whenever software engineers need to extend an SPL with new features, there may be dependencies within and among variation points and features that, if not documented and managed, can lead to feature interference. Our experience in the development of YANCEES [7], a highly extensible and configurable publish/subscribe SPL, made several of those issues evident.
In particular, we observed issues associated with the lack of management and documentation of fundamental, configuration-specific and incidental dependencies, as well as emerging system properties. Those dependencies are explained as follows.

Fundamental (or problem domain) dependencies encompass the logical relationships that are common to all software product line members. For example, in the publish/subscribe domain, the process of event publication, followed by routing based on subscription expressions and the subsequent notification of subscribers, defines a common behavior shared by all publish/subscribe infrastructures. The fundamental dependencies that involve this common behavior restrict variability in the problem domain and create configuration rules that must be obeyed in the extension and configuration of an SPL. Moreover, they restrict some configuration-specific dependencies.

Configuration-specific dependencies include the compatibility relations between features that extend or refine the common SPL behavior in the implementation of the different SPL members. Those dependencies are expressed in the form of inter-feature relations such as “compatible”, “incompatible”, “optional”, “exclusive”, “alternative”, and others. For example, ‘content-based’ filtering, a ‘tuple-based’ message format and ‘push’ notifications define a compatible combination of features present in many content-based publish/subscribe infrastructures, whereas ‘content-based’ filtering and events represented as ‘objects’ are usually incompatible.

Incidental (or technological) dependencies are a consequence of the variability realization approaches employed in the construction of the SPL. Examples of such
approaches include design patterns, parameterized classes, aspects, mixins and others [8]. The benefits provided by each of these approaches come with extra costs: an increase in overall software complexity and the need to comply with their configuration and extension rules. For example, the use of a pattern such as Strategy requires the proper implementation of its interfaces and selection criteria. Moreover, these approaches usually introduce indirections in the code that, when applied in combination, may hinder its legibility and extension ([9], p. 295).

Dependencies on emerging system properties represent assumptions about system-wide guarantees, for example security, guaranteed event delivery, total order of events, and other properties that depend on different configuration parameters of the infrastructure. These system-wide properties may vary due to complex dependencies between the system components and parameters. In YANCEES, for example, the total order of events is a function of the distribution of the system: in peer-to-peer settings the total order of events is not preserved, whereas in centralized settings it is assured by the infrastructure.

The inherent variability in software product lines, together with the need to cope with fundamental, configuration-specific and incidental dependencies and dependencies on system properties, not only creates a configuration management problem, but also hinders the reuse and proper extension of software product lines. It makes it possible for changes in different parts of the software to break implicit system assumptions, leading to feature interference. In fact, our experience in the design, implementation and use of YANCEES shows that software engineers lack appropriate knowledge of those dependencies and assumptions, what we call the variability context: the information necessary to understand, extend and customize the software product line.
They also lack automated support, in the form of tools and mechanisms, that enforces those relations in the SPL, providing runtime and configuration-time guarantees. This paper describes those issues in detail in the design and development of YANCEES, a publish/subscribe infrastructure SPL, and discusses the strategies used to support software engineers in extending and configuring this infrastructure. In particular, we argue for the use of dependency models in both design and implementation, with the elucidation and enforcement of those dependencies in the product line code. Our approach represents dependencies in the code artifacts, allowing their automatic enforcement at both load time and runtime, while supporting software engineers in extending and configuring the product line by making the hidden dependencies and configuration rules of the software explicit. The contributions of this paper span two fronts. From a feature interaction perspective, we provide a case study that shows how the lack of documentation and enforcement of fundamental, configuration-specific and incidental dependencies, as well as emerging system properties, can hinder feature reuse and the extension of SPLs. From a feature interaction research perspective, we show how the explicit documentation of those dependencies in the SPL, combined with the use of contextual component frameworks and configuration managers, can help in the detection and prevention of feature interference. This paper is organized as follows: Section 1 presents the technological background of our approach. Section 2 discusses our experience in the design,
implementation and extension of YANCEES. Section 3 discusses our approach to managing those issues. Section 4 discusses related work, and Section 5 concludes.
1. Background

The work presented in this paper relies on concepts from the areas of SPL variability modeling and software component frameworks. We introduce these concepts here.

1.1. Variability modeling

Variability modeling approaches provide a notation for representing choices and constraints (dependencies and rules) involving units of variability (features, variants, components) in SPLs. First-generation modeling languages such as FODA [10] represent variability in terms of features and their compatibilities (alternative, multiple, optional and mandatory) and incompatibilities (exclusive or excludes) around predefined variation points. Researchers soon realized the importance of representing other kinds of dependencies in these models, proposing different extensions. For example, Ferber et al. [11] introduce the notions of “intentional”, “environmental” and “usage” dependencies, whereas Lee and Kang [12] propose the representation of runtime feature interactions such as “activation” and “modification” dependencies. Those models, however, suffer from a fundamental problem: the lack of representation of dependencies as first-class entities, and of their traceability to implementation concerns. The inadequate management and representation of dependencies in SPLs [13] motivated the development of second-generation variability modeling approaches [14]. These approaches represent dependencies as first-class entities and support variability management through the use of constraint checkers. Together, they provide software engineers with an overview of variability in the system, supporting their navigation through the space of valid product configurations and the derivation of individual product members that meet a valid set of quality and feature attributes. An example of such a variability model and environment is COVAMOF [15], which also represents overall system quality attributes and tacit knowledge in the documentation of more complicated relations between features.
While very useful for representing system variability, commonality and the interactions between features, these models fail to (1) support source code-level maintenance and evolution, and (2) support the runtime configuration management of features [16]. In this paper, we propose an approach that, by integrating design models into the code, allows software engineers to better understand the underlying assumptions of the implementation, while allowing the infrastructure to automatically enforce those relations, preventing feature interference.
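To make the constraint-checking idea concrete, the following is a minimal sketch of the kind of check a variability constraint checker automates. The class name, the two relation types ("requires", "excludes") and the feature names are our own illustration, not the API of FODA-based tools or COVAMOF.

```java
import java.util.*;

// Minimal sketch of configuration constraint checking over a feature model.
// Feature names and relation types are illustrative, not a real tool's API.
class FeatureConstraints {
    private final Map<String, Set<String>> requires = new HashMap<>();
    private final Map<String, Set<String>> excludes = new HashMap<>();

    void require(String feature, String dependency) {
        requires.computeIfAbsent(feature, k -> new HashSet<>()).add(dependency);
    }

    void exclude(String feature, String other) {
        // Exclusion is symmetric: selecting either one forbids the other.
        excludes.computeIfAbsent(feature, k -> new HashSet<>()).add(other);
        excludes.computeIfAbsent(other, k -> new HashSet<>()).add(feature);
    }

    // A configuration is valid when every required feature is selected
    // and no two mutually exclusive features are selected together.
    boolean isValid(Set<String> config) {
        for (String f : config) {
            for (String dep : requires.getOrDefault(f, Set.of()))
                if (!config.contains(dep)) return false;
            for (String banned : excludes.getOrDefault(f, Set.of()))
                if (config.contains(banned)) return false;
        }
        return true;
    }
}
```

For example, declaring that content-based filtering requires tuple events, and that tuple and object event formats exclude each other, lets the checker reject a product configuration that selects the filter without a compatible event model.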
1.2. Contextual software component frameworks

Component models define the basic encapsulation, communication and composition rules that support the development of component-based software. Contextual Component Frameworks (CCF) [17] implement these models and support the automatic creation and composition of objects based on user-defined properties (or context). A CCF uses the inversion of control (IoC) and dependency injection principles [18] to transparently provide user-requested services and properties to the components in the system. Dependency injection is a form of IoC that removes explicit dependence on container APIs, separating those concerns from the component implementation. Property-based contextual composition allows software engineers to select environmental characteristics and crosscutting concerns required by a component. This is achieved with the use of properties, usually expressed in the code or in associated manifest configuration files. Common properties include transactional communication, persistency, security and other crosscutting concerns. Examples of well-known component frameworks include the CORBA Component Model, COM/ActiveX and Enterprise JavaBeans. YANCEES uses this approach to separate configuration management concerns from feature implementations and to support software engineers in the extension and configuration of software product lines. It explicitly represents variation points and inter-feature dependencies in the software source code, with the specific goal of supporting variability and preventing feature interference caused by the lack of representation and enforcement of dependencies.
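The property-based composition idea can be sketched as follows. The annotation, component and container names are hypothetical stand-ins for illustration only; real CCFs such as EJB containers offer far richer lifecycle and injection support.

```java
import java.lang.annotation.*;
import java.util.*;

// Sketch of property-based contextual composition: a component declares the
// context-provided properties it needs, and a minimal "container" verifies
// them before instantiation. All names here are illustrative.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface RequiresProperties {
    String[] value();
}

// A component declares the crosscutting concerns it expects from its context.
@RequiresProperties({"transactions", "persistency"})
class OrderComponent {
}

class MiniContainer {
    private final Set<String> provided;

    MiniContainer(Set<String> provided) {
        this.provided = provided;
    }

    // Instantiate a component only if every declared property is available
    // in this context; otherwise fail fast instead of misbehaving at runtime.
    <T> T create(Class<T> type) {
        RequiresProperties req = type.getAnnotation(RequiresProperties.class);
        if (req != null)
            for (String p : req.value())
                if (!provided.contains(p))
                    throw new IllegalStateException("missing property: " + p);
        try {
            return type.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The component never calls container APIs directly (inversion of control); its requirements travel with its code as machine-readable metadata, which is the mechanism YANCEES builds on in Section 3.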
2. Case study: YANCEES, a publish/subscribe product line

This section describes our experience in the design and implementation of YANCEES, a highly configurable and extensible publish/subscribe SPL, and discusses the main variability management issues faced. Publish/subscribe infrastructures implement a distributed version of the Observer design pattern [19], as shown in the top level of Figure 1. In its basic form the pattern is very simple: it provides an interface (IPubSub) that allows publishers (IPublisher) to send events to the infrastructure, whereas subscribers express interest in those events through the use of subscriptions, using the subscribe(Subscription exp) command. A subscription is a logical expression over the content or order of the events. When a subscription is satisfied, a notification with the message matching this expression is sent to subscribers (ISubscriber) through a call to the notify(Event: evt) command in their interfaces. This pattern is used in the implementation of different publish/subscribe infrastructures in different domains. For a survey of existing pub/sub systems, please refer to [20]. The majority of publish/subscribe research and commercial infrastructures lack mechanisms that allow their customization and configuration to comply with the evolving requirements demanded by event-driven applications [20]. Motivated by this fact, we developed a flexible publish/subscribe infrastructure called YANCEES (Yet ANother Configurable and Extensible Event Service) [7], which allows the different aspects of the publish/subscribe pattern to be extended and customized. In
the coming sections, we briefly present the main elements of the YANCEES design and implementation.

2.1. YANCEES design and implementation

Different principles and strategies were applied in the design and implementation of YANCEES. These are: the use of a micro-kernel architecture, supporting variability along different publish/subscribe dimensions; the application of different variability mechanisms, such as abstract classes and interfaces, extensible languages, dynamic and static plug-ins, and generic events; and the wide use of static and dynamic configuration managers. These principles and strategies are discussed below.

Table 1. Publish/subscribe infrastructure variability dimensions and examples.

- Event model. Specifies how events are represented. Examples: tuple-based; object-based; record-based; others.
- Publication model. Permits the interception and filtering of events as soon as they are published, supporting the implementation of different features and global infrastructure policies. Examples: elimination of repeated events; persistency; publication to peers (through protocol plug-ins).
- Subscription model. Allows end-users to express their interest in subsets of events and the way they are combined and processed. Examples: filtering (content-based, topic-based, channel-based); advanced event correlation capabilities.
- Notification model. Specifies how subscribers are notified when subscriptions match published events. Examples: push; pull; both; others.
- Protocol model. Deals with necessary infrastructure interactions other than publish/subscribe, subdivided into interaction protocols (that mediate end-user interaction) and infrastructure protocols (that mediate the communication between infrastructure components). Examples: interaction protocols (mobility, security, authentication, advanced notification policies); infrastructure protocols (federation, replication, peer-to-peer integration).
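The Observer-style publish/subscribe interfaces introduced at the start of this section can be sketched, in simplified form, as follows. String events and Predicate filters are stand-ins for YANCEES's generic events and XML subscription language; this is an illustration of the pattern, not the YANCEES API.

```java
import java.util.*;
import java.util.function.Predicate;

// Simplified sketch of the Observer-style pub/sub pattern: publishers push
// events, subscribers register filters, the bus routes matching events.
interface Subscriber {
    void notify(String event);
}

class SimplePubSub {
    // Each subscription pairs a filter expression with its subscriber.
    private final Map<Predicate<String>, Subscriber> subscriptions = new LinkedHashMap<>();

    void subscribe(Predicate<String> filter, Subscriber subscriber) {
        subscriptions.put(filter, subscriber);
    }

    // Route a published event to every subscriber whose subscription matches.
    void publish(String event) {
        subscriptions.forEach((filter, subscriber) -> {
            if (filter.test(event)) subscriber.notify(event);
        });
    }
}
```

Each variability dimension of Table 1 generalizes one element of this sketch: the event type, the publication path, the filter language, the notification call, and the surrounding protocols.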
Variability dimensions. Around a common generalized publish/subscribe micro kernel, different variability dimensions were implemented in YANCEES, as listed in Table 1. The YANCEES variation points were selected according to the main publish/subscribe design concerns described by Rosenblum and Wolf's model [21], extended to include the notion of protocols, a design concern that captures the different kinds of infrastructure distribution strategies and other sorts of user interaction outside the publication and subscription of events.

Extensible languages and plug-ins. Publish/subscribe infrastructures have special requirements for dynamism, driven by their interactive nature. Subscriptions are dynamic in essence: they are posted and removed at runtime by their users, and are expressed in terms of commands in a subscription language. As a consequence, variability in this domain requires the simultaneous evolution of language and infrastructure capabilities. Those requirements led us to choose extensible languages and plug-ins [22] as the main variability realization approach for the
subscription and notification models. In particular, YANCEES is implemented as a composition framework ([17], chapter 21.1) where component instances (in our case, plug-ins) are created and combined at runtime in response to composition operators (subscription and notification commands in the users' posted subscriptions) with the help of parsers (or Mediators). The extensible language is implemented in XML, with its grammar defined using the W3C XML Schema standard.

Static plug-ins. Non-interactive characteristics are implemented by static plug-ins and filters, installed at load time (i.e., when the infrastructure is bootstrapped). The publication model, for example, is implemented as a Chain of Responsibility design pattern (see [23]), where filters (as static plug-ins) are composed into event processing queues that intercept the publication of events, implementing global system policies. Features that are shared by different variation points are implemented as static plug-ins, a.k.a. services.
Figure 1. Overview of the YANCEES core architecture, with its main components and interfaces. (Class diagram omitted: it shows the IPubSub façade with its IPublisher and ISubscriber interfaces; the ArchitectureManager, PlugInRegistry and EventQueue; the Publish, Subscription, Notification and Protocol mediators; and the plug-in, filter (IFilter, with doFilter and addSuccessor operations) and adapter (IAdapter) interfaces.)
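The Chain of Responsibility composition used by the publication model can be sketched as follows. The doFilter/addSuccessor operations mirror the IFilter interface in Figure 1, but the concrete filters and String-typed events here are illustrative stand-ins, not YANCEES code.

```java
import java.util.*;

// Sketch of the Chain of Responsibility used by the publication model:
// each filter processes an event and hands it to its successor.
abstract class PublicationFilter {
    private PublicationFilter successor;

    void addSuccessor(PublicationFilter filter) {
        this.successor = filter;
    }

    // Returning null drops the event; otherwise it continues down the chain.
    final String doFilter(String event) {
        String result = process(event);
        if (result == null) return null;            // event filtered out
        return successor == null ? result : successor.doFilter(result);
    }

    abstract String process(String event);
}

// Drops events already seen, e.g. to save bandwidth before forwarding.
class RepeatedEventsFilter extends PublicationFilter {
    private final Set<String> seen = new HashSet<>();

    String process(String event) {
        return seen.add(event) ? event : null;
    }
}

// Records every event that reaches it, standing in for a forwarding filter
// such as the SendToPeers filter discussed in Section 2.2.
class RecordingFilter extends PublicationFilter {
    final List<String> forwarded = new ArrayList<>();

    String process(String event) {
        forwarded.add(event);
        return event;
    }
}
```

Note that the chain's behavior depends on filter order: placing the deduplication filter first keeps repeated events away from the forwarding filter, which is exactly the kind of incidental dependency examined in Section 2.3.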
Generic events. The variability in the event model is supported by the use of object wrappers that can hold different content formats (attribute/value pairs, objects, XML files or plain text) under a generic interface (IEvent in Figure 1).

Configuration managers and dynamic parsers. The final design decision was to implement variability management in the system itself, through configuration managers that install static plug-ins and filters, and mediators that allocate plug-ins at runtime. By applying those strategies, the original publish/subscribe design pattern was extended as presented in Figure 1, which shows the main YANCEES core
components and interfaces. Due to space limitations, other classes such as exceptions and auxiliary objects are not represented in Figure 1. The PublishMediator handles the publication of events, allowing the extension of its dimension through the use of filters (implementing the IFilter interface). The NotificationMediator uses notification plug-ins to implement different notification policies, whereas the SubscriptionMediator handles the interpretation of different subscription language expressions, allocating appropriate subscription plug-ins. The dynamic allocation and discovery of plug-ins at runtime is supported by the PluginRegistry component. After passing through the publication filters, events are placed on the internal EventQueue and/or sent to adapters (implementing the IAdapter interface) that allow the integration with existing pub/sub systems. The ArchitectureManager installs static and dynamic plug-ins in the infrastructure based on a configuration file describing the features and their implementation files. The YANCEES core, composed of all the mediators, the queue, the registry and the interfaces presented in Figure 1, is about 6000 LOC of Java code. The plug-ins and extensions used in different projects comprise another 3500 LOC.

2.2. Extending and Configuring YANCEES

This section presents an example of how YANCEES can be extended and configured to support different application domains. In particular, we show how it was extended to support Impromptu [24], a peer-to-peer (P2P for short) file sharing tool. Impromptu provides an interface and a repository that allow users to share files in an ad hoc, peer-to-peer way. In Impromptu, events are used to monitor the activity of the local file repository of each peer, to signal the arrival or departure of peers in the network, and to synchronize the shared visualization of the user interfaces of every peer. The peer discovery protocol is implemented using the IETF multicast DNS (mDNS) protocol.
Every Impromptu peer executes a local YANCEES instance that is connected to the YANCEES instances of every other peer in the network, thus forming a virtual P2P event bus. In this configuration, YANCEES provides both local and global event-based communication. Locally, it decouples the file repository from the GUI. Globally, it allows the monitoring of events from the file repositories of other peers, keeping their visualizations synchronized. To support Impromptu, YANCEES was extended and configured with plug-ins, filters and a tuple-based event format, as illustrated in Figure 2. In this example, events are represented as attribute/value pairs of variable length and number. This is achieved by extending the GenericEvent interface. The subscription language is also extended to support two kinds of filtering: content-based filtering, which filters events based on the content of all their fields; and topic-based filtering, which allows the fast switching of events based on a single field. It also supports event sequence detection that operates over each one of those filters. The subscription language extension requires two steps: (1) the implementation of the ISubscriptionPlugin interface, and (2) the extension of the XML Schema of the subscription language for every new command. The notification policy is push, extended in the same way as the subscription plug-ins, i.e. by implementing the INotificationPlugin interface and extending the notification language. The protocol model supports mDNS peer discovery, detecting the arrival and departure of YANCEES instances in the local network. It
also supports the publication of events between YANCEES peers, creating a virtual bus, with the help of the PeerPublisher plug-in that publishes to and receives events from other peers. Events from other peers are placed directly into the event queue, skipping the publication model filtering. Finally, the publication model is extended with two filters: one that removes repeated events as they are published, thus saving network bandwidth; and another that forwards events to the protocol plug-in. Publication filters must implement the IFilter interface. These extensions are put together with the help of the ArchitectureManager, which assembles a valid infrastructure based on a configuration file that defines the feature names, their implementation files, and the variability dimension they extend.

Figure 2. YANCEES configuration with the functionality required by Impromptu. (Diagram omitted: it shows publishers and subscribers connected to a YANCEES instance whose publication model chains the RepeatedEvents and SendToPeers filters, whose subscription model dynamically builds content, topic and sequence ('Seq.') filters, whose notification model uses push, and whose protocol model hosts the mDNS and PeerPublisher plug-ins, linking to other YANCEES instances.)
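A subscription plug-in of the kind used in this configuration can be sketched as follows. The interface and the Map-based tuple events are simplified, hypothetical stand-ins for YANCEES's ISubscriptionPlugin and generic events; a real plug-in would evaluate the parsed XML subscription expression and notify downstream operators.

```java
import java.util.*;

// Illustrative sketch of a content-filter subscription plug-in in the style
// of Figure 2, operating on tuple-based events (attribute/value pairs).
interface SubscriptionPlugin {
    void handle(Map<String, String> event);
}

class ContentFilterPlugin implements SubscriptionPlugin {
    private final String attribute;
    private final String expected;
    final List<Map<String, String>> matched = new ArrayList<>();

    ContentFilterPlugin(String attribute, String expected) {
        this.attribute = attribute;
        this.expected = expected;
    }

    // Match on the content of one event field; non-matching events are ignored.
    public void handle(Map<String, String> event) {
        if (expected.equals(event.get(attribute))) matched.add(event);
    }
}
```

Note how the plug-in silently assumes the tuple-based event format: this is precisely the kind of implicit fundamental dependency discussed in the next section.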
The generality and variability approaches employed in the design and implementation of YANCEES, while providing the required flexibility, resulted in different issues that lead to feature interference. Those issues are discussed in the next sections.

2.3. Feature interference in the YANCEES variability model

Fundamental (or problem domain) dependencies. In the absence of appropriate documentation, software engineers may assume that certain variable characteristics of the infrastructure are constant. For example, plug-ins and filters may be implemented with specific timing and event formats in mind. A change in the event representation, for example from variable attribute/value pairs to fixed records, can completely invalidate the subscription language ContentFilter, the SequenceDetector, or even the input filters and protocol plug-ins of the system in Figure 2.

Configuration-specific dependencies. Some features in YANCEES have their functionality implemented through the integration of different components spanning more than one variation point. In the example of Figure 2, SendToPeers will only work properly if the PeerPublisher and mDNS plug-ins are both installed in the protocol variation point. This characteristic creates a dependency between these features. Moreover, changes in any of those components due to natural software evolution may invalidate the implementation of the whole feature.

Incidental (or technological) dependencies. Each variability realization approach introduces specific configuration rules which, if not accounted for, can lead to interference. In the example of Figure 2, the accidental inversion of the order of
the SendToPeers and RepeatedEventsFilter plug-ins would result in the erroneous publication of repeated events to all peers, interfering with the overall system performance.

Dependencies on emerging system properties. Implicit assumptions about existing system attributes also permeate the implementation of features in our model. The order of events, for example, is a function of the distribution and protocol algorithms used. In a centralized setting, event order is usually guaranteed, whereas in distributed settings, as in this P2P model, events can arrive earlier or later than others (coming from the PeerPublisher plug-in, for example), invalidating the SequenceDetector subscription command ('Seq.' in Figure 2). Those assumptions may also directly impact the behavior of the RepeatedEventsFilter.

Generality issues. One of the main reuse strategies in YANCEES is the implementation of a generalized common core. The use of generic interfaces throughout the system permits specific extensions to be developed while the common pub/sub process is preserved and reused. This approach, however, has the disadvantage of hiding implicit dependencies and assumptions. For example, the filter interface only prescribes a doFilter(IEvent: evt) method that must be implemented by all filter components. It does not prescribe any timing or control dependencies with the other filters installed in the publication model, nor does it explicitly represent environmental assumptions such as the impact a filter may have on other parts of the system if events are removed, modified or added by this component. The same is true of the subscription and notification models, where plug-ins implement generic interfaces that depend on the IEvent generic event representation. As a result, syntactically sound expressions can be incompatible with the current system configuration with respect to event order, format or timing.
3. Managing feature interaction in YANCEES

In order to address and prevent the different kinds of feature interaction discussed in the last section, software engineers need a way to understand and enforce the fundamental, configuration-specific and incidental dependencies in the SPL without jeopardizing its flexibility. In YANCEES, these goals are achieved by documenting and enforcing design- and implementation-level dependencies in the code. This information is exposed to software engineers in context, i.e. at the variation points of the system, in a form that is both human and machine readable, supporting engineers in understanding these dependencies and allowing the infrastructure itself to enforce them at both load time and runtime.

3.1. Modeling dependencies

The first step in the management of feature interaction is the proper modeling of dependencies. One of the most important kinds of dependencies in an SPL are the fundamental dependencies. They usually become implicit in the common SPL assets and impact the other dependency types previously discussed. Because they are common to all product line members, these dependencies can be analyzed during the design of the system and further refined as the infrastructure is implemented.
We represent the fundamental dependencies in our model in Figure 3, using a notation similar to Ferber and Haag's approach [11]. Note that, in the diagram of Figure 3, we also introduce new dimensions (written in italics) to represent emerging properties of the SPL. In YANCEES, these properties are the timing, routing and resource concerns that change as a consequence of parameters selected at different variation points. Besides the problem domain dependencies between variation points, dependencies also exist between features within the same variation point and across variation points (as exemplified in Figure 2). For lack of space, we do not provide a diagram for these dependencies in this paper.

Figure 3. A dependency model of the main publish/subscribe variation points and concerns. (Diagram omitted: it relates the Event, Publication, Subscription, Notification and Protocol variation points, with content and order operators, to the emerging Timing, Routing and Resource concerns; for example, the event format is a shared concern between the event model and the subscription operators, routing is a shared concern between routing and subscription, and timing is a consequence of distribution.)
In the diagram of Figure 3, the event model and its representation directly impact the subscription and routing models. Timing is another crucial concern in the model. A change in the way YANCEES routers are federated may affect the timing guarantees of the system (guaranteed delivery or total order of events), which in turn impacts the subscription language semantics. A change in the resource model may also affect the timing model. For example, in a hierarchically distributed system, the total order of events may not be feasible. Finally, the notification model is orthogonal to the other features: since it manages only events, it can vary independently from the other features.

3.2. Representing and managing dependencies in the product line assets

Once the dependencies are identified, they must be formally incorporated in the implementation of the infrastructure. In YANCEES, this is achieved by the use of source code annotations in both the common code and the feature implementations. In particular, we use the Java annotations API (available since JDK 1.5), which permits the creation of custom properties associated with classes, methods and fields. Figure 4 illustrates, in general terms, the main strategies of our approach. The dependencies between the many variation points (emerging properties and fundamental dependencies) of the system are represented by annotations at the variation points of the code (arrows between variation points and properties in the picture).
R.S. Silva Filho and D.F. Redmiles / Documenting and Enforcing Dependencies in SPL
Figure 4. Summary of the approach: managing dependencies with context annotations
The variation point V1, for example, is extended with feature f1, which has specific emerging, fundamental, configuration-specific and incidental dependencies, as described in Figure 4. These values are matched against the provided properties of the system. The composition framework, based on the annotations in the code, guarantees that the feature’s requirements are met; in other words, that all required and provided dependencies are satisfied.

Table 2. Summary of the contextual annotations used in YANCEES

Dependency               Annotations                   Description
Fundamental              @DependsOnVP,                 Expresses a general dependency between
                         @DependsOnProperty            variation points and between properties.
Configuration-specific   @RequireFeature               Expresses a dependency on a specific
                                                       feature in a variation point.
                         @CompatibleWithFeature,       Expresses compatibility with existing
                         @CompatibleWithProperties     features and emerging properties.
Traceability             @ImplementsFeature,           Marks classes that implement variation
                         @ImplementsVariationPoint     points and features in the code.
Incidental               @ProvidedGuarantees,          Specifies the provided and required
                         @RequiredGuarantees           guarantees of the extension.
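The matching step performed by the composition framework can be sketched as follows (a simplification with illustrative names; the actual YANCEES framework is richer): the framework reads a plug-in's required-feature annotation and checks it against the features actually installed in each variation point.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Map;

// Illustrative configuration-specific annotation: the feature this class
// requires at a given variation point.
@Retention(RetentionPolicy.RUNTIME)
@interface RequireFeature {
    String variationPoint();
    String feature();
}

class CompositionChecker {
    // Returns true if the plug-in's declared requirement is met by the
    // installed configuration (variation point name -> installed feature).
    static boolean isSatisfied(Class<?> plugin, Map<String, String> installed) {
        RequireFeature req = plugin.getAnnotation(RequireFeature.class);
        if (req == null) return true; // nothing required
        return req.feature().equals(installed.get(req.variationPoint()));
    }
}

// A plug-in that only works with the attribute/value event representation.
@RequireFeature(variationPoint = "EVENT", feature = "Event.AttributeValueEvent")
class SamplePlugin { }
```

A configuration installing a different event representation would be rejected for this plug-in before the system is assembled.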
In our implementation, the VariabilityModel class (top right of Figure 4) provides a single point of access to the dependency meta-model and the emerging system properties. The emerging properties are encoded in the variability model as rules based on the features installed in each variation point. The main variation points in the infrastructure have their implementation classes referenced in this model,
allowing navigation through their dependencies by following their annotations. The dependencies between the variation points are encoded in their respective classes using the annotations described in Table 2.

Table 3. Sample annotations for the AbstractFilter variation point and the SendToPeers input filter.

// --- Indicates fundamental dependencies on other variation points ---
@DependsOnVP(VariabilityModel.VariationPoints.EVENT)
// --- Indicates what variation point this class implements ---
@ImplementsVariationPoint(VariabilityModel.VariationPoints.PUBLICATION)
public abstract class AbstractFilter implements FilterInterface {
    // --- Abstract implementation goes here ---
}

// --- Local configuration concerns ---
@ProvidedGuarantees(modifyEventContent=false, modifyEventOrder=false, modifyEventType=false)
@RequiredGuarantees(intactEventContent=false, intactEventOrder=false, intactEventType=false)
// --- Compatibility with features and emerging properties ---
@CompatibleWithFeature(
    variationPointType = VariabilityModel.VariationPoints.EVENT,
    featureClass = edu.uci.isr.yancees.YanceesEvent.class,
    featureName = "Event.AttributeValueEvent")
@CompatibleWithProperties(
    resource = VariabilityModel.Resource.ANY,
    routing = VariabilityModel.Routing.ANY,
    timing = VariabilityModel.Timing.ANY)
// --- Feature unique ID ---
@ImplementsFeature(name = "Publication.PublishToPeers", version = "1.0")
public class SendToPeersInputFilter extends AbstractFilter {
    // --- Plug-in implementation ---
}
An example of the use of code annotations is presented in Table 3. Two classes are shown: the AbstractFilter class, which implements the publication variation point, and the SendToPeersInputFilter class, which implements the “Publication.PublishToPeers” feature in the publication model, as discussed in Section 2.2. These classes are annotated with different tags expressing the local and global dependencies and configuration concerns of this feature. In particular, the annotations express the filter’s intent to preserve the existing order, content and type of the events, as well as the guarantees this component requires from the publication variation point. This allows the extension to require, in this example, that no other component in the chain of responsibility in which this filter participates be able to modify the attributes and content of the events. Annotations also describe the component’s compatibility with existing concerns and variation point extensions. The enforcement of the properties specified in the component annotations is guaranteed, at load time, by the YANCEES architecture manager, which checks for coherent sets of components using the dependency annotations and the information in the architecture configuration file. At runtime, the YANCEES Composition
framework, with the help of the subscription and notification mediators, checks compatibility dependencies and enforces required and provided guarantees. To do so, the framework uses composition filters [25] to wrap plug-ins and data elements (events), controlling access to them according to the properties provided and required by the filters. This approach has been used to annotate features and variation points in YANCEES, reducing the feature interference issues discussed in this paper and helping software engineers implement more robust extensions. One advantage of our approach is that software engineers can narrow or broaden the compatibility of a component through more restrictive or more permissive compatibility declarations and, in doing so, control the level of enforcement provided by the infrastructure.
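As an illustration of the runtime side, a much-simplified stand-in for a composition filter can be built with a Java dynamic proxy that blocks mutators on an event, approximating an “intact event content” guarantee (interface and class names here are ours, not YANCEES code):

```java
import java.lang.reflect.Proxy;

// A minimal event interface and implementation (illustrative names).
interface Event {
    String getContent();
    void setContent(String c);
}

class SimpleEvent implements Event {
    private String content;
    SimpleEvent(String c) { content = c; }
    public String getContent() { return content; }
    public void setContent(String c) { content = c; }
}

class GuaranteeFilter {
    // Wraps an event so that mutator calls are rejected, approximating the
    // intactEventContent guarantee a composition filter would enforce.
    static Event readOnly(Event target) {
        return (Event) Proxy.newProxyInstance(
            Event.class.getClassLoader(),
            new Class<?>[] { Event.class },
            (proxy, method, args) -> {
                if (method.getName().startsWith("set"))
                    throw new UnsupportedOperationException(
                        "guarantee violated: event content must remain intact");
                return method.invoke(target, args);
            });
    }
}
```

Real composition filters are more general (they can also reorder, delay or transform messages), but the idea of mediating access to plug-ins and data elements is the same.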
4. Related work

In the field of publish/subscribe infrastructures, different approaches are used to provide flexibility to software [20]. The management of feature interaction in this domain has been, to the best of our knowledge, ad hoc and not well described. In systems such as FACET [26], for example, the configuration management of features does not directly support software engineers in extending the system and in managing feature interaction induced by dependencies. In software product lines, variability management approaches, as described in the background section and surveyed in [14], strive to enforce configuration rules and dependencies. Unlike those approaches, we integrate both the dependency model and the runtime guarantees in the system itself. To do so, we employ a contextual framework that is part of the product line, bundling in the source code the information necessary for its extension and customization, together with runtime and load-time tools that enforce those constraints. The use of annotations to elucidate design concerns in the code has been studied as a way to separate and integrate design concerns [27]. Our model applies a similar approach to software product line concerns, with explicit runtime support for the enforcement of those dependencies. Finally, in the feature interaction community, Metzger et al. [28] propose an approach for systematically and semi-automatically deriving variant dependencies. Our work complements this approach by providing a practical way of incorporating those dependencies in the management of feature interaction.
5. Conclusions and future work

Dependencies restrict the variability of a system, and variability makes managing dependencies difficult. When improperly documented and managed, dependencies lead to feature interference. As a consequence, the benefits of variability require extra configuration management measures. The gains in software reuse and variability obtained by the use of software product lines usually come with an increase in software complexity. This
complexity is a function of the dependencies between the many variation points and the implementation of variability realization approaches. Moreover, the use of software frameworks and other approaches that require direct access to source code usually suffers from a lack of documentation of these dependencies and offers users no automated support for managing those issues. As a consequence, those issues represent an important source of programming and design errors in software product line engineering, which can lead to feature interference. In this paper, we show how those issues may lead to feature interference in publish/subscribe SPLs, and discuss the strategies used to manage feature interaction in YANCEES. In particular, our approach is based on a contextual component framework that uses source code annotations expressing dependencies and configuration rules to support software engineers in extending and configuring SPLs, preventing feature interaction. This approach allows both static and runtime configuration of components, coping with the dynamism requirements of the publish/subscribe domain. Currently, the modeling of dependencies in our approach comes from the SPL engineers’ expertise and the manual analysis of dependencies in the code. In the future, we plan to automate the generation of those dependencies through static analysis of the SPL source code, using approaches such as the one proposed in [28]. Future work also includes broadening the scope of our approach, applying it to other flexible software implementations, for example Apache Tomcat.
Acknowledgments. This research was supported by the U.S. National Science Foundation under grant numbers 0534775, 0205724 and 0326105, an IBM Eclipse Technology Exchange Grant, and by the Intel Corporation.
References
[1] C. Krueger, "Software Product Line Concepts," The Software Product Lines site, www.softwareproductlines.com/introduction/concepts.html, 2006.
[2] J. Coplien, D. Hoffman, and D. Weiss, "Commonality and Variability in Software Engineering," IEEE Software, vol. 15, 1998, pp. 37-45.
[3] I. Jacobson, M. Griss, and P. Jonsson, Software Reuse: Architecture, Process and Organization for Business Success, Addison-Wesley, 1997.
[4] J. Bosch, "Evolution and Composition of Reusable Assets in Product-Line Architectures: A Case Study," presented at the TC2 First Working IFIP Conference on Software Architecture (WICSA1), 1999.
[5] T. F. Bowen, F. S. Dworack, C. H. Chow, N. Griffeth, G. E. Herman, and Y.-J. Lin, "The feature interaction problem in telecommunications systems," presented at Software Engineering for Telecommunication Switching Systems, 1989.
[6] I. Zibman, C. Woolf, P. O'Reilly, L. Strickland, D. Willis, and J. Visser, "An architectural approach to minimizing feature interactions in telecommunications," IEEE/ACM Transactions on Networking, vol. 4, pp. 582-596, 1996.
[7] R. S. Silva Filho and D. Redmiles, "Striving for Versatility in Publish/Subscribe Infrastructures," presented at the 5th International Workshop on Software Engineering and Middleware (SEM'2005), Lisbon, Portugal, 2005.
[8] M. Svahnberg, J. v. Gurp, and J. Bosch, "A Taxonomy of Variability Realization Techniques," Software Practice and Experience, vol. 35, pp. 705-754, 2005.
[9] K. Czarnecki and U. W. Eisenecker, Generative Programming: Methods, Tools, and Applications, Addison-Wesley, 2000.
[10] K. C. Kang, S. G. Cohen, J. A. Hess, W. E. Novak, and A. S. Peterson, "Feature-Oriented Domain Analysis (FODA) Feasibility Study," Carnegie Mellon Software Engineering Institute, Pittsburgh, PA, CMU/SEI-90-TR-021, 1990.
[11] S. Ferber, J. Haag, and J. Savolainen, "Feature Interaction and Dependencies: Modeling Features for Reengineering a Legacy Product Line," Lecture Notes in Computer Science, Second International Conference on Software Product Lines (SPLC'02), vol. 2379, pp. 235-256, 2002.
[12] K. Lee and K. C. Kang, "Feature Dependency Analysis for Product Line Component Design," Lecture Notes in Computer Science, 8th International Conference on Software Reuse (ICSR'04), vol. 3107, pp. 69-85, 2004.
[13] S. Deelstra, M. Sinnema, J. Nijhuis, and J. Bosch, "Experiences in Software Product Families: Problems and Issues during Product Derivation," Proceedings of the Third Software Product Line Conference (SPLC 2004), Springer Lecture Notes in Computer Science, vol. 3154, pp. 165-182, 2004.
[14] M. Sinnema and S. Deelstra, "Classifying variability modeling techniques," Information and Software Technology, vol. 49, pp. 717-739, 2007.
[15] M. Sinnema, S. Deelstra, J. Nijhuis, and J. Bosch, "COVAMOF: A Framework for Modeling Variability in Software Product Families," Lecture Notes in Computer Science, vol. 3154, pp. 197-213, 2004.
[16] C. W. Krueger, "Software product line reuse in practice," presented at the 3rd IEEE Symposium on Application-Specific Systems and Software Engineering Technology, Richardson, TX, USA, 2000.
[17] C. Szyperski, Component Software: Beyond Object-Oriented Programming, 2nd edition, ACM Press, 2002.
[18] M. Fowler, "Inversion of Control Containers and the Dependency Injection Pattern," http://www.martinfowler.com/articles/injection.html, 2004.
[19] J. Dingel, D. Garlan, S. Jha, and D. Notkin, "Reasoning about implicit invocation," presented at the 6th International Symposium on the Foundations of Software Engineering (FSE-6), Lake Buena Vista, FL, USA, 1998.
[20] R. S. Silva Filho and D. F. Redmiles, "A Survey on Versatility for Publish/Subscribe Infrastructures," Technical Report UCI-ISR-05-8, Institute for Software Research, Irvine, CA, May 2005.
[21] D. S. Rosenblum and A. L. Wolf, "A Design Framework for Internet-Scale Event Observation and Notification," presented at the 6th European Software Engineering Conference / 5th ACM SIGSOFT Symposium on the Foundations of Software Engineering, Zurich, Switzerland, 1997.
[22] D. Birsan, "On Plug-ins and Extensible Architectures," ACM Queue, vol. 3, 2005, pp. 40-46.
[23] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
[24] R. DePaula, X. Ding, P. Dourish, K. Nies, B. Pillet, D. Redmiles, J. Ren, J. Rode, and R. S. Silva Filho, "In the Eye of the Beholder: A Visualization-based Approach to Information System Security," International Journal of Human-Computer Studies, Special Issue on HCI Research in Privacy and Security, vol. 63, pp. 5-24, 2005.
[25] L. Bergmans and M. Aksit, "Composing Crosscutting Concerns Using Composition Filters," Communications of the ACM, vol. 44, pp. 51-58, 2001.
[26] F. Hunleth and R. K. Cytron, "Footprint and feature management using aspect-oriented programming techniques," presented at the Joint Conference on Languages, Compilers and Tools for Embedded Systems, Berlin, Germany, 2002.
[27] A. Bryant, A. Catton, K. D. Volder, and G. C. Murphy, "Explicit Programming," presented at the 1st International Conference on Aspect-Oriented Software Development, Enschede, The Netherlands, 2002.
[28] A. Metzger, S. Bühne, K. Lauenroth, and K. Pohl, "Considering Feature Interactions in Product Lines: Towards the Automatic Derivation of Dependencies between Product Variants," presented at Feature Interactions in Telecommunications and Software Systems VIII, Leicester, UK, 2005.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Towards Automated Resolution of Undesired Interactions Induced by Data Dependency

Teng TENG, Gang HUANG 1, Xingrun CHEN and Hong MEI
Key Laboratory of High Confidence Software Technologies, Ministry of Education,
School of Electronics Engineering and Computer Science, Peking University,
100871 Beijing, China
Abstract. The application-specific mode of data sharing and usage, called data pragmatics, leads to many undesired interactions related to data dependency between applications. Our previous work focuses on the automatic detection of these undesired interactions in the context of J2EE (Java 2 Platform Enterprise Edition). In this paper, we propose a set of automated solutions based on middleware for this problem. Keywords. data dependency, middleware, feature interaction.
1. Introduction

For modern data-centric applications, if different applications are bound to the same data source, and their objects are mapped to the same data tables, or to tables that have explicit or implicit relationships, such data-related interactions between subsystems are called data dependencies, as shown in Figure 1(a). When a certain application fails to manipulate the data in a correct way, other applications may not work as expected; then undesired interactions induced by data dependency (UIDD) occur. We argue that the occurrence of UIDD is due precisely to the application-specific data sharing and usage mode, called data pragmatics (DP), which reflects the data semantics in the application-specific context. From the angle of DP, if DPs from different applications overlap, and their data manipulations conflict with each other, UIDD will occur. Our previous paper [2] illustrated a realistic example of this type of interaction: JPS and JST. As shown in Figure 1(b), in this scenario, when application A creates a new instance of a persistent object ‘a’, the middleware actually inserts a new row into the common table ‘CT’. As the attributes of ‘a’ are not mapped to all of the columns of ‘CT’, the columns not associated with ‘a’ are filled with ‘NULL’. If the attributes of persistent object ‘b’ of application B are also mapped to ‘CT’, and the primary key column set of ‘b’ is not the same as that of ‘a’, then there may exist a row used by ‘b’ some of whose primary key
1 Corresponding Author: Gang HUANG; Email: [email protected], Tel: 86-10-62757670, Fax: 86-10-62751792.
T. Teng et al. / Towards Automated Resolution of UIDD
columns are filled with ‘NULL’, so the proper execution of B is interrupted, and undesired interactions occur.
(a) Data-centric application mode
(b) UIDD sample
Figure 1. Undesired interactions caused by data dependency in data-centric applications
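The failure mode of Figure 1(b) can be reproduced with a toy model (our own sketch, not middleware code): application A inserts a row filling only its mapped columns, leaving NULL in a column that happens to be a primary-key column for application B.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy model of the shared table 'CT': each row maps column name -> value,
// with Java null standing in for SQL NULL.
class SharedTable {
    final List<Map<String, String>> rows = new ArrayList<>();

    // Inserts a row filling only the columns an application has mapped;
    // every other column is left NULL, as the middleware would do.
    void insert(List<String> allColumns, Map<String, String> mappedValues) {
        Map<String, String> row = new HashMap<>();
        for (String c : allColumns) row.put(c, mappedValues.get(c));
        rows.add(row);
    }

    // True if some row has NULL in any of the given primary-key columns:
    // the condition under which the other application is interrupted.
    boolean violatesPrimaryKey(List<String> pkColumns) {
        for (Map<String, String> row : rows)
            for (String pk : pkColumns)
                if (row.get(pk) == null) return true;
        return false;
    }
}
```

With ‘CT’ holding columns for both applications, a row inserted by A with only A's columns filled immediately violates B's primary-key constraint in this model.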
Currently, application persistence is usually implemented by middleware. As a mediator that transforms application object invocations into data manipulations, middleware conceals the technical details of the DBMS from application objects, as well as the implementation and runtime details of application objects from the DBMS. Furthermore, the development, deployment and management of modern data-centric applications mainly depend on middleware instead of the DBMS. Since the DBMS and the DBA lose global understanding and control of the whole system, the critical issue of UIDD between applications, which was well resolved in classical DBMSs, emerges again. It is therefore natural and feasible to resolve UIDD with middleware. Middleware is capable of collecting the data usage information of all applications and, according to the information collected, it can detect the existence of UIDD. This has been addressed in our previous work [2], but to the best of our knowledge, how to eliminate UIDD still remains unresolved. So we review the problem in [2] and propose a middleware-based approach to automatically eliminate UIDD.
2. Solutions of UIDD

Based on the middleware-based approach to collecting data usage information and discovering UIDD in our previous work [2], this paper focuses on how to eliminate UIDD.

2.1. Restraint Solution

Restraint is a simple solution for feature interactions, i.e., avoiding the situation in which one feature interferes with another [1]. Given the above description of UIDD, we can adopt a restraint solution that automatically prevents some dangerous creation manipulations from executing. If the relative importance of A and B can be judged, the data manipulations of the less important application should be restrained. But how can middleware judge which application is relatively more important? This is a vital challenge. This paper proposes five criteria:

1. Correctness of the mapping policies: The application whose primary key attributes are properly mapped to the primary key columns is considered more important, as this arrangement does not endanger data consistency.
2. The referenced degree: The number of times an application is referenced by others reflects its influence on its counterparts, so this number can be a guideline for an application's relative importance.
3. Data access frequency: This reflects the application's degree of data usage; a higher access frequency means greater importance.
4. Deployment order: Applications deployed later usually meet newer user requirements, so we consider an application deployed later to be more important than those deployed earlier.
5. Importance specified by users: Middleware should support manually setting an importance degree for applications that need special protection.
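The five criteria can be combined into a simple additive score; the sketch below is our own reading of the worked JPS/JST example, not a normative formula:

```java
// Computes an application's importance score from the five criteria.
// Point values follow the JPS/JST illustration: 100 for a correct mapping,
// one point per reference, 50 per access-frequency level, 100 for later
// deployment, plus any user-specified points.
class ImportanceScorer {
    static int score(boolean correctMapping, int referencedTimes,
                     int frequencyLevel, boolean deployedLater, int userPoints) {
        int s = 0;
        if (correctMapping) s += 100;   // 1) correctness of the mapping policies
        s += referencedTimes;           // 2) referenced degree
        s += 50 * frequencyLevel;       // 3) data access frequency level
        if (deployedLater) s += 100;    // 4) deployment order
        s += userPoints;                // 5) importance specified by users
        return s;
    }
}
```

With JPS at frequency level 2 and 50 user-specified points, and JST at level 1 but deployed later, this reproduces the 250 versus 150 totals of Table 1, so JST's manipulations would be restrained.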
Let us review the example of JPS and JST in [2]. Following the above five criteria, we conclude that JPS is relatively more important than JST. The analysis result for JPS and JST in our illustration is listed in Table 1. The process is: 1) an absolutely correct mapping strategy scores 100 points; 2) the referenced degree of an application is based on the number of times it is referenced by other applications, one point per reference; 3) data access frequency is segmented into several levels by the frequency difference, and an application receives 50 additional points for each higher level of frequency; 4) each later-deployed application receives an extra 100 points; 5) add the points specified by users. Finally, sum everything up; a lower score means the application is less important. As shown in Table 1, JST is the answer.

2.2. Coordination Solution

The restraint approach usually sacrifices one side to avoid the interactions. Compared with the restraint approach, the coordination approach tries to find a solution or compromise that satisfies both of the conflicting sides [1]. For the UIDD existing in object attribute mapping, while the middleware inserts values for A, it should also insert values with proper meaning for the columns used by B. These values may be meaningless for A, but they protect the proper execution of B from UIDD. In the example of JPS and JST in [2], when JST creates a new instance of ‘AccountBean’, a new row is inserted into the target table ‘CustomerTB’. In this insertion, all 11 columns associated with ‘AccountBean’ should be filled with the attribute values, while the other columns should be filled with default unique values, such as ‘userID’, which is associated with the primary key attribute of ‘ContactInfoEJB’ in JPS. Then JPS will not be interrupted by ‘NULL’ values in columns that may be associated with the primary key attributes of the CMP EJBs of JPS.

2.3. Implementation Mechanism for Solutions

Both the restraint and coordination solutions proposed in this paper can be implemented by middleware through its mechanism for mediating the data usage of applications. In current development practice, business object persistence is implemented by middleware. Middleware builds the binding and mapping between business objects and target tables at runtime, according to user-specific persistence configuration files. It receives invocations from the business layer to objects, and transforms these
invocations into data manipulations on the corresponding target tables, so middleware can accurately control the execution of data manipulations. As object/relational mapping (ORM) middleware is a promising approach to enabling object-oriented programs to access relational database management systems (RDBMS) in an object-oriented style, our work is illustrated with CMP EJB [4], a typical ORM technology widely used in building large-scale enterprise applications. This paper gives five criteria and extends PKUAS (a J2EE application server that provides a CMP EJB container) [3] to resolve UIDD automatically through ORM; this requires extending the Entity EJB Container, as shown in Figure 2(a).
(a) Extended CMP EJB Container
(b) Performance evaluation
Figure 2. Solution framework
The standard actions performed by the CMP EJB Container include: 1) waiting for incoming requests from clients; 2) delegating the request to the CMP EJB implementation for pre-processing preferred by EJB developers; 3) waiting for the result of CMP EJB pre-processing; 4) invoking the persistence manager for 5) accessing the database; 6) waiting for the result of data manipulations; and 7) returning the final reply to the client. Based on these standard actions, we design and implement some extensions of the container to modify the normal execution of data manipulations, e.g., a persistence coordinator in our approach. Since previously detected UIDD are recorded by the middleware [2], the coordinator can determine how to adapt the implementation of the data manipulations of applications based on the detection result. In this paper, we implement two automatic solutions, restraint and coordination, as discussed previously. Under the direction of the policies of these two solutions, the coordinator can modify the semantics of the normal data manipulations that may result in data dependencies, and selectively execute them on CMP EJBs by controlling the actions performed by the container and the persistence manager, as shown in Figure 2(a).

Table 1. Applying the criteria to JPS and JST.

App   Correctness   Referenced degree   Access frequency   Deployment order   Importance specified by users   Total score
JPS   100           0                   100                0                  50                              250
JST   0             0                   50                 100                0                               150
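The coordinator's decision logic on a create manipulation can be sketched as follows (type and method names are ours; the real coordinator works inside the extended container of Figure 2(a)):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

enum Policy { RESTRAINT, COORDINATION }

class PersistenceCoordinator {
    // Returns the row to insert, or null if the manipulation is restrained.
    static Map<String, String> onCreate(Policy policy, boolean uiddDetected,
                                        boolean appIsLessImportant,
                                        Map<String, String> row,
                                        List<String> otherAppsPkColumns) {
        if (!uiddDetected) return row;  // normal path: execute unchanged
        if (policy == Policy.RESTRAINT) // block the less important application
            return appIsLessImportant ? null : row;
        // Coordination: fill the columns used by the other application with
        // default unique values so its primary keys are never NULL.
        Map<String, String> fixed = new HashMap<>(row);
        for (String c : otherAppsPkColumns)
            fixed.putIfAbsent(c, "default-" + UUID.randomUUID());
        return fixed;
    }
}
```

The extra map copy and unique-value generation on every conflicting create is one source of the overhead measured in the next section.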
To assess the actual effect (negative and positive) of our solutions, we measured the time overhead caused by the introduction of the restraint and coordination solutions. As shown in Figure 2(b), the ‘Res’ segment shows that the performance of the creation manipulation extended with the ‘Restraint’ mechanism is about 45.4% lower than
the normal one. The corresponding penalty for the creation manipulation is about 66.6% for the ‘Coordination’ mechanism, as shown in the ‘Coor’ segment. As predicted, due to the complicated run-time logic of these two mechanisms, they do impose a considerable cost on the execution of the creation manipulation. But this cost is necessary to eliminate destructive UIDD and guarantee the proper execution of applications. A positive result also indicates that these two mechanisms have no substantial impact on normal application execution if no UIDD exists: the ‘Normal’ segment shows that, in an environment without UIDD, the application performance penalty caused by the extended container is only 3.92%.
3. Future Work

There are some open issues to be addressed. Firstly, we have only considered the UIDD arising from middleware-managed object attribute mapping; other types of UIDD may exist and need to be detected and resolved. Secondly, our current solutions are patterns rather than a formal guide for developers.
Acknowledgments This effort is sponsored by the National Basic Research Program (973) of China under Grant No. 2002CB312003; the National Natural Science Foundation of China under Grant No. 90412011, 90612011 and 60403030.
References
[1] Dirk O. Keck and Paul J. Kuehn, "The Feature and Service Interaction Problem in Telecommunications Systems: A Survey," IEEE Transactions on Software Engineering, vol. 24, no. 10, 1998, pp. 779-796.
[2] T. Teng, G. Huang, R. Li, D. Zhao, and H. Mei, "Feature Interactions Induced by Data Dependencies among Entity Components," 8th International Conference on Feature Interactions in Telecommunications and Software Systems, Leicester, UK, 2005, pp. 252-269.
[3] H. Mei and G. Huang, "PKUAS: An Architecture-based Reflective Component Operating Platform," invited paper, 10th IEEE International Workshop on Future Trends of Distributed Computing Systems, 2004, pp. 163-169.
[4] Sun Microsystems, Enterprise JavaBeans Specification, Version 2.0, 2001.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Policy Conflicts in Home Care Systems

Feng WANG and Kenneth J. TURNER
Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA, UK
[email protected], [email protected]
Abstract. Technology to support care at home is a promising alternative to traditional approaches. However, home care systems present significant technical challenges. For example, it is difficult to make such systems flexible, adaptable, and controllable by users. The authors have created a prototype system that uses policy-based management of home care services. Conflict detection and resolution for home care policies have been investigated. We identify three types of conflicts in policy-based home care systems: conflicts that result from apparently separate triggers, conflicts among policies of multiple stakeholders, and conflicts resulting from apparently unrelated actions. We systematically analyse the types of policy conflicts, and propose solutions to enhance our existing policy language and policy system to tackle these conflicts. The enhanced solutions are illustrated through examples. Keywords: Policy-based management, policy conflict, home care system.
1. Introduction

Policies have emerged as a promising and more flexible alternative to features. Among their benefits, policies are much more user-oriented. However, policies are prone to conflicts, much as features are prone to interactions. This paper examines the issue of policy conflict in a novel application domain: home care. It is predicted that the growing percentage of older people will have an enormous impact on the demand for care services, exerting huge pressure on the resources of existing care services [1]. Increasingly, providing care at home is seen as a promising alternative to traditional healthcare solutions. By making use of sensors, home networks and communications, older people can prolong independent living in their own homes. Remaining in a familiar environment while being cared for also improves their quality of life. Their families and informal carers can also be relieved of constant worry about whether those in care are well. The hardware to enable home care services, such as sensor technologies and communications, has matured in terms of cost and availability. Providing software solutions to deliver home care services, however, remains a challenging task. Most home care systems have been created in an ad hoc way. The systems are usually handcrafted and manually customised to the needs of individual scenarios. Because the solutions for home care services are hard-coded, even simple changes in services require an on-site visit by specially trained personnel. They are therefore costly to change. Proprietary, off-the-shelf telecare products suffer from similar problems. The functions of a product are typically fixed in special-purpose devices. Data from these devices cannot easily be accessed, and the devices work only with products from the
F. Wang and K.J. Turner / Policy Conflicts in Home Care Systems
same company. Domestic health monitoring and home automation are currently very limited. The major issues in home care delivery are flexibility, adaptability, customisability and cost.

We have successfully demonstrated that it is possible to use a policy-based system to integrate data from a variety of home sensors. Sensor data is used to support a variety of home automation and home care services [1]. Considerable research remains to realise the potential of this work and to demonstrate its value in supporting care of older people. One major issue is the detection and resolution of policy conflicts, which is the focus of this paper.

Essentially, policies are rules that define the behaviour of a system. A typical policy consists of a trigger, a condition and an action. There are two basic types of policies: authorisation policies and obligation policies. Authorisation policies give a set of subjects the authority to carry out some actions upon a set of target objects; in negative form, they require subjects to refrain from doing so. Obligation policies specify that a set of subjects is responsible for taking some actions upon the target objects when a certain trigger event is received and some conditions are satisfied.

When enforcing policies, it is possible that multiple policies conflict with each other. We use the following general definition of policy conflict: two policies are said to conflict with each other if there is an inconsistency between them. The classification of conflicts by Moffett and Sloman [2] is discussed later. When applying policy-based management to home care systems, we observe that certain classes of policy interaction are unique to this domain:
• policy rules of multiple stakeholders may conflict
• policy actions resulting from apparently different triggers may interact according to changing situations
• policy actions may conflict over time.

The issues in a policy-based home care system are as follows.
What types of policy interaction should be tackled inside the policy system, and what types outside it? If a policy interaction is tackled by the policy system, how should it be handled? Based on an analysis of these problems, we propose a solution to the above issues. Our solution is built on top of our previous work on the ACCENT project [3]; to explain it, we first summarise that work.

The paper is organised as follows. Section 2 briefly describes the policy language for home care. Section 3 presents how policies are deployed and enforced inside the home care system. In Section 4, policy conflict issues in home care systems are identified and analysed. A solution for resolving these conflicts is proposed in Section 5. Related work is discussed in Section 6. Finally, Section 7 describes the current status of the work.
2. Policy Language for Home Care Systems

The policy language for care at home builds on our previous work for call control [3]. To allow use in different application domains, the policy language has two parts:
• the domain-independent core policy language defines the structure of policies (e.g. their combinations) and general attributes of policies (e.g. metadata)
• domain-specific extensions reflect specialisations for each kind of application.
A policy rule consists of three parts: trigger, condition and action. Although the core language defines some of these, specific elements are normally defined per domain. The core policy language is defined in [3], with its specialisation for home care in [1]. A home care policy consists of a set of policy attributes and a set of policy rules. The attributes of a home care policy include the following:
• id uniquely identifies the policy in the policy store.
• description explains the purpose of the policy in plain text.
• owner indicates the entity that the policy belongs to. A notation similar to email addresses is used, e.g. [email protected].
• applies_to identifies the entities to which a policy applies (e.g. sensors, people, virtual entities such as computer programs). An email-like notation is used for entities: [email protected] means movement sensor 1 in the kitchen of house1. Omitting ‘1’ means any movement sensor in this kitchen.
• preference states how strongly the policy definer feels about it, and represents the modality of the policy. Examples are should, should not. Internally the value of this attribute is represented as an integer (which may be positive or negative).
• valid_from and valid_to specify the time period during which a policy is valid.
• profile is used to group policies. A policy with an empty profile is always applicable, while one with a non-empty profile must match the user’s current profile.
• enabled states whether the policy system should consider a policy or not.
• changed indicates the last-modified time of a policy.

For home care policy rules, a generic trigger device_in is used: its arguments indicate the trigger type and the sensor that caused it. A trigger sets environment variables to reflect the current state of the environment. A policy condition can make use of these variables to check whether it is eligible for execution. A generic action device_out is defined to instruct actuators to execute actions.
This action has arguments to indicate the actuator, the action to be executed and the parameters of the action. In our home care system, a home care service is a rule-based application described by policy rules. An example policy for home care is shown in Figure 1. Dementia patients often wander at night, and this worries their relatives. The policy in Figure 1 states that, if movement is detected in the bedroom when it is night (10PM–7AM), the patient is reminded to go back to bed. The obvious closing tags are omitted in the XML definition.

<policy_rule>
  device_in(arg1,arg2)
  <parameter>time in 22:00:00..07:00:00
  speak(arg1,arg2)

Figure 1. Night-Time Wandering Reminder Policy Example
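For illustration, the policy attributes and rule structure described in this section can be sketched as a simple data model. This is not part of the ACCENT implementation; the class names, defaults and the applicable() check are assumptions made for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    """One rule: a trigger, an optional condition, and an action."""
    trigger: str       # e.g. "device_in(movement, bedroom)"
    condition: str     # e.g. "time in 22:00:00..07:00:00"
    action: str        # e.g. "speak(reminder_bedroom, 'go back to bed')"

@dataclass
class HomeCarePolicy:
    """Policy attributes as described in Section 2 (names are illustrative)."""
    id: str
    owner: str         # email-like notation, e.g. "[email protected]"
    applies_to: str    # entity notation, e.g. "[email protected]"
    preference: int    # signed integer encoding should / should not
    enabled: bool = True
    profile: str = ""  # empty profile => always applicable
    rules: list = field(default_factory=list)

    def applicable(self, current_profile: str) -> bool:
        # A policy with an empty profile is always applicable; otherwise
        # it must match the user's current profile.
        return self.enabled and (self.profile == "" or self.profile == current_profile)

night_reminder = HomeCarePolicy(
    id="p1", owner="[email protected]",
    applies_to="[email protected]", preference=3,
    rules=[PolicyRule("device_in(movement, bedroom)",
                      "time in 22:00:00..07:00:00",
                      "speak(reminder_bedroom, 'go back to bed')")])

print(night_reminder.applicable(""))  # True: enabled, empty profile
```

The profile check mirrors the semantics stated above: an empty profile makes the policy unconditionally applicable, a named profile restricts it to matching contexts.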
[Figure 2 (not reproduced here) shows a design-time module comprising Policy Deployment, Static Analysis and the Policy Store, and a run-time module comprising the Event Service, Policy Enforcement and Dynamic Analysis. Numbered arrows mark the steps by which policies are deployed, and by which sensor events arrive and commands are issued.]

Figure 2. Policy Deployment and Enforcement in a Home Care System
3. Deployment and Enforcement of Policies

Figure 2 illustrates how policies are deployed and enforced in the policy system.

3.1. Policy Deployment

At design time, a policy is defined using editing tools such as a policy wizard. This policy is then passed to the policy deployment module (step 1). Since the policy may conflict with existing policies in the policy store, it is passed to the static analysis module to check for conflicts (step 2). The static analysis module retrieves related policies from the policy store (step 3), performs conflict detection analysis, and returns the result to the policy deployment module (step 4). If there is a conflict, the user is notified. If there is no conflict, the policy is saved in the policy store (step 5).

3.2. Policy Enforcement

The policy enforcement module decides which actions should be executed and issues them for execution. At run time, a sensor sends out an event through the event service (step 1). The event is passed to the policy enforcement module (step 2). The policy enforcement module retrieves relevant policies from the policy store (step 3). For each retrieved policy, the policy enforcement module checks the trigger part and the condition part of the policy against the input triggers and the current environment setting. If the trigger matches and the policy conditions hold, the corresponding policy action is added to the set of potential actions. Once the policy enforcement module
finishes checking the relevant policies, there will be a set of potential actions. If there is more than one action in the set, the set is passed to the dynamic analysis module for detection and resolution of conflicts (step 4). Our existing policy system uses resolution policies to detect and resolve conflicts among actions. The structure of a resolution policy is very similar to that of an ordinary policy. The major difference is that in a resolution policy the triggers are the actions of regular policies rather than ordinary triggers from sensors. A detailed description of resolution policies can be found in [4]. The dynamic analysis module retrieves resolution policies from the policy store (step 5), and applies these to the set of potential actions to select the most appropriate actions if there are conflicts. The selected actions are then passed back to the policy enforcement module (step 6). The policy enforcement module sends the actions to the event service (step 7). The event service acts as a broker, passing commands to actuators for execution (step 8).

A resolution policy supports two types of resolution actions: specific actions and generic actions. For specific actions, a resolution policy specifies what to do when there are conflicting actions; the outcome is not limited to the set of conflicting actions. For generic actions, the resolution is chosen from among the conflicting actions by comparing the attributes of the conflicting policies. Borrowing from our previous work, the following generic actions are used in home care:
• apply_newer, apply_older: decides whether the newer or older policy is chosen.
• apply_one: chooses some action from the set of potential actions.
• apply_negative, apply_positive, apply_stronger, apply_weaker: decides the action by checking the value of the policy preferences.
• apply_inferior, apply_superior: uses the applies_to attribute to decide within one hierarchy whether the superior’s or the inferior’s policy is chosen.
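The enforcement loop and the generic resolution actions can be sketched as follows. This is an illustrative simplification, not the actual policy server: trigger matching is reduced to string equality, and only apply_stronger and apply_newer are shown (interpreting "stronger" as the preference of largest magnitude):

```python
def enforce(trigger, env, policies, resolve):
    # Steps 2-3: collect actions whose trigger matches and condition holds
    potential = [p for p in policies
                 if p["trigger"] == trigger and p["condition"](env)]
    if len(potential) <= 1:                    # no conflict to resolve
        return [p["action"] for p in potential]
    return [resolve(potential)["action"]]      # step 4: dynamic analysis

# Generic resolution actions compare attributes of the conflicting policies
def apply_stronger(ps):
    # strongest preference (largest magnitude, positive or negative) wins
    return max(ps, key=lambda p: abs(p["preference"]))

def apply_newer(ps):
    # most recently changed policy wins
    return max(ps, key=lambda p: p["changed"])

policies = [
    {"trigger": "movement@lounge", "condition": lambda e: e["night"],
     "action": "speak(go to bed)", "preference": 3, "changed": 10},
    {"trigger": "movement@lounge", "condition": lambda e: True,
     "action": "light(on)", "preference": 5, "changed": 20},
]
print(enforce("movement@lounge", {"night": True}, policies, apply_stronger))
# ['light(on)']
```

Swapping the resolve argument changes the resolution strategy without touching the enforcement loop, which is essentially how resolution policies parameterise the dynamic analysis module.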
4. Policy Conflicts in Home Care Systems

4.1. Detection and Resolution of Policy Conflicts in General

To simplify our analysis, for now we consider only policies with a single rule. A home care policy has the following elements: subject, target, trigger, condition, action, owner and preference. Much as for Ponder (http://ponder2.net), we consider two types of policies: authorisation (A) and obligation (O). For authorisation policies, the subject is authorised to take an action on a target object. For obligation policies, the subject is obliged to take an action on the target when a trigger is received and the condition is satisfied. If we combine the type of a policy with its modality, we get the following policy modes: positive authorisation (A+), negative authorisation (A-), positive obligation (O+), and negative obligation (O-).

4.1.1. Types of Policy Conflicts in Home Care

According to Moffett and Lupu's classification [2,5], there are two types of policy conflicts: modality conflicts and goal conflicts. Modality conflicts can be detected by looking at the policies alone: the attributes (subject, target, action) of two policies overlap, but
their modes contradict. The other attributes of the policies, including trigger, condition and owner, may differ. There are three possible modality conflicts:
• A+, A-: one policy states that the subject is authorised to take some action, but the other prohibits the subject from performing this action.
• O+, O-: one policy states that the subject is obliged to take some action, but the other states that the subject is obliged not to take this action.
• A-, O+: one policy states that the subject is obliged to take some action, but the other states that the subject is not authorised to do so.
A+/O- is not a policy conflict, since no actions result from this combination: the subject is authorised to take some actions, but must refrain from taking them [2,5].

Goal conflicts need application-specific information to be detected. Moffett and Sloman [2] identify four types of conflicts: conflicts of imperative goals, in particular for resources; conflicts of authority goals, including conflict of duty and conflict of interest; multiple managers; and self-management.

In the home care domain, we consider actuators as the targets of policies. A sensor, person or computer program is considered a subject of policy-based management. Inside the policy system, there is an agent for each such entity to act on its behalf. Due to the lack of computational power on sensors, our policy system employs a centralised server for enforcing policies. This implies that the policy server acts as an agent for all subjects of the policy system.

In home care systems, modality conflicts may arise in one owner’s policies due to overlapping situations. They may also arise between multiple owners’ policies. For goal conflicts, we are currently particularly interested in multiple managers and conflicts over resources, since many care services are represented as obligation policies triggered by events from sensors.

4.1.2.
Detection of Conflicts: Statically vs. Dynamically

Modality conflicts can be detected at definition time or at run time. Detection is achieved by comparing the subject, target, action and preference of two policies; this indicates whether there are potential conflicts. For modality conflicts, if the situations of two policies are exactly the same, the potential conflict becomes definite and should be eliminated at definition time. If the conflict depends on the evaluation of a run-time situation, this type of potential conflict may still need to be detected and resolved at run time.

Detecting goal conflicts needs application-specific information. This may use an explicit definition of conflict situations. It may also use automatic reasoning about the effects on goals, if the semantics of these is properly specified. Our policy system supports the specification of conflicting situations by the user.

As seen in Section 3, our policy system supports both static analysis and dynamic analysis. Dynamic analysis is performed when a trigger from sensors is received and processed. It requires resources and time, and may slow down the decision making of the policy system. Compared with dynamic analysis, static analysis is preferable as it reduces the burden of dynamic conflict detection. However, not all conflicts can be detected by static analysis, especially potential conflicts.
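As an illustrative sketch of static detection, the modality-conflict table of Section 4.1.1 can be checked by comparing the subject, target, action and mode of two policies. The dictionary representation and field names are assumptions made for this sketch:

```python
# Contradictory mode pairs from 4.1.1; A+/O- is deliberately absent,
# since that combination produces no action and hence no conflict.
CONFLICTING_MODES = {("A+", "A-"), ("O+", "O-"), ("A-", "O+")}

def modality_conflict(p, q):
    """True if the two policies overlap in scope and their modes contradict."""
    same_scope = (p["subject"] == q["subject"] and
                  p["target"] == q["target"] and
                  p["action"] == q["action"])
    modes = (p["mode"], q["mode"])
    return same_scope and (modes in CONFLICTING_MODES or
                           modes[::-1] in CONFLICTING_MODES)

p = {"subject": "carer", "target": "door@front", "action": "unlock", "mode": "O+"}
q = {"subject": "carer", "target": "door@front", "action": "unlock", "mode": "A-"}
r = {"subject": "carer", "target": "door@front", "action": "unlock", "mode": "O-"}
print(modality_conflict(p, q))  # True: obliged to act but not authorised
print(modality_conflict(q, r))  # False: A-/O- is not a conflicting pair
```

Such a check only flags potential conflicts; as noted above, whether the conflict is definite still depends on whether the policies' situations coincide at run time.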
4.1.3. Resolving Conflicts

Once a conflict is detected, it needs to be resolved. This can be achieved by notifying the user and asking for a decision, or it can be done automatically. For automated resolution, our policy system supports both specific actions and generic actions. For policies that belong to a single owner, several policy attributes can be used to choose the resolution action. For example, the policy language supports choosing the action with the strongest preference. For policies that belong to different users in one organisation, the ‘distance’ between a policy and the managed object can be used to choose the resolution action. In our policy language, this distance is derived from the applies_to attribute. Suppose one policy applies to @cs.stir.ac.uk and the other applies to [email protected]. An apply_superior resolution action will choose the policy which applies to @cs.stir.ac.uk, as the higher domain in the hierarchy.

4.2. Special Issues for Policy Conflicts in Home Care

In home care systems, besides the modality conflicts discussed above, we observe the following three special types of conflicts between policy actions. How to deal with these conflicts is the focus of this paper. Should an interaction be tackled inside the policy system, or should it be dealt with outside the policy system (e.g. by the actuators)? If the interaction is tackled by the policy server, how can we enhance our existing policy system to handle it? If the interaction is tackled outside the policy server, what functionality is required from the external system?

4.2.1. Dependency among Situations

The actions resulting from different situations may conflict with each other, and situations may have interdependencies. Such situations are common in a home setting, and they rely on context information. As Dey points out [6], there are different levels of context information.
High-level situations can be inferred from low-level sensor data, and the trigger from one sensor can be used to infer multiple situations. As an example, a ‘door open’ sensor can detect the situation of the door being left open. Suppose a policy states that, when the front door is left open, a reminder should be given to the resident to close the door. Combining the door sensor and the sensor in the door lock, a new situation can be detected: the door has been broken open. Suppose another policy states that, when the door is broken open, the resident should be advised to stay in his/her room and call for help. When a door is broken into, which action should be taken [7]?

In the above case, if we specify the two situations as two separate triggers, there will be no policy conflict and both actions will be executed. Yet it does not make sense to remind the user to lock the door while at the same time announcing that there has been a break-in. This example shows that situations in home care can have logical relationships between them. One situation may be implied by another, or two situations may be implied by triggers originating from the same sensor. Besides logical relationships, there are other relationships such as containment. For example, one policy reacts to movement in the bedroom, while another reacts to movement in any room of the house. The situation of the second policy contains that of the first.
A policy system needs to be able to detect and resolve policy conflicts due to dependent situations. The issue is how to specify the triggers and conditions of the policies properly so that the conflicts are detected.

4.2.2. Multiple Stakeholders

In home care systems, policies may be defined by multiple organisations (e.g. a social work department, a surgery, a clinic), and their policies may conflict with each other. How can the conflicts of multiple stakeholders be handled? In fact, detecting conflicts among multiple stakeholders is not much different from the single-stakeholder case. The difference lies in resolving the conflicts. If the resolution action is to be chosen from among the conflicting actions, dealing with multiple stakeholders is an issue: when policies are defined by different organisations, there is no hierarchy among the stakeholders. Some solution is needed to decide how one stakeholder’s policies should be weighed against other stakeholders’ policies.

4.2.3. Interactions between Actions over Time

The policy actions in a home care system take time to complete. It is therefore possible for new actions to conflict with ongoing actions. Suppose a medication reminder service alerts a patient to take medicine at certain times, and reminds the patient again if there is no response to the first reminder. While the reminder is running, a more urgent situation such as a fire may be detected in the house. Following the fire alarm policy, the system will remind the user to leave the house immediately. How should such interactions be dealt with in a policy-based system? A fundamental question is whether they should be handled within the policy system at all.
5. Enhancements to the Home Care Policy System

5.1. Tackling Dependencies among Situations

A situation is specified jointly by the trigger and the condition of a policy. To tackle dependencies among situations, we introduce a situation dependency graph (see Figure 3). The nodes on the left are the sensors. The nodes in the middle and on the right are situation nodes. A situation node receives inputs from the nodes on its left and evaluates its function to get a new value. If the value of a situation node has changed, it sends the update to other situation nodes that depend on it. Each situation node also supports queries for its current value. In Figure 3, situation B depends on sensor A, so there is a directed link from A to B.

For a policy system to detect conflicts among dependent situations, the trigger part of a policy must specify all the sensors that are used to derive a situation. In the dependency graph, these sensors are the root nodes of the situation node. In the condition part of the policy, an environment variable with the name of the situation node is used. This environment variable is maintained by the policy system to hold the most current value of the situation node. In Figure 3, for example, if a policy requires situation F, then the trigger part of the policy is the list {A, C, E} and the parameter of the condition is F.
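The situation dependency graph can be sketched as follows. This simplified model (names illustrative) propagates updates only when a node's value changes, as described above; for brevity, a dependent node is re-evaluated only when one of its situation inputs changes:

```python
class SituationNode:
    """A node that derives a value from sensor data and upstream situations."""

    def __init__(self, name, inputs, fn):
        self.name, self.inputs, self.fn = name, inputs, fn
        self.value = None
        self.dependants = []
        for node in inputs:           # register with upstream situation nodes
            node.dependants.append(self)

    def update(self, sensors):
        upstream = {n.name: n.value for n in self.inputs}
        new = self.fn(sensors, upstream)
        if new != self.value:
            self.value = new
            for d in self.dependants:  # propagate only on change
                d.update(sensors)

# door_open is derived from the door sensor alone; broken_into combines
# the door_open situation with the lock sensor, as in the example above.
door_open = SituationNode("door_open", [],
                          lambda s, _: s.get("door1") == "open")
broken_into = SituationNode("broken_into", [door_open],
                            lambda s, sit: sit["door_open"] and s.get("lock2") == "broken")

sensors = {"door1": "open", "lock2": "broken"}
door_open.update(sensors)
print(door_open.value, broken_into.value)  # True True
```

Policy conditions would then read the environment variables door_open and broken_into rather than raw sensor values, making the dependency between the two situations explicit to the policy system.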
[Figure 3 (not reproduced here) shows a directed graph in which sensors A, C and E are root nodes and B, D and F are situation nodes, with situation F depending, directly or indirectly, on all three sensors.]

Figure 3. Situation Dependency Graph
The dependency graph is designed and maintained by the service designer. If new sensors or new software modules are installed to detect new situations, the dependency graph is updated to reflect the changes. This may affect the triggers and conditions available to the policy definer when using the policy editing tool. The example of ‘door open’ vs. ‘door broken into’ shows how this works. The ‘door open’ reminder policy has the following elements:

applies_to: [door1]
trigger: [door1:]
condition: door_open eq true
action: remind(reminder_bedroom, door left open)
preference: 3
The ‘door broken into’ policy has the following elements:

applies_to: [door1, lock2]
trigger: [door1:open, lock2:broken]
condition: broken_into eq true
action: remind(reminder_bedroom, door broken into)
preference: 5
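Putting the two policies together, the break-in scenario can be traced with a small sketch, assuming both derived situation variables are true when the triggers arrive; apply_stronger is modelled here simply as choosing the largest preference:

```python
# The two policies above, reduced to the fields needed for the trace.
policies = [
    {"triggers": {"door1"}, "condition": "door_open",
     "action": "remind(door left open)", "preference": 3},
    {"triggers": {"door1", "lock2"}, "condition": "broken_into",
     "action": "remind(door broken into)", "preference": 5},
]

incoming = {"door1", "lock2"}                   # triggers received together
env = {"door_open": True, "broken_into": True}  # derived situation values

# A policy fires if all its trigger sensors reported and its condition holds.
candidates = [p for p in policies
              if p["triggers"] <= incoming and env[p["condition"]]]
assert len(candidates) == 2  # both compete for the same reminder service

chosen = max(candidates, key=lambda p: p["preference"])  # apply_stronger
print(chosen["action"])  # remind(door broken into)
```

The trace matches the resolution described in the text: both actions are generated, the conflict over the reminder service is detected, and the break-in warning (preference 5) wins.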
In the above policies, the trigger of the first policy must originate from a specific door sensor, but does not require a particular type of trigger, as this is implied by the condition. The trigger of the second policy originates from door1 with type open, and from lock2 with type broken. When the door is broken into, the policy server receives triggers from the door sensor and the door lock sensor at the same time. It retrieves the policies that apply to any of these sensors. Since the triggers and conditions of both policies are satisfied, there are two actions. These two actions compete for the same reminder service, so there is a conflict. In our system, such conflict situations are detected by the condition part of a resolution policy; other parts of the resolution policy decide which action to choose. For example, the policy preference may be used in the generic action apply_stronger. In this case, the action from the second policy would be executed.

5.2. Resolving Conflicts of Multiple Stakeholders

Detecting conflicts among multiple stakeholders is the same as for a single stakeholder, except that the owners of the policies differ. We therefore add a new generic resolution
action: apply_stakeholder. This relies on a partial ordering among stakeholders, choosing one action among the conflicting ones according to a predefined order among stakeholders. We believe that a total ordering of stakeholders in all situations is not sensible in home care. In a multi-organisation setting, the ordering among stakeholders is not fixed and is valid only under certain conditions (e.g. when performing certain actions).

5.2.1. Specifying the Order among Stakeholders

We make use of resolution policies to specify the order among stakeholders. The condition of an ordering rule can compare the attributes of the action and the policy. A new action set_order has been introduced to specify the ordering among stakeholders. This action has three parameters: the two different owners of the policies, and the relational operator between the owners (gt, lt, eq, unspecified). gt means the first owner ranks higher than the second, with the other operators having the obvious interpretation. The example of Figure 4 shows that the warden has higher priority than the tenant for setting the TV volume at night time (from 23:00 to 7:00).

<parameter>time in 23:00..07:00
<parameter>action eq device_out(TV, setVolume)
set_order(arg1,arg2,arg3)

Figure 4. Example of Specifying Order among Stakeholders
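One possible reading of set_order and apply_stakeholder is sketched below: pairwise gt facts are closed transitively, and the action whose owner outranks the most other owners is chosen. The data representation is an assumption of this sketch, not the resolution policy syntax:

```python
def transitive_closure(gt_pairs):
    """Close a set of (higher, lower) ordering facts under transitivity."""
    closure = set(gt_pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def apply_stakeholder(actions, gt_pairs):
    """Choose the action whose owner outranks the most other owners."""
    order = transitive_closure(gt_pairs)
    def rank(a):
        return sum((a["owner"], b["owner"]) in order for b in actions)
    return max(actions, key=rank)

# set_order(warden, tenant, gt); set_order(tenant, visitor, gt)
pairs = {("warden", "tenant"), ("tenant", "visitor")}
conflicting = [{"owner": "visitor", "action": "setVolume(9)"},
               {"owner": "warden", "action": "setVolume(2)"}]
print(apply_stakeholder(conflicting, pairs)["action"])  # setVolume(2)
```

Because the closure adds (warden, visitor), the warden's setting wins even though no rule ordered warden and visitor directly, which is the simplification benefit of transitivity noted below.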
Multiple orders among stakeholders can be specified under the same conditions. The relative order among stakeholders is transitive: if owner A is ranked higher than owner B, and owner B is ranked higher than owner C, then owner A is ranked higher than owner C. This helps to simplify specification of the ordering. The stakeholder parameters of the set_order action can also be roles. In the above example, a policy owner whose role is warden has higher priority than a policy owner whose role is tenant. Our policy system supports roles through policy variables, which can contain a single value or a list of values.

5.2.2. Applying Order among Stakeholders

When conflicts are detected between policies of different owners, the resolution action apply_stakeholder can be used. Suppose there are conflicting policies that want to set the volume of the TV differently. The condition part of a resolution policy would check whether both targets are the same TV, whether both actions are set_volume, and whether the volume levels are different. The policy preferences would indicate if both
policies are positive obligation policies. The action part of the resolution would use apply_stakeholder to ensure that the warden’s policy is respected.

5.3. Handling Interactions of Policy Actions over Time

To handle policy interactions over time inside the policy system, we need to be able to detect and then resolve the conflicts. According to Moffett’s classification of policy conflicts, the interactions between a new action and existing actions are goal conflicts (or resource conflicts) rather than modality conflicts. These are action conflicts, not policy conflicts. Detecting conflicts between new actions and existing running actions requires the policy system to keep a record of all running actions. To find out whether a new action conflicts with ongoing actions, application-specific information is needed. This can be obtained by asking the user to specify the specific conditions to check. Similarly, resolving conflicts can be achieved by asking the user to resolve the choice of action manually. Even after an action is stopped due to a conflict, how to deal with it later also needs to be specified. In addition, only certain kinds of actions may suffer from this kind of conflict. For simplicity, we therefore move the handling of action conflicts from the policy server to the actuators in our system. In the example given earlier, the alarm system would be an actuator and would use a priority-based approach to handle actions that conflict over time. A new alarm with a higher priority would stop an existing alarm with a lower priority.
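The priority-based actuator behaviour can be sketched as follows; the priorities and messages are illustrative, and the preemption rule (strictly higher priority wins) is one possible design choice:

```python
class AlertActuator:
    """Actuator-side handling of action conflicts over time.

    A new alert preempts the running alert only if it has strictly higher
    priority; otherwise it is ignored and the running alert continues.
    """

    def __init__(self):
        self.current = None  # (priority, message) of the running alert

    def request(self, priority, message):
        if self.current is None or priority > self.current[0]:
            preempted = self.current
            self.current = (priority, message)
            return ("started", preempted)
        return ("ignored", None)

actuator = AlertActuator()
print(actuator.request(2, "time to take your medicine")[0])  # started
print(actuator.request(5, "fire - leave the house now")[0])  # started (preempts)
print(actuator.request(2, "medicine reminder repeat")[0])    # ignored
```

This keeps the policy server simple, exactly as argued above: the server emits actions, and the actuator alone decides how a fire alarm displaces a medication reminder in progress.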
6. Related Work

Policy-based management has been applied in many areas, for example network and distributed systems management [8], telecommunications [4,9], pervasive computing environments [10,11,12], semantic web services [13], and large evolving enterprises [14]. The present paper has used the taxonomy of policy conflicts in [2], which describes how policy conflicts can be detected and resolved at definition time or at run time. [5] reviews policies in distributed systems and proposes using meta-policies to detect and resolve conflicts within one organisation. However, this work does not tackle the problem of conflicting policies among multiple stakeholders. In addition, the solution in [5] is mostly for static detection and resolution of policy conflicts. Dunlop et al. have proposed a solution to detect conflicts dynamically using deontic logic, but this tackles only modality conflicts [14]. [10,11] propose a solution based on reasoning about the effects of actions to detect and resolve goal conflicts in pervasive computing environments; the aim is to guarantee the execution order of actions resulting from a single trigger. However, this work also does not deal with policy conflicts among multiple stakeholders.
7. Conclusion

The paper has focused on conflict issues when using policy-based management in home care systems. Three specialised kinds of conflict have been identified in home care: multiple stakeholder conflicts, policy conflicts due to dependency among
situations, and conflicts among actions over time. Based on the analysis of these conflicts, we have proposed a solution that enhances our existing policy system to handle them. This has included extensions to the policy language and the policy system. The enhancements also have implications for other parts of the home care system, including sensors and actuators. We plan to evaluate the approach through field trials in actual homes.
References

[1] F. Wang, L. S. Docherty, K. J. Turner, M. Kolberg and E. H. Magill. Services and Policies for Care at Home. Proc. Int. Conf. on Pervasive Computing Technologies for Healthcare, pp. 7.1–7.10, Nov. 2006.
[2] J. D. Moffett and M. Sloman. Policy Conflict Analysis in Distributed System Management. Organizational Computing, 4(1):1–22, 1994.
[3] K. J. Turner, S. Reiff-Marganiec, L. Blair, J. Pang, T. Gray, P. Perry and J. Ireland. Policy Support for Call Control. Computer Standards and Interfaces, 28(6):635–649, Jun. 2006.
[4] K. J. Turner and L. Blair. Policies and Conflicts in Call Control. Computer Networks, 51(2):496–514, Feb. 2007.
[5] E. C. Lupu and M. Sloman. Conflicts in Policy-Based Distributed Systems Management. IEEE Trans. on Software Engineering, 25(6), 1999.
[6] A. K. Dey, D. Salber and G. D. Abowd. A Context-based Infrastructure for Smart Environments. Proc. 1st Int. Workshop on Managing Interactions in Smart Environments, pp. 114–128, Dublin, Dec. 1999.
[7] M. Perry, A. Dowdall, L. Lines and K. Hone. Multimodal and Ubiquitous Computing Systems: Supporting Contextual Interaction for Older Users in the Home. IEEE Trans. on IT in Biomedicine, 8(3):258–270, 2004.
[8] N. Damianou, N. Dulay, E. Lupu and M. Sloman. Ponder: A Language for Specifying Security and Management Policies for Distributed Systems. Technical Report, Imperial College, London, UK, 2000.
[9] Special Issue on Feature Interactions in Telecommunications Systems. IEEE Communications Magazine, 31(8), 1993.
[10] C. Shankar, A. Ranganathan and R. Campbell. An ECA-P Policy-based Framework for Managing Ubiquitous Computing Environments. Proc. 2nd Int. Conf. on Mobile and Ubiquitous Systems, 2005.
[11] C. Shankar and R. Campbell. Ordering Management Actions in Pervasive Systems using Specification-enhanced Policies. Proc. 4th Int. Conf. on Pervasive Computing and Communications, Pisa, Mar. 2006.
[12] L. Kagal, T. Finin and A. Joshi. A Policy Language for Pervasive Systems. Proc. 4th Int. Workshop on Policies for Distributed Systems and Networks, Lake Como, Jun. 2003.
[13] A. Uszok, J. Bradshaw, R. Jeffers, M. Johnson, A. Tate, J. Dalton and S. Aitken. KAoS Policy Management for Semantic Web Services. IEEE Intelligent Systems, 19(4):32–41, 2004.
[14] N. Dunlop et al. Dynamic Conflict Detection in Policy-Based Management Systems. Proc. EDOC ’02, 2002.
Feature Interactions in Software and Communication Systems IX
L. du Bousquet and J.-L. Richier (Eds.)
IOS Press, 2008
© 2008 The authors and IOS Press. All rights reserved.
Conflict Detection in Call Control Using First-Order Logic Model Checking

Ahmed F. LAYOUNI¹, Luigi LOGRIPPO¹, Kenneth J. TURNER²
¹ Université du Québec en Outaouais, Département d’informatique et ingénierie, Gatineau, QC, Canada J8X 3X7 (Email: laya01 | luigi @uqo.ca)
² University of Stirling, Department of Computing Science and Mathematics, Stirling FK9 4LA, Scotland, UK (Email: [email protected])
Abstract. Feature interaction detection methods, whether online or offline, depend on prior knowledge of the conflicts between the actions executed by the features. This knowledge is usually assumed to be given in the application domain. A method is proposed for identifying potential conflicts between call control actions, based on an analysis of their pre/post-conditions. First, pre/post-conditions for call processing actions are defined. Then, conflicts among the pre/post-conditions are defined. Finally, action conflicts are identified as a result of these conflicts. These cover several possibilities where the actions can be simultaneous or sequential. A first-order logic model-checking tool is used for automated conflict detection. As a case study, the APPEL call control language is used to illustrate the approach, with the Alloy tool serving as the model checker. This case study focuses on pre/post-conditions describing call control state and media state. The results of the method are evaluated by a domain expert with a pragmatic understanding of the system's behavior. The method, although computationally expensive, is fairly general and can be used to study conflicts in other domains.
Keywords: Call control, conflict detection, feature interaction, policy, APPEL, Alloy, logic model checking.
1 Introduction
1.1 Features and Policies for Call Control

Feature interactions have been discussed with respect to many types of systems, although a good part of the literature has concentrated on call processing systems. A survey of the literature on the subject can be found in [2]. Feature interaction is a complex phenomenon and can be analyzed from different points of view. Much research in the area has emphasized the behavioral aspect of the phenomenon. In this perspective, feature interactions are often seen as the result of complex behavior interleaving among the state machines that represent the features. In two feature interaction contests [10,12] the contestants were given what essentially
were state models for features. These had to be composed, and their composition had to be modeled, looking for behavioral traces showing undesirable behavior where, for example, one feature was not allowed to run to completion due to the intervention of another feature. In the world of VoIP, users are allowed to program their own features. However, most users do not program them from scratch using VoIP facilities directly. Rather, each VoIP system offers a set of basic features that can be combined by users and enterprises, using specifically designed languages, to implement different policies. CPL (Call Processing Language [15]) is a well-known, early embodiment of this idea. Other policy languages with different purposes are LESS [22,23] and APPEL [20,21]. In these approaches, users can specify policies such as: 'if a call arrives from Alice during work hours, treat it as urgent', or 'calls to Bob should be tried at all addresses where Bob normally works'. The familiar rule paradigm is at the heart of these systems, and we conjecture that it will continue to be used. This paradigm is essentially identical to the ECA, or <event, condition, actions>, paradigm that has been applied extensively in areas such as reactive databases, agent systems, access control systems and the semantic web. Generally speaking, a rule is enabled when its trigger occurs and its condition holds. Note the difference between trigger and condition: the trigger can be an external or internal event, and can convey parameters for use in conditions and actions. Conditions can check database or 'context' information, such as the time of day or the role of the user in an enterprise ontology. Application of the rule leads to one or more actions. This apparently simple paradigm allows many variations, and is a good match to the many requirements of call control. A policy can expand into a number of such rules.
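The ECA paradigm just described can be sketched concretely. The following Python fragment is only an illustrative model, not part of APPEL, CPL or LESS; the rule fields, the trigger name incoming_call and the action names are invented for the example:

```python
from dataclasses import dataclass
from typing import Callable

# A minimal ECA (event-condition-action) rule: the trigger names an event,
# the condition inspects context carried with the event, and the actions
# are proposed only when both match.
@dataclass
class Rule:
    trigger: str                       # event name that enables the rule
    condition: Callable[[dict], bool]  # predicate over context/parameters
    actions: list                      # actions proposed when the rule fires

def applicable(rules, event, ctx):
    """Collect every action proposed for this event; conflicts among the
    returned actions are exactly what interaction analysis studies."""
    return [a for r in rules
              if r.trigger == event and r.condition(ctx)
              for a in r.actions]

# Two example policies firing on the same trigger: 'urgent calls from Alice
# in work hours' and 'forward all incoming calls'.
rules = [
    Rule("incoming_call",
         lambda c: c["caller"] == "alice" and 9 <= c["hour"] < 17,
         ["treat_as_urgent"]),
    Rule("incoming_call", lambda c: True, ["forward_to_office"]),
]
```

A call from Alice at 10:00 enables both rules at once, which illustrates how a single trigger can propose several, possibly incompatible, actions.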
By means of policies and rules one can define the counterpart of traditional features, though policies can be higher-level, user-oriented and more declarative. Several actions can be proposed simultaneously, for example when one rule defines multiple actions or multiple rules are activated by the same trigger. When this happens, the different actions can direct the system to do incompatible things. Actions may also set conditions that block other actions that should follow. Conflicts between actions imply potential conflicts between the policies that invoke the actions, and are the main manifestation of feature interactions in policy systems. In this paper the terms conflict and incompatibility are used as synonyms, and conflicts and incompatibilities are seen as the consequences of logical inconsistencies. In policy systems there are resolution methods to ensure that only one action for each event is executed. For example, this is the situation for firewalls: the rule file is typically scanned top-down and only the first applicable rule is used, leading to just one action that accepts or rejects the proposed access. Some policy languages allow the user to include meta-rules, often based on priorities, for resolving cases where several actions may become simultaneously enabled. The situation is complicated by the fact that for certain events, several actions may be needed. Nonetheless, for the validation of a policy set, all rules and actions that can become enabled for a given trigger and condition should be examined without considering resolution methods. Indeed, several cases of interest can be found in this way. For example, an important policy might be 'shadowed' by a more general but
contradictory policy, or a specific case might have been added in contradiction to an important general policy. This can happen because users of these systems may be allowed to add and delete rules as they see the need for them. When they do this, they may not have a global view of all the consequences of the changes. Such situations could lead to unwanted system behavior, even though that behavior may be technically correct. Users should be notified with a request, and possibly suggestions, for resolution.

1.2 Related Work

Several authors have suggested that many undesirable feature interactions can be understood as the result of inconsistency in specifications. Perhaps the earliest and clearest statements in this sense can be found in [3,8], where feature interactions are modeled as inconsistencies among temporal logic specifications. According to this work, features A and B conflict if and only if a program realizing their joint specification A ∧ B does not exist. The detection method uses the model checker Cospan. A similar view is given a theoretical justification in [1]. Indeed, the first classical paper on this subject [2] already lists 'conflicting assumptions' as one of the main causes of feature interaction. Among others, [5,9,13] are based on the idea that feature interactions are the result of conflicting actions becoming enabled. But how can one tell that actions may conflict? The analyses of [22,23] move to a finer granularity by considering the pre/post-conditions of actions. For example, two actions having incompatible post-conditions can cause a feature interaction if they are simultaneously enabled, and two actions where the first falsifies the pre-condition of the second can cause a feature interaction if they are enabled one after the other. Conflicts of pre/post-conditions in systems of ECA rules have also been studied in [18].
We extend the conflict identification method of [22,23] to the language APPEL [20,21], and we refine and adapt the definitions used in these papers. We automate the conflict detection method using the first-order formal language Alloy [11]; the associated Alloy tool is used to identify the conflicts. A pragmatic approach to handling conflicts in APPEL is described in [19]. This work provides run-time support, assuming that the conflicts have already been identified in some independent way. Another recent contribution for the same language [16] provides a denotational semantics framework for APPEL, as well as a method to address feature interaction, but again assuming that conflicts between actions have already been identified. The method described in this paper can be used in conjunction with the techniques proposed in these two other papers, to provide the information that they need concerning the conflicts existing between specific actions. This method is thus a contribution towards a formal semantics for APPEL, as well as to feature interaction handling in APPEL. In a related paper [4], a technique has been developed for filtering conflicts in the same APPEL language. That approach is founded on the intuitive notion that actions may conflict if they share a common effect. In contrast, the work reported here has a higher degree of precision: pre/post-conditions are considered, as well as the ordering of actions. This leads to a formal model that allows semantically-based inferences to be drawn about the compatibility of actions. Still, because of our level of
precision, the high-level analysis possible in [4] would be difficult with our method, and several aspects that can be considered with that method would be difficult to consider with ours. For the time being, we must regard the two methods as both useful and complementary. Future research will have to deal with the problem of reconciling and integrating them.
2 Ordering and conflicts between actions

In this method, the mutual consistency of actions is determined on the basis of their pre/post-conditions. We consider a system state to be characterized by a set of variables and their values. Pre/post-conditions are predicates that describe these values. The pre-condition of an action describes the state(s) in which the system must be in order for the action to execute. The post-condition of an action describes the state(s) that can result from its execution. We shall see below that pre/post-conditions can be consistent or inconsistent, leading to mutual consistency or inconsistency of states. The following timing relationships can apply between actions:

• simultaneous execution: one action starts executing at a time when the other action has not completed;
• sequential execution: one action starts executing after the other action has completed, i.e. one action strictly precedes the other.

If two actions start from or lead to mutually inconsistent system states, they are incompatible and should not be executed simultaneously. Even the case in which such actions are sequentially executed could be suspect, because the second action contradicts the results of the first (although this is normal in the evolution of a system). If an action establishes a post-condition that contradicts the pre-condition of another action, then the second action cannot immediately follow the first. In more detail, the following relations are of interest between the pre/post-conditions of two actions A and B (this is not meant to be an exhaustive list):

1. Relationship between the pre-conditions of A and the pre-conditions of B:
(a) The conjunction of the pre-conditions of the two actions is always true. The two actions can thus always be executed simultaneously. This is perhaps a rare situation.
(b) The conjunction is satisfiable. In certain system states, A and B can both be executed.
(c) The pre-conditions of the two actions are not simultaneously satisfiable. There are no system states in which A and B can be executed simultaneously. For example, they both might require the same device, or they can be executed only in different connection states.
2. Relationship between the post-conditions of A and the pre-conditions of B (or vice versa); the cases are similar:
(a) The conjunction is always true: the second action can always start after the first.
(b) The post-conditions of A are simultaneously satisfiable with the pre-conditions of B. B can follow A in the case of simultaneous truth. (A more
general case of these two situations is the case in which the post-condition of A implies the pre-condition of B.)
(c) The post-condition of A is not simultaneously satisfiable with the pre-condition of B. In other words, B cannot follow A, or A disables B. For example, A might free a device that B needs to find reserved, or A might leave the system in a connection state different from the one B requires.
3. Relationship between the post-conditions of A and B:
(a) Simultaneous truth: no problem for concurrent execution.
(b) The post-conditions of A and B are simultaneously satisfiable. This means that the results of A and B can be compatible.
(c) The post-conditions of A and B are not simultaneously satisfiable. This means that the results of A and B are incompatible in principle. For example, one of them disconnects the call while the other continues it. Simultaneously executing the two actions would leave the system in an inconsistent, i.e. impossible, state.

Doing a thorough analysis of all the cases above would be rather complicated, and to our knowledge this has never been done for realistic call control systems. In this work, we are interested in a partial analysis of conflicts, and we identify three situations of conflict between actions (Figure 1):

• concurrency conflicts: two actions have inconsistent pre-conditions, and thus cannot be executed in the same system state;
• disabling conflicts: an action leaves the system in a state where a second action cannot be executed;
• result conflicts: two actions would leave the system in an inconsistent (impossible) state, and thus cannot be executed simultaneously.

Further, the two aspects of pre/post-conditions to be considered are the connection state and the media state.
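The three conflict situations can be phrased compactly in terms of satisfiability. The following sketch is hypothetical Python, not the paper's tooling: a pre/post-condition is modelled as the set of states it admits, so 'the conjunction is satisfiable' becomes 'the two sets intersect'. The two toy actions at the end use the paper's state vocabulary but deliberately simplify Table 1 to connection state only:

```python
# A condition is modelled as the set of system states it admits; the
# conjunction of two conditions is satisfiable iff the sets intersect.
def satisfiable(cond1: set, cond2: set) -> bool:
    return bool(cond1 & cond2)

def concurrency_conflict(a, b):
    # case 1(c): no common state satisfies both pre-conditions
    return not satisfiable(a["pre"], b["pre"])

def disabling_conflict(a, b):
    # case 2(c): no state satisfying a's post-condition satisfies b's pre-condition
    return not satisfiable(a["post"], b["pre"])

def result_conflict(a, b):
    # case 3(c): the two post-conditions admit no common state
    return not satisfiable(a["post"], b["post"])

# Toy actions over connection states only:
reject_call = {"pre": {"CallSetup"}, "post": {"NoCall"}}
add_party   = {"pre": {"MidCall"},   "post": {"MidCall"}}
```

With these toy definitions, reject_call and add_party exhibit all three kinds of conflict, while two add_party actions have compatible results.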
Figure 1. Three types of conflicts
Conflicts among pre/post-conditions of more than two actions are also possible. However this kind of analysis is rarely performed because it becomes complex and
very few concrete examples (where three actions can be in conflict without any two of them being in conflict) are known. In addition, our case study will be on APPEL, and run-time conflict handling for APPEL is designed so that only pairwise combinations of actions need be considered.
3 The APPEL Policy Language

APPEL (ACCENT Project Policy Environment/Language) is a general-purpose language for expressing policies. The language is defined in [20], and its use for call control is described in [21]. APPEL conforms to the ECA model for policy rules. APPEL is supported by a policy system that interfaces to some system under control (e.g. a SIP server). When a trigger is received (e.g. there is an incoming call, or a new party is being added to the call), the policy server retrieves all policies that apply. These are typically policies of the caller and the callee, but higher-level policies may also be retrieved (e.g. those of the user's organizations). Policies are then checked for applicability. Apart from explicit policy conditions, other factors that determine applicability include the profile of a policy and its period of validity. The result is a set of actions. Triggers, conditions and actions may all be composite: triggers and conditions may be combined by logical operators, and actions may be conditional, sequential or concurrent. Although APPEL resembles a number of other policy languages, it differs in several important respects. It was specifically oriented towards the needs of call control, to which other approaches do not relate well. For example, the Ponder policy language [6] assumes that the subject and target of a policy can be identified; in call control and other applications, however, these concepts often are not present. APPEL was designed so that ordinary end users can formulate policies, unlike other languages that require a high degree of technical expertise. Since APPEL is XML-based, policies cannot comfortably be defined directly by a non-technical user. APPEL is therefore supported by a user-friendly policy wizard that allows creation and editing of policies using near-natural language. Although APPEL was originally developed for call control, it is of wider applicability.
For example, it has also been used for policy-based management of home care and sensor networks. This wide range of applications is possible because APPEL has a core language that is supplemented by domain-specific extensions. This is reflected in the language schemas and also in the ontologies that define domain vocabularies. APPEL was designed with conflict handling in mind. As described in [19], the actions resulting from a trigger are filtered for compatibility. Special resolution policies are used to detect and to resolve conflicts. These policies resemble regular policies, but the trigger of a resolution policy is the action of a regular policy. Since resolutions are defined rather than being built into the policy system, there is considerable flexibility in how conflicts are handled. Generic resolutions choose among the conflicting actions, while specific resolutions propose domain-specific actions (that may differ from the conflicting ones). Although the approach supports automated run-time resolution of conflicts, it relies on resolution policies having been
already defined. That is, as mentioned, the approach is dependent on already knowing what the conflicts are. In previous work, conflicts were determined manually – a tedious and error-prone task. The new work reported here provides a systematic, automated and semantically-based way of discovering conflicts that can then be used to define resolution policies.
4 APPEL Actions and Their Conflicts
4.1 APPEL Actions

Although our approach could be used with APPEL in other domains, for concreteness and familiarity we use call control as the application domain. The call control actions in APPEL are defined by [20]. Some of these depend on particular communications protocols (e.g. H.323) and on particular parameters. We choose to abstract the key call control actions as follows:

• connect_to initiates a new and independent call
• reject_call rejects a call, i.e. prevents it from completing
• forward_to changes the destination of the call
• fork_to adds an alternative leg to the call
• add_party adds a new party to an existing call
• remove_party removes a party from the call
• add_medium adds a new medium to the call
• remove_medium removes a medium from the call
• remove_default removes the default medium from the call
• disconnect disconnects the call

This list of actions provides an abstract view of the call processing cycle in APPEL: an initial connection action can be followed by reject, forward or fork. During the call, parties can be added or removed, and media can be added or removed. The call can then be disconnected. Note that 'disconnect' is not at present an action in APPEL; however, our analysis has led to the conclusion that it should be added. The action remove_default deserves mention, especially since there is no add_default. Certain actions, such as connect_to, implicitly reserve the default medium for the call (usually audio). Although the remove_default action also does not exist in APPEL, it is implicit. We have made it explicit because, as will be seen later, it is useful to consider the availability of the default medium in the pre/post-conditions. All these actions have parameters, which can themselves cause interactions. However, the treatment of parameters would add considerable complexity to our analysis, so we have abstracted away from parameters in our initial analysis of conflicts.
We have also omitted actions that do not directly relate to call control (e.g. those that log or send messages). Our method can be applied to them, but this has not been done here because it would have complicated the presentation of the approach with little additional insight. For one thing, our tables would have had to be much larger.
4.2 Pre/Post-Conditions for APPEL Actions

Like all real-life distributed systems, call processing systems are complex, and the conditions involved are correspondingly complex. In practical terms, with current means, analysis must be limited to a few important characteristics. Following the example of [22,23], we have decided to concentrate our analysis on two aspects: connection (or call) state and media state. We therefore characterize the state of a system as a pair (connection state, media state). Table 1 shows the table of pre/post-conditions that was developed for this study. It represents a simplified and abstract view of call processing in APPEL. Setting up this table is a delicate task that determines the results of the analysis. Call processing progresses through three mutually exclusive connection states: NoCall, CallSetup and MidCall. Note that Table 1 does not describe a state machine, i.e. transitions and associated actions from state to state. For example, there is no action that leads from CallSetup to MidCall. It is assumed that this state transition will occur as a consequence of events that are not shown in the table. That is, the table intentionally does not describe how the real system works 'behind the scenes'. The table identifies two categories of media: the default medium (e.g. audio) and media in general (e.g. video, messaging). It is useful to make this distinction because a call is always initiated with a default medium. This may later be augmented or replaced by something else (e.g. video may be added, or the call may be reduced to messaging only). The analysis presented in the following sections identifies six cases of conflict, in the three major categories we have identified:

1: Concurrency or Pre-Condition - Connection State
2: Concurrency or Pre-Condition - Media State
3: Disabling - Connection State
4: Disabling - Media State
5: Result or Post-Condition - Connection State
6: Result or Post-Condition - Media State

Action         | Pre: Connection State     | Pre: Media State | Post: Connection State    | Post: Media State
---------------|---------------------------|------------------|---------------------------|------------------
connect_to     | NoCall                    | DefaultAvailable | CallSetup                 | DefaultReserved
reject_call    | CallSetup                 | DefaultReserved  | NoCall                    | DefaultAvailable
forward_to     | CallSetup                 | DefaultReserved  | CallForwarded             | DefaultAvailable
fork_to        | CallSetup                 | DefaultReserved  | CallForked                | DefaultReserved
add_party      | MidCall                   | DefaultAvailable | PartyAddedToCall, MidCall | DefaultReserved
remove_party   | MidCall, PartyAddedToCall | DefaultReserved  | MidCall                   | DefaultAvailable
add_medium     | MidCall                   | MediumAvailable  | MidCall                   | MediumReserved
remove_medium  | MidCall                   | MediumReserved   | MidCall                   | MediumAvailable
remove_default | MidCall                   | DefaultReserved  | MidCall                   | DefaultAvailable
disconnect     | MidCall                   | DefaultReserved  | NoCall                    | DefaultAvailable

Table 1. Pre/post-conditions for APPEL actions
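For experimentation, Table 1 can be transcribed directly as data. The sketch below is an illustrative Python encoding (the paper's actual model is written in Alloy, and the helper names pre and post are invented here): each action maps to a (pre, post) pair, where each of pre and post is itself a pair of (connection-state facts, media-state fact):

```python
# Table 1 as data: action -> (pre, post). Illustrative transcription only;
# the authoritative definitions are those of the paper's Alloy model.
ACTIONS = {
    "connect_to":     ((("NoCall",),                     "DefaultAvailable"),
                       (("CallSetup",),                  "DefaultReserved")),
    "reject_call":    ((("CallSetup",),                  "DefaultReserved"),
                       (("NoCall",),                     "DefaultAvailable")),
    "forward_to":     ((("CallSetup",),                  "DefaultReserved"),
                       (("CallForwarded",),              "DefaultAvailable")),
    "fork_to":        ((("CallSetup",),                  "DefaultReserved"),
                       (("CallForked",),                 "DefaultReserved")),
    "add_party":      ((("MidCall",),                    "DefaultAvailable"),
                       (("PartyAddedToCall", "MidCall"), "DefaultReserved")),
    "remove_party":   ((("MidCall", "PartyAddedToCall"), "DefaultReserved"),
                       (("MidCall",),                    "DefaultAvailable")),
    "add_medium":     ((("MidCall",),                    "MediumAvailable"),
                       (("MidCall",),                    "MediumReserved")),
    "remove_medium":  ((("MidCall",),                    "MediumReserved"),
                       (("MidCall",),                    "MediumAvailable")),
    "remove_default": ((("MidCall",),                    "DefaultReserved"),
                       (("MidCall",),                    "DefaultAvailable")),
    "disconnect":     ((("MidCall",),                    "DefaultReserved"),
                       (("NoCall",),                     "DefaultAvailable")),
}

def pre(action):   # (connection facts, media fact) required before execution
    return ACTIONS[action][0]

def post(action):  # (connection facts, media fact) holding after execution
    return ACTIONS[action][1]
```

Such a transcription makes the pairwise checks of the following sections purely mechanical table lookups.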
Connection State 1 | Connection State 2
-------------------|-------------------
NoCall             | MidCall
NoCall             | CallSetup
CallSetup          | MidCall
CallSetup          | NoCall
MidCall            | NoCall
MidCall            | CallSetup

Table 2. Connection State incompatibilities

4.3 Concurrency Conflicts
As mentioned, in this case the question is whether two actions can be executed starting from the same system state. This will not be possible if they require states that are incompatible. For example, action connect_to cannot be concurrent with any other action, since it is the only action that can be executed before a call exists. Similarly, add_party requires the system to be in a state where the default medium is available, while remove_party instead requires the default medium to have been reserved. Note that this does not mean that the two actions are necessarily incompatible: our analysis is not sufficiently detailed for such certitude. Indeed, in every method reported in the literature, feature interaction detection only suggests the possibility of an interaction, which must be confirmed by domain experts, also in consideration of specific contexts. The approach requires incompatibilities in state to be defined. Table 2 shows the incompatibilities between connection states that we have used. Essentially, the table says that the three connection states are mutually incompatible.

Action Pair        | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10)
-------------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----
(1)  connect_to    |     |  1  |  1  |  1  |  1  |  1  |  1  |  1  |  1  |  1
(2)  reject_call   |  1  |     |     |     |  1  |  1  |  1  |  1  |  1  |  1
(3)  forward_to    |  1  |     |     |     |  1  |  1  |  1  |  1  |  1  |  1
(4)  fork_to       |  1  |     |     |     |  1  |  1  |  1  |  1  |  1  |  1
(5)  add_party     |  1  |  1  |  1  |  1  |     |     |     |     |     |
(6)  remove_party  |  1  |  1  |  1  |  1  |     |     |     |     |     |
(7)  add_medium    |  1  |  1  |  1  |  1  |     |     |     |     |     |
(8)  remove_medium |  1  |  1  |  1  |  1  |     |     |     |     |     |
(9)  remove_default|  1  |  1  |  1  |  1  |     |     |     |     |     |
(10) disconnect    |  1  |  1  |  1  |  1  |     |     |     |     |     |

Table 3. Pre-condition conflicts for Connection State (case 1); columns are numbered as the rows, and an entry 1 denotes a case 1 conflict
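The pattern of Table 3 can be recomputed mechanically from Tables 1 and 2. The following sketch (an illustration in Python, independent of the Alloy model) keeps only the pre-condition connection state of each action; since the three states are mutually incompatible, two actions are in concurrency conflict on connection state exactly when their pre states differ:

```python
# Pre-condition connection state of each action, per Table 1.
PRE_CONN = {
    "connect_to": "NoCall",
    "reject_call": "CallSetup", "forward_to": "CallSetup", "fork_to": "CallSetup",
    "add_party": "MidCall", "remove_party": "MidCall",
    "add_medium": "MidCall", "remove_medium": "MidCall",
    "remove_default": "MidCall", "disconnect": "MidCall",
}

def concurrency_conflict(a: str, b: str) -> bool:
    """Case 1 conflict: the two pre-condition connection states are
    incompatible, which for the three mutually incompatible states of
    Table 2 simply means they differ."""
    return PRE_CONN[a] != PRE_CONN[b]

conflicts = {(a, b) for a in PRE_CONN for b in PRE_CONN
             if concurrency_conflict(a, b)}
```

As in Table 3, connect_to conflicts with every other action, two connect_to actions do not conflict with each other, and the relation is symmetric.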
As a consequence of this, we obtain the results shown in Table 3 for incompatibilities among connection states. We can see there that reject_call and add_party are incompatible, because each requires the system to be in a different state than the other. Two different connect_to actions are not incompatible for this reason, although they will be incompatible by other criteria (see below). Obviously, the table is symmetric. The other aspect to be considered is media state. The table of media state incompatibilities is not shown here because it is rather simple: it indicates potential conflicts if the actions require some medium (including the default) to be both reserved and available. Here again, the necessary simplification should be understood. A call system will have a variety of selectable media and default media. To be complete and precise, one would have to consider the specific media and defaults in the system under consideration, as well as the specific operations that reserve and release them. This type of detail is possible in practice, but is irrelevant for the purpose of this paper, which is to illustrate the method.

4.4 Disabling Conflicts

As mentioned, it is possible for an action to leave the system in a state where another action is impossible. This can be determined by checking post-conditions against pre-conditions. Concerning the connection state, the incompatibilities to be considered are the same as earlier: the three states are mutually incompatible. Thus, an action that must find the system in state MidCall cannot immediately follow an action that leaves the system in state CallSetup. Similarly for media state: an action that requires the default medium to be reserved cannot follow an action that sets the default medium available, and so on.

Table 4. Disabling conflicts for connection state (case 3)
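A single cell of a table such as Table 4 can be computed from Tables 1 and 2 under the stated rule. The sketch below is hypothetical Python, not the paper's Alloy model; in particular, how states outside the three-state set (such as CallForwarded) should be judged is an assumption of this sketch, which conservatively treats them as compatible:

```python
# The three mutually incompatible connection states of Table 2.
MUTUALLY_INCOMPATIBLE = {"NoCall", "CallSetup", "MidCall"}

def disables(post_conn_a: str, pre_conn_b: str) -> bool:
    """Case 3 check for one pair: action A disables action B when A's
    post-condition connection state is incompatible with B's pre-condition
    connection state. States outside the three-state set are treated as
    compatible here (an assumption of this sketch)."""
    both_core = {post_conn_a, pre_conn_b} <= MUTUALLY_INCOMPATIBLE
    return both_core and post_conn_a != pre_conn_b

# Example: disconnect leaves NoCall, so it disables add_party (which needs
# MidCall), but not a following connect_to (which needs exactly NoCall).
```

The check is inherently asymmetric, since it compares the post-condition of the first action with the pre-condition of the second.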
Table 4 shows the results obtained with respect to connection state. The table is not symmetric, because the disabling relation is not symmetric.

4.5 Result Conflicts

Two actions are also incompatible if they lead to incompatible post-conditions. Again, these can refer to connection state or to media state. In the case of connection state, if an action leads to a certain connection state, a compatible action must lead either to the same state or to the next state. As mentioned, the cycle of states is as follows: NoCall leads to CallSetup, which leads to MidCall, which leads again to NoCall. An action that leads to one of these states is incompatible with an action that jumps one link in the sequence. As an example, reject_call leads to NoCall, while add_medium leads to MidCall. Clearly a link is skipped here, since between the two we need an operation that establishes CallSetup; hence the incompatibility. The complete incompatibility table between connection states is not given here, for brevity, since it essentially reflects this reasoning. Note that this definition of state incompatibility is perhaps disputable, but this does not affect the validity of the method, which can be adapted to other definitions. Table 5 shows the conflicts according to this criterion.

Table 5. Post-condition conflicts for connection state (case 5)

For media state, the incompatibilities are again simple. If the actions lead to some medium being both available and reserved, or to the default medium being both available and reserved, there is a post-condition incompatibility because of media. To save space, the results of this analysis are given in Table 6, the recapitulative table.
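The 'jumps one link' rule of this section can be encoded directly. The sketch below is an illustration in Python of the stated rule only, not a reproduction of Table 5; states outside the cycle, such as CallForwarded, are not judged by the rule and are left out of the check (an assumption of this sketch):

```python
# The cycle of connection states: NoCall -> CallSetup -> MidCall -> NoCall.
CYCLE = ["NoCall", "CallSetup", "MidCall"]

def skips_a_link(frm: str, to: str) -> bool:
    """True when going from `frm` to `to` jumps over the intermediate state
    of the cycle, i.e. `to` is two links ahead of `frm`."""
    if frm not in CYCLE or to not in CYCLE:   # e.g. CallForwarded: not judged
        return False
    return (CYCLE.index(to) - CYCLE.index(frm)) % 3 == 2

# The paper's example: reject_call leads to NoCall and add_medium to MidCall;
# NoCall -> MidCall skips CallSetup, hence the result conflict.
```

Moving to the same state or to the next state of the cycle, by contrast, skips nothing and is judged compatible.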
4.6 Overall Results

Table 6 shows the complete results for the six types of conflict we have discussed. We have also analyzed other situations, for example the case where an action enables, or sets the pre-conditions of, another action [14]. In this case, the post-condition of the first action implies the pre-condition of the second one. These situations cannot be discussed here for lack of space.

Table 6. Summary of conflicts

4.7 Assessment

How would a domain expert in call control (or APPEL) view these results? An expert is guided by a pragmatic understanding of the system's behavior, while the approach of this paper is formal and systematic, at a high level of abstraction. As mentioned, the parameters of actions are disregarded, and the view of system state is much simplified; this means it is not said, for example, which specific party or medium is being added or removed. As a consequence, the method discussed here is intentionally pessimistic. However, since the goal of the work is to identify action pairs that require closer study because of potential conflicts, the approach is successful.
5 Detecting Conflicts in APPEL with Alloy

The method described in the previous sections could be implemented in different programming languages. Instead of using a conventional programming language, we decided to experiment with the model checker Alloy. This decision was taken for two
reasons: Alloy allows high-level, conceptual modeling of system architectures and their properties; further, it has the capability of checking logical models, and thus is open to the possibility of extending our method to logically more complex pre/post-conditions.

5.1 Alloy language and tool

Alloy [11] is a formal method that includes a logic, a language, and a tool. The logic is primarily a relational logic. The language provides a user-friendly representation for the logic. It supports several specification styles, called predicate calculus style, relational style and navigational style (the last being the most expressive and most commonly used). It includes a type system and mechanisms that favor reusability. The tool is essentially a first-order logic model-checking tool, based on the use of off-the-shelf satisfaction software. Alloy allows one to describe a system model, and will check it for consistency. It is also able to check whether certain properties are true for the system. However, the user of Alloy is required by the execution system to specify a finite size for the model, and inconsistencies not found for the specified size could, at least in theory, appear for other sizes. Signatures are used in Alloy to define types, e.g.

    abstract sig Rules {
      trigger   : one OBtrigger,     // exactly one trigger
      condition : lone OBcondition,  // at most one condition
      action    : some OBaction      // one or more actions
    } { #action = 2 }                // consider only pairs of actions
defines a rule, and at the same time states that we are interested in generating exactly two objects of type action (of which there can in general be several, some), since we consider only conflicts between pairs of actions. Inheritance relationships can exist between signatures. Facts constitute a database of facts that are known in the system, e.g. the pre/post-conditions of the actions (see Table 1):

  fact {
    connect_to.PreConnState = NoCall
    connect_to.PreMediaState = DefaultAvailable
    reject_call.PreConnState = CallSetup
    reject_call.PreMediaState = DefaultReserved
    . . .
  }
Or the fact that connection states are pairwise incompatible (encoding Table 2):

  fact AC {
    IncompSet.ConcConflict_Incomp_ConnState =
      MidCall -> NoCall + MidCall -> CallSetup +
      NoCall -> MidCall + NoCall -> CallSetup +
      CallSetup -> MidCall + CallSetup -> NoCall
  }
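For readers more comfortable with a procedural view, the incompatibility relation and the corresponding concurrency-conflict test can be re-encoded as follows. This Python fragment is a hypothetical sketch of the same logic, not part of the Alloy model; the dictionary and function names are ours.

```python
# Hypothetical procedural re-encoding of the Alloy facts above.
# Connection-state pre-conditions of two actions (from Table 1).
PRE_CONN = {
    "connect_to": "NoCall",
    "reject_call": "CallSetup",
}

# Pairwise-incompatible connection states (Table 2): any two distinct
# states among NoCall, CallSetup and MidCall are incompatible.
STATES = {"NoCall", "CallSetup", "MidCall"}
INCOMPATIBLE = {(v, w) for v in STATES for w in STATES if v != w}

def conc_conflict_conn_state(a1, a2):
    """True if the two actions' connection-state pre-conditions clash."""
    return (PRE_CONN[a1], PRE_CONN[a2]) in INCOMPATIBLE

print(conc_conflict_conn_state("connect_to", "reject_call"))  # → True
```

This makes explicit that the check is a simple membership test in a finite relation, which is why the authors note it could also be implemented efficiently in a procedural language.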
Predicates are properties that can be true or false. Assertions are properties that can be checked by the tool, and for which the tool will try to find a counterexample. For
example, the following predicate is true if two actions are in concurrency conflict because of the connection state in their pre-conditions:

  pred Conc_Conflict_ConnState ( a1 : OBaction, a2 : OBaction ) {
    some v : a1.PreConnState, w : a2.PreConnState |
      (v -> w) in IncompSet.ConcConflict_Incomp_ConnState
  }
C12 asserts that predicate Conc_Conflict_ConnState is true for the two objects connect_to and reject_call:

  assert C12 {
    Conc_Conflict_ConnState ( connect_to, reject_call )
  }
The Alloy tool is asked to check this assertion with:

  check C12
The result is that there is no counterexample to the predicate; thus the assertion is valid and the two actions conflict in their pre-conditions, making them unsuitable for concurrent execution. The core specification of this problem is about 3 pages of Alloy code. A further 22 pages are required for the check and assert statements needed to determine the presence of conflicts in all cases of interest.

5.2 Alloy Execution

Internally, the Alloy tool expresses the constraints in terms of Boolean expressions and then tries to solve these by invoking off-the-shelf SAT solvers. This problem is of exponential complexity. However, SAT solvers are improving in efficiency and many non-trivial problems can be treated. Current solvers can handle thousands of Boolean variables and hundreds of expressions, although of course much depends on the type of the expressions [11]. Thus, the Alloy user must find a judicious compromise between detail and abstraction, as well as the size of the model to be checked. Too many details or too large a model will cause the tool to run out of memory or time. The Alloy tool provides a number of useful representations of its results: graphical, tree, XML. Alloy models can be checked in one of two ways:
• With the function VerifActions, which will check the whole model but will find at most one (arbitrarily chosen) conflict per execution. Unfortunately Alloy cannot be asked to continue finding further solutions, as Prolog can.
• By systematically checking assertions. To consider all cases for our model requires 600 executions (10 actions × 10 actions × 6 predicates). Each assertion takes about 2.5 minutes to check, for a total of around 25 hours.
The analysis was performed on a Pentium with dual 2.80 GHz CPUs and 1 GB of main memory. We used Alloy version 3. Version 4 offers improvements in usability, but it became available late in the progress of this work.
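Since the assertions follow a regular pattern, the check scripts can be generated mechanically rather than written by hand. The following Python sketch shows one way to emit the assert/check pairs; the action and predicate names are illustrative placeholders, not the exact identifiers of the Alloy specification.

```python
# Sketch: generate Alloy "assert"/"check" pairs for every action pair and
# every conflict predicate. All names below are illustrative only.
ACTIONS = ["connect_to", "reject_call", "add_medium", "remove_medium",
           "disconnect", "forward_to", "play_clip", "send_message",
           "add_party", "remove_party"]
PREDICATES = ["Conc_Conflict_ConnState", "Conc_Conflict_MediaState",
              "Seq_Conflict_ConnState", "Seq_Conflict_MediaState",
              "Pre_Incompatibility", "Post_Incompatibility"]

def alloy_checks():
    """Emit one assert and one check command per (action, action, predicate)."""
    lines = []
    for i, a1 in enumerate(ACTIONS):
        for j, a2 in enumerate(ACTIONS):
            for p in PREDICATES:
                name = f"C_{i}_{j}_{p}"
                lines.append(f"assert {name} {{ {p} ( {a1}, {a2} ) }}")
                lines.append(f"check {name}")
    return lines

checks = alloy_checks()
print(len(checks) // 2)  # 600 assertions: 10 x 10 actions x 6 predicates
```

This kind of generator reduces the 22 pages of check/assert statements to a few lines of scripting, although each generated assertion still has to be run through the Alloy tool individually.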
We look forward to improvements in the Alloy tool that would simplify and expedite its use in a case like ours, where several hundred assertions have to be checked. It should be underlined that our algorithm would be much more efficient if implemented in a procedural programming language; however, we wanted to work with a formal technique which allows a view that is close to the problem specification.
6 Conclusions

We have described and justified a method for finding conflicts between call processing actions in a VoIP context, extending and adapting ideas in the work of [22,23] and others. We have demonstrated the effective application of this method to the actions of APPEL. Verification was undertaken using Alloy for first-order model checking. We have focused on APPEL and Alloy mainly because we are familiar with them. We plan experimentation and comparison with other applications, other policy languages and other formal tools. In another case study, the method was used to check the results of [23] with regard to LESS; happily, we were able to confirm them, as well as to complete them with the detection of some additional conflicts [14]. The contributions of this work are as follows:
• The approach allows potential conflicts among policies to be determined by analyzing the pre/post-conditions of their actions. This is a general idea that is not restricted to call control, APPEL or Alloy.
• As has been seen with APPEL, the method is successful in identifying genuine conflicts that need to be resolved by a domain expert.
• The approach provides a (partial) model of policy actions by defining their pre/post-conditions. In the context of this paper, this gives a more precise meaning to APPEL.
Note that the usefulness of this method is not limited to static feature interaction filtering. Understanding which actions conflict and why is useful in a number of areas of feature interaction research: for feature interaction avoidance, for detection, and for resolution. Most of the methods that have been proposed in these areas assume that it has been previously determined by some other method that certain actions conflict. Neither is the method limited to single-user interactions, since in principle conflicting actions can be in different users’ policies [17].
Our method can be integrated with other methods, e.g. the merge algorithm used in LESS. A more detailed presentation of these results can be found in [14]. Future work will deal with the various generalizations mentioned in the paper. A more complete model should be developed for APPEL and the pre/post-conditions of its actions. In particular, action parameters and more complete state descriptions should be taken into consideration. We plan to extend the approach to other policy languages, as well as to investigate other tool support besides Alloy.
Acknowledgments

This work was funded in part by the Natural Sciences and Engineering Research Council of Canada, the UK Royal Society, and the Royal Society of Edinburgh. The authors thank Gemma Campbell (University of Stirling) for discussions about detecting conflicts in APPEL, and Xiaotao Wu for discussions about his method.

References

1. Aiguier, M., Berkani, K. and Le Gall, P.: Feature specification and static analysis for interaction resolution. Proc. Formal Methods’06, LNCS 4085, 364–379, 2006.
2. Calder, M., Kolberg, M., Magill, E.H. and Reiff-Marganiec, S.: Feature interaction: A critical review and considered forecast. Computer Networks, 41:115–141, Jan. 2003.
3. Cameron, E.J., Griffeth, N.D., Lin, Y.-J., Nilson, M.E., Schnure, W.K. and Velthuijsen, H.: A feature-interaction benchmark for IN and beyond. IEEE Communications Magazine, 31(8):18–23, Aug. 1993.
4. Campbell, G., Turner, K.J.: Policy conflict filtering for call control. These proceedings.
5. Crespo, R.G., Carvalho, M., Logrippo, L.: Distributed resolution of feature interactions for internet applications. Computer Networks, 51(2):382–397, Feb. 2007.
6. Damianou, N., Dulay, N., Lupu, E., Sloman, M.: The Ponder specification language. Workshop on Policies for Distributed Systems and Networks (Policy2001), Jan. 2001.
7. Felty, A.P., Namjoshi, K.S.: Feature specification and automatic conflict detection. In Calder, M. and Magill, E.H. (eds.), Proc. 6th Feature Interactions in Telecommunications and Software Systems, 179–192, IOS Press, May 2000.
8. Felty, A.P., Namjoshi, K.S.: Feature specification and automated conflict detection. ACM Trans. on Software Engineering and Methodology, 12(1):3–27, Jan. 2003.
9. Gorse, N., Logrippo, L., Sincennes, J.: Formal detection of feature interactions with logic programming and LOTOS. Software and System Modeling, 5(2):121–134 (mistakenly published as Detecting feature interactions in CPL), Jun. 2006.
10.
Griffeth, N.D., Blumenthal, R., Gregoire, J.-C., Ohta, T.: Feature interaction detection contest of the fifth international workshop on feature interactions. Computer Networks, 32(4):487–510, Apr. 2000.
11. Jackson, D.: Software Abstractions: Logic, Language, Analysis. MIT Press, 2006.
12. Kolberg, M., Magill, E., Marples, D., Reiff, S.: Second feature interaction contest. In Calder, M., Magill, E. (eds.), Feature Interactions in Telecommunications and Software Systems VI. IOS Press, 2000.
13. Kolberg, M., Magill, E.H. and Wilson, M.E.: Compatibility issues between services supporting networked appliances. IEEE Communications Magazine, 41(11):136–147, Nov. 2003.
14. Layouni, A.F.: Méthode formelle pour la détection d’interactions de fonctionnalités dans les systèmes de politiques. Master’s thesis, Université du Québec en Outaouais, Département d’informatique et ingénierie, 2007 (forthcoming).
15. Lennox, J., Wu, X. and Schulzrinne, H.: CPL: A language for user control of Internet telephony services. RFC 3880, Internet Engineering Task Force, Oct. 2004.
16. Montangero, C., Reiff-Marganiec, S. and Semini, L.: Logic-based detection of conflicts in APPEL policies. Proc. Symposium on Fundamentals of Software Engineering (FSEN’07), Feb. 2007.
17. Nakamura, M., Leelaprute, P., Matsumoto, K., Kikuno, T.: On detecting feature interactions in the programmable service environment of Internet telephony. Computer Networks, 45(5):605–624, 2004.
18. Shankar, C., Ranganathan, A., Campbell, R.: An ECA-P policy-based framework for managing ubiquitous computing environments. Mobiquitous 2005, July 2005.
19. Turner, K.J., Blair, L.: Policies and conflicts in call control. Computer Networks, 51(2):496–514, Feb. 2007.
20. Turner, K.J., Reiff-Marganiec, S. and Blair, L.: APPEL: The ACCENT project policy environment/language. Technical Report CSM-161, University of Stirling, UK, Dec. 2005.
21. Turner, K.J., Reiff-Marganiec, S., Blair, L., Pang, J., Gray, T., Perry, P. and Ireland, J.: Policy support for call control. Computer Standards and Interfaces, 28(6):635–649, 2006.
22. Wu, X. and Schulzrinne, H.: Handling feature interactions in the Language for End System Services. In Reiff-Marganiec, S. and Ryan, M.D. (eds.), Proc. 8th Feature Interactions in Telecommunications and Software Systems, 270–287, IOS Press, 2005.
23. Wu, X. and Schulzrinne, H.: Handling feature interactions in the Language for End System Services. Computer Networks, 51(2):515–535, 2007.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Policy Conflict Filtering for Call Control

Gavin A. Campbell and Kenneth J. Turner
Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA, UK
e-mail: gca | kjt @cs.stir.ac.uk

Abstract. Policies exhibit conflicts much as features exhibit interactions. Since policies are defined by end users, the combinatorial problems involved in detecting conflicts are substantially worse than for detecting feature interactions. A new, ontology-driven method is defined for automatically identifying potential conflicts among policies. This relies on domain knowledge to annotate policy actions with their effects. Conflict filtering is performed offline, but supports conflict detection and resolution online. The technique has been implemented in the RECAP tool (Rigorously Evaluated Conflicts Among Policies). Subject to user guidance, this tool filters conflicting pairs of actions and automatically generates resolutions. The approach is generic, but is illustrated with the APPEL policy language for call control. The technique has improved the scalability of conflict handling, and has reduced the effort required compared with the previous manual approach.

Keywords. Call Control, Conflict Detection, Ontology, OWL, Policy
1. Introduction 1.1. Policies and Features Policies are rules used to control a system dynamically through a set of actions to be performed in specified circumstances. Policies are typically defined by an event, a condition and an action. Historically, policy-based systems have been developed in domains such as access control, quality of service, security and system management. In all these applications, policies are typically created and maintained by administrators. However, the authors’ approach is unusual in being designed for ordinary system users. During the past decade, many policy languages and systems have been developed to decentralise the control of system behaviour, to automate system management, and to give more control to end users. This added flexibility has the advantage that users can tailor services more accurately to their needs, reducing reliance on generic system facilities. Traditional feature-based approaches lack flexibility. In telephony, for example, the features are mostly defined by the network operator. Users have little choice except to select the features they wish and to define a few feature parameters. Systems that offer multiple, independently-defined features are prone to interactions – a well-known situation where the behaviour of one feature may affect another. Many feature interactions have been identified in call control. Detecting these interactions is often problematic due to the large numbers of features (several hundred in a
G.A. Campbell and K.J. Turner / Policy Conflict Filtering for Call Control
typical PBX). Resolving the interactions can also be problematic because features are low-level units of functionality. It is often necessary to understand the user’s true intention before obtaining a satisfactory resolution. For example, consider the well-known interaction between Do Not Disturb and Alarm Call. The user’s intention was presumably to avoid calls from others, but not the alarm call from the exchange. Policies are closer to user goals (e.g. ‘I do not wish to be called by anyone’) and so more faithfully reflect user intentions. Resolving interactions or conflicts is facilitated by the higher-level approach of policies. This paper presents an approach to conflict handling using domain knowledge captured in an ontology. Collecting this knowledge is a manual step. However, conflict detection is then fully automated using the RECAP tool (Rigorously Evaluated Conflicts Among Policies). Conflict resolution is partially automated by RECAP: outline resolution policies are automatically generated, for completion by the domain expert using a policy wizard. The general idea is that conflicts are identified and specified through offline filtering. The resulting conflict resolution policies are then used online.

1.2. Ontology Support for Policies

The authors use a policy system called ACCENT (Advanced Component Control Enhancing Network Technologies). This includes a policy server that supports the APPEL policy language, a wizard for creating and editing policies, and a variety of supporting interfaces for various application domains. In recent research, the authors have extended APPEL to support new and multiple domains. As the core schema of APPEL is generic, it can be extended for different applications by adding further schemas. However, this does not adequately deal with concepts in the application domains. The authors have therefore developed additional support for APPEL through a range of ontologies.
The new approach uses OWL (Web Ontology Language) to describe the core APPEL language. The core ontology is then extended hierarchically to define user interface information and to specialise the language for particular domains. This has increased the extensibility and precision of the policy language. APPEL is supported by a wizard that offers a web-based interface for creating and editing policies. This has been re-engineered to replace hard-coded domain information (for call control) with information stored within the ontologies. The result is a highly flexible user interface, easily adaptable to reflect new application domains.

1.3. Related Work

Policy conflict is the equivalent of feature interaction in telephony and related domains. Since policies are defined in a decentralised manner, the potential for unwanted interaction is far greater than in conventional feature-based systems. The increased flexibility that policies offer to users is offset by more pervasive, complex and subtle conflicts among policies. Conflicts in a policy-based environment are often caused by the simultaneous execution of policies with contradictory actions. (Conflicts can also arise between actions and system state, i.e. the result of previous actions.) Policy conflict requires study of three different aspects: filtering conflict-prone policies, defining conflict detection mechanisms,
and defining a conflict resolution strategy. Although policy filtering is a new departure, conflict detection and resolution have already been studied. In system management, for example, conflict detection and resolution techniques include [1,2]. Enhancements to COPS (Common Open Policy Service, RFC 2748) are aimed at managing policy conflict through rigorous definition of actions. Many techniques have been developed to automate feature interaction detection at the specification stage. Techniques in feature interaction detection have focused heavily on a variety of formal methods such as process algebras, automata and (temporal) logic. Of these, techniques for filtering interaction-prone features are the most relevant. However, few are directly relevant to policy-based control. Nonetheless, the ideas have influenced the work reported here. The notion of interaction filtering was initially presented in [3]. The filtering process is followed by detailed checking and refinement of conflicts. Several tools support an automated approach to filtering feature interactions. One example is a prototype designed to detect interactions in a call environment [4]. This filters interactions among IN services, using simple descriptions of the static structure of each service. Interactions are detected for groups of services used in particular call scenarios. Formal approaches have been followed by a number of researchers. FIX (Feature Interaction Extractor [5]) is an example of a domain-independent approach, although only application to telephony has been reported. This uses the model checker COSPAN to run consistency tests on feature specifications. In a further stage, the tool user can investigate the generated scenarios and decide on their accuracy. [6] presents a filtering technique based on Use Case Maps and applies it to telephony features. [7] uses pre-conditions and post-conditions to identify inconsistencies in features for LESS (Language for End System Services).
[8] describes work that is directly relevant to this paper as it uses temporal logic to formalise the semantics of APPEL. This leads to a formal basis for automated detection of conflicts. In other work on APPEL, [9] presents a method for discovering conflicts based on the pre/post-conditions of actions. This allows semantically-based inferences to be drawn about the compatibility of actions. However, it is technically more complex than the simple and intuitive approach of the work reported here. As complementary techniques, future study will investigate how [8,9] can be reconciled and integrated with the authors’ approach. The work reported here differs in important respects from the foregoing:
• Policies rather than features are used for control. These support higher-level statements of user intentions, and facilitate the resolution of conflicts.
• The approach is adapted to many domains, including ones outside telephony. For example, the authors use it to detect conflicts in home care and in sensor networks.
• A formal specification of the system and its policies is not required. In practice a precise specification is usually infeasible because the system is too complex, is proprietary, or is open-ended because users can define their own features or policies.
• The approach is intentionally less formal. This has the advantages of being simpler to set up and more intuitive, i.e. relying only on domain knowledge. Domain experts, rather than formalists, can define the information needed for conflict filtering. The analysis is efficient and domain-oriented.
1.4. Paper Outline

Section 2 presents an overview of the ACCENT policy system, the APPEL policy language, and its approach to conflict detection and resolution. Section 3 introduces ontologies, and outlines how they were used to model APPEL. Section 4 explains how ontologies are used to identify policy conflicts. Section 5 discusses the approach to conflict filtering and the associated tool support. Section 6 evaluates the results.
2. The ACCENT Policy Approach

2.1. Policy System and Language

The ACCENT policy system (Advanced Component Control Enhancing Network Technologies, www.cs.stir.ac.uk/accent) was originally designed to allow users to tailor (Internet) call handling to their own preferences. As illustrated in figure 1, the ACCENT system is split across three layers. At the lowest level, the system layer connects the policy system to its external environment. Policy enforcement is handled by the policy system layer, which incorporates the policy server, policy store (where policies reside) and policy database (containing user login and server configuration data). At the top level, the user interface layer is where users create policies and where contextual information is obtained. Policies are defined and edited via a web-based policy wizard [10]. Each policy is saved as an XML document and uploaded to the policy store. The general approach of ACCENT is described in [11]. APPEL (ACCENT Project Policy Environment/Language [12]) is a comprehensive and flexible language, designed to express policies within the ACCENT system. Key factors in the design of APPEL include a simple but concise structure, ease of extension, and orientation towards ordinary users. APPEL comprises a core language and its specialisations for different application domains. The original specialisations were for call control and conflict resolution, but new specialisations have been developed for home care and sensor networks. APPEL defines the overall structure of a policy document: regular policies, resolution policies, and policy variables. A policy consists of one or more rules in ECA form (Event-Condition-Action). Each rule has a combination of triggers (optional), conditions
Figure 1. ACCENT System Architecture (diagram: a user interface layer containing the policy wizard, user interface and context system; a policy system layer containing the policy server, policy store and policy database; and a communications system layer containing the communications network server)
(optional), and actions (mandatory). The core language constructs are extended through specialisation for new applications. A policy is eligible for execution if its triggers occur simultaneously and its conditions apply. Additional conditions may be imposed, such as the period during which the policy applies, or the profile to which the policy belongs. When the policy system is informed of an event, the applicable policies are retrieved and applied if eligible. As multiple policies can be triggered, conflicts may arise among their actions.

2.2. Conflict Detection and Resolution

Conflicts result from clashes between pairs of policy actions. As an example from call control, the caller may wish to conference in a third party whom the callee does not wish to speak to. The caller/callee policies propose add/remove party(person) for some individual. These contradictory actions must be identified as conflicting. They must also be resolved, e.g. by giving the caller (as the bill payer) priority. The ACCENT system allows for both static and dynamic conflict detection. Static detection is performed when a policy is defined and uploaded to the policy system, while dynamic detection occurs at run-time. Although both methods are permitted, only dynamic detection is currently implemented. This focus was intentional, since run-time conflict handling is the more challenging task. Dynamic conflicts also subsume static conflicts. The actions resulting from a policy trigger are checked pairwise for conflicts. (The design of the language means that the order of comparison is irrelevant, and that only pairs need be checked.) The outcome is a set of non-conflicting actions. Human guidance is almost inevitably required to determine how best to handle conflicts. Only certain ‘technical’ conflicts might be detected fully automatically. Even then, the treatment of a conflict requires judgment.
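The pairwise check over proposed actions can be sketched in a few lines. The following Python fragment is an illustration only; the action names, parameter handling and conflict table are assumptions, not the actual ACCENT implementation.

```python
# Sketch of pairwise conflict detection over the actions proposed by
# triggered policies. The conflict table below is illustrative only.
CONFLICTING = {
    ("add_party", "remove_party"),
    ("add_medium", "remove_medium"),
}

def conflicts(a1, a2):
    """Order-independent check of one action pair (name, parameter)."""
    n1, p1 = a1
    n2, p2 = a2
    # add/remove of the *same* party or medium contradict each other
    return p1 == p2 and ((n1, n2) in CONFLICTING or (n2, n1) in CONFLICTING)

def detect(proposed):
    """Return the conflicting pairs among a list of proposed actions."""
    found = []
    for i in range(len(proposed)):
        for j in range(i + 1, len(proposed)):
            if conflicts(proposed[i], proposed[j]):
                found.append((proposed[i], proposed[j]))
    return found

actions = [("add_party", "carol"), ("remove_party", "carol"),
           ("add_medium", "video")]
print(detect(actions))  # the add/remove party("carol") pair conflicts
```

Because the check is symmetric, comparing each unordered pair once suffices, which matches the paper's observation that the order of comparison is irrelevant.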
As an example, suppose one user wishes to add video to a call but the other user wishes to avoid this. This is clearly an add/remove conflict. A trivial resolution would be to permit one or other policy to prevail. However, an acceptable resolution might be much more complex, e.g. using a third party to adjudicate the conflict. As a further example, suppose one user wishes to add the G.723 audio codec to a call but the other user wishes to avoid it. This appears to be an identical kind of add/remove conflict. In fact it is not, because both parties (in H.323) must be willing to support the G.711 audio codec. There is therefore no need to treat this as a conflict. This illustrates that conflict detection requires domain knowledge and human intuition. Conflict handling in ACCENT is defined by resolution policies that are distinct from regular policies. Resolution policies express when and how the system should respond to conflicts. Their effect is to process a set of proposed policy actions, selecting those that are compatible with the conflict handling rules. Resolution policies are specified as an extension of the core APPEL language, and therefore use the same syntax as regular policies. However, resolution policies use a different vocabulary because they serve a different purpose. The domain-specific actions of regular policies are the triggers of resolution policies. Resolution policies can dictate generic outcomes (selecting among the proposed actions) or specific outcomes (dictating domain-specific actions). APPEL has a built-in notion of policy preference which allows users to indicate how strongly they wish a policy to be applied. This allocates priorities to policies as one means of resolving conflicts. However, other resolutions may be used, such as choosing
the policy of a superior user, or choosing a longer-standing policy. Resolution policies give considerable flexibility in that conflict handling is not hard-coded into the policy system. It is defined externally and can be domain-specific. To avoid infinite regress, resolution is performed just once. The approach ensures that the outcome is conflict-free, and does not require resolutions to be checked again for conflicts. Conflict handling within ACCENT is described in [13]. The main limitation of this previous work was that resolution policies had to be defined manually. This was tedious and error-prone. The new work reported here describes an ontology-driven mechanism to automate conflict detection. The RECAP tool provides automated support for detecting conflicts and for creating outline resolution policies. The details of resolution require human judgment and are added in a further manual step.
3. Ontology Support for Policies

3.1. Ontology Background

An ontology is the set of terms used to describe and represent an area of knowledge, together with the logical relationships among them [14]. It provides a common vocabulary to share information in a domain, including the key terms, their semantic interconnections, and the rules of inference. Ontologies enable separation of domain knowledge from common operational knowledge in a system. A variety of specialised languages are used to define ontologies. OWL (Web Ontology Language [15]) is a standard XML-based language. It is supported by a wide range of software, and can be integrated with other techniques. In addition, OWL provides a wider range of functionality than any other ontology language to date. For these reasons, OWL was used to define the ontologies in the work reported here. An OWL ontology defines classes, properties and individuals. A class represents a particular term or concept in a domain, while a property is a named relationship between two classes. An individual is an instance or member of a class, usually representing real data content within an ontology. Properties are defined for classes in the form of restrictions that specify the nature of a relationship between two classes. OWL supports inheritance within class and property structures. OWL can also import shared ontologies. The ontological basis for APPEL exploits this, using multiple documents for different aspects of the core language and its specialisation in various domains. Ontology support for policies is provided by POPPET (Policy Ontology Parser Program Extensible Translation [16]). This uses the PELLET ontology reasoning engine (pellet.owldl.com) and the Jena ontology parser (jena.sourceforge.net). POPPET parses and integrates ontologies on behalf of the ACCENT system. Figure 2 illustrates the relationship between ACCENT and POPPET.

3.2. Ontologies for Policies

Ontologies were defined for the core of APPEL and its domain specialisations. Using OWL, three layers of ontologies were developed [16]. At the lowest level, GenPol (generic policy) defines core language elements such as variables, rules, triggers, conditions and actions. This includes the basic elements
Figure 2. Ontology Support by POPPET for ACCENT Policies (diagram: the ACCENT policy wizard and policy server communicate via RMI with the POPPET server, which uses the PELLET reasoner to process the OWL ontology)
of a policy and the cardinality rules relating these. Each core element is defined as an ontology class. Relationships between classes are defined using ontology properties that link them. Using properties to describe the associations between concepts is a powerful means of modelling the structure of APPEL. The GenPol ontology contains no domain knowledge, only a definition of how high-level concepts may be combined to form a regular policy or resolution policy. The ACCENT policy wizard [10] is a user-friendly front-end for creating and editing policies. Such a facility is key in supporting policy definition by non-technical users of the system. The wizard presents policy and domain information using near-natural language. The user interface is not part of APPEL proper, but is essential for the system to be usable. Additional, wizard-related knowledge is therefore defined in WizPol (wizard policy) as an extension of GenPol. This specialises the core language for use with the wizard. Examples of wizard-specific facilities include the categorisation of triggers, conditions, actions and operators. In addition, a subset of the language functionality is matched to the skill or authorisation level of a user. The GenPol and WizPol ontologies define domain-independent aspects of regular policies and resolution policies. To specialise the language for a new domain, a further ontology is created to import and extend these base ontologies; importing WizPol implicitly imports GenPol as well. A domain-specific ontology can contain arbitrary new concepts, but all policy language concepts must be subclasses within the hierarchy defined by the base ontologies. Consequently, as these ontologies are combined through an import mechanism only, they do not suffer incompatibility issues. The CallControl domain ontology specialises APPEL for call handling. Significant extensions include call control triggers, conditions and actions.
Using properties defined in GenPol, constraints may be placed on individual triggers, conditions and actions. This defines their use for certain user levels and for display categories within the wizard. In addition, properties define which actions and conditions are permitted with a particular trigger, and the valid range of operators associated with each condition parameter. Further user interface and data type aspects may be included in a domain-specific ontology.
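The subclass discipline just described – domain concepts extend WizPol, which in turn extends GenPol – can be pictured with a small, purely illustrative sketch. The class names below are hypothetical stand-ins for the actual ontology terms, and Python subclassing stands in for OWL subclass relations:

```python
# Illustrative only: Python subclassing stands in for OWL subclass
# relations between the GenPol, WizPol and CallControl ontologies.
# None of these class names are the real ontology terms.

class Trigger: pass        # GenPol core concept
class Condition: pass      # GenPol core concept
class Action: pass         # GenPol core concept

class WizardAction(Action):
    """WizPol: specialises the core language for the policy wizard."""
    category = None        # e.g. a display category in the wizard

class AddParty(WizardAction):
    """CallControl domain ontology: a call control action."""
    category = "call handling"

# Because domain concepts may only extend the base hierarchy,
# every domain action is still a policy-language action:
assert issubclass(AddParty, Action)
```

Since the ontologies are combined by import and specialisation alone, a domain extension cannot redefine a base concept – which is why combining them raises no incompatibility issues.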
G.A. Campbell and K.J. Turner / Policy Conflict Filtering for Call Control
4. Automated Conflict Detection

4.1. Action Effects

Conflicts arise between policy actions with certain parameters. When two actions with a similar effect are executed simultaneously, the result may be a conflict. For example, actions that add and remove the same aspect are potentially in conflict. Thus, the call control actions add party and remove party are likely to contradict each other. Other conflicts are far more subtle, and cannot easily be identified by naming alone.

Action parameters may use enumerated types, e.g. the call control parameter medium has possible values audio, video and whiteboard. Actions plus selected parameters allow a deeper exploration of conflicts. Where an action has an enumerated parameter type, conflicts between instances of the same action are likely only if the parameters are the same. For example, call control action add medium(audio) could be considered to conflict with a second add medium(audio). However, if the second action wished to add video then this would not be an obvious conflict. For this reason, actions with distinct values in an enumerated parameter set are treated as distinct actions. In general, an action must be considered along with a subset of its parameters.

In a domain like call control, there is a rich set of action names that suggest conflicts in themselves. Even there, it is often necessary to take parameters into account. For example, adding one party and removing a different party is not problematic. In other domains such as home care and sensor networks, a much more limited selection of action names is used. This is because actions are mainly differentiated by their parameters. A simple device out action, for example, carries parameters that indicate the action type, device class, device instance and action parameters. Conflict detection has to work with the domain policy language as defined. In general, a subset of parameters must therefore be considered for conflict along with the basic action name.
However, for simplicity the following text mainly refers to comparing actions.

Policy actions are defined to have one or more effects on the execution environment. These effects range from the technical (e.g. bandwidth) to the social (e.g. privacy). Internal policy actions affect the policy system itself, such as setting system properties or accessing system resources. Conflicts are likely where two actions share a common effect. Any action may potentially conflict with itself. However, all action pairs must be considered too. (As noted earlier, only two-way and not n-way conflicts need be considered.) Figure 3 shows the effects of internal policy actions, while figure 4 shows the effects of call control actions. Call control actions with enumerated parameters are listed separately. Effects for internal policy actions are distinct from those of domain actions, as internal and external actions do not (normally) conflict. Effect categories differ depending on the language domain.

As discussed in section 3.2, ontologies have been used to model policy language concepts. It is therefore convenient to define action effects in these ontologies. However, the ontologies play no role in conflict detection or resolution. As conflict detection is not an integral part of APPEL, the concept of action effect is defined in the WizPol ontology. This allows conflict information to be specified outside the core language, while maintaining the advantage of further specialisation in domain-specific ontologies. Effect information is defined in WizPol through the ActionEffect class and the hasActionEffect property. The ActionEffect class is a superclass of all effect categories for both internal
Action                     Effect
log event(arg1)            file
restart timer(arg1)        timer
send message(arg1,arg2)    channel
set variable(arg1,arg2)    variable
start timer(arg1,arg2)     timer
stop timer(arg1)           timer
unset variable(arg1)       variable

Figure 3. Internal Action Effects
Action                      Effect
add caller(conference)      party, privacy
add caller(hold)            party, privacy
add caller(monitor)         party, privacy
add caller(release)         party, privacy
add caller(wait)            party, privacy
add medium(audio)           medium, privacy
add medium(video)           medium, privacy
add medium(whiteboard)      medium, privacy
add party                   party, privacy
confirm bandwidth           bandwidth
connect to                  route
fork to                     route
forward to                  route
note availability           availability
note presence               presence
play clip                   medium
reject call                 call
reject bandwidth            bandwidth
remove medium(audio)        medium
remove medium(video)        medium
remove medium(whiteboard)   medium
remove party                party

Figure 4. Call Control Action Effects
and domain-specific policy actions. Generic action effects are defined as subclasses of this class in WizPol. Domain-specific action effects are defined as subclasses within a separate domain ontology that imports WizPol. Each policy action is linked to the appropriate effect category class using the hasActionEffect property. This relates actions and effects, allowing a tool to infer overlapping actions.

4.2. Conflict Detection Algorithm

Only pairs of actions need to be considered in the analysis; there are no three-way conflicts. Potential conflicts between actions can be inferred from the ontology-defined effect categories through a two-stage algorithm. Firstly, any two actions sharing at least one
common effect are identified as potentially conflicting. Secondly, actions with enumerated parameter types are analysed. Where two actions share the same parameter value then they potentially conflict; otherwise it is assumed that no conflict exists.

The total number of action pairs, including self-conflicts, is n(n+1)/2, where n is the number of possible policy actions. The policy language for call control has 21 possible actions and therefore a total of 231 action pairs. Conflict handling is commutative (if A1 and A2 conflict, then so do A2 and A1) and associative (the way in which actions are paired is irrelevant).

The ontologies allow a list of actions to be inferred for each effect category. If two actions are present in some category, they can be marked as potentially conflicting. For example, the call control actions fork to and forward to potentially conflict as they both affect the route. All action pairs deemed to conflict in this way are then automatically reviewed with respect to their parameters.

As explained earlier, actions with enumerated parameter types are considered in more detail. This increases the total number of action pairings as an action may be instantiated multiple times with different parameter values. For example, the action add medium with its parameter is equivalent to three distinct actions. This allows more accurate analysis of potential conflicts. Where actions might be treated as potentially conflicting based on a shared effect, this might not be the case when particular parameters are considered.

To explain this more concretely, some examples for medium are shown in figure 5. An action may conflict with itself if there is a common parameter (e.g. both instances wish to add video), and may not conflict if the parameters are different (e.g. they wish to add video and whiteboard respectively). Different actions with a common effect and the same parameter indicate potential conflict (e.g.
attempting to add and remove audio simultaneously). Actions with a common effect and dissimilar parameters are assumed not to conflict (e.g. altering the medium by adding video and removing whiteboard).

Action1                 Action2                       Conflict
add medium(audio)       remove medium(audio)          yes
add medium(audio)       add medium(video)             no
add medium(video)       add medium(video)             yes
add medium(video)       remove medium(whiteboard)     no

Figure 5. Sample Call Control Conflicts with Action Parameters
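The two-stage algorithm can be sketched in a few lines of code. The fragment below is illustrative only: RECAP reads action effects from an OWL ontology, whereas here a small extract of the figure 4 effects is hard-coded, and parameter handling is simplified to one enumerated parameter per action:

```python
from itertools import combinations_with_replacement

# Extract of the action effects in figure 4; actions with distinct
# enumerated parameter values are treated as distinct actions.
EFFECTS = {
    "add medium(audio)":          {"medium", "privacy"},
    "add medium(video)":          {"medium", "privacy"},
    "remove medium(audio)":       {"medium"},
    "remove medium(whiteboard)":  {"medium"},
    "fork to":                    {"route"},
    "forward to":                 {"route"},
}

def split(action):
    """'add medium(audio)' -> ('add medium', 'audio'); no parameter -> None."""
    if action.endswith(")") and "(" in action:
        name, _, param = action.partition("(")
        return name.strip(), param[:-1]
    return action, None

def potentially_conflicts(a1, a2):
    # Stage 1: only pairs sharing at least one effect are candidates.
    if not EFFECTS[a1] & EFFECTS[a2]:
        return False
    # Stage 2: enumerated parameters must carry the same value.
    (_, p1), (_, p2) = split(a1), split(a2)
    if p1 is not None and p2 is not None and p1 != p2:
        return False
    return True

# All pairs including self-pairs: n(n+1)/2 of them.
pairs = list(combinations_with_replacement(sorted(EFFECTS), 2))
conflicts = [(a, b) for a, b in pairs if potentially_conflicts(a, b)]
```

Run on the figure 5 examples, this check reproduces the table: add medium(audio) conflicts with remove medium(audio) but not with add medium(video).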
5. The RECAP Conflict Filtering Tool

5.1. Automated Support for Conflict Filtering

The RECAP tool (Rigorously Evaluated Conflicts Among Policies) has been developed to automate the algorithm in section 4 for identifying conflict-prone actions. Figure 6 illustrates what the tool looks like on-screen. Taking the first line as an example, the tool shows pairs of actions (add medium(audio) and add medium(audio)), why they conflict (shared effect on medium and privacy), and when this conflict was last modified (automatically or manually).
Figure 6. Screenshot of RECAP
Depending on the domain, the conflicts identified by RECAP may or may not be complete and correct. Conversely, subtle conflicts that are not automatically flagged can be added by hand. As noted earlier, conflict handling will always require human judgment and cannot be fully automated. Based on human guidance, RECAP produces conflict resolution policies.

RECAP is started by pointing at the relevant domain ontology. Using the action effects, the tool automatically constructs a matrix of all policy action pairs and highlights those deemed to be potential conflicts. The tool user may explore the matrix, confirming or refining each conflicting action pair. If closer inspection reveals that there is no real conflict, this pairing can be flagged as conflict-free. If an action is linked in an ontology to some effect, this may not be true of the actual implementation. Conflicts arising from this cause can be dismissed using the tool to undo the linking.

Potential conflicts are displayed in the tool matrix by noting the common effects in the appropriate cell. For convenience, internal and domain-specific actions are described here in separate figures though in practice they are combined by RECAP. The result of filtering internal conflicts for APPEL is shown in figure 7. Conflicts are numbered in the figure according to the underlying effect. As an example of conflict, actions start timer and stop timer are in conflict because they both have a timer effect as indicated at their intersection. Some conflicts are non-obvious (e.g. add caller and add medium). Detailed study by a domain expert confirmed that all conflicts discovered are real, and that no conflicts had been missed. No changes were therefore needed in the analysis.

log event × log event: 2
restart timer × restart timer: 3
restart timer × start timer: 3
restart timer × stop timer: 3
send message × send message: 1
set variable × set variable: 4
set variable × unset variable: 4
start timer × start timer: 3
start timer × stop timer: 3
stop timer × stop timer: 3
unset variable × unset variable: 4

Conflict: 1 channel, 2 file, 3 timer, 4 variable
Figure 7. Internal Conflicts identified by RECAP for APPEL
Call control actions deemed conflicting by RECAP are shown in figure 8. For simplicity, this figure shows conflicts between actions without parameters. In the tool, actions with enumerated parameter types are displayed and compared distinctly. Conflicts are numbered in the figure according to the underlying effect.
Detailed study by a domain expert confirmed that all detected conflicts but one are real, and that no conflicts have been missed. There is a possible problem in that confirm bandwidth is indicated to conflict with itself due to a shared bandwidth effect. This could indeed be an error, as it might lead to bandwidth being allocated twice. As it happens, in the ACCENT system it is harmless to confirm bandwidth twice. Without human guidance, this action pair would be flagged as a conflict. It should be noted that the bandwidth effect is still required as it correctly identifies the conflict between confirm bandwidth and reject bandwidth.

add caller × add caller: 5,7
add caller × add medium: 7
add caller × add party: 5,7
add caller × remove party: 5
add medium × add medium: 4,7
add medium × add party: 7
add medium × play clip: 4
add medium × remove medium: 4
add party × add party: 5,7
add party × remove party: 5
confirm bandwidth × confirm bandwidth: 2
confirm bandwidth × reject bandwidth: 2
connect to × connect to: 8
connect to × fork to: 8
connect to × forward to: 8
fork to × fork to: 8
fork to × forward to: 8
forward to × forward to: 8
note availability × note availability: 1
note presence × note presence: 6
play clip × play clip: 4
play clip × remove medium: 4
reject bandwidth × reject bandwidth: 2
reject call × reject call: 3
remove medium × remove medium: 4
remove party × remove party: 5

Conflict: 1 availability, 2 bandwidth, 3 call, 4 medium, 5 party, 6 presence, 7 privacy, 8 route
Figure 8. Call Control Conflicts identified by RECAP for APPEL
As demonstrated by figures 7 and 8, the automated conflict analysis (for call control) is very accurate. However, it confirms that human guidance is still needed in a small number of cases. RECAP is mainly intended to analyse conflicts when a domain policy language is initially defined, using an ontology as the source of action effects. This initial analysis is saved to file and can subsequently be reloaded into the tool. This saves the user and the tool from having to repeat a prior analysis, particularly if the user has manually modified the conflict list.
5.2. Automated Support for Resolution

RECAP turns the conflict list into a set of outline APPEL resolution policies that define the detection part of conflict handling. These policies define the conflicting triggers and parameter conditions, but resolution actions must be completed manually. The policies are automatically uploaded to the policy system, where the wizard is used to define the resolutions. Conversely, RECAP reads existing resolution policies and annotates the matrix with conflicts derived from these. This is a useful feature which allows conflicts defined manually via the policy wizard to be used in conjunction with conflicts identified by RECAP.

Resolution policies can be simple or complex, specific or generic, and dependent on many factors including the conflicting policies and their parameters. One or more actions may be required of a resolution. See [13] for a list of typical resolution policies. As an example, suppose one party wishes to add video to the call with add medium(video), while the other party wishes to conference in a third person with add party(person). This is correctly flagged as a conflict since the third party would be able to view the call parties and their workplaces (affecting privacy). Using human judgment, it might be decided to allow video and the third party. However, someone (e.g. a manager) should be included in the call to oversee it.

In view of this complexity, RECAP generates only outline resolution policies that specify default policy attributes, triggers corresponding to the conflicting actions, and default actions to resolve the conflict. The outline resolutions are then uploaded and customised using the wizard as normal. Resolution policy editing is dealt with by the wizard and not by RECAP. This allows RECAP to remain domain-independent and not be constrained to a particular resolution technique or policy language.
An additional advantage is that resolution policies are then edited through the same interface as regular domain policies. All default resolution parameters are defined by a properties file, and can therefore be readily modified according to local practice. The property file allows any structural components of outline resolutions to be altered. Resolution policies are normally disabled on upload. This ensures they are ignored by the policy server until they have been edited to include a specific resolution, and prevents incomplete or inconsistent resolutions from being used accidentally.

RECAP could be given a more user-friendly interface to change the default resolution policy structure and parameters. Currently this is achieved by manually editing the properties file. Although the tool is mainly intended for use during definition of a new application domain, there could be some value in easing later changes.

Policies in general are distinguished by unique identifiers, typically some phrase chosen by the user. Resolution policies automatically created by RECAP have machine-generated (but human-usable) identifiers. If the identifier of such a policy is changed manually, this could lead to duplication. The tool could detect this situation by looking for overlap of resolution triggers and conditions.
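To make the shape of an outline resolution concrete, the sketch below generates a disabled policy stub from a conflicting action pair. The dictionary fields and default values are hypothetical: RECAP actually emits APPEL resolution policies in their own format and takes its defaults from a properties file.

```python
# Hypothetical sketch: turn a conflicting action pair into an outline
# resolution policy that stays disabled until completed in the wizard.
DEFAULTS = {"owner": "admin", "profile": "default"}   # stand-in for the properties file

def outline_resolution(action1, action2, seq):
    return {
        "id": f"resolution_{seq}",        # machine-generated, human-usable identifier
        **DEFAULTS,
        "enabled": False,                 # ignored by the policy server until edited
        "triggers": [action1, action2],   # the conflicting actions
        "actions": ["TO BE COMPLETED"],   # resolution chosen by a human in the wizard
    }

stub = outline_resolution("add medium(video)", "add party", seq=1)
assert stub["enabled"] is False
```

The stub carries enough structure to be uploaded and listed by the policy system, while the disabled flag keeps it inert until a person supplies the actual resolution actions.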
6. Conclusion

A technique and a tool have been introduced for (semi-)automated filtering of conflict-prone policies. Ontologies have been used to model the core and domain-specific aspects
of APPEL – for regular as well as resolution policies. Conflicts between policy actions are handled in ACCENT by resolution policies. Action effects defined in ontologies allow conflicting action pairs to be discovered as potential conflicts. As has been seen, the analysis leads to very accurate results (for call control). Nonetheless, RECAP allows potential conflicts to be refined manually since a fully automated approach is impossible due to the complexity and subtlety of policy interactions. Following filtering, outline resolution policies are generated and uploaded for completion with the policy wizard.

RECAP offers an automated approach to conflict analysis and resolution where previously this was achieved manually. This has improved the scalability of APPEL, and has substantially reduced the time and complexity of dealing with conflicts. Associating actions with their effects is very simple compared to formal methods, but yields very good results. The straightforward and domain-oriented approach is much less expensive to use than one that requires a complete formal model.

RECAP provides a way of visually identifying conflicts within an arbitrary collection of policy actions. Unlike many existing approaches and tools, policies in any domain may be analysed easily by RECAP, and not just those for call control. The tool is also useful for policy applications where action parameters play a bigger role.

RECAP has been designed for stand-alone use. Although conflict data is mainly expected to derive from an ontology, conflict information may be input from a local file. Consequently, data generated by other tools or systems may be used by RECAP for conflict filtering. The only requirement is knowledge of the conflict data format used. Although RECAP is aimed at filtering conflicts in the initial stages of specifying a new policy language, it may be used in later revisions of the language to refine conflicts and to generate resolutions.
Acknowledgements

The authors thank their colleagues Stephan Reiff-Marganiec (now at the University of Leicester) and Lynne Blair (who was on leave from Lancaster University during the development of ACCENT). Both contributed substantially to the design of the policy system that lies at the foundation of the work reported in this paper. Gavin Campbell's work on the PROSEN project was supported by grant C014804 from the UK Engineering and Physical Sciences Research Council.

References

[1] J. Chomicki, Jorge Lobo, and S. Naqvi. A logical programming approach to conflict resolution in policy management. In Anthony G. Cohn, Fausto Giunchiglia, and Bart Selman, editors, Proc. Principles of Knowledge Representation and Reasoning, pages 121–132. Morgan Kaufmann, 2000.
[2] Emil C. Lupu and Morris Sloman. Conflict analysis for management policies. In Proc. 5th International Symposium on Integrated Network Management, pages 430–443. Chapman-Hall, London, UK, 1997.
[3] Kristofer Kimbler. Addressing the interaction problem at the enterprise level. In Petre Dini, Raouf Boutaba, and Luigi M. S. Logrippo, editors, Proc. 4th International Workshop on Feature Interactions in Telecommunication Networks, pages 13–22. IOS Press, Amsterdam, Netherlands, June 1997.
[4] Dirk O. Keck. A tool for the identification of interaction-prone call scenarios. In Kristofer Kimbler and Wiet Bouma, editors, Proc. 5th Feature Interactions in Telecommunications and Software Systems, pages 276–290. IOS Press, Amsterdam, Netherlands, September 1998.
[5] Amy P. Felty and Kedar S. Namjoshi. Feature specification and automated conflict detection. ACM Transactions on Software Engineering and Methodology, 12(1):3–27, January 2003.
[6] Masahide Nakamura, Tohru Kikuno, J. Hassine, and Luigi M. S. Logrippo. Feature interaction filtering with Use Case Maps at requirements stage. In Muffy H. Calder and Evan H. Magill, editors, Proc. 6th Feature Interactions in Telecommunications and Software Systems, pages 163–178. IOS Press, Amsterdam, Netherlands, May 2000.
[7] Xiaotao Wu and Henning Schulzrinne. Handling feature interactions in language for end systems services. Computer Networks, 51:515–535, January 2007.
[8] Carlo Montangero, Stephan Reiff-Marganiec, and Laura Semini. Logic based detection of conflicts in APPEL policies. In Ali Movaghar and Jan Rutten, editors, Proc. Int. Symposium on Fundamentals of Software Engineering. Springer, Berlin, Germany, February 2007.
[9] Ahmed F. Layouni, Luigi Logrippo, and Kenneth J. Turner. Conflict detection in call control using first-order logic model checking. In Farid Ouabdesselam and Lydie du Bousquet, editors, Proc. 9th Feature Interactions in Telecommunications and Software Systems. Springer, Berlin, Germany, July 2007. In press.
[10] Kenneth J. Turner. The ACCENT policy wizard. Technical Report CSM-166, Department of Computing Science and Mathematics, University of Stirling, UK, December 2005.
[11] Kenneth J. Turner, Stephan Reiff-Marganiec, Lynne Blair, Jianxiong Pang, Tom Gray, Peter Perry, and Joe Ireland. Policy support for call control. Computer Standards and Interfaces, 28(6):635–649, June 2006.
[12] Stephan Reiff-Marganiec, Kenneth J. Turner, and Lynne Blair. APPEL: The ACCENT project policy environment/language. Technical Report CSM-161, Department of Computing Science and Mathematics, University of Stirling, UK, December 2005.
[13] Kenneth J. Turner and Lynne Blair. Policies and conflicts in call control. Computer Networks, 51(2):496–514, February 2007.
[14] N. F. Noy and D. L. McGuinness. Ontology development 101: A guide to creating your first ontology. Technical Report KSL-01-05, Stanford Knowledge Systems Laboratory, Stanford, USA, March 2001.
[15] World Wide Web Consortium. Web Ontology Language (OWL) – Reference. Version 1.0. World Wide Web Consortium, Geneva, Switzerland, February 2004.
[16] Gavin A. Campbell. Ontology for call control. Technical Report CSM-170, Department of Computing Science and Mathematics, University of Stirling, UK, June 2006.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Towards Feature Interactions in Business Processes

Stephen GORTON a,b and Stephan REIFF-MARGANIEC b
a ATX Technologies Ltd, MLS Business Centres, 34-36 High Holborn, London WC1V 6AE, United Kingdom
b Department of Computer Science, University of Leicester, University Road, Leicester LE1 7RH, United Kingdom
email: {smg24,srm13}@le.ac.uk

Abstract. The feature interaction problem is generally associated with conflicting features causing undesirable effects. However, in this paper we report on a situation where the combination of features (as policies) and service-targeted business processes yields non-negative effects. We consider business processes as base systems and policies as a feature mechanism for defining user-centric requirements and system variability. The combination of business processes and a diverse range of policies leads to refinement of activities and possible reconfiguration of processes. We discuss the ways in which policies can interact with a business process and how these interactions are different from other approaches such as the classical view of POTS or telecommunications features. We also discuss the conflicts that can arise and potential resolutions.

Keywords. feature interaction, business processes, policy conflict, service oriented architecture
1. Introduction

Feature Interaction [5] was first identified as a problem in telecommunications, where additional units of functionality (features) would interfere with each other and cause unpredictable behaviour. Telecommunications have become increasingly complex, including the birth of Internet Telephony services. Removing the "Telephony" part, feature interaction has also been identified in Web Services [38,39,40,41]. Web Services [1] are an implementation of Service Oriented Architecture (SOA), where the design of systems shifts from overall development to the orchestration of services.

At a more abstract level, workflows are used to specify business processes. Each task in a workflow represents a unit of activity and can be completed by using a service. Business Process Management (BPM) research has often reported that it pairs well with SOA to produce flexible business software solutions (e.g. [26]). Policies are generally agreed to be information that can modify a system's behaviour at runtime, without the need for recompilation or redeployment [19]. In
S. Gorton and S. Reiff-Marganiec / Towards Feature Interactions in Business Processes
our work, we use policies to express essential requirements and system variability by combining them with workflows [12] – the latter essentially means treating policies as features. The combination of policies and workflows in the context of Service Oriented Computing (SOC) can lead to feature interactions. What has not been studied is the nature of these interactions. This paper discusses and classifies the types of interactions that can occur, while also showing how the interactions occurring here are different from those in the more traditional domain of telecommunications, and even the more recent domain of policies.

Throughout the paper the terms conflict and interaction are used with specific meaning. Interaction will be used to describe points of contact between workflows and policies; interactions are by and large desirable and hence a positive thing. Conflict will be seen as issues that arise between policies or when selecting services, and is usually undesirable. The only exceptions are that when we consider the traditional area of policy conflict, we will use the term policy conflict; when considering traditional feature interaction we will use that term.

Overview. The remainder of this paper is structured as follows: Section 2 presents some background material on workflows, Service Oriented Computing and policies. Sections 3 and 4 contain the two major contributions. First we present an analysis of the differences and similarities of feature interaction in its more traditional contexts and in the context of workflows, and then consider the types of interaction that can occur in the new setting with suggestions towards solutions. The paper is rounded off with related work in section 5 and a discussion in section 6.
2. Background

There are three main ingredients to our work: workflows, Service Oriented Computing (SOC) and policies. Each serves a distinct purpose. Workflows describe the basic process model defining the main functionality. SOC is the foundation of the implementation. Policies augment the workflow to customise it to a particular user's preference. The combination of the three provides us with the Service-Targeted, Policy-Driven Workflow approach that we call StPowla. An overview showing the relations between the StPowla elements is shown in Figure 1.

Policies are considered an overlay mechanism (including monitoring and enforcement) for business processes. Business processes are the workflow models viewed from the business or application domain. Often, the authors of such workflows are the end users, i.e. business analysts rather than software engineers. Business process models may then be mapped to more precise service models (e.g. SRML [8]) and then to concrete orchestration models. These are then mapped to services via platform-independent middleware.

2.1. Workflows

A workflow is a connected graph of activities, or tasks. Each task represents a unit of work that contributes to a wider goal. Workflows are the accepted mechanism
Figure 1. StPowla overall architecture.
for describing business processes, where a defined sequence of tasks contributes to the satisfaction of a business objective. Workflow description languages exist to describe processes in either code-based or graphic-based notations. Examples of the former include YAWL [33] and ebXML [32], while examples of the latter include BPMN [13] and UML Activity Diagrams [7]. In terms of SOC, BPEL [24] is the most widely accepted business process language, in that it can describe the process and orchestrate a number of services into the process.

In our approach, a workflow is a core business model containing enough functional requirements to map each task to a service (Figure 2). At runtime, a task is automatically assigned a service. This assignment includes the discovery of the service, binding to it and invoking it (thus each task is a distinct computation from all other tasks). A workflow should have enough information attached to it to run successfully on its own. Policies are used to alter this process through either refinement or reconfiguration of the workflow. This kind of intervention is required either to maintain a current state (e.g. keep costs to a specified level) or to execute a different path of processing.
Figure 2. Tasks and their relation to services.
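The runtime assignment just described – discover a service from the task's requirements, then bind and invoke it – can be sketched as follows. The registry, the matching rule and the cost-based default selection are all invented for this illustration and are not part of StPowla:

```python
# Illustrative task-to-service assignment: discover candidate services
# from the task's requirements, then select one to bind. The registry
# contents and selection rule are hypothetical.
REGISTRY = [
    {"name": "FastCourier",  "provides": "delivery", "cost": 10},
    {"name": "CheapCourier", "provides": "delivery", "cost": 4},
]

def discover(requirements):
    """Return services whose capability matches the task's requirements."""
    return [s for s in REGISTRY if s["provides"] == requirements["capability"]]

def assign(task):
    candidates = discover(task["requirements"])
    if not candidates:
        raise LookupError(f"no service satisfies task {task['name']!r}")
    # Default selection rule; in StPowla, policies could refine this choice
    # (e.g. 'keep costs to a specified level').
    return min(candidates, key=lambda s: s["cost"])["name"]

task = {"name": "ship order", "requirements": {"capability": "delivery"}}
assert assign(task) == "CheapCourier"
```

The point of the sketch is that the task itself carries only requirements; which concrete service executes it is decided at runtime, which is what gives policies room to intervene.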
The key difference between a workflow for a business process and a workflow for another purpose (e.g. telecoms or home networks) is time. Business processes execute over a long period of time (perhaps hours or days). They include error handling or compensation actions for the recovery of the workflow in the case of
failures. Also, policies may not have an effect immediately. Once triggered, they may take a period of time before the effects become evident.

2.2. Service Oriented Computing

In SOC, software exists as separate entities, developed in isolation as services that are loosely coupled, platform independent, composable and based on open standards. In addition, they may be discoverable and self-describing. A Web Service is discoverable through directory services such as UDDI [23], is self-describing through WSDL [36], is composable through a variety of mechanisms (the de facto standard is WS-BPEL [24]) and is based on XML as the open standard (e.g. messaging is often done through SOAP [35]).

Services are key to this work. They are reusable software components that take part in a wider process. Our aim is to develop a policy-driven process model that is satisfied by services. Thus, the author of a process is not expected to write functional code, but rather to specify enough requirements such that they can be mapped to an existing service (we note that there is the need for syntactic match-making between the process author and services). Services provide agility to processes in that a system is no longer confined to one individual implementing component. One service can be substituted for another, provided it takes the same inputs and returns the same type of outputs. IBM's business model is based on the Service Component Architecture (SCA) [16], which is based on SOA. A client's requirements are satisfied by a composition of IBM's services (if a service doesn't exist, they create it), thus the product supplied to a client is actually a composition¹.

2.3. Policies

Policies are end-user defined rules for the management of a system. Our policies are either Event-Condition-Action rules (ECAs), or goals (e.g. constraint rules). Policies are a proven integrated software management technique.
They force a system into dynamic behaviour, as the system must react to given rules at runtime. Policies can be added incrementally, with (theoretically) no limit on the number that can be applied at once. However, we note that the probability of policy conflict grows as the number of policies increases. A policy conflict occurs when two or more policies contradict each other in terms of what the system is instructed to do or what state it should maintain. There are broadly three categories of conflict: goal conflicts, function conflicts and combined goal/function conflicts. A goal conflict arises when two goals contradict each other. A functional conflict arises when two policies state two different (incompatible) paths of system execution. A combined conflict occurs when a functional policy chooses a system execution path that would violate a goal.

We use the Appel policy description language [28] to define our policies. Appel is an XML-based language, which has recently gained a formal semantics via a mapping to ΔDSTL [20] (formerly Appel only had a natural language semantics). It was developed initially as a call control language for the Internet Telephony domain, but is based on a core language with domain specialisations. In our research, we are working towards a customisation of Appel as a policy mechanism for service-oriented business modelling. This requires some knowledge of the target domain through ontologies.

¹ Keith Goodman's (IBM) recent keynote at IM2007.
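To make the ECA form concrete, the following sketch models a policy as a (trigger, condition, action) triple and shows two incrementally added policies firing on the same event. This is our own illustrative encoding, not Appel syntax; the policy and event names are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaPolicy:
    """Event-Condition-Action rule: when `event` occurs and `condition`
    holds over the system state, request `action`."""
    name: str
    event: str
    condition: Callable  # state dict -> bool
    action: str

def triggered(policies, event, state):
    """Return the actions requested by every policy that fires on `event`."""
    return [p.action for p in policies if p.event == event and p.condition(state)]

# Two policies added independently, both listening on the same trigger:
p1 = EcaPolicy("reorder", "trade_completed", lambda s: s["stock"] < 5, "order_stock")
p2 = EcaPolicy("negotiate", "trade_completed", lambda s: True, "renegotiate_price")

actions = triggered([p1, p2], "trade_completed", {"stock": 2})
# Both fire on the shared trigger; whether the requested actions conflict
# can only be judged against the application domain.
```

Note how the check for conflicting actions is deliberately left out: as the paper argues, only a domain model can say which actions contradict each other.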
3. Feature Interactions in SOC Workflows

Feature interactions in the context of policies applied to workflows show all the characteristics of traditional feature interactions, especially that they may hinder the advancement of the system at runtime or at least violate user expectations. However, they differ in two significant aspects: one is an assumption about the knowledge of the main system available to policy designers, and the other an assumption about the lifespan of the effect of an action. This section discusses both in more detail.

3.1. Details of the Base System

Considering traditional feature interaction, e.g. in the domain of telecommunications, we notice that there are two fundamental components: a base system and the features. In this domain, features have been written by programmers with a sound knowledge of the base system, and in general one would always expect a feature deployed on a base system to work correctly in the absence of other features. This notion is fundamental in the definition of feature interaction: if a feature f1 satisfies a property φ1 (written f1 |= φ1), and f2 |= φ2, a feature interaction is said to occur if, when the features are composed (denoted f1 ⊕ f2), we do not have f1 ⊕ f2 |= φ1 ∧ φ2.

We have argued [27] that in the context of policy conflict there is no explicit base system and that conflict emerges between a number of policies. This proved fruitful for addressing policy conflict in a structured way [20]. Considering workflows, we are in a setting that differs from both of the above: the workflow presents a base system onto which policies are applied. However, the authors of policies need not be aware of the workflow (though it would help if they were). For example, a business might change its overarching policy regarding communication by email for security purposes, not realising that several of the workflows conducted within the business rely to some extent on email communication.
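The definition f1 ⊕ f2 ⊭ φ1 ∧ φ2 can be illustrated with a toy model in which features are state transformers and properties are predicates over the resulting state. The features, properties and state fields below are invented purely for illustration.

```python
# A feature is a function over system state; a property is a predicate.
def f1(state):
    """Feature: log every call."""
    return {**state, "logged": True}

def f2(state):
    """Feature: anonymise the caller, which also suppresses the log entry."""
    return {**state, "caller": None, "logged": False}

phi1 = lambda s: s["logged"]           # property guaranteed by f1 in isolation
phi2 = lambda s: s["caller"] is None   # property guaranteed by f2 in isolation

base = {"caller": "alice", "logged": False}

assert phi1(f1(base)) and phi2(f2(base))   # each feature works alone

composed = f2(f1(base))                    # f1 composed with f2
interaction = not (phi1(composed) and phi2(composed))
# interaction is True: the composition violates phi1,
# i.e. f1 (+) f2 does not satisfy phi1 AND phi2.
```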
It is clear that we have a number of stakeholders in this setting: some are involved in formulating the business process and some in formulating policies applicable to it. Clearly this breaks the fundamental assumption in feature interaction that a feature will operate as expected when put together with the base system.

3.2. Future Effects

In traditional telecommunications systems the effect of a feature is relatively immediate: when a feature is invoked it performs some action. This has been used extensively in the approach by Calder et al. [4], where a feature
manager was exploring the next responses and would use a commit-and-rollback mechanism to select a solution. In particular, reorderings of features were explored in their work to allow for the fact that executing A followed by B might be acceptable whereas B followed by A leads to a conflict.

Business processes differ fundamentally in this aspect in that the execution of a service might be performed over long periods of time. Using compensation actions (as described using the Sagas calculus in [3]) for workflow recovery, these business processes are regarded as long-running transactions (LRTs). The consequence is that a feature that has no initial effect may have an effect sometime in the future.

Let us consider a simple market trader. Their core business process involves selling products to buyers, reflected in a workflow whose details we can omit here. The trader adds two policies to their business model: the first specifies that new stock is ordered from the supplier once a trade has occurred; the second negotiates the price with the supplier of the product that was just sold. By adding these policies to the workflow, the business process can be streamlined.

An analysis of the example shows that if the reordering feature executes first, the trader reorders supply at a previously agreed price. The second feature is then activated and a new price is negotiated. While the new price will not have any effect on this transaction (since the purchase has already been made), it will have an effect on future transactions. If the price negotiation feature executes first, the price is renegotiated and the reordering then occurs at the new price. This example highlights very clearly that the order of execution of the policies matters – something that was to be expected and is very much in line with the observation from [4].
However, what is novel is the fact that the result of the execution of the policies, in either order, has a lasting effect on a future transaction.
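The lasting effect of policy ordering in the trader example can be simulated directly. The concrete prices (100 and a negotiated 80) are invented for illustration; the two policies follow the prose above.

```python
def reorder(state):
    """Policy 1: order new stock at the currently agreed supplier price."""
    state["spent"] += state["price"]
    return state

def negotiate(state):
    """Policy 2: renegotiate the supplier price (assumed outcome: 80)."""
    state["price"] = 80
    return state

def run(policies):
    """Execute the policies in the given order against a fresh transaction."""
    state = {"price": 100, "spent": 0}
    for policy in policies:
        state = policy(state)
    return state

a = run([reorder, negotiate])  # reorder first: this order pays the old price
b = run([negotiate, reorder])  # negotiate first: this order already benefits
# In both cases state["price"] ends at 80, so the effect of the negotiation
# persists into future transactions regardless of the order chosen now.
```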
4. Types of Interaction

When considering workflows that are enhanced with policies in the context of service oriented computing, we can distinguish three broad types of interactions: conflicts between a number of applicable policies, conflicts between policies and workflows, and conflicts in the service level agreements. In this section we consider all three types and show that, while all of course being interactions, they differ in the way they emerge and in how they need to be dealt with.

4.1. Policy Conflict

Policy conflict occurs when two or more policies can be active at the same time and lead to conflicting actions being requested. Policy conflict has been defined in [27], where it was also pointed out that this definition is based on a specific application domain: only by considering a domain can a clear statement be made about which actions conflict. However, in addition there might be types of conflict that exist within the policies, independent of the application domain. These have
been discussed in detail in [27] and we have mentioned them in the background section.

Let us consider an example. In a bank loan approval process, the workflow has a task of making an offer followed by a task to vet the offer. Typical policies attached to this might be "the vetter must be different from the offer maker" and "managers may make offers and vet them". Clearly the execution of the second task (the vetting) might be allowed or blocked depending on how the two policies are interpreted. Cases like this have been discussed in the taxonomy in [27], and we can identify elements of Roles, Domain Entities and Policy Relation here: the employees, including the manager, have places in the domain hierarchy and also play specific roles (vetter, offer maker). Furthermore, one can argue that the policy allowing the manager to make offers and vet them is a refinement of the more general rule of having two people involved in the process.

As the previous example shows, conflicts between policies do not show any new characteristics; they do, however, continue to exist in this new domain. Detection and resolution methods fall into the categories discussed in [27], with a desire to detect and resolve as many issues as possible at design time, while keeping in mind that this is not always possible and hence that decisions will need to be taken at runtime. Generally, design-time methods apply when the policies are under the control of the same person or the details of the overarching policies are known to the policy author (e.g. within a group or enterprise); detection then involves some reasoning at a logical level (as e.g. in [20]) and resolution involves policy redesign. However, if the workflow spans a range of businesses or the services are outsourced, then detection of conflicts will only be possible by runtime methods and resolution will usually involve negotiation or some other dynamic means.
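The loan-approval example can be phrased as two checks over roles. Whether vetting is allowed hinges on whether the manager rule is read as a refinement that overrides the general two-person rule; the encoding, names and the `exemption_refines` switch are ours, invented to make the ambiguity explicit.

```python
def separation_of_duty(offer_maker, vetter, roles):
    """General rule: the vetter must differ from the offer maker."""
    return offer_maker != vetter

def manager_exemption(offer_maker, vetter, roles):
    """Specific rule: managers may make offers and vet them."""
    return "manager" in roles.get(vetter, [])

def may_vet(offer_maker, vetter, roles, exemption_refines):
    """If the manager rule is interpreted as a refinement, it overrides
    the separation-of-duty rule; otherwise both rules must hold."""
    if exemption_refines and manager_exemption(offer_maker, vetter, roles):
        return True
    return separation_of_duty(offer_maker, vetter, roles)

roles = {"carol": ["manager"], "bob": ["clerk"]}
# Carol (a manager) made the offer and now wants to vet it herself:
allowed = may_vet("carol", "carol", roles, exemption_refines=True)
blocked = may_vet("carol", "carol", roles, exemption_refines=False)
# Same policies, opposite outcomes, depending on the interpretation.
```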
What is, however, interesting to note for the purposes of this paper is an aspect of domains that was not considered in the taxonomy. When considering the policies in relation to workflows, which are themselves implemented by services, we obtain several levels of 'application domain': on the one hand we can consider the workflow to be the application domain; on the other hand the services can be seen as the application domain (and one could then further investigate which domain the services belong to). The next two subsections address these issues respectively.

4.2. Policies on Workflows

A policy can manipulate a workflow in two ways. Firstly, it can refine the workflow by expressing further requirements for each task. An implementing service is restricted by all requirements in the task; thus the more requirements are stated, the more precise the service selection process becomes and the closer it is to the user's needs. Secondly, a policy can reconfigure a workflow. This involves stating rules for the insertion or deletion of process components. This second concept is explained in the following example:

Example 1. Consider a simple purchase process, where you request quotes from 3 suppliers and then you purchase from the cheapest. Suppose we add a policy that states "If the quote from A is below £100, cancel the other quotes and purchase
directly from A”. If the price from A comes in below the given amount, then the workflow changes (Figure 3).
Figure 3. Example 4.2 basic workflow
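Example 1's reconfiguration can be sketched as a rule that rewrites the task list when the trigger condition holds. The £100 threshold and suppliers follow the example; the task names and list representation are our own illustration.

```python
def purchase_workflow(quotes):
    """quotes: supplier -> quoted price.

    Default process: request quotes, compare them, purchase from the cheapest.
    Reconfiguration policy: if A's quote is below 100, cancel the other
    quotes and purchase directly from A."""
    tasks = ["request_quotes", "compare_quotes", "purchase_cheapest"]
    if quotes.get("A", float("inf")) < 100:
        tasks = ["request_quotes", "cancel_quotes_B_C", "purchase_from_A"]
    return tasks

# Policy fires: A quotes 95, so the workflow is reconfigured.
assert purchase_workflow({"A": 95, "B": 110, "C": 120})[-1] == "purchase_from_A"
# Policy does not fire: the default workflow runs unchanged.
assert purchase_workflow({"A": 105, "B": 110, "C": 120})[-1] == "purchase_cheapest"
```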
4.2.1. Refinement Policies

A refinement policy can be created by multiple stakeholders. This implies that a policy can be directed at different levels of process complexity. For example, an IT director may write a policy that overarches a set of processes, whereas a project team member may write a policy solely for a single task of a process. Refinement is done through policies specifying constraints over tasks via SLA dimensions². This effectively enables stakeholder (or goal-based) conflicts, where stakeholders at different levels can add their own policies without realising that they conflict with others. Furthermore, there is a need to specify priorities over policies.

² More information in Section 4.3.

Refinement Conflicts. Policy authors are already able to specify modalities (must, should, prefer and their negations), but when two conflicting policies both have the same modality (must, in the worst case), a resolution is required. Possible solutions include the prioritisation of stakeholders: higher stakeholders have priority over lower stakeholders (e.g. directors over project team members). This method requires robust selection to ensure that only specific stakeholders are allowed to create policies. Even then, policies should be agreed in advance and published to other stakeholders. In this situation, only generic policies (i.e. goals) can be expressed.

Another solution is forced interactive negotiation. In this simple situation, two conflicting policy authors must be put in contact in order to negotiate and
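Stakeholder prioritisation as a resolution strategy is mechanical once a rank order is fixed. The rank order and policy contents below are assumed examples, not taken from the paper.

```python
# Assumed organisational ranking: lower number = higher priority.
RANK = {"director": 0, "it_manager": 1, "team_member": 2}

def resolve(policies):
    """Given conflicting same-modality policies as (stakeholder, requirement)
    pairs, keep the policy from the highest-ranked stakeholder."""
    return min(policies, key=lambda p: RANK[p[0]])

conflict = [("team_member", "use_supplier_X"),
            ("director", "use_supplier_Y")]
winner = resolve(conflict)
# The director's policy wins the conflict.
```

This only works if stakeholder identities are trustworthy, which is why the text requires robust selection of who may author policies in the first place.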
find a resolution between their conflicting policies. Intuitively, this is not a good solution if the end user wishes to have an automated process.

4.2.2. ECAs

ECA rules can also specify goal constraints or functional rules. In either case, ECA rules need triggers. We have identified, through the mapping of services to tasks, the following applicable events:

Workflow entry/completion/failure: Policies may be applied at the workflow level (including sets of workflows). This level includes the commencement of a workflow, its successful completion, and an abnormal completion with no compensation, i.e. an error result.

Task entry/completion/failure: Similar to the workflow level, but based on tasks. A task failure does not imply a workflow failure, but instead a choice of control flow outputs from the task.

Service entry/completion/failure: Again similar to the previous, but based on services. A service failure does not imply a failed task, as a policy here can recover the task processing. Conversely, a service success can theoretically lead to task failure.

It is our opinion that these are the most relevant and interesting triggers in a workflow from a control-flow perspective. A service is a black box; thus we cannot see inside it to recognise any triggers. Conversely, the workflow is the highest level at which we can inspect the system, since no policies can be applied above it. We do, however, recognise that there may be further triggers available, especially if one considers data, constraints and resources, which are out of the scope of this paper.

To demonstrate the use of trigger points and error handling (with policies) in a long-running transaction, we use a simple example:

Example 2. Consider a workflow to make a drink and then consume it, plus a separate workflow to purchase coffee granules (Figure 4). The workflow is augmented with policies that state: "if it is morning, I would like a coffee.
Otherwise I would like tea"; "if there is no coffee, I would like tea"; and "if there is no coffee or tea, buy some more coffee granules". The time of day is thus important to the final objective of the initial task (makeCoffee or makeTea), but of small significance in this example (we include it to make the point that time is a factor in business processes).

If it is morning, we will try makeCoffee. If this fails, we will try makeTea. If this succeeds, then the task completes successfully. If not, then we execute the extra workflow to purchase coffee granules. If this completes successfully, we can go back and try makeCoffee, which will hopefully work now. Otherwise, should this extra workflow fail, then the main task makeDrink has not been compensated and the task ends in an error state.
Figure 4. Workflow for making and consuming a drink.
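Example 2's trigger-driven recovery can be traced with a small simulation. The stock representation and the single retry are our own encoding of the prose; the policy order follows the example.

```python
def buy_granules(stock):
    """Separate purchasing workflow, triggered by failure of the main task.
    Assumed to succeed here; a real run could also fail."""
    stock["coffee"] = stock.get("coffee", 0) + 1
    return True

def make_drink(morning, stock):
    """Follow the policies: morning -> coffee, otherwise tea; if there is
    no coffee, tea; if neither, buy coffee granules and retry coffee."""
    first = "coffee" if morning else "tea"
    if stock.get(first, 0) > 0:
        return first
    if stock.get("tea", 0) > 0:        # "if there is no coffee, I would like tea"
        return "tea"
    # Task failure triggers the extra purchasing workflow:
    if buy_granules(stock) and stock["coffee"] > 0:
        return "coffee"
    return "error"                      # makeDrink ends in an error state

assert make_drink(True, {"coffee": 1, "tea": 1}) == "coffee"   # morning, stocked
assert make_drink(True, {"coffee": 0, "tea": 1}) == "tea"      # coffee failed
assert make_drink(True, {"coffee": 0, "tea": 0}) == "coffee"   # after restocking
```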
ECA Conflicts arise when an event triggers two incompatible policies (i.e. a functional policy conflict, or a combined functional/goal conflict). A functional conflict is one where at least two paths of execution exist, but only one can be chosen. In a state-based system such as a workflow, it exists when the current state is X and two transitions are implied by policies (P1: X −a→ Y and P2: X −b→ Z). This is an example of a shared trigger interaction. At design time, this can be detected if the policy triggers do not depend on runtime information. Otherwise, an online detection and resolution method is required. This may include priority sequences as an offline solution or user interaction as an online solution.

Missed trigger interactions occur when a policy forces a workflow reconfiguration which avoids the desirable effects of another policy. For example, cancelling a doctor's appointment may also inadvertently cancel the task of picking up a prescription, since the journey to the surgery is not made.

Sequential action interactions occur when one policy triggers another. For example, we define a simple fail() function that declares a task to have completed abnormally. By calling this function, we might trigger any policies whose trigger is the current task's failure, even if this is not what was desired (e.g. a failure policy may try to compensate, but we might not want that if we have explicitly declared the task to have failed).

Looping interactions occur when one policy triggers another, which triggers the first, and so on. Again, provided that the policies are not based on runtime information, these can be avoided at design time. Otherwise, it is difficult to detect and resolve any loops, especially if the runtime information is subject to change.

4.3. Service Selection

Each task inside a workflow has a functional requirement description. In addition, it has a default policy.
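When triggers do not depend on runtime information, detecting looping interactions at design time reduces to cycle detection over a "policy A may trigger policy B" graph. A minimal sketch (the graph representation is ours):

```python
def has_loop(triggers):
    """triggers: policy name -> set of policies it may trigger.
    Detect a cycle via depth-first search with white/grey/black colouring."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in triggers}

    def visit(p):
        colour[p] = GREY
        for q in triggers.get(p, ()):
            if colour.get(q) == GREY:
                return True            # back edge: p ... q ... p is a loop
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in triggers)

assert has_loop({"P1": {"P2"}, "P2": {"P1"}})        # P1 -> P2 -> P1
assert not has_loop({"P1": {"P2"}, "P2": set()})     # no cycle
```

Once triggers depend on runtime data, the graph itself is only known at runtime, which is why the text considers such loops hard to detect and resolve.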
This policy is represented as follows:

    appliesTo taskId
    when task_entry
    do req(main, Inv, SLA)

The function req takes three parameters: main is the functional requirement of the task, Inv is the set of service invocation parameters and SLA is a set of Service Level Agreement (SLA) dimensions. It essentially says that when a task is reached in the control flow, it should execute according to the stated requirements (including finding, binding to and invoking a service).
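The default policy can be read as a function that drives service discovery, binding and invocation. The following is a schematic rendering under our own assumptions: the registry format and the exact-match SLA comparison are placeholders, not Appel semantics.

```python
def req(main, inv, sla, registry):
    """Find a service matching the functional requirement `main` and all
    SLA dimensions, bind to it, and invoke it with parameters `inv`."""
    candidates = [s for s in registry
                  if s["provides"] == main
                  and all(s["sla"].get(k) == v for k, v in sla.items())]
    if not candidates:
        raise LookupError("no service satisfies the task requirements")
    service = candidates[0]            # binding step
    return service["invoke"](**inv)    # invocation step

# A toy registry with one coffee-making service:
registry = [{"provides": "makeCoffee",
             "sla": {"cupTemperature": "warm"},
             "invoke": lambda size: f"{size} warm coffee"}]

result = req("makeCoffee", {"size": "large"}, {"cupTemperature": "warm"}, registry)
```

Each additional SLA dimension shrinks the candidate set, which is exactly the refinement effect described in Section 4.2.1.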
The primary basis of service selection is the functional requirement; the secondary basis is the SLA dimensions. In this set, the policy author can add various non-functional requirements of the service, provided they are measurable in some meaningful way. For example, consider a task makeCoffee with a particular requirement that the served cup should be warm. Then the policy would refine service selection as follows:

    appliesTo makeCoffee
    when task_entry
    do req(main, Inv, [cupTemperature="warm"]³)

Furthermore, since policies can be added incrementally, Appel includes composition operators such that policies can be added at runtime⁴. Therefore, many policies stating many different SLA dimensions can be added to even a single task. This is also a method for adding general SLA dimensions across workflows.

SLA Conflicts are easily identifiable conflicts. If two or more SLA dimensions address the same service attribute and require different values, then a conflict may exist. These conflicts can be resolved by prioritisation of policies (perhaps the most specific policy first), or by the addition of policy strength indicators. Even then, with two conflicting policies of equal strength, there is still a conflict and a need for resolution.

Brokerage services can lead to a feature interaction problem under the auspices of an SLA conflict. For example, suppose a user does not wish to use service X. Instead, at runtime, service Y is found, bound to and invoked. However, Y is a broker service that delegates its task to X, returning X's results to the user. The user is unaware of the involvement of X despite their requirement against using X, and thus a feature interaction has occurred. This situation is synonymous with the traditional telecoms example of the feature interaction between Call Forwarding and Call Barring.
The most direct route to resolution in this case is to specify further SLA constraints that require a service to not be a broker, or to provide assurances that the SLA requirements are passed down the brokerage chain.
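The SLA conflict check described above is mechanical: group the dimensions by attribute and flag any attribute constrained to different values. A sketch with invented policy and attribute names:

```python
def sla_conflicts(policies):
    """policies: list of (policy_name, {attribute: required_value}) pairs.
    Return the attributes that two or more policies constrain to
    different values."""
    seen = {}  # attribute -> set of required values
    for _, dims in policies:
        for attr, value in dims.items():
            seen.setdefault(attr, set()).add(value)
    return sorted(attr for attr, values in seen.items() if len(values) > 1)

conflicts = sla_conflicts([
    ("director_policy", {"cupTemperature": "warm", "responseTime": "fast"}),
    ("team_policy",     {"cupTemperature": "hot"}),
])
# conflicts lists only cupTemperature: responseTime is constrained once.
```

Detection is thus the easy part; as the text notes, resolution between equally strong policies still needs prioritisation, strength indicators or negotiation.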
5. Related Work

There has been extensive literature published about policies. They are gaining increasing recognition from implementers as a tool for creating system variability. In addition, there is extensive literature on workflows. Whilst the business processes we discuss can be described in policies or in workflows separately, the former method demands too much variable specification and the latter too much static specification.

³ We expect some knowledge of the service through an ontology.
⁴ Policy composition algorithms are not used, as they are design-time only solutions.
Feature interactions have, to our knowledge, yet to be reported in the domain of business processes. Weiss et al. [38,39,40,41] have reported on feature interactions in Web Services, but in general the subject is confined to telecoms and other network systems. A more formal analysis of feature interaction in processes may lead to specification in a temporal logic in order to enable analyses such as that in [6]. In [21], the authors do apply feature interaction to workflows; however, their approach does not take into account the instance-based changes via refinement and reconfiguration that we have considered here.

Workflow Specification. Apart from natural English, structured languages are often used for expressing processes. BPEL [24] is considered the de facto standard for SOA-based business processes, despite its initial purpose as a service composition language. More traditional workflow languages are more appropriate for modelling processes. YAWL [33] is a powerful workflow language with semantics based on Petri nets. There are alternatives, including SMAWL [31] and others. These solutions may be considered better for describing processes, since they abstract away composition details that would be included in the solutions previously discussed. However, they are unable to define high-level requirements for activities or events that occur in the workflow. As a sister approach to the code-based approaches, process calculi and Petri nets offer a formal method in which to express workflows as processes. These formalisms provide operational semantics allowing for reasoning about the process, as used e.g. in [15] and [9].

The most widely accepted universal process notation for business processes is the Business Process Modelling Notation (BPMN) [13]. This graphical notation also describes process flows, though somewhat more structured through the use of swimlanes to describe the different roles in the process.
One particular advantage of BPMN is that it can be used to model a BPEL process [42]. However, BPMN is still limited by its inability to express service selection criteria, including non-functional service properties [25].

Workflow Adaptation is normally viewed at the overall workflow level. Despite the reported need for flexibility in executing workflows, this is generally achieved through some process reengineering, such as in [18]. Workflow patterns [17], a common tool for expressing frequently occurring patterns in workflows, do allow a certain degree of adaptation. Of particular interest are the insert case and delete case patterns. We consider workflow patterns relevant work, but the difference lies between their offline design nature and our online approach to the analysis of feature interactions and workflow configuration.

Policies are descriptive and essentially provide information that is used to adapt the behaviour of a system. Most work deals with declarative policies. Examples are formalisms to define access control policies and to detect conflicts [30,14]; formalisms for modelling the more general notion of usage control [43]; and formalisms for Service Level Agreements, i.e. to specify client requirements and service guarantees, and to sign a contract recording the agreement between them [22].

RuleML (www.ruleml.org) is a language for rule-based and knowledge-based systems, and allows Web-based rule storage, retrieval and interchange. Like Appel, it is XML-based and allows for the definition of ECA rules (note that for readability we have not used Appel's XML syntax in this paper). These rules can be translated through XSL transformations, depending on the application being used. None of this has been linked to workflows; there has been an initial discussion on linking policies with workflows, presenting the fundamental ideas, in [11,10].

Workflows and Policies are combined by Wang [37] in the Policy-driven Process Design (PPD) methodology. Policies are linked to workflows by extracting processes from real business policies and using a common logic to unify them. However, the work is focussed more towards extracting new processes than affecting current ones. Though Wang does mention insertion and deletion with respect to control flows, it is only in an overview of the effects on all aspects of workflows (including constraints, data and resources). Furthermore, Wang makes no use of Service Oriented Architecture. Verlaenen et al. [34] have a similar approach, in that policies are used to change workflows. However, the authors use a weaver and a policy composition algorithm, indicating an offline approach. Our work specifically addresses the online case.
6. Summary and Further Work

In this paper we considered interactions in the context of Service Oriented Computing. In particular, we considered systems that are described by a workflow that is subject to a number of policies capturing variability; the tasks in the workflow are implemented by services. The two main contributions are a description of the problem domain and a classification of conflicts in that domain. With respect to the former, we identified differences in two major aspects with respect to traditional FI settings: the role of the base system and the longevity of the effects of policies. With regard to the latter, we presented three classes of interactions: one between policies (policy conflict), one between policies and workflows, and one dependent on service selection.

Future work includes the formalisation of StPowla, that is, the development of a formal semantics for the workflow part, which will allow the conflict reasoning techniques for Appel to be extended to the interaction of policies and workflows.
Acknowledgements

This work has been partially sponsored by the project SENSORIA, IST-2005-016004.
References

[1] Gustavo Alonso, Fabio Casati, Harumi Kuno, and Vijay Machiraju. Web Services: Concepts, Architectures and Applications. Springer-Verlag Berlin Heidelberg, 2003.
[2] Daniel Amyot and Luigi Logrippo, editors. Feature Interactions in Telecommunications and Software Systems VII, June 11-13, 2003, Ottawa, Canada. IOS Press, 2003.
[3] Roberto Bruni, Hernán C. Melgratti, and Ugo Montanari. Theoretical foundations for compensations in flow composition languages. In Jens Palsberg and Martín Abadi, editors, POPL, pages 209–220. ACM, 2005.
[4] Muffy Calder, Mario Kolberg, Evan H. Magill, Dave Marples, and Stephan Reiff-Marganiec. Hybrid solutions to the feature interaction problem. In Amyot and Logrippo [2], pages 295–312.
[5] Muffy Calder, Mario Kolberg, Evan H. Magill, and Stephan Reiff-Marganiec. Feature interaction: a critical review and considered forecast. Computer Networks, 41(1):115–141, 2003.
[6] Muffy Calder and Alice Miller. Using Spin for feature interaction analysis – a case study. In Matthew B. Dwyer, editor, SPIN, volume 2057 of Lecture Notes in Computer Science, pages 143–162. Springer, 2001.
[7] Marlon Dumas and Arthur H. M. ter Hofstede. UML activity diagrams as a workflow specification language. In Martin Gogolla and Cris Kobryn, editors, UML, volume 2185 of Lecture Notes in Computer Science, pages 76–90. Springer, 2001.
[8] José Luiz Fiadeiro, Antónia Lopes, and Laura Bocchi. A formal approach to service component architecture. In Mario Bravetti, Manuel Núñez, and Gianluigi Zavattaro, editors, WS-FM, volume 4184 of Lecture Notes in Computer Science, pages 193–213. Springer, 2006.
[9] Xiang Fu, Tevfik Bultan, and Jianwen Su. Formal verification of e-services and workflows. In C. Bussler, R. Hull, S. A. McIlraith, M. E. Orlowska, B. Pernici, and J. Yang, editors, WES, volume 2512 of Lecture Notes in Computer Science, pages 188–202. Springer, 2002.
[10] Stephen Gorton and Stephan Reiff-Marganiec. Policy support for business-oriented web service management. In J. Alfredo Sánchez, editor, LA-WEB, pages 199–202. IEEE Computer Society, 2006.
[11] Stephen Gorton and Stephan Reiff-Marganiec. Towards a task-oriented, policy-driven business requirements specification for web services. In Schahram Dustdar, José Luiz Fiadeiro, and Amit P. Sheth, editors, Business Process Management, volume 4102 of Lecture Notes in Computer Science, pages 465–470. Springer, 2006.
[12] Stephen Gorton and Stephan Reiff-Marganiec. Policy driven business management over web services. In Rodosek and Aschenbrenner [29], pages 721–724.
[13] Object Management Group. Business Process Modelling Notation (BPMN) specification. http://www.bpmn.org, Feb 2006.
[14] Joseph Y. Halpern and Vicky Weissman. Using first-order logic to reason about policies. In CSFW, pages 187–201. IEEE Computer Society, 2003.
[15] Rachid Hamadi and Boualem Benatallah. A Petri net-based model for web service composition. In Klaus-Dieter Schewe and Xiaofang Zhou, editors, ADC, volume 17 of CRPIT, pages 191–200. Australian Computer Society, 2003.
[16] IBM. Service Component Architecture. http://www.ibm.com/developerworks/library/specification/ws-sca/, 2007. Last accessed 4 June 2007.
[17] Workflow Patterns Initiative. Workflow patterns, 2007. Last accessed 24 July 2007.
[18] Beat Liver, Jeannette Braun, Beatrix Rentsch, and Peter Roth. Developing flexible service portals. In CEC '05: Proceedings of the Seventh IEEE International Conference on E-Commerce Technology, pages 570–573. IEEE Computer Society, 2005.
[19] Emil Lupu and Morris Sloman. Conflicts in policy-based distributed systems management. IEEE Trans. Software Eng., 25(6):852–869, 1999.
[20] Carlo Montangero, Stephan Reiff-Marganiec, and Laura Semini. Logic-based detection of conflicts in Appel policies. In FSEN 2007, Lecture Notes in Computer Science. Springer, 2007.
[21] Y. C. Ngeow, D. Chieng, A. K. Mustapha, E. Goh, and H. K. Low. Web-based device workflow management engine. In MUE, pages 914–919. IEEE Computer Society, 2007.
[22] Rocco De Nicola, Marzia Buscemi, Laura Ferrari, Fabio Gadducci, Ivan Lanese, Roberto Lucchi, Ugo Montanari, and Emilio Tuosto. Process calculi and coordination languages with costs, priority and probability. SENSORIA Technical Report, 2006.
[23] OASIS. UDDI: Universal Description, Discovery and Integration. http://www.uddi.org, 2007. Last accessed 4 June 2007.
[24] OASIS. Web Services Business Process Execution Language. http://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.pdf, 2007. Last accessed 4 June 2007.
[25] J. O'Sullivan, D. Edmond, and A. H. M. ter Hofstede. Formal description of non-functional service properties. Technical Report FIT-TR-2005-01, Queensland University of Technology, Brisbane, Feb 2005.
[26] Nathaniel Palmer. BPM & SOA. http://aiim.org/article-docrep.asp?ID=30562, 2005. Last accessed 4 June 2007.
[27] Stephan Reiff-Marganiec and Kenneth J. Turner. Feature interaction in policies. Computer Networks, 45(5):569–584, 2004.
[28] Stephan Reiff-Marganiec, Kenneth J. Turner, and Lynne Blair. Appel: the ACCENT project policy environment/language. Technical Report TR-161, University of Stirling, 2005.
[29] Gabi Dreo Rodosek and Edgar Aschenbrenner, editors. IM2007: 10th IFIP/IEEE Symposium on Integrated Network Management. IEEE, 2007.
[30] François Siewe, Antonio Cau, and Hussein Zedan. A compositional framework for access control policies enforcement. In Michael Backes and David A. Basin, editors, FMSE, pages 32–42. ACM, 2003.
[31] Christian Stefansen. SMAWL: a small workflow language based on CCS. In Orlando Belo, Johann Eder, João Falcão e Cunha, and Oscar Pastor, editors, CAiSE Short Paper Proceedings, volume 161 of CEUR Workshop Proceedings. CEUR-WS.org, 2005.
[32] UN/CEFACT and OASIS. Electronic business using eXtensible Markup Language. http://www.ebxml.org/, 2007. Last accessed 4 June 2007.
[33] Wil M. P. van der Aalst and Arthur H. M. ter Hofstede. YAWL: Yet Another Workflow Language. Inf. Syst., 30(4):245–275, 2005.
[34] Kris Verlaenen, Bart De Win, and Wouter Joosen. Towards simplified specification of policies in different domains. In Rodosek and Aschenbrenner [29].
[35] W3C. SOAP. http://www.w3.org/TR/soap12-part1/, 2007. Last accessed 4 June 2007.
[36] W3C. WSDL: Web Service Description Language v2.0. http://www.w3.org/TR/wsdl20/, 2007. Last accessed 4 June 2007.
[37] Harry Jiannan Wang. A Logic-based Methodology for Business Process Analysis and Design: Linking Business Policies to Workflow Models. PhD thesis, University of Arizona, 2006.
[38] Michael Weiss. Feature interactions in web services. In Amyot and Logrippo [2], pages 149–158.
[39] Michael Weiss and Babak Esfandiari. On feature interactions among web services. In ICWS, pages 88–95. IEEE Computer Society, 2004.
[40] Michael Weiss, Babak Esfandiari, and Yun Luo. Towards a classification of web service feature interactions. In Boualem Benatallah, Fabio Casati, and Paolo Traverso, editors, ICSOC, volume 3826 of Lecture Notes in Computer Science, pages 101–114. Springer, 2005.
[41] Michael Weiss, Babak Esfandiari, and Yun Luo. Towards a classification of web service feature interactions. Computer Networks, 51(2):359–381, 2007.
[42] Stephen A. White. Using BPMN to model a BPEL process. BPTrends, 2005. http://www.bptrends.com, accessed 15/03/06.
[43] Xinwen Zhang, Francesco Parisi-Presicce, Ravi S. Sandhu, and Jaehong Park. Formal model and policy specification of usage control. ACM Trans. Inf. Syst. Secur., 8(4):351–387, 2005.
114
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Resolving Feature Interaction with Precedence Lists in the Feature Language Extensions L. Yang, A. Chavan, K. Ramachandran, W. H. Leung1 Computer Science Department Illinois Institute of Technology, Chicago, IL 60616
Abstract. With existing general purpose programming languages, interacting features executed in the same process must be implemented by changing the code of one another [1]. The Feature Language Extensions (FLX) is a set of programming language constructs that enables the programmer to develop interacting features as separate and reusable program modules. Features are integrated and have their interactions resolved in feature packages. FLX provides the precedence list facilities for the programmer to specify the execution order of the features in a feature package. While not applicable in all situations, precedence lists can be used to resolve many interaction conditions in a single statement. This paper describes the two types of precedence lists supported by FLX and their usage. We give the contradiction conditions that may occur when multiple precedence lists are used in a feature package and show how to resolve them. Finally, we show that the two types of FLX precedence lists are primitive: they can be used to implement arbitrary precedence relations among features that do not exhibit contradictions. Keywords: Feature interaction, program entanglement, feature interaction resolution, reusable feature modules, Feature Language Extensions.
1. Introduction
In the software engineering literature, the terms feature, aspect and concern are often used synonymously to denote certain functionality of a software system. For example, reliable data transport and congestion control are two features of the Internet TCP protocol. Features are implemented by computer programs. Two features interact if their behaviors change when their programs are integrated together. The behavior of a computer program is manifested in the sequence of program statements that gets executed and its output for a given input. Consider TCP again. Without congestion control, reliable data transport will retransmit when a duplicated acknowledgement is received. After congestion control is added, the same message may cause the sender to retreat to slow start. Thus these two features interact. The term feature interaction was
Corresponding Author: W. H. Leung, Computer Science, Illinois Institute of Technology, 10 West 31st Street, Chicago, Illinois 60616, USA; E-mail: [email protected].
L. Yang et al. / Resolving Feature Interaction with Precedence Lists in the FLX
115
coined by developers of telecommunications systems, but its occurrence is commonplace: when a software system evolves, it usually means that new features are added to the system, changing the behavior of existing features. We showed earlier [1] that if (C1) two features interact, (C2) they are executed by the same sequential process, and (C3) they are implemented by a programming language that requires the programmer to specify execution flows, then the programs of the two features will inevitably entangle in the same reusable program unit of the programming language. If the features do not interact, then program entanglement is not necessary. Program entanglement implies that features are implemented by changing the code of one another. Besides making it difficult to develop features, entangled programs are difficult to reuse, maintain and tailor to different user needs. Feature interaction is thus the root cause of program entanglement. (C1) and (C2) are generally dictated by the application, as in the TCP examples given earlier. Today's general purpose programming languages impose (C3). Existing TCP implementations are notoriously entangled (e.g. see [2]); it is not because the programmers lacked skill; they could not help it. The Feature Language Extensions (FLX) is a set of programming language constructs developed to solve the program entanglement problem. An FLX program unit consists of a condition part and a program body. The program body gets executed when its corresponding condition part becomes true. The programmer does not specify the execution flows of program units; hence FLX relaxes (C3). A feature is composed of a set of program units; it is designed according to a model instead of the code of other features. Features are integrated in a feature package. Features and feature packages are reusable: different combinations of them can be packaged to meet different needs. We have added the foundation FLX constructs to Java.
A research version of the FLX to Java compiler is downloadable from [3]. We call the conditions under which two interacting features change their behavior their interaction conditions; the interaction is resolved with a specification of the new behavior. Presently, the programmer reads code to determine when the interaction conditions may become true, and changes code to resolve them. This is a labor-intensive and error-prone process, and a main reason why software development is complex. Due to the way that the FLX compiler generates code, two program units written in FLX interact if the conjunction of their condition parts is satisfiable, or equivalently, if the condition parts of the two program units can become true at the same time. Two features interact when some of their program units interact. The condition under which the conjunction is satisfiable is their interaction condition. Several other researchers have constructed systems with this property (e.g. see [15]). As we shall see later, the condition part of a program unit is a set of quantifier-free first order predicate formulas. Detecting feature interaction in programs written in FLX then requires an algorithm, often called a satisfiability solver, which determines the satisfiability of such formulas. The first order predicate satisfiability solver of FLX does not require the iterations of trial and error incurred in prior art; it is overviewed in [4]. This paper focuses on using FLX to integrate features and resolve their interaction without changing their code. In particular, we discuss usage of the precedence list facilities provided by FLX.
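To make the satisfiability view concrete, here is a deliberately simplified Python sketch (our illustration only: FLX condition parts are quantifier-free first order formulas and the real solver is overviewed in [4]; we simply enumerate a small finite domain instead). Two program units interact exactly when some (state, event) pair satisfies both condition parts:

```python
from itertools import product

# Hypothetical finite domains standing in for the BasicTelephony domain variables.
STATES = ["IDLE", "DIALING", "RINGING", "TALKING"]
EVENTS = ["Offhook", "Onhook", "Digits", "TerminationRequest", "Answer"]

def interacts(cond_a, cond_b):
    """Two condition parts interact if their conjunction is satisfiable,
    i.e. some (state, event) pair makes both true at the same time."""
    return any(cond_a(s, e) and cond_b(s, e) for s, e in product(STATES, EVENTS))

# Condition parts of ReceiveCall (POTS), SayBusy (DoNotDisturb) and MakeCall (POTS).
receive_call = lambda s, e: s == "IDLE" and e == "TerminationRequest"
say_busy     = lambda s, e: e == "TerminationRequest"   # condition: all
make_call    = lambda s, e: s == "IDLE" and e == "Offhook"

print(interacts(receive_call, say_busy))  # True: both fire on (IDLE, TerminationRequest)
print(interacts(make_call, say_busy))     # False: different triggering events
```

The brute-force enumeration is exponential in the number of domain variables; the point of the FLX solver is to decide the same question symbolically.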
A precedence list establishes a strict partial ordering² among a set of features in a feature package. FLX supports two types of precedence lists: a straight precedence list specifies that if the interaction condition for some of the features becomes true, the programs of the features with higher precedence will get executed before the programs of features with lower precedence; a priority precedence list specifies that only the program unit belonging to the feature with the highest precedence will get executed. The precedence list is a powerful facility. For example, in a telephony application written in FLX, the feature DoNotDisturb interacts with the plain old telephone service (POTS) whenever the phone is called. The interaction conditions of the two features are resolved in a single precedence list statement in a feature package. One of the authors came from the telecommunication industry and was involved in the development of DoNotDisturb in a production digital switch. The programmers in that project needed to go through hundreds of thousands of lines of code to find several hundred places to insert code for the feature. Later, as new features were added to the system, they had to remember to include the code for the feature. We first introduced precedence lists in [5]; a more detailed discussion is given in this paper. We review briefly the FLX constructs to specify features and feature packages in Section 2. In Section 3, we describe the two different types of precedence lists implemented in FLX. We also show there that precedence lists alone are not sufficient in certain situations; when that happens, the interaction condition is resolved by program units in the feature package. In Section 4, we discuss the integration of multiple precedence lists. This can happen, for example, when two feature packages, each with its own precedence list, are integrated in a feature package.
Multiple precedence lists can lead to contradictions that need to be resolved. In the same section, we introduce the compound precedence statement, which specifies the precedence relations among precedence lists. It is a shorthand for multiple precedence lists. In Section 5, we show that the two types of precedence lists supported by FLX are primitive in the sense that they can be used to specify arbitrary precedence relationships that do not contain contradictions. We review related work in Section 6. Our method to integrate interacting features without changing feature code appears to be new. The use of precedence lists as language mechanisms to resolve interaction is also new. We conclude the paper in Section 7.
2. Some FLX Basics

FLX supports the view that complex software should be organized as a collection of components, and FLX is meant for the development of feature-rich components called feature packages. In a telephone system developed using FLX, each telephone object is associated with two feature packages: a call processing feature package for features like call forwarding, and a digit analysis feature package for features like speed calling. Different telephone objects can be associated with different feature packages containing different sets of features, or the set of features can be the same but the
² A strict partial order is an irreflexive, asymmetric and transitive relation between elements of a set, denoted by "<". For all a, b and c in P, we have (i) ~(a < a) (irreflexivity); (ii) if (a < b) then ~(b < a) (asymmetry); and (iii) if (a < b) and (b < c) then (a < c) (transitivity).
feature interactions are resolved differently. We will use the call processing feature package as a running example for this paper. As mentioned earlier, a feature written in FLX is developed according to a model. The model is composed of an anchor feature and a domain statement. The anchor feature implements the basic functionality; other features can be considered as its enhancements. Condition variables, namely domain variables and events, are defined in the domain statement. They are used in the condition part of a program unit. Domain variables are initialized in the domain statement, and space is allocated for them when a feature package using the domain statement is instantiated. For this paper, we will skip showing the syntax of a domain statement. The domain statement for the call processing package, called BasicTelephony, contains a domain variable state, which is a simple extension of the enum class defined in Java 1.5. We will not describe the extension here, as it is related to the FLX satisfiability algorithm only. The possible values of state, such as IDLE, RINGING and TALKING, define the different states that the phone associated with the feature package can have. Some of the events specified in the BasicTelephony domain statement come from another phone announcing its intent (TerminationRequest, Disconnect) or its state (Busy, Ringing, Answer) to this phone. Other events are signals (Onhook, Offhook, Digits) coming from the device driver of the phone. Events exchanged between phones contain a field FromPID identifying the sending phone. A surprisingly large number of features can be developed using this relatively simple domain statement. Since a domain statement can be extended using the inheritance mechanisms of FLX [1], we advocate a minimalist approach in designing domain statements: define only those domain variables and events needed for the set of features to be implemented at the time.
Later, if new features require new domain variables, such as the role played by the phone, and new events, such as a request to add a video channel, they can be added without affecting features that have already been developed.

    anchor feature Pots {
        domain BasicTelephony;
        MakeCall {
            condition: state.equals(State.IDLE);
            event: Offhook;
            {
                fone.applyDialTone();
                state = State.DIALING;
            }
        }
        ReceiveCall {
            condition: state.equals(State.IDLE);
            event: TerminationRequest e;
            {
                Ringing r = new Ringing(e.FromPID);
                rt.sendEvent(r);
                state = State.RINGING;
            }
        }
        . . .
    }
Figure 1. A Portion of the FLX POTS code
A portion of the code for the anchor feature of the call processing feature package, called the plain old telephone service (POTS) feature, is given in Figure 1, showing only two of its program units: MakeCall applies dial tone when the user picks up the phone; ReceiveCall responds to a TerminationRequest event by updating the state of the call to RINGING and telling the calling party of that fact. The condition part of a program unit is composed of a condition statement and an event statement. The condition statement is a quantifier-free first order formula of domain variables and their predicate methods. We do not support the existential and universal quantifiers explicitly; when the programmer has the need to say something like "there exist some elements", we ask him to write a predicate method such as nonempty() instead. The event statement specifies a list of events. Each event may be attached with a qualification, which is a first order formula on data carried in the event. The feature DoNotDisturb is shown in Figure 2. Its program unit SayBusy returns a busy event to the caller (identified by the FromPID of the received event e) whenever the phone receives a TerminationRequest event.

    feature DoNotDisturb {
        domain BasicTelephony;
        anchor POTS;
        SayBusy {
            condition: all;
            event: TerminationRequest e;
            {
                Busy b = new Busy(e.FromPID);
                rt.sendEvent(b);
            }
        }
    }

Figure 2. The feature DoNotDisturb

    feature package Fp3 {
        domain: BasicTelephony;
        features: Fp1, F2;
        priorityPrecedence(Fp1, F2);
        PU1 {
        }
    }

Figure 3. A feature package in FLX
FLX requires that the interaction among the program units in a feature be resolved before the feature is compiled; similarly, the interaction conditions among features in a feature package must be resolved before the feature package is compiled. DoNotDisturb and POTS interact: when the event TerminationRequest is received, the condition part of SayBusy in DoNotDisturb becomes true and the condition part of several program units in POTS, including ReceiveCall, may become true. We show how these two features may be integrated in a feature package in Section 3.1. The essential elements of a feature package are shown in Figure 3. It identifies one or more features and feature packages that will be integrated in the package and the domain statement used by them. The FLX compiler checks that the anchor feature is included in the list of features. The feature package may contain several precedence lists and program units. We will show how to use them to resolve interactions.
3. Resolving Feature Interaction with Precedence Lists
FLX provides two types of precedence lists, priority precedence and straight precedence. They are described in this section and illustrated with examples. We also show that while precedence lists are powerful mechanisms, there are situations in which they are not sufficient.
3.1 Priority Precedence

When a programmer decides to use a priority precedence list to resolve feature interactions, he specifies the features in descending order of priority, with the highest priority feature at the first position of the list. When an interaction condition becomes true, the program unit from the feature with the highest priority gets executed. To help explain this, we show in Figure 4 the code of the feature package QuietPhone, which integrates DoNotDisturb, POTS and CatchAll and uses a priority precedence list to resolve their interaction. The code for the feature CatchAll is given in Figure 5.

    feature package QuietPhone {
        domain: BasicTelephony;
        features: DoNotDisturb, CatchAll, POTS;
        priorityPrecedence(DoNotDisturb, POTS, CatchAll);
    }

Figure 4. The QuietPhone feature package

    feature CatchAll {
        domain: BasicTelephony;
        anchor: POTS;
        Catch {
            condition: all;
            event: any;
            {
                System.err.println("unexpected condition and event " +
                    BasicTelephonyEvent.getEventID(e));
            }
        }
    }

Figure 5. The CatchAll feature
When a phone assigned with QuietPhone receives the TerminationRequest event (i.e. when it is called), SayBusy of DoNotDisturb will be invoked and a busy event will be sent back to the caller. Program units of POTS and CatchAll will not be invoked. However, when the phone receives an Offhook event and it is idle, the MakeCall program unit of POTS gets invoked and the user can make phone calls. The Catch program unit of CatchAll will be invoked only when the phone is in a particular state and an event unexpected by DoNotDisturb and POTS arrives. CatchAll is a very useful exception handling feature. We introduced exception handling in FLX in [1] and will cover it more fully in a separate article.

3.2 Straight Precedence

With a straight precedence list, when an interaction condition becomes true, program units from features that satisfy this condition are executed following the order in which the features are specified in the list. Figure 6 shows the StartMeter program unit of the Billing feature. It creates a billing record and starts the timer when a call is answered. StartMeter interacts with the CallAnswered program unit of POTS. Other program units of Billing and POTS also interact. The two features are integrated in the feature package NoFreeCalls as shown in Figure 7, with their interactions resolved in a straight precedence list.
Using this method, changing the billing policy becomes quite easy: one can simply substitute one billing feature with another to gather different billing data.

    feature Billing {
        domain: BasicTelephony;
        anchor: POTS;
        StartMeter {
            condition: state.equals(State.IDLE);
            event: Answer e;
            {
                CallRecord record = new CallRecord(e.FromPID);
                meter.start(1 second);
            }
        }
    }

Figure 6. A program unit in Billing

    feature package NoFreeCalls {
        domain: BasicTelephony;
        features: Billing, POTS;
        straightPrecedence(Billing, POTS);
    }

Figure 7. The NoFreeCalls feature package
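The two list types differ only in how many matching program units run. The following Python sketch models the dispatch semantics (our own model, not code generated by the FLX compiler; the condition attributed to the CallAnswered unit of POTS is an assumption):

```python
def dispatch(precedence, kind, features, state, event):
    """Run the program units whose condition parts hold, ordered by the
    precedence list: 'priority' stops after the first feature that matches,
    'straight' runs every matching feature in list order."""
    executed = []
    for feature in precedence:
        matched = False
        for cond, unit in features[feature]:
            if cond(state, event):
                executed.append(unit)
                matched = True
        if matched and kind == "priority":
            break  # suppress lower-precedence features
    return executed

# Hypothetical condition parts modeled after the figures.
features = {
    "DoNotDisturb": [(lambda s, e: e == "TerminationRequest", "SayBusy")],
    "POTS": [(lambda s, e: s == "IDLE" and e == "TerminationRequest", "ReceiveCall"),
             (lambda s, e: s == "IDLE" and e == "Answer", "CallAnswered")],
    "Billing": [(lambda s, e: s == "IDLE" and e == "Answer", "StartMeter")],
}

# QuietPhone: priority precedence, so only DoNotDisturb answers an incoming call.
print(dispatch(["DoNotDisturb", "POTS"], "priority", features, "IDLE", "TerminationRequest"))
# NoFreeCalls: straight precedence, so Billing meters the call and POTS still runs.
print(dispatch(["Billing", "POTS"], "straight", features, "IDLE", "Answer"))
```

The first call executes only SayBusy; the second executes StartMeter and then CallAnswered, in list order.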
3.3 Precedence Lists Are Not Always Sufficient

In the examples of Figure 4 and Figure 7, a single precedence statement is used to resolve the interactions of the features in a feature package. But very often, more flexible and finer control of the interaction condition is needed. Consider the CallForwarding feature, whose most important program unit is shown in Figure 8. ForwardCall transfers an incoming call by relaying the TerminationRequest event to the forward number if that number is defined and the call is not coming from that phone. Suppose that we integrate DoNotDisturb and CallForwarding together: if we place DoNotDisturb ahead of CallForwarding in a priority precedence list, no call will be forwarded; if we place CallForwarding ahead of DoNotDisturb, all calls will be forwarded.

    feature CallForwarding {
        domain: BasicTelephony;
        anchor: POTS;
        ForwardCall {
            condition: state.equals(State.IDLE);
            event: TerminationRequest e;
            {
                if ((forwardNumber != "") && (forwardNumber != e.FromPID)) {
                    rt.send(forwardNumber, e);
                    stop;
                }
            }
        }
    }

Figure 8. The ForwardCall program unit of CallForwarding
The programmer can choose to use a program unit in the feature package to resolve the interaction between the two features. In the example given in Figure 9, the interaction between DoNotDisturb and CallForwarding is resolved depending on whether the caller is identified in a list of phone numbers.

    feature package SelectiveCallForwarding {
        domain: BasicTelephony;
        features: DoNotDisturb, CallForwarding, POTS, CatchAll;
        priorityPrecedence(DoNotDisturb, CallForwarding, POTS, CatchAll);
        LinkedList phoneList = LinkedList(empty); // forwardable phones
        SelectToForward {
            condition: state.equals(State.IDLE);
            event: TerminationRequest e;
            {
                if (phoneList.contains(e.FromPID))
                    CallForwarding;
                else
                    DoNotDisturb;
                stop;
            }
        }
    }

Figure 9. The SelectiveCallForwarding feature package
By convention, a program unit in a feature package has the highest precedence. Thus when the interaction condition becomes true, SelectToForward is executed first. The stop statement at the end of the program unit instructs the compiler not to invoke program units of lower precedence. In the example, SelectToForward refers to the features instead of calling their program units; the FLX compiler generates code to invoke the correct program units in these features. Alternatively, the program may call the program units of the features explicitly. In that case, the compiler will check that the program units are called under the same condition as in SelectToForward.
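The resolution pattern of SelectToForward can be modeled in a few lines of Python (hypothetical names and phone identifiers; the stop statement is modeled by returning, so that no lower-precedence unit is consulted):

```python
def select_to_forward(phone_list, state, event, caller):
    """Package-level resolver sketched after SelectToForward: it runs with
    the highest precedence and names the feature whose unit should handle
    the event. Returning models 'stop' - lower units never run."""
    if state == "IDLE" and event == "TerminationRequest":
        if caller in phone_list:
            return "CallForwarding"  # caller is on the forwardable list
        return "DoNotDisturb"        # everyone else hears busy

print(select_to_forward({"x1001"}, "IDLE", "TerminationRequest", "x1001"))  # → CallForwarding
print(select_to_forward({"x1001"}, "IDLE", "TerminationRequest", "x2002"))  # → DoNotDisturb
```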
4. Multiple and Compound Precedence Lists

FLX supports multiple precedence lists and compound precedence lists in a feature package. When feature packages containing precedence lists are integrated together, the integrating feature package contains multiple precedence lists by definition. A compound precedence list is a shorthand for multiple precedence lists; some FLX programmers argue that it is easier to understand than multiple lists. Precedence lists may contradict one another. The FLX compiler needs to identify the contradiction and enable the programmer to resolve it.

4.1 Combining Precedence Lists of the Same Type
FLX encourages its programmer to develop a feature based on the anchor feature only. The feature is usually tested with the anchor feature and beneficially with CatchAll in a feature package. When the programmer is finished with testing, he has two reusable programs: the feature itself and the feature package that he used to test the feature. The feature package that integrates the 3-way calling test package, called
3WayPackage, and SelectiveCallForwarding (Figure 9) is given in Figure 10. The new feature package has two priority precedence lists: one from 3WayPackage, containing the features 3Way, POTS and CatchAll; the other from SelectiveCallForwarding, containing DoNotDisturb, CallForwarding, POTS and CatchAll. The interaction between the two feature packages is resolved in another priority precedence list.

    feature package 3WayAndSelectiveCallForwarding {
        domain: BasicTelephony;
        anchor: POTS;
        features: 3WayPackage, SelectiveCallForwarding;
        priorityPrecedence(3WayPackage, SelectiveCallForwarding);
    }

Figure 10. The 3WayAndSelectiveCallForwarding feature package
When combining precedence lists of the same type, which is the case in the example of Figure 10, the FLX compiler applies two rules. First, a feature may appear in multiple lists, but only one instance of it will appear in the combined list. Second, the partial ordering specified in the different lists is merged into a combined list. Following these two rules, the priority precedence list of the feature package in Figure 10 contains the following features in descending order: 3Way, SelectiveCallForwarding, DoNotDisturb, POTS, and CatchAll. SelectiveCallForwarding is considered a feature, as it contains program units of its own. The net effect of combining the precedence lists in the example is adding the 3Way feature to SelectiveCallForwarding: when the phone is in its talking state, it can invoke the 3Way feature. Incoming calls are no longer blocked by DoNotDisturb in the talking state; they cause an audible signal to the speaker, as specified by 3Way. The first rule of combining precedence lists is similar to virtual base classes in C++. Applying the second rule may not be possible if in one list a feature f1 precedes feature f2 but in another list f2 is specified to precede f1. When that occurs, the FLX compiler will identify an order contradiction. An order contradiction is relevant only in the condition where f1 and f2 interact. The FLX compiler will identify that condition, and the programmer can resolve the contradiction in a program unit of the feature package that combines the two lists. The condition part of the program unit will include the interaction condition, and its program body will specify the computation when the condition becomes true.
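The two combining rules and the order-contradiction check can be sketched in Python (our illustration, not the FLX compiler's algorithm). Treating each list as a total order over its members, merging is a topological sort, and an order contradiction shows up as a cycle:

```python
def merge_precedence(*lists):
    """Merge precedence lists of the same type into one combined order
    (rule 1: one instance per feature; rule 2: merge the partial orders).
    Raises ValueError on an order contradiction, detected as a cycle."""
    nodes, edges = set(), set()
    for lst in lists:
        nodes.update(lst)
        for i, hi in enumerate(lst):
            for lo in lst[i + 1:]:
                edges.add((hi, lo))   # hi precedes lo
    order, remaining = [], set(nodes)
    while remaining:
        # features that no remaining feature precedes may be emitted next
        free = sorted(n for n in remaining
                      if not any(a in remaining and (a, n) in edges for a in nodes))
        if not free:
            raise ValueError("order contradiction")
        order.append(free[0])         # alphabetical tie-break, our choice
        remaining.discard(free[0])
    return order

print(merge_precedence(["A", "B", "C"], ["B", "D"]))  # → ['A', 'B', 'C', 'D']
```

For features with no mutual ordering (C and D above) any relative order is acceptable; a real compiler would presumably make its own deterministic choice.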
4.2 Integrating Precedence Lists of Different Types
Suppose that we want to integrate the features Billing (Figure 6), POTS and CatchAll together. Billing has a straight precedence relationship over POTS and both of them should have priority precedence over CatchAll. The programmer can simply put these three precedence relations in the feature package that integrates these three features as shown in Figure 11. Following the precedence specifications, when some interaction condition between Billing and POTS becomes true, appropriate program units from these two features will be executed in order, and CatchAll will not get invoked as the other two features have priority precedence over it.
    feature package BillingPackage {
        domain: BasicTelephony;
        features: Billing, POTS, CatchAll;
        straightPrecedence(Billing, POTS);
        priorityPrecedence(Billing, CatchAll);
        priorityPrecedence(POTS, CatchAll);
    }

Figure 11. The BillingPackage feature package
When precedence lists of different types are combined, the FLX compiler checks whether there is a type contradiction. A type contradiction occurs when a feature is specified as having both priority and straight precedence over another. For example, suppose feature f1 has straight precedence over f2 and f3 in one list, while in another list f2 has priority precedence over f3. When an interaction condition for the three features becomes true, it is not clear what should be done for the program unit in f3 after program units from f1 and f2 have been executed. The FLX compiler identifies the interaction condition, and the programmer must provide a program unit in the integrating feature package to resolve the ambiguity. Precedence lists of the same type can be combined into a single partial ordering list, but precedence lists of different types cannot. If we have a first priority precedence list including f1, f2 and f3, and a second priority precedence list including f2 and f4, we know that f1 has priority precedence over f4 from the transitivity property of strict partial orderings. But if the second list specifies straight precedence, then we do not know the precedence relationship between f1 and f4.
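One plausible way to detect such problems mechanically (a sketch of the checks just described, not the FLX compiler's algorithm) is to expand every list into the ordered pairs it implies and look both for pairs carrying both types and for features that acquire predecessors of both types:

```python
def type_conflicts(lists):
    """Given (kind, list) entries with kind 'straight' or 'priority',
    return (pair_conflicts, ambiguous): pairs related by both types, and
    features with both a straight and a priority predecessor, for which
    it is unclear whether their unit runs after the straight chain."""
    pair_kind = {}
    pair_conflicts = set()
    pred_kinds = {}
    for kind, lst in lists:
        for i, hi in enumerate(lst):
            for lo in lst[i + 1:]:   # a list relates every earlier feature to every later one
                if pair_kind.setdefault((hi, lo), kind) != kind:
                    pair_conflicts.add((hi, lo))
                pred_kinds.setdefault(lo, set()).add(kind)
    ambiguous = {f for f, kinds in pred_kinds.items() if len(kinds) > 1}
    return pair_conflicts, ambiguous

# f1 straight over f2 and f3; f2 priority over f3 -> the pair (f2, f3)
# carries both types and f3 is the ambiguous feature.
pairs, amb = type_conflicts([("straight", ["f1", "f2", "f3"]),
                             ("priority", ["f2", "f3"])])
print(pairs, amb)
```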
4.3 Compound Precedence List
One observes that in the feature package of Figure 11, both Billing and POTS have priority precedence over CatchAll. Using a method similar to factorization in algebra, one can reduce the multiple precedence lists to a single compound precedence list, as shown in Figure 12.

    feature package BillingPackage {
        domain: BasicTelephony;
        features: Billing, POTS, CatchAll;
        priorityPrecedence(straightPrecedence(Billing, POTS), CatchAll);
    }

Figure 12. The BillingPackage feature package with a compound precedence list
The precedence list of Figure 12 says that when an interaction condition becomes true for the three features, program units in Billing and POTS will be executed in order according to the straight precedence clause. The program unit in CatchAll will not be executed because of the priority precedence specification. In essence, the compound precedence list of Figure 12 is a shorthand for the precedence lists in Figure 11. We know that they are equivalent because both will generate the same partial ordering as well as the same precedence types among the different features.
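The factorization can be checked mechanically. The following Python sketch (our own encoding of compound lists as nested tuples, not FLX syntax) expands a compound list into the simple two-feature lists it abbreviates:

```python
def expand(compound):
    """Expand a compound precedence list into simple lists. A compound list
    is a nested tuple ('priority'|'straight', [items]) where an item is a
    feature name or another compound list."""
    kind, items = compound
    simple = []
    groups = []   # the features contributed by each item, in order
    for item in items:
        if isinstance(item, tuple):
            inner_lists, inner_feats = expand(item)
            simple.extend(inner_lists)
            groups.append(inner_feats)
        else:
            groups.append([item])
    # every feature of an earlier group precedes every feature of a later one
    for i, g in enumerate(groups):
        for later in groups[i + 1:]:
            for a in g:
                for b in later:
                    simple.append((kind, [a, b]))
    flat = [f for g in groups for f in g]
    return simple, flat

compound = ("priority", [("straight", ["Billing", "POTS"]), "CatchAll"])
lists, _ = expand(compound)
print(lists)
```

Expanding the compound list of Figure 12 yields exactly the three statements of Figure 11: straight precedence of Billing over POTS, and priority precedence of each of them over CatchAll.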
5. Priority and Straight Precedence Lists Are Primitive

With two features, when we say one precedes the other there can be only two meanings: the program of one overrides that of the other (priority precedence), or the program of one should be executed before that of the other (straight precedence). When there are more features, a question arises: can we use only the priority and straight precedence lists to implement arbitrary precedence relations among an arbitrary number of features? Arbitrary combinations of these features may have different precedence relations with one another. The question is important because the precedence list mechanisms directly affect the way the compiler generates code. Each time we discover a precedence relation that cannot be implemented by the mechanisms already provided, we need to modify the compiler to support it. Fortunately, the answer to the question is affirmative if the desired precedence relation among the features contains neither order nor type contradictions. We show this in two steps. First, we show that features with arbitrary precedence relations and without contradictions can be represented generically. Second, we show that such a generic representation can always be implemented by the two types of precedence relations. Consider a set of features f1 to fn. Since there is no order contradiction, the features can be arranged linearly according to their partial ordering (such that f(i-1) either precedes or has no precedence relationship with fi). The possible precedence relationships among the features can be represented in a square matrix with the features arranged in order on the coordinates of the matrix. The diagonal of the matrix is empty, as it makes no sense to say a feature precedes itself. The lower left triangle of the matrix underneath the diagonal is also empty, because we have already arranged the features so that fi does not precede fj for i > j.
Each element in the upper right triangle above the diagonal indicates the type of precedence between fi and fj, for all i < j. Since there is no type contradiction, the value in each such element is either empty, priority precedence, or straight precedence. Figure 13 shows such a matrix for the feature package given in Figure 12.

                Billing    POTS                   CatchAll
    Billing                Straight precedence    Priority precedence
    POTS                                          Priority precedence
    CatchAll
Figure 13. Precedence relation matrix among the features in the feature package of Figure 12.
Given a precedence relation represented by such a matrix, the simplest way to implement it is to use the appropriate precedence list for each nonempty element linking fi and fj; hence the answer to the question of whether priority and straight precedence lists are primitive is affirmative. The interested reader is encouraged to devise algorithms, similar to Karnaugh maps [6], that generate the minimum number of precedence lists using compound precedence lists. In a feature package with many features, the programmer may find it useful to construct such a table to aid in the feature interaction resolution design.
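The "one statement per nonempty cell" construction can be sketched directly (hypothetical helper names; the emitted statements follow the FLX syntax used in Figure 11):

```python
def lists_from_matrix(features, matrix):
    """Emit one two-feature precedence statement per nonempty upper-triangle
    cell - the simplest (not minimal) realization discussed in the text.
    matrix[(fi, fj)] is 'priority', 'straight', or absent."""
    stmts = []
    for i, fi in enumerate(features):
        for fj in features[i + 1:]:
            kind = matrix.get((fi, fj))
            if kind:
                stmts.append(f"{kind}Precedence({fi}, {fj})")
    return stmts

# The matrix of Figure 13 reproduces the statements of Figure 11.
matrix = {("Billing", "POTS"): "straight",
          ("Billing", "CatchAll"): "priority",
          ("POTS", "CatchAll"): "priority"}
print(lists_from_matrix(["Billing", "POTS", "CatchAll"], matrix))
```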
L. Yang et al. / Resolving Feature Interaction with Precedence Lists in the FLX
125
6. Related Work

The feature interaction problem affects all stages of software development, from the difficulty of recognizing interaction conditions during system specification to the difficulty of testing, since feature programs implemented with existing programming languages are constantly changed. FLX and its precedence list facilities focus on the implementation problem: enabling the programmer to develop reusable feature modules without entanglement. The discussion in this section therefore emphasizes work that allows the specification of executable feature code. Among the pioneering languages used to facilitate the development of features, Statechart [7], LOTOS (e.g. [8]), SDL [9] and Esterel [10] are the most important. They take different approaches; for example, Statechart is graphical and Esterel assumes instantaneous reaction to input. All of them explicitly support the definition of finite state machines, and verifiers have been developed for several of them, e.g. Esterel and LOTOS. But one cannot use them to develop interacting features as reusable program modules without entanglement. An empirical study [11] using these languages observed that "we have to rewrite POTS several times as features were added," and asked: "how could reusable (sub)specifications be declared and used?" CRESS [12] is the first graphical language that explicitly supports the specification of features and their integration. It is a substantial work that includes a model checker, but integrating features in CRESS requires the programmer to manually determine where to "splice" and "insert" code into the features. Plath and Ryan extended the input languages of the model checkers SMV [13] and SPIN [14] to allow the specification of features extending a base system ([15] and [16]). Since the input languages of SMV and SPIN are mainly nonprocedural, their result on feature specification is quite similar to ours.
They already had the concepts of an anchor feature and features; we provide additional constructs and facilities to integrate features without requiring code changes. The notion that feature interaction can be resolved by arranging the features in some priority or precedence order has been suggested by a number of authors (e.g. [29]). We are aware of only one other work that defines programming language facilities to specify feature precedence. Similar to FLX, the Stack Service Model (SSM) [30] associates a phone with a number of features. The features in SSM are put into a stack, and the priority of a feature is determined by its position in the stack. Feature interaction resolution in SSM consists of preserving the safety assertions of the features; in FLX, it consists of specifying behavior change. The precedence relationship in FLX is a partial ordering of two different types; SSM has a single stack and depends on feature code to determine whether a feature will override, or will execute before, features of lower precedence. Execution of a feature in SSM is triggered by a token passed from one feature to another in the same stack; in FLX, it is triggered when a condition becomes true. Because of these differences, we believe that feature programs in SSM will tend to be more tightly coupled with one another than feature modules in FLX, and that adding a feature to the stack of SSM will often require code changes in the other features. The related works discussed so far are often considered "specification languages" rather than "programming languages"; for example, they typically do not allow the programmer to define more complex data structures. More recently, there are
three other programming language approaches besides our own: aspect-oriented programming (AOP) [17], the Call Processing Language (CPL) [18] and feature-oriented programming (FOP) [19]. AOP textually separates the code of a feature from the base code and puts it into a program module called an aspect. An aspect is a nonprocedural program containing a set of (pointcut, advice) pairs. A pointcut is an assertion on some syntactic artifacts (class and method names, etc.) of the base code; it must pinpoint specific statements (called join points) in the base code. Its corresponding advice specifies how the base code is changed by adding to or replacing the join points. The AOP compiler weaves the aspect into the base code. In general, the programmer manually reviews code to determine the pointcuts into the base code and other aspects; the advices are not independent of the base code and other aspects. As a result, aspects are not easily reusable without one another. Empirical studies conducted a decade apart showed that AOP does not improve programmer productivity ([20] and [21]), despite studies showing that it can significantly reduce the amount of code to be written (e.g. [22]). The term FOP was first coined by Prehofer in [23], where he introduced procedural language extensions to Java to specify the notion of a feature. But his approach requires resolving the interaction between every pair of features, often by changing code. Batory and his group propose that composition of features follow mathematical formulas (e.g. feature interactions are derivatives, as in calculus [24]). But in general their method requires the programmer to significantly reconfigure code manually. Similar to VoiceXML [25], CPL and its variants such as LESS [26] are markup languages; adding features with these languages requires changing code. We should also mention Service Oriented Architecture (SOA), a recent architectural approach.
It proposes to relax condition (C2) of the entanglement conditions so that every feature is a process; the service processes interact by requesting and providing services to one another. Many features are nevertheless required to execute in the same process, and a deeper analysis showed that SOA exhibits a fractal structure with significant performance and complexity implications (a fractal structure leads to chaos) [27].
7. Conclusion
FLX is designed to enable the programmer to develop interacting features separately, as reusable program modules. The precedence lists are language facilities that allow the programmer to integrate interacting features and resolve their interaction conditions without requiring changes to their code. While there are cases where precedence lists cannot apply, they are powerful mechanisms: a single precedence list can resolve a large number of interaction conditions across many features. We gave more than ten examples to illustrate their usage, including cases where multiple precedence lists are combined in a feature package. For readability, the examples given in the paper are relatively simple, but we have developed fairly complex software using FLX. We used it to develop more than forty features and feature packages on a simulated telephony system; the telephony systems were developed mainly to test FLX concepts and its compiler. We have also started to use FLX to develop software intended for real use: we recently finished developing the basic features of a call center based on Skype [28], and have started on the development of a composable operating system.
When FLX was first conceived, reviewers could immediately see that it would help improve programmer productivity in the development of individual features, because the programmer can focus on each feature independently of the others. Many, however, were skeptical, suspecting that we were merely pushing the complexity into the feature package where the features are integrated. Our experience shows that because FLX can detect feature interaction conditions automatically and provides mechanisms like precedence lists to facilitate interaction resolution, integrating features can usually be accomplished without much difficulty. A number of results on FLX, including its interaction detection algorithm, exception handling mechanisms, and language constructs to extend application models, are not yet published; materials (theses, slides) on them, as well as a research version of the FLX-to-Java compiler and example FLX code, can be found on its web site [3]. FLX is designed so that programs written in it can be verified using assertion-based verification instead of relying completely on testing; the basis of that goal is given in [4].
References

1. Leung, W.H.: Program Entanglement, Feature Interaction and the Feature Language Extensions. Computer Networks 51, February 2007, 480-495.
2. Musuvathi, M. and Engler, D.: Model Checking Large Network Protocol Implementations. Proceedings of the Symposium on Network Systems Design and Implementation, 2004.
3. www.openflx.org
4. Leung, W.H.: On the Verifiability of Programs Written in the Feature Language Extensions. Proceedings of the 10th IEEE International Symposium on High Assurance Systems, November 2007.
5. Leung, W.H.: Writing Reusable Feature Programs with the Feature Language Extensions. Proceedings of Feature Interactions in Telecommunications and Software Systems VIII, IOS Press, 2005.
6. Karnaugh, M.: The Map Method for Synthesis of Combinational Logic Circuits. Transactions of the American Institute of Electrical Engineers, Part I, 72(9):593-599, November 1953.
7. Harel, D. et al.: Statemate: A Working Environment for the Development of Complex Reactive Systems. IEEE Transactions on Software Engineering 16(4), 1990.
8. Turner, K.J.: A LOTOS-based Development Strategy. Formal Description Techniques II, pages 117-132, 1990.
9. Ellsberger, J., Hogrefe, D. and Sarma, A.: SDL: Formal Object-oriented Language for Communicating Systems. Prentice Hall Europe, Hemel Hempstead, 1997.
10. Berry, G. and Gonthier, G.: The ESTEREL Synchronous Programming Language: Design, Semantics, Implementation. Science of Computer Programming 19:87-152, 1992.
11. Ardis, M.A.: Lessons from Using Basic LOTOS. Proceedings of the International Conference on Software Engineering, May 1994.
12. Turner, K.J.: Modular Feature Specification. Proceedings of MICON, August 2001.
13. McMillan, K.L.: Symbolic Model Checking. Kluwer Academic Publishers, 1993.
14. Holzmann, G.J.: The SPIN Model Checker: Primer and Reference Manual. Addison-Wesley Professional, September 2003.
15. Plath, M. and Ryan, M.D.: A Feature Construct for Promela. SPIN'98 – Proceedings of the 4th SPIN Workshop, November 1998.
16. Plath, M. and Ryan, M.D.: Feature Integration Using a Feature Construct. Science of Computer Programming, January 2001.
17. Elrad, T., Filman, R.E. and Bader, A.: Aspect-Oriented Programming. Communications of the ACM 44(10), October 2001.
18. Lennox, J., Wu, X. and Schulzrinne, H.: Call Processing Language (CPL): A Language for User Control of Internet Telephony Services. IETF RFC 3880, October 2004.
19. Batory, D., Sarvela, J.N. and Rauschmayer, A.: Scaling Step-Wise Refinement. Proceedings of the International Conference on Software Engineering (ICSE 2003), Portland, Oregon, May 2003.
20. Murphy, G.C., Walker, R.J. and Baniassad, L.A.: Evaluating Emerging Software Development Technologies: Lessons Learned from Assessing Aspect-Oriented Programming. IEEE Transactions on Software Engineering 25(4), 1999.
21. Filho, F., Rubira, C. and Garcia, A.: A Quantitative Study on the Aspectization of Exception Handling. Proceedings of the ECOOP Workshop on Exception Handling in OO Systems, July 2005.
22. Lippert, M. and Lopes, C.V.: A Study on Exception Detection and Handling Using Aspect-Oriented Programming. Proceedings of the International Conference on Software Engineering (ICSE 2000), 2000.
23. Prehofer, C.: An Object Oriented Approach to Feature Interaction. Proceedings of the Feature Interaction Workshop, IOS Press, 1997.
24. Liu, J., Batory, D. and Nedunuri, S.: Modeling Interactions in Feature Oriented Software Design. Proceedings of Feature Interactions in Telecommunications and Software Systems VIII, June 2005.
25. World Wide Web Consortium: Voice Browser Activity, 2005. http://www.w3.org/Voice
26. Wu, X. and Schulzrinne, H.: Handling Feature Interaction in the Language for End System Services. Computer Networks 51, February 2007.
27. Bussler, C.: The Fractal Nature of Web Services. IEEE Computer, March 2007.
28. http://skype.com
29. Chen, Y.L., Lafortune, S. and Lin, F.: Resolving Feature Interactions Using Modular Supervisory Control with Priorities. Proceedings of Feature Interactions in Telecommunications Systems, IOS Press, Amsterdam, 1997.
30. Samborski, D.: Stack Service Model. In Gilmore, S. and Ryan, M., editors, Language Constructs for Describing Features, Springer-Verlag London, 2000/2001.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Composing Features by Managing Inconsistent Requirements

Robin LANEY, Thein Than TUN, Michael JACKSON, Bashar NUSEIBEH
Centre for Research in Computing, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK

Abstract. One approach to system development is to decompose the requirements into features and specify the individual features before composing them. A major limitation of deferring feature composition is that inconsistency between the solutions to individual features may not be uncovered early in the development, leading to unwanted feature interactions. Syntactic inconsistencies arising from the way software artefacts are described can be addressed by the use of explicit, shared, domain knowledge. However, behavioural inconsistencies are more challenging: they may occur within the requirements associated with two or more features as well as at the level of individual features. Whilst approaches exist that address behavioural inconsistencies at design time, these are over-restrictive in ruling out all possible conflicts and may weaken the requirements further than is desirable. In this paper, we present a lightweight approach to dealing with behavioural inconsistencies at run-time. Requirement Composition operators are introduced that specify a run-time prioritisation to be used on occurrence of a feature interaction. This prioritisation can be static or dynamic. Dynamic prioritisation favours some requirement according to some runtime criterion, for example, the extent to which it is already generating behaviour.

Keywords. Feature Interaction, Pervasive Software, Event Calculus, Problem Frames
1. Introduction

Given a good description of requirements for a feature-rich system, there are advantages, including scalability and traceability [3,14,27,28,19], in solving the feature sub-problems in isolation before composing the partial solutions to give a complete system. Deferring the composition problem supports a better separation of concerns between requirements analysis and the design phase, and is in line with an iterative approach to development [22,12]. The composition problem also raises a number of questions: Are the requirements to be composed consistent with each other? Do the specifications to be composed share assumptions about their environment? Do they embody consistent models? How do we deal with interference between the effects of features on
the system’s environment? We focus on the first and last of these questions, but in doing so address the others to varying degrees. The contribution of this paper is an approach to resolve, at runtime, undesirable feature interactions arising from inconsistent requirements. Runtime resolution techniques have many advantages over compile time techniques, including minimal weakening of the requirements, and allowing features developed by disparate developers to plug and play [17,4]. Our approach synthesises two complementary techniques: (i) a form of temporal logic called the Event Calculus [18,26], and (ii) a way to compose problems and solutions called Composition Frames [19]. We use a version of the Event Calculus [18,26] to express requirements and domain properties, and systematically derive feature specifications in a way that makes inconsistencies more explicit. We add a Prohibit(...) predicate to the Event Calculus, and use it in feature specifications to prohibit events over specific periods of time, facilitating non-intrusive composition of features. Composition Frames, introduced in [19], are used to mediate between the features at runtime, and provide an argument showing that they satisfy a family of weakened conjunction requirements. The paper is organised as follows. In Section 2, we present a motivating example whilst giving a brief introduction to the Problem Frames approach and also the Event Calculus. In Section 3, we begin by showing how to express requirements and domain properties in the Event Calculus before deriving machine specifications. We then consider the semantics of requirements composition and discuss Composition Frames as a way of reasoning about the relationship between composed requirements and composed specifications in Section 4. In Section 5, we compare our work with other approaches. In Section 6, we discuss some lessons about the composition of requirements, of solutions, and their relationship. 
We conclude in Section 7 and present future work.
2. Background

In this section we introduce the problem frames notation and philosophy, and present an example system that will be used in Sections 3 and 4 to illustrate our technique. We then give an introduction to the Event Calculus and motivate its choice as a tool for addressing some composition concerns.

2.1. Introductory Example

Throughout this paper we use an example that involves developing the specification for a simple "smart home" application [17]. To facilitate convenient living, household appliances such as air conditioners, security alarms and windows are increasingly connected to home digital networks. The functioning of these appliances is controlled by complex software systems known as smart home applications. For example, a security feature may switch the lights of the home on and off when the homeowners are away, to give the impression that the house is occupied. The specific example discussed in this paper has two features, and is mainly concerned with the control of a motorized awning window, illustrated below.
2.1.1. Requirements for Features

The requirement for one feature is concerned with the house security (SR), whilst the requirement for the other feature is concerned with the climate control and energy efficiency of the house (TR). Informal descriptions of these requirements are given below. SR: "Keep the awning window shut at night." TR: "If it is hot indoors (i.e. hotter than the required temperature) and cold outside (i.e. colder than the temperature indoors), open the awning window." Analyzing a requirement, such as SR or TR, using the Problem Frames approach involves identifying the problem context and matching it to one of several well-known diagram forms. Starting with the SR requirement, Fig. 1 shows the problem diagram for the security feature. A problem diagram such as this shows the relationship between descriptions of (i) a machine domain, denoted by a rectangle with two vertical stripes, (ii) problem world domains, denoted by plain rectangles, and (iii) a requirement, denoted by a dotted oval. The machine domain implements a solution in order to satisfy the SR requirement. In our discussions, we may refer to a machine as a feature specification or just a specification. The problem domains are entities in the world that the machine must interact with, such as Time Panel and Window in Fig. 1, in satisfying the requirement, in this case SR. The thick lines are called phenomena (a and b), representing shared states and events between the domains involved. Dotted lines are requirement phenomena (a and c). Broadly speaking, SR in Fig. 1 says that if the time panel indicates night time, we expect the window to be shut. The problem diagram for the climate control and energy efficiency feature in Fig. 2 is similar. Again, broadly speaking, the requirement is that if the desired temperature and the indoors and outdoors temperatures are in a certain relationship, we expect the window to be opened.
Having informally described the requirements, we now examine the properties of the problem and machine domains.
a:TiP! {NBegin, NEnd} b:SF! {tiltIn, tiltOut} c:W! {WindowShut, WindowOpen}
Figure 1. Problem Diagram for the security feature
e:TeP! {NiceTemp} f:OTS! {OutTemp} g:ITS! {InTemp} b:CCF! {tiltIn, tiltOut} c:W! {WindowShut, WindowOpen}
Figure 2. Problem Diagram for the climate control and energy efficiency feature
2.1.2. Problem Domains

In Fig. 1, when the time falls between NBegin and NEnd of the Time Panel (TiP) domain, it is night. The prefix TiP! specifies that the values of NBegin and NEnd are controlled by Time Panel. The awning window (W), in both Fig. 1 and Fig. 2, has the following properties. When the window sash has a zero degree angle on the window frame, the window is fully shut (WindowShut is true). When the window sash has a twenty degree angle on the window frame, the window is fully open (WindowOpen is true). When the event tiltOut is fired, the window sash starts to tilt out until either the window is fully open, or tiltIn is fired. Similarly, when the event tiltIn is fired, the window sash starts to tilt in until either the window is fully shut, or tiltOut is fired. OutTemp is the temperature outdoors and InTemp is the temperature indoors. NiceTemp of the Temperature Panel (TeP) domain indicates the temperature level desired by the house owner.

2.1.3. Machine Domains

When describing the machines individually, it is necessary to ensure that the specification for each feature's machine, along with the descriptions of the appropriate domains, is sufficient to establish that each requirement is satisfied. The obligation to demonstrate this is known as the frame concern, and the case that it holds must be made either formally or informally depending on context. In Section 3.2, we discuss a way to do this based on deriving the feature specifications from formal descriptions of the requirements and the window domain. Each of these individual features in isolation can satisfy its own requirement. However, they will conflict whenever the TR machine needs to open the window at night time to adjust the indoors temperature by admitting cooler air, and the SR machine needs to keep the window closed. This conflict is dynamic, in the sense that it will only occur in certain circumstances.
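The window domain just described can be sketched as a small timed simulation. This is illustrative only, not from the paper: the one-degree-per-tick rate is invented, so the 20 ticks to fully open stand in for the paper's suffopentime/suffshuttime durations.

```python
# Illustrative sketch of the awning window domain: 0 degrees is
# WindowShut, 20 degrees is WindowOpen, matching the domain description.
# The sash tilts one (invented) degree per time unit.

class Window:
    def __init__(self):
        self.angle = 0        # fully shut initially
        self.direction = 0    # +1 tilting out, -1 tilting in, 0 at rest

    def tilt_out(self):
        self.direction = +1   # tiltOut event

    def tilt_in(self):
        self.direction = -1   # tiltIn event

    def tick(self):
        self.angle = min(20, max(0, self.angle + self.direction))
        if self.angle in (0, 20):
            self.direction = 0   # sash stops at either end

    @property
    def shut(self):
        return self.angle == 0

    @property
    def open(self):
        return self.angle == 20

w = Window()
w.tilt_out()
for _ in range(20):
    w.tick()
print(w.open)            # True: fully open after sufficient time
w.tilt_in()
for _ in range(5):
    w.tick()
print(w.shut, w.angle)   # False 15: tiltIn interrupted partway
```

Note how the sash responds robustly to redundant events (tiltIn while shut leaves it shut), a property the paper relies on in Section 3.2.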
Our refinement of requirements into specifications in Section 3.2 highlights this conflict by identifying the events whose occurrence at certain times may lead to a failure to satisfy some requirement. Therefore, a significant strength of this approach is that it identifies the ways in which a feature could interact with other features in terms of event occurrences, without necessarily knowing what those other features are. Having derived the specification for each feature, we must compose the specifications in
Table 1. Some Event Calculus Predicates

Formula                    | Meaning
Initiates(α, β, τ)         | Fluent β starts to hold after action α at time τ
Terminates(α, β, τ)        | Fluent β ceases to hold after action α at time τ
Initially(β)               | Fluent β holds from time 0
τ1 < τ2                    | Time point τ1 is before time point τ2
Happens(α, τ)              | Action α occurs at time τ
HoldsAt(β, τ)              | Fluent β holds at time τ
Clipped(τ1, β, τ2)         | Fluent β is terminated between times τ1 and τ2
Trajectory(β1, τ, β2, δ)   | If fluent β1 is initiated at time τ then fluent β2 becomes true at time τ + δ
a way that resolves this conflict at run time. We propose such a technique in Section 4.

2.2. The Event Calculus

The Event Calculus [26], first introduced in [18], is a logic system grounded in the predicate calculus. The calculus relates events and event sequences to 'fluents', which denote states of a system. It has been used as a way of permitting inconsistency in reasoning about requirements [25]. In our approach to this example problem we use event sequences to describe feature machine behaviours; fluents to describe problem domain states; and the rules by which events cause state changes to describe the given properties of the problem domains. Requirements are described as combinations of fluents capturing the required states of the problem world. We work with a version of the calculus based on Shanahan [26] that is intended to be simple whilst fully supporting the contribution of Section 3. Since the machines for individual features are executed sequentially, the Event Calculus does not have to deal with concurrent events; concurrency that arises from the composition of multiple features is handled by the composition controller introduced in Section 4. Table 1, also based on Shanahan [26], gives the meanings of the elementary predicates of the calculus. The EC rules in Fig. 3, taken from Shanahan [26], state that the fluent β holds if: it held initially and nothing has happened since to stop it holding (EC1); the event α has happened to make the fluent hold and nothing has happened since to stop it holding (EC2); or the event α happened and caused some fluent β1 to hold, which in turn, after a period of time δ, caused this fluent β to hold, and again nothing has happened since to stop the second fluent holding (EC3). Finally, the rule DEF1 says that the fluent β is clipped between τ1 and τ2 if and only if there is an event α that happens between τ1 and τ2 and terminates the fluent β.
Following Shanahan, we assume that all variables are universally quantified except where otherwise shown. We again follow Shanahan in adopting the common sense law of inertia, meaning that fluents do not change value unless something happens to cause this. That is, fluents change only in accordance with the meta-rules EC1, EC2 and EC3.
HoldsAt(β, τ1) ← Initially(β) ∧ ¬Clipped(0, β, τ1)   (EC1)

HoldsAt(β, τ2) ← Happens(α, τ1) ∧ Initiates(α, β, τ1) ∧ τ1 < τ2 ∧ ¬Clipped(τ1, β, τ2)   (EC2)

HoldsAt(β, τ3) ← Happens(α, τ1) ∧ Initiates(α, β1, τ1) ∧ Trajectory(β1, τ1, β, δ) ∧ τ2 = τ1 + δ ∧ τ1 < τ2 ≤ τ3 ∧ ¬Clipped(τ1, β1, τ2) ∧ ¬Clipped(τ2, β, τ3)   (EC3)

Clipped(τ1, β, τ2) ↔ ∃α, τ [Happens(α, τ) ∧ τ1 < τ < τ2 ∧ Terminates(α, β, τ)]   (DEF1)
Figure 3. Event Calculus Meta-rules
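As an illustration of how EC2 and DEF1 interact, the following sketch (not from the paper; the lamp domain, function names and encodings are all invented) evaluates HoldsAt for instantaneous initiations, omitting Initially and Trajectory.

```python
# Minimal, illustrative Event Calculus evaluator covering only EC2 and
# DEF1: a fluent holds at t if some earlier event initiated it and no
# intervening event terminated (clipped) it.

def holds_at(fluent, t, happens, initiates, terminates):
    """happens: list of (action, time); initiates/terminates: action -> set of fluents."""
    for (a, t1) in happens:
        # EC2: an earlier event initiated the fluent...
        if t1 < t and fluent in initiates.get(a, ()):
            # ...and DEF1: it was not clipped strictly between t1 and t.
            clipped = any(t1 < tau < t and fluent in terminates.get(b, ())
                          for (b, tau) in happens)
            if not clipped:
                return True
    return False

# Toy domain: a lamp lit by 'on' at time 1 and terminated by 'off' at time 5.
happens = [("on", 1), ("off", 5)]
print(holds_at("Lit", 3, happens, {"on": {"Lit"}}, {"off": {"Lit"}}))   # True
print(holds_at("Lit", 7, happens, {"on": {"Lit"}}, {"off": {"Lit"}}))   # False
```

The common sense law of inertia appears here as the absence of any other way for a fluent's value to change between events.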
3. Formalising Feature Specifications

We now address the derivation of feature specifications to meet the requirements in Fig. 1 and Fig. 2. In Section 3.1, we formalize our requirements and the description of the window domain by translating them into the language of the Event Calculus described in the previous section. We then derive feature specifications in Section 3.2 by refining our requirements using the window domain semantics. In this way, we establish the argument for the frame concern.

3.1. Formalizing Requirements and Domains

The natural language specifications of SR and TR, described in Section 2.1, can be formalized as follows:

HoldsAt(IsIn(t, NBegin, NEnd), t) → HoldsAt(WindowShut, t)   (SR)

HoldsAt(InTemp > NiceTemp + 1, t) ∧ HoldsAt(InTemp > OutTemp + 1, t) → HoldsAt(WindowOpen, t)   (TR)
The definition of SR says that if the current time is in the range of NBegin and NEnd, the machine should make sure that the window is shut. The definition of TR says that if the required temperature is lower than the temperature indoors by more than one unit, and the outside temperature is lower than the temperature indoors by more than one unit, the machine should make the window fully open.
Initiates(tiltOut, TiltingOut, τ)   (D1)
Trajectory(TiltingOut, τ, WindowOpen, suffopentime)   (D2)
Initiates(tiltIn, TiltingIn, τ)   (D3)
Trajectory(TiltingIn, τ, WindowShut, suffshuttime)   (D4)
Terminates(tiltOut, TiltingIn, τ)   (D5)
Terminates(tiltOut, WindowShut, τ)   (D6)
Terminates(tiltIn, TiltingOut, τ)   (D7)
Terminates(tiltIn, WindowOpen, τ)   (D8)

Figure 4. Domain Descriptions in EC
The natural language specification of the window, described in Section 2.1, can be formalized as shown in Fig. 4. In other words, if the window is tilted out, it starts tilting out (D1) until the window is fully open (D2) or the window is tilted in (D7). Similarly, if the window is tilted in, it starts tilting in (D3) until the window is fully shut (D4) or the window is tilted out (D5). When the window is tilted out, it is no longer shut (D6), and when it is tilted in, it is no longer open (D8).

3.2. Deriving Feature Specifications

The Event Calculus provides three options for dealing with a fluent expressed using HoldsAt, namely EC1, EC2 and EC3. Since no window event shuts or opens the window instantaneously, a feature specification based on EC2 does not apply; we therefore focus on EC1 and EC3 only. We begin with a refinement based on EC1, which deals with the case where the window was initially shut and nothing has changed. In our refinement, 'initially' or time point 0 means the time at which the system containing all composed features is turned on.

(State the requirement)
HoldsAt(IsIn(t, NBegin, NEnd), t) → HoldsAt(WindowShut, t)

(Refine the conclusion by applying EC1)
Initially(WindowShut) ∧ ¬Clipped(0, WindowShut, t)

(Apply DEF1 to the second sub-clause)
Initially(WindowShut) ∧ ¬∃a1, t1 · [Happens(a1, t1) ∧ Terminates(a1, WindowShut, t1) ∧ 0 < t1 < t]

(Unify the Terminates sub-clause with D6)

Initially(WindowShut) ∧ ¬∃t1 · [Happens(tiltOut, t1) ∧ Terminates(tiltOut, WindowShut, t1) ∧ 0 < t1 < t]

(Remove the Terminates sub-clause because it is an axiom)

Initially(WindowShut) ∧ ¬∃t1 · [Happens(tiltOut, t1) ∧ 0 < t1 < t]

At this stage, we have a sub-clause whose role is to prevent a certain event from happening over a given time period. In order to simplify our feature specifications, we introduce into our Event Calculus the new predicate Prohibit(α, τ1, τ2), with the meaning that the event α should not occur between times τ1 and τ2. More formally,

Prohibit(α, τ1, τ2) ≡ ¬∃τ · [Happens(α, τ) ∧ τ1 < τ < τ2]

The refinement can then be completed to give the following partial specification for SR:

HoldsAt(IsIn(t, NBegin, NEnd), t) → Initially(WindowShut) ∧ Prohibit(tiltOut, 0, t)   (SFa)
This partial specification (SFa) says that if the window is shut initially (time 0), the system should prohibit the tiltOut event from time 0 until time t in order to keep the window shut at time t. The second refinement, based on EC3, deals with the significant case where the machine needs to tilt in the window sufficiently before night falls (SFb). For space reasons, we only show the refinement result.

HoldsAt(IsIn(t, NBegin, NEnd), t) → Happens(tiltIn, t1) ∧ t2 = t1 + suffshuttime ∧ t1 < t2 ≤ t ∧ Prohibit(tiltOut, t1, t)   (SFb)
The specification ensures that the window is shut when the night falls and remains shut during the night. Since the window is robust in its response to, for instance, the tiltIn event when it is already shut (it remains shut), or when it is already tilting in (it keeps tilting in), these cases are covered by SFb. Therefore, we obtain the full specification for the security feature from a disjunction of the conclusions in SFa and SFb as shown below: HoldsAt(IsIn(t, N Begin, N End), t) → ((Initially(W indowShut) ∧ P rohibit(tiltOut, 0, t)) ∨(Happens(tiltIn, t1) ∧ t2 = t1 + suf f shuttime∧ t1 < t2 ≤ t ∧ P rohibit(tiltOut, t1, t)))
(SF)
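To make the semantics of Prohibit and the two branches of SF concrete, the specification can be checked against a finite event trace. The following Python sketch is illustrative only; the trace representation, the constant SUFF_SHUT_TIME and the function names are our assumptions, not part of the paper's formalism:

```python
SUFF_SHUT_TIME = 3  # assumed time the machine needs to tilt the window fully in

def prohibit_holds(trace, event, t1, t2):
    """Prohibit(event, t1, t2): event occurs at no time strictly between t1 and t2."""
    return not any(e == event and t1 < t < t2 for (e, t) in trace)

def sf_satisfied(trace, t, initially_shut):
    """SF: the window is shut at night-time t, via branch SFa or SFb."""
    # SFa: shut from time 0 and tiltOut prohibited throughout
    sfa = initially_shut and prohibit_holds(trace, "tiltOut", 0, t)
    # SFb: tiltIn fired early enough (t1 + suffshuttime <= t) and no tiltOut since
    sfb = any(e == "tiltIn" and t1 + SUFF_SHUT_TIME <= t
              and prohibit_holds(trace, "tiltOut", t1, t)
              for (e, t1) in trace)
    return sfa or sfb

trace = [("tiltOut", 2), ("tiltIn", 5)]
print(sf_satisfied(trace, 10, initially_shut=True))  # True: branch SFb holds
```

Here the SFa branch fails (a tiltOut occurred at time 2), but the SFb branch succeeds because tiltIn at time 5 leaves enough time before time 10 and no tiltOut follows it.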
R. Laney et al. / Composing Features by Managing Inconsistent Requirements
137
Applying the same refinement technique, two partial specifications for TR are derived. The first partial specification deals with the case where the window was initially open and nothing has changed, whilst the second partial specification deals with the significant case where the machine needs to tilt out the window sufficiently before the temperature difference becomes large. Again, from these two partial specifications, we obtain the following full specification for the climate control and energy efficiency feature.

HoldsAt(InTemp > NiceTemp + 1, t) ∧ HoldsAt(InTemp > OutTemp + 1, t) → ((Initially(WindowOpen) ∧ Prohibit(tiltIn, 0, t)) ∨ (Happens(tiltOut, t1) ∧ t2 = t1 + suffopentime ∧ t1 < t2 ≤ t ∧ Prohibit(tiltIn, t1, t)))
(CCF)
4. Composing Features

Having derived the specifications for individual features, we now turn to the question of how to compose requirements and feature specifications, using Composition Frames. Since, as Section 2.1 argued, the requirements of the features are not fully consistent, it is not possible to meet the conjunction of the SR and TR requirements completely. We will see that the use of the Event Calculus in deriving feature specifications in Section 3, and the introduction of the Prohibit(α, τ1, τ2) predicate in particular, now gives us a more succinct approach to reasoning about the composition controller semantics that we require. Using a family of weakened conjunction operators adapted from [19], we formulate the following ways of combining two general requirements R1 and R2, expressed in terms of control over domains. For the window example, R1 and R2 can be regarded as SR and TR respectively.

• Option 1: No Control. Let R1 ∧{any} R2 be the requirement that R1 and R2 should each be met at times when they are not in conflict; there is no requirement that any conflicts be resolved, and if there are times when conflicts occur, any emergent behaviour is acceptable. For example, the window might sometimes oscillate in a partly open position.

• Option 2: Exclusion. Let R1 ∧{control} R2 be the requirement that both R1 and R2 should hold at all times, except that while the system is actively attempting to satisfy R1, R2 need not be satisfied, and vice versa. The exclusion here is symmetrical. For example, SR might not be satisfied while TR is keeping the window open, and TR might not be satisfied while SR is keeping the window shut.

• Option 3: Exclusion with Priority. Let R1 ∧{R1} R2 be the requirement that both R1 and R2 should hold at all times, except that while the system is attempting to satisfy R1, R2 need not be satisfied. The exclusion here is asymmetrical, in favour of R1.

• Option 4: Exclusion & Fine-Grain Priority. Let R1 ∧{important,R1} R2 be the requirement that R1 ∧{R1} R2 holds, except that any sub-requirement associated with the phenomenon important should be given top priority.
[Figure 5 interface phenomena: a: TiP! {NBegin, NEnd}; a’: CC! {NBegin, NEnd}; e: TeP! {NiceTemp}; e’: CC! {NiceTemp}; f: OTS! {OutTemp}; f’: CC! {OutTemp}; g: ITS! {InTemp}; g’: CC! {InTemp}; b: CC! {tiltIn, tiltOut}; b’: SF! {tiltIn, tiltOut, Prohibit(...)}; b”: CCF! {tiltIn, tiltOut, Prohibit(...)}; c: W! {WindowShut, WindowOpen}]
Figure 5. SR and TR fitted to the Composition Frame
Fig. 5 shows how SR and TR may be recomposed with the Composition Frame. This diagram is the product of a simple syntactic transformation involving two steps. First, we introduced a new machine, the Composition Controller, between the machine domain Security Feature (SF) and the world domains (Time Panel and Window) in Fig. 1. The original machine domain (SF) became a world domain in the new diagram, and the phenomena a and b were split by the insertion of the new machine. Now, Time Panel, for example, reports to the new machine (phenomena a, prefixed by the Time Panel domain TiP) and the new machine may pass them on to the SF domain (phenomena a’, prefixed by the composition controller CC). The same transformation was also applied to the problem diagram in Fig. 2. Second, the resulting two diagrams were merged to give the diagram in Fig. 5. We also added the Prohibit(α, τ1, τ2) events to the phenomena b’ and b”. These prohibit events will be generated on the basis of the Prohibit(α, τ1, τ2) predicates in our feature specification. The composition controller will interpret them, possibly acting on them and possibly ignoring them, in order to resolve conflicts.

We will now specify four versions of the composition controller in Fig. 5 that meet the composition requirement RC as described by each of the conjunction operators (Options 1-4). To choose a resolution of the requirement conflict between SR and TR is to choose the appropriate composition controller.

4.1. Composition Controller for SR ∧{any} TR

The semantics of the first type of composition operator is straightforward. We use a simple formalism to describe the semantics of the controller, in which → should be read as stating that the composition controller generates the event on the right when the event on the left happens. Definitions (1 to 4) in Fig. 6 say that the events from Time Panel, Temperature Panel, Out Temp Sensor and In Temp Sensor are passed to the SF and CCF domains respectively without prohibition. Similarly, in (5) and (6), the events from
a:e → a’:e   (1)
e:e → e’:e   (2)
f:e → f’:e   (3)
g:e → g’:e   (4)
b’:e → b:e   (5)
b”:e → b:e   (6)

Figure 6. The semantics of SR ∧{any} TR
SF and CCF are propagated to the window without prohibition. That is, all of the prohibit events transmitted on the interfaces b’ and b” to the composition controller are ignored. Since the controller applies no prohibition to events generated by the domains, in particular by the SF and CCF domains, any emergent behaviour of the window is possible. For example, if SF has generated tiltIn to shut the window, and as a result the window is closing, and in the meantime the CCF domain generates the tiltOut event to open the window, the composition controller will allow CCF to open the window.

In order to address the other composition operators, it is necessary for the composition controller to remember and act on some of the prohibit events it has received. For this purpose, an additional, but quite minimal, piece of machinery is required. Let P be a set that holds tuples of the form (e, t1, t2, m), each representing an assertion that event e is prohibited by the specification of machine m between times t1 and t2. We now allow the → to be guarded by an optional predicate (enclosed in square brackets following the first operand). In the following specifications for the composition controllers, we assume that no machine can prohibit another machine from issuing a prohibit event.

4.2. Composition Controller for SR ∧{control} TR

The controller semantics for dealing with events generated by world domains (1 to 4) applies to this controller. Definitions (5.a to 5.d) and (6.a to 6.d) replace (5) and (6) respectively. Note that t in the expression t1 ≤ t ≤ t2 in Fig. 7 denotes the current time. Controller semantics (5.a) says that when the domain SF issues a prohibition on the event e between t1 and t2, the composition controller records the assertion by adding a tuple to P. When SF issues any other event, the controller passes on the event to the window domain only if the event has not been prohibited by another machine for that time (5.b); otherwise the event is ignored (5.c). If self-prohibitions happen, an error is generated (5.d). (6.a to 6.d) describe the controller dealing with the events from CCF in a similar fashion. In effect, this controller gives the SF and CCF domains mutually exclusive control of the window domain over a period of time.

4.3. Composition Controller for SR ∧{SR} TR

The semantics of this controller differs from the previous one in one respect: since events from the prioritized machine SF should not be prohibited, (5.b to 5.d) are not necessary. (5.a) is needed in order that SF can prohibit events and (5) is
Definitions 1 to 4 and the following:

b’:prohibit(e, t1, t2) → insert((e, t1, t2, ‘SF’), P)   (5.a)
b’:e [∀ t1, t2, m · (t1 ≤ t ≤ t2 ∧ m ≠ ‘SF’) → (e, t1, t2, m) ∉ P] → b:e   (5.b)
b’:e [∃ t1, t2, m · t1 ≤ t ≤ t2 ∧ m ≠ ‘SF’ ∧ (e, t1, t2, m) ∈ P] → ignore   (5.c)
b’:e [∃ t1, t2 · t1 ≤ t ≤ t2 ∧ (e, t1, t2, ‘SF’) ∈ P] → error   (5.d)
b”:prohibit(e, t1, t2) → insert((e, t1, t2, ‘CCF’), P)   (6.a)
b”:e [∀ t1, t2, m · (t1 ≤ t ≤ t2 ∧ m ≠ ‘CCF’) → (e, t1, t2, m) ∉ P] → b:e   (6.b)
b”:e [∃ t1, t2, m · t1 ≤ t ≤ t2 ∧ m ≠ ‘CCF’ ∧ (e, t1, t2, m) ∈ P] → ignore   (6.c)
b”:e [∃ t1, t2 · t1 ≤ t ≤ t2 ∧ (e, t1, t2, ‘CCF’) ∈ P] → error   (6.d)

Figure 7. The semantics of SR ∧{control} TR
added in order that SF events are passed on to the window domain unprohibited, thus giving SF events precedence over events from CCF. CCF events are handled in the same way as before (6.a to 6.d).

4.4. Composition Controller for SR ∧{emgOpenWindow,SR} TR

Assume that SF and CCF can open the window in emergency situations (for example, if a fire is detected in the house) by firing the emgOpenWindow event. Again, the semantics of this controller differs from the previous one in one respect: since the prioritized event, emgOpenWindow, from the CCF machine should not be prohibited, (6.e) is added. (5) already allows the emgOpenWindow event from the SF machine to pass unprohibited.

b”:emgOpenWindow → b:emgOpenWindow   (6.e)
It is easy to see that there is nothing in the above composition controller semantics that refers directly to the machine specifications or requirements of the sub-problems. If we treat Fig. 5 as a composition pattern, then the controller we have specified is actually generic, and can be applied to any requirements R1 and R2 that can be specified using the Event Calculus of Section 2.2.
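As a concrete illustration, the guarded rules of Fig. 7 can be sketched in Python. This is our own sketch, not the authors' implementation; the class and method names (CompositionController, on_prohibit, on_event) are assumptions:

```python
class CompositionController:
    """Sketch of the SR ∧{control} TR controller (rules 5.a-5.d / 6.a-6.d)."""

    def __init__(self):
        self.P = set()  # prohibition assertions: (event, t1, t2, machine)

    def on_prohibit(self, machine, event, t1, t2):
        # (5.a)/(6.a): record the prohibition in P
        self.P.add((event, t1, t2, machine))

    def on_event(self, machine, event, now):
        """Decide the fate of a non-prohibit event from SF or CCF."""
        active = {m for (e, t1, t2, m) in self.P
                  if e == event and t1 <= now <= t2}
        if machine in active:
            return "error"    # (5.d)/(6.d): self-prohibition
        if active:
            return "ignore"   # (5.c)/(6.c): prohibited by another machine
        return "pass"         # (5.b)/(6.b): forward to the window (b:e)

cc = CompositionController()
cc.on_prohibit("SF", "tiltOut", 0, 20)    # SF keeps the window shut overnight
print(cc.on_event("CCF", "tiltOut", 10))  # ignore: CCF may not open the window
print(cc.on_event("SF", "tiltIn", 10))    # pass
```

The ∧{R1} priority variant of Section 4.3 amounts to returning "pass" unconditionally for non-prohibit events from the prioritized machine, and the fine-grain variant of Section 4.4 additionally passes the prioritized event regardless of origin.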
5. Related Work

Our work is related, first and foremost, to the feature interaction problem, common in the field of telecommunications [16,27], as well as other domains such as
email [13]. In particular, it is found in application domains where feature interactions are manifest in the environment rather than inside the software [17]. While less ambitious about the extent to which requirements can be composed, our work is also less domain-specific. In [28], work is presented on the conjunction of specifications as composition in a way that addresses multiple specification languages, but the emphasis is less on the relationship between requirements and specifications. Nakamura et al. [21] propose an object-oriented approach to detecting feature interactions in services of home appliances. However, their approach uses a design-time, rather than run-time, technique.

The whole area of inconsistency management offers a variety of contributions to dealing with inconsistencies in specifications [9,10,11]. Robinson [24], in particular, reviews a variety of techniques for requirements interaction management, and Nuseibeh et al. [23] discuss a range of ways of acting in the presence of inconsistency. None of these approaches addresses the decomposition and recomposition of requirements to facilitate problem solving.

A number of formal approaches exist where emergent behaviours due to composition can be identified and controlled [1,7]. Our approach differs from these in that we identify how requirements interact and remove non-deterministic behaviour by imposing priorities over the requirements set. In [8], a run-time technique for monitoring requirements satisfaction is presented. This approach is taken further in [6], where requirements are monitored for violations and system behaviour is dynamically adapted, whilst making acceptable changes to the requirements to meet higher-level goals. This requires that alternative system designs be represented at run-time. One view of our approach is that it involves monitoring when a requirement leads to a machine taking control (including event prohibition) and taking appropriate action.
Our approach differs further, in that it is more lightweight: we do not need to maintain alternative system designs at run-time. In [15] we sketched some options in composing a sluice gate control machine with a safety machine in order to address safety concerns. That was in the context of a more philosophical discussion of composition and decomposition. The work presented in this paper differs in that we embody the composition as a separate extra machine. This gives us the potential to deal with a wider range of compositions. The Event Calculus has previously been used in software development for reasoning about evolving specifications [5,25] and distributed systems policy specifications [2]. Our work should be seen as complementary to such approaches in that it will allow inconsistencies to be resolved at run-time. Finally, our approach is strongly related to the mutual exclusion problem of concurrent resource usage, but with an explicit emphasis on requirements satisfaction.
6. Discussion

In solution-space terms, composition controllers correspond to the notion of an architectural connector [1]. This allows us to move backwards and forwards between architectural and requirements perspectives, using the Composition Frame as a reasoning tool. We now consider how our work can be generalized, alternative composition semantics, and the significance of the work.

It is well understood that in producing a machine to solve a real-world problem there is often a need to implement an analogic model [14] of at least part of the problem domain. Arriving at a conceptual model that can subsequently be implemented is often difficult in itself. In the case of the SF and CCF machines, the models are very simple. This is partly because of the domain assumption that the window is robust. If the window were less robust, it would be necessary to model the position of the window explicitly. Composing machines containing such models can be complex, because the model in one machine may become inconsistent with the world, due to the world being changed by another machine.

It is not difficult to see how the Composition Frame can be generalized to any two machines with a common domain under their control. In the specification we used the notion of a particular machine being in control of the window, including passive partial control specified using the Prohibit(α, τ1, τ2) predicate. The same technique should be usable with any two machines. Although our Composition Frame in this example deals with two problems fitting a type of problem called the Required Behaviour Frame, it is easy to see that it would generalize to composing two problems fitting other basic Problem Frames [14] in a similar fashion. For example, in [20] we demonstrate how to compose two problems fitting the Required Behaviour and Commanded Behaviour frames.

Whilst much work has been done on protocols for controlling mutual access to resources in program code, less attention seems to have been paid to the problem of systematically gaining control over domains in the real world [14].
Working explicitly with the notion of a machine being in control at certain times, and the use of a temporal semantics, allows us to express the concerns at the requirements stage. In particular, our requirements composition operators make the issue of control explicit.

7. Conclusions and Future Work

We have shown how, by expressing requirements and domain properties in a temporal logic, we can formally derive feature specifications. In itself this refinement-style approach is not new. However, we have placed it in the context of a development process based on Problem Frames. The value of this is that, in making the properties of the application domain explicit, we increase our confidence that the specified machine will meet the system requirements. Furthermore, by adding the Prohibit(α, τ1, τ2) predicate to the Event Calculus and making use of it in machine specifications, we have obtained an important new element in our toolbox for composing solutions to feature subproblems. The composition controller needs only to be parameterized, and the composition is done non-intrusively, in the sense that we have made no changes to the specifications of the machines being composed. We have illustrated this through the application of our approach to an awning window control system in a smart home application.
We have also shown how to combine two inconsistent requirements in terms of the operators given in Section 4. The Composition Frame allowed us to reason about the relationship between sub-solutions and sub-requirements. We were able to specify composition at a requirements level rather than solely in design or implementation terms. We believe that our approach is scalable, as composition controllers have a simple semantics. Although the specification is in terms of set operations, it would be simple to bound the size of these sets in practice and to implement them efficiently. Future work is planned to formalize the relationship between our requirements composition operators, the Problem Frames for sub-problems, and the composition requirements. We also need to address a wider range of compositions, both in terms of the options in Section 4 and across a larger set of basic Problem Frames. In a large Problem Frames development, sub-parts of domains and amalgamations of domains can appear in different frames. Related to this is the need to apply the approach to more significant case studies. It might be possible to develop patterns for particular domain areas. Given the use of formal derivations of machine specifications, we are developing a reasoning tool to automate our approach in order to support its use in larger systems.
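The scalability claim about bounding the prohibition sets is easy to illustrate: tuples whose interval has passed can be pruned whenever the current time advances. A minimal Python sketch, with assumed names, of such pruning:

```python
def prune_expired(P, now):
    """Drop prohibition tuples (e, t1, t2, m) whose interval ended before now.

    This bounds the set's size by the number of prohibitions that are still
    active or scheduled for the future.
    """
    return {(e, t1, t2, m) for (e, t1, t2, m) in P if t2 >= now}

P = {("tiltOut", 0, 20, "SF"), ("tiltIn", 5, 8, "CCF")}
print(prune_expired(P, 10))  # the CCF tuple, expired at time 8, is dropped
```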
8. Acknowledgements

We are grateful for the support of our colleagues at The Open University, in particular Arosha Bandara, Leonor Barroca, Charles Haley, Jon Hall, Lucia Rapanotti and Michel Wermelinger, and Alessandra Russo of Imperial College. We also acknowledge the financial support of EPSRC for this research.
References

[1] R. Allen and D. Garlan. A formal basis for architectural connection. ACM Transactions on Software Engineering and Methodology, 6(3):213–249, 1997.
[2] A. K. Bandara, E. Lupu, and A. Russo. Using event calculus to formalise policy specification and analysis. In POLICY, pages 26–39. IEEE Computer Society, 2003.
[3] D. Bjørner. Towards posit & prove calculi for requirements engineering and software design: In honour of the memory of professor Ole-Johan Dahl. In O. Owe, S. Krogdahl, and T. Lyche, editors, Essays in Memory of Ole-Johan Dahl, volume 2635 of LNCS, pages 58–82. Springer, 2004.
[4] M. Calder, M. Kolberg, E. Magill, and S. Reiff-Marganiec. Feature interaction: A critical review and considered forecast. Computer Networks, 41(1):115–141, 2003.
[5] A. S. d’Avila Garcez, A. Russo, B. Nuseibeh, and J. Kramer. Combining abductive reasoning and inductive learning to evolve requirements specifications. IEE Proceedings Software, 150(1):25–38, 2003.
[6] M. S. Feather, S. Fickas, A. V. Lamsweerde, and C. Ponsard. Reconciling system requirements and runtime behavior. In Proceedings of IWSSD’98: 9th International Workshop on Software Specification and Design, Ise-Shima, Japan, 1998. IEEE Computer Society Press.
[7] J. L. Fiadeiro, A. Lopes, and M. Wermelinger. A mathematical semantics for architectural connectors. In R. C. Backhouse and J. Gibbons, editors, Generic Programming, volume 2793 of LNCS, pages 178–221. Springer, 2003.
[8] S. Fickas and M. Feather. Requirements monitoring in dynamic environments. In Proceedings of the Second IEEE International Symposium on Requirements Engineering, pages 140–147, 1995.
[9] A. Finkelstein and I. Sommerville, editors. Special Issue of the BCS/IEE Software Engineering Journal on “Multiple Perspectives”. 1996.
[10] C. Ghezzi and B. Nuseibeh, editors. Special Issues on Inconsistency Management in IEEE Transactions on Software Engineering. 1998.
[11] C. Ghezzi and B. Nuseibeh, editors. Special Issues on Inconsistency Management in IEEE Transactions on Software Engineering. 1999.
[12] J. G. Hall, M. Jackson, R. C. Laney, B. Nuseibeh, and L. Rapanotti. Relating software requirements and architectures using problem frames. In Proceedings of the 10th Anniversary IEEE Joint International Conference on Requirements Engineering, pages 137–144. IEEE Computer Society, 2002.
[13] R. J. Hall. Fundamental nonmodularity in electronic mail. Automated Software Engineering, 12(1):41–79, 2005.
[14] M. Jackson. Problem Frames. ACM Press & Addison Wesley, 2001.
[15] M. Jackson. Why software writing is difficult and will remain so. Inf. Process. Lett., 88(1-2):13–25, 2003.
[16] M. Jackson and P. Zave. Distributed feature composition: A virtual architecture for telecommunications services. IEEE Trans. Softw. Eng., 24(10):831–847, 1998. http://dx.doi.org/10.1109/32.729683.
[17] M. Kolberg, E. H. Magill, and M. Wilson. Compatibility issues between services supporting networked appliances. IEEE Communications Magazine, 41(11):136–147, 2003.
[18] R. Kowalski and M. Sergot. A logic-based calculus of events. New Gen. Comput., 4(1):67–95, 1986.
[19] R. Laney, L. Barroca, M. Jackson, and B. Nuseibeh. Composing requirements using problem frames. In Proceedings of the 12th IEEE International Conference on Requirements Engineering (RE’04), pages 122–131. IEEE Computer Society, 2004.
[20] R. Laney, M. Jackson, and B. Nuseibeh. Composing problems: Deriving specifications from inconsistent requirements. Technical Report 2005/08, The Open University, 2005.
[21] M. Nakamura, H. Igaki, and K.-I. Matsumoto. Feature interactions in integrated services of networked home appliances: An object-oriented approach. In S. Reiff-Marganiec and M. Ryan, editors, FIW, pages 236–251. IOS Press, 2005.
[22] B. Nuseibeh. Weaving together requirements and architectures. Computer, 34(3):115–117, 2001.
[23] B. Nuseibeh, S. Easterbrook, and A. Russo. Making inconsistency respectable in software development. Journal of Systems and Software, 58(2):171–180, 2001.
[24] W. N. Robinson, S. D. Pawlowski, and V. Volkov. Requirements interaction management. ACM Computing Surveys, 35(2):132–190, 2003.
[25] A. Russo, R. Miller, B. Nuseibeh, and J. Kramer. An abductive approach for analysing event-based requirements specifications. In P. J. Stuckey, editor, ICLP, volume 2401 of LNCS, pages 22–37. Springer, 2002.
[26] M. Shanahan. The event calculus explained. LNCS, 1600:409–430, 1999.
[27] P. Zave. Feature interactions and formal specifications in telecommunications. IEEE Computer, 26(8):20–30, 1993.
[28] P. Zave and M. Jackson. Conjunction as composition. ACM Trans. Softw. Eng. Methodol., 2(4):379–411, 1993.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
145
Artificial Immune-based Feature Interaction Detection and Resolution for Next Generation Networks

Hua Liu, Zhihan Liu, Fangchun Yang, Jianyin Zhang

State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
Abstract. Feature interaction is a major obstacle to the rapid development of new services in telecommunication networks. Based on the similarities between Artificial Immune Systems and Feature Interaction Management Systems, an Artificial Immune-based Feature Interaction Management System (AIFIMS) is proposed for online FI detection and resolution in next generation networks (NGN). AIFIMS consists of three modules: the Artificial Immune-based Detection (AID) module, the Artificial Immune-based Resolution (AIR) module and the Management module. By introducing and exploiting AIS mechanisms such as the multi-protection mechanism, Antigen Recognition, clonal selection and immune learning, the AID module can effectively detect FIs in a universal way, and new kinds of FIs can gradually be recognized and memorized as well. The AIR module resolves the detected FIs flexibly by simulating the Antigen Cleanup mechanism. Experimental verification shows that AIFIMS provides an effective, unified framework for detecting and resolving FIs among ever-changing NGN services.

Keywords. Feature Interaction, Artificial Immune, FIM
1 Introduction
In telecommunication services, features are optional additional functionalities on top of the basic function, call control. Typical features include Call Waiting (CW), Originating Call Screening (OCS) and Call Barring (CB). Telecommunication services can be created based on various features. This feature-driven model facilitates evolutionary service development, which accommodates new requirements by incorporating new features. Usually, features are developed and tested in isolation by different service vendors. Even though several features may each be verified correct separately, when they are deployed together in the same telecommunication network, interactions may occur between them and cause adverse effects, severely harming system stability and user expectations.

Many approaches have been developed to deal with Feature Interaction (FI) since the problem was first identified in 1989 [1]. These approaches can be divided into two categories: offline, applicable at design time, and online, applied at runtime. The former still play a role in checking feature compatibility in a single service offering environment, but some major drawbacks limit their usage. Most obviously, offline approaches fail to handle FIs related to runtime information, since FI detection and resolution occur only before the services run. Moreover, in order to create appropriate models for FI detection, most offline approaches require the details of each individual feature, and this information may not be available in a multi-vendor environment [2, 3]. Online approaches attempt to overcome these problems by handling interactions at runtime. Two classes can be distinguished: negotiation-based methods and FIM-based methods [4]. In negotiation methods [5, 6], communication between agents requires significant architectural changes in telecommunication networks, which limits the applicability of these methods to existing networks. FIM-based methods introduce a special FIM entity which is responsible for treating FIs by intercepting, modifying and routing messages between the control layer and the service layer. However, most current FIM methods can only manage one or a few specific kinds of FIs efficiently, and it is difficult to combine these methods; they may even be incompatible with each other [3]. There is no universal way to detect FIs, and these methods are also incapable of detecting newly emerging FIs.

Artificial Immune Systems (AIS) have been researched and applied in a wide range of fields, achieving many benefits by simulating mechanisms of the biologic immune system [7]. The basic function of the immune system lies in detecting intruding nonself cells (i.e. antigens), and then making an immune response to eliminate the antigens. On the other hand, the major goal of a FIM system is to detect feature interactions, and then to resolve the detected interactions. According to this basic metaphor between the immune system and the FIM system, this paper proposes an Artificial Immune-based Feature Interaction Management System (AIFIMS) to manage FIs occurring between Parlay ASs or SIP ASs in NGN. By simulating the Multi-protection and Antigen Recognition principles, AIFIMS can detect FIs universally and efficiently. New kinds of FIs can gradually be recognized and memorized through Clonal Selection and Immune Learning. AIFIMS can resolve the detected FIs flexibly by simulating the Antigen Cleanup mechanism.

This paper is organized as follows: section 1.1 gives a brief introduction to AIS; the metaphors between the two systems are built in section 2; the panoptic view of AIFIMS is given in section 3; in sections 4 and 5, the implementation of AID and the FI resolution mechanism in AIFIMS are presented; section 6 validates the detection method through a case study; we conclude and give future work at last.

Corresponding Author: Hua Liu, P.O. Box 187, No.10 Xitucheng Rd, Haidian District, Beijing, China. Email:[email protected]

1.1 Some Applied AIS Principles

The biologic immune system is a highly autonomic system whose basic function is to distinguish between self cells and antigens, and to make an appropriate immune response to eliminate the intruding antigens. Cells involved in the Antigen Recognition and Response procedure are referred to as immune cells. Two classes of immune cells are distinguished: macrophages and lymphocytes. Macrophages are capable of capturing antigens and presenting their crucial information to other immune cells. This procedure is non-antigen-specific, i.e. universal to all kinds of antigens, while lymphocytes only recognize and respond to specific kinds of antigens.
The biologic immune system has many attractive characteristics, and by exploiting some of its principles, AIS tries to achieve similar benefits such as universal recognition, the ability to learn and memorize, Self Tolerance, and adaptability to the external environment.

Antigen Recognition

Antigen Recognition (see Figure 1) is performed to determine whether a lymphocyte matches an antigen. The affinity between the receptor and the antigen determinant is calculated, and if the result is higher than a threshold, the antigen is recognized by the lymphocyte. This is a procedure of ‘Imperfect Recognition’: a perfect match is not needed, and a lymphocyte recognizes an antigen as long as the affinity exceeds the threshold.

[Figure 1 depicts lymphocyte receptors binding the determinants of an antigen.]
Figure 1. Antigen Recognition
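The affinity-threshold idea can be sketched in a few lines of Python. The bit-string encoding and the simple position-wise affinity measure below are illustrative assumptions, not the paper's concrete representation:

```python
def affinity(receptor, determinant):
    """Fraction of matching positions between two equal-length bit strings."""
    assert len(receptor) == len(determinant)
    matches = sum(r == d for r, d in zip(receptor, determinant))
    return matches / len(receptor)

def recognizes(receptor, determinant, threshold=0.8):
    """'Imperfect Recognition': match whenever affinity reaches the threshold."""
    return affinity(receptor, determinant) >= threshold

print(recognizes("10110110", "10110010"))  # True: affinity 7/8 = 0.875
print(recognizes("10110110", "01001001"))  # False: affinity 0
```

Lowering the threshold makes detectors more general (each recognizes more antigens but risks false matches); raising it makes them more specific.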
Multi-Protection Mechanism

The immune system protects the body from infection through three layers of protection (see Figure 2).

[Figure 2 shows antigens passing, in turn, through the skin, the innate immune layer (macrophages) and the adaptive immune layer (lymphocytes).]
Figure 2. Multi-protection Mechanism
The first layer is the skin and physical environment, which obstruct antigen intrusion and kill the intruders. The survivors are processed by the second layer, innate immunity. The innate immune layer is composed of macrophages, which can kill antigens speedily and universally. After an innate immune response, the Antigen Recognition and cleanup abilities are not enhanced, because the crucial information about the intruding antigens is not memorized. Antigens that successfully pass through these two layers are handled by adaptive immunity, which consists of plenty of lymphocytes. In this procedure, special lymphocytes are generated to recognize and remove a certain type of antigen. Besides this specialization, this layer also memorizes information about the new antigens, to speed up the second response to the same kind of antigen.
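The three layers can be pictured as a filter pipeline: a cheap universal barrier, a fast non-specific pass, and a specific pass with memory. The following Python sketch is purely illustrative; the function names and the toy checks are our assumptions, not AIFIMS's actual interfaces:

```python
def layer1_barrier(msg):
    """'Skin': reject obviously malformed input outright."""
    return isinstance(msg, dict) and "event" in msg

def layer2_innate(msg):
    """'Innate immune': universal, non-specific checks; nothing is memorized."""
    return msg.get("event") not in {"malformed", "unknown-op"}

class Layer3Adaptive:
    """'Adaptive immune': specific detectors, with memory of new bad cases."""
    def __init__(self):
        self.memory = set()
    def check(self, msg):
        bad = msg["event"] == "conflict" or msg["event"] in self.memory
        if bad:
            self.memory.add(msg["event"])  # faster secondary response next time
        return not bad

adaptive = Layer3Adaptive()
def protect(msg):
    # short-circuits exactly like the three biological layers
    return layer1_barrier(msg) and layer2_innate(msg) and adaptive.check(msg)

print(protect({"event": "setup"}))     # True: passes all three layers
print(protect({"event": "conflict"}))  # False: caught and memorized by layer 3
```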
Clonal Selection

Clonal Selection explains the adaptive immune response mechanism in AIS [8]. After lymphocytes successfully recognize antigens, they perform clonal proliferation so that more antigens can be recognized and memorized. During the cloning procedure, mutation occurs among the lymphocytes in order to recognize new types of antigens [9].

Self Tolerance

In AIS, newborn immature lymphocytes must undergo Self Tolerance, lest they make incorrect immune responses to self cells. If a newborn lymphocyte matches a certain kind of self cell, it is eliminated [10]. Essentially, this is a procedure of negative selection. After Self Tolerance, the remaining lymphocytes become mature and participate in subsequent immune procedures.
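Negative selection can be sketched directly: candidate detectors that match any self sample are discarded, and the survivors mature. The string encoding and the prefix-matching rule below are simplifying assumptions for illustration:

```python
def matches(detector, sample):
    """Assumed toy matching rule: detector and sample share a 4-char prefix."""
    return detector[:4] == sample[:4]

def negative_selection(candidates, self_set):
    """Self Tolerance: keep only detectors that match no self sample."""
    return [d for d in candidates if not any(matches(d, s) for s in self_set)]

self_set = ["0000-ok", "1111-ok"]
candidates = ["0000-x", "1010-x", "1111-y", "0101-z"]
print(negative_selection(candidates, self_set))  # ['1010-x', '0101-z']
```

In FI terms, self samples would correspond to known feature collaborations, so surviving detectors can only ever flag nonself behaviour, i.e. interactions.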
2 Metaphors between Immune System and FIM System
Based on the similar functions of the FIM system and AIS, the functional entity and process metaphors are established according to the antigen recognition mechanism. The basic metaphors between the two systems are listed in Table 1. The comprehensive relations, together with the overall procedure of FI detection in AIFIMS, are illustrated by Figure 3. In each pane, the content to the left of the colon represents an entity or process in AIS and the content to the right indicates the corresponding part in AIFIMS. Section 4.1 explains the FI detection procedure in detail.

Table 1. Basic Metaphors

Body : Telecommunication System
Artificial Immune System : Feature Interaction Management System
Antigen Recognition : Feature Interaction Detection
Nonself Cell : Feature Interaction
Self Cell : Feature Collaboration
Immune Cell : Feature Interaction Detection Rule/Detector
Immune Response : Feature Interaction Resolution
Bacteriolysin/Antibody : Feature Interaction Resolution Policy
3 AIFIMS: Artificial Immune-based Feature Interaction Management System

AIFIMS is constructed on the basis of both FIM methods and AIS. As an FIM system, AIFIMS sits between the control layer and the service layer in NGN and is independent of the specific underlying network infrastructure and service control protocols. It can be deployed between the Parlay AS and the Parlay Gateway, or between the SIP AS and the Serving Call Session Control Function (S-CSCF) in the IP Multimedia Subsystem.
AIFIMS consists of three modules: the Artificial Immune-based Detection (AID) module, the Artificial Immune-based Resolution (AIR) module and the Management module. Feature interaction information is extracted and represented in the AID module, which also performs FI recognition and generates new detection rules by exploiting some primary principles of AIS. The detected FIs are resolved in the AIR module by simulating the Antigen Cleanup mechanism. The Management module administers the system and provides auxiliary tools for the other modules, such as maintaining the detection rule database and the resolution policies, and extracting and analyzing FI records from the logs. The architecture of AIFIMS is shown in Figure 4.

Figure 3. Metaphors between AIS and AIFIMS (the figure maps AIS entities and processes to their AIFIMS counterparts, e.g. 'Skin and Physical Environment : Information Preprocessing module', 'Antigen Determinant : Formatted Messages', 'Immune Memory Cell : Memorial FI Detection Rule', 'Mature Immune Cell : Mature FI Detection Rule', 'Aided Immune Cell : FI Co-stimulator', 'Self Cell : Feature Collaboration', and traces the detection workflow: matching against memorial rules triggers the secondary immune response, matching against mature rules plus an obtained co-stimulation signal stores the rule as a memorial rule and triggers the primary immune response, while a missing co-stimulation signal eliminates the rule)
Figure 4. Architecture of AIFIMS (the AID module comprises the Information Preprocessing module, which filters, transforms and formats messages; the Special_FID module with its Special_FI detector and Special_Mb database; and the Universal_FID module, containing the Detection Rule Generation submodule with its subSelf and Immatureb databases, and the Universal_FID submodule with its Universal_FI detector, co-stimulator, Tb and Universal_Mb databases. The AIR module contains the FI settler and the Resolution Policies database; the System Management module supports administrators)
3.1 AID Module
The primary structure of the AID module is set up on the basis of the Multi-Protection Mechanism in AIS. Similar to AIS, the AID module is divided into three layers.
1) Corresponding to the skin and physical environment in AIS, the first layer of AID is the Information Preprocessing module. It is composed of three submodules: the Message Filtering, Message Transforming and Message Formatting submodules. The first excludes invalid messages; the latter two transform and format the incoming messages into a universal intermediate style named Action Intention Message, formally defined by Eq. (1). Apart from this module, all other modules in AID operate on Action Intention Messages.
2) Similar to the innate immune layer, the second layer of AID is the Special FI Detection (Special_FID) module, which is composed of a special FI detector and a memorial FI detection rule database named Special_Mb. Only a few detection rules, related to certain kinds of FIs such as Shared Trigger Interaction rather than to specific messages or services, are stored in Special_Mb. The size of Special_Mb is very limited, and every rule in it is used to detect one kind of FI. The purpose of separating this layer from the third layer, the Universal FI Detection module, is to gain detection efficiency and increase coverage of the nonself space.
3) The third layer of AID is the Universal FI Detection (Universal_FID) module, which functions like the adaptive immune layer in AIS and is the most complex layer of AID. It has two submodules: the Detection Rule Generation submodule and the Universal_FID submodule. The former generates new detection rules by simulating Clonal Selection in AIS and it is
composed of a Detection Rule Generator and two databases storing a subset of the Self space (subSelf) and the Immature Detection Rule Set (Immatureb). The elements in subSelf represent the known normal service behaviors; the elements in Immatureb are newly generated immature detection rules, which become mature after passing Self Tolerance. The Universal_FID submodule has a universal FI detector, a co-stimulator and two universal FI detection rule databases named Universal_Mb and Tb respectively. The universal FI detector detects FIs by simulating the adaptive immune layer in AIS. The detection rules in Universal_Mb depend on the specific services. Tb stores mature detection rules, which have passed Self Tolerance successfully. Once a rule in Tb has recognized an FI and the co-stimulator gives an affirming signal, it is moved to Universal_Mb; otherwise it is removed from Tb. Special_Mb and Universal_Mb are constituents of the Memorial Detection Rule Set Mb, whose elements have previously detected FIs successfully. The relations between the databases are described in [11].
Figure 3 depicts the workflow of FI detection at a high level. Firstly, messages are intercepted and input to the Information Preprocessing module. After preprocessing, the formatted messages are passed to the Special_FID module to detect certain kinds of FIs efficiently. If an FI is detected by Special_FID, it is resolved by the AIR module. Otherwise, the messages are passed to the Universal_FID module for further detection. If an FI is detected by a rule in Universal_Mb, it is resolved by the AIR module. If an FI is detected by a rule in Tb, the co-stimulation procedure is executed to determine whether it is really an FI. If so, the FI is resolved by AIR and the rule is moved from Tb to Universal_Mb; if not, the rule is removed from Tb. Section 4 details the procedures of AID.

3.2 AIR Module
The AIR module is composed of a Resolution Policy database and an FI settler. The Resolution Policy database stores FI resolution policies input by service users or system administrators. By simulating the Antigen Cleanup mechanism, FI resolutions are performed flexibly, considering both user intentions and system policies. The detailed procedure of AIR is described in Section 5.
4 Artificial Immune-based Universal FI Detection Method: AIUD

The AIUD mainly includes two procedures: FI Recognition, and FI Learning and Memorization.

4.1 Formal Definitions

Definition of Action Intention Message

After preprocessing, messages are formatted as Eq. (1):
<message>={<SessionID><Event><ServiceID><msgName>[<msgPara>| …]<Times>} (1)
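As a concrete illustration, the message format of Eq. (1) can be modeled as a simple record type. This is a hedged sketch: the Python names and field values below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass(frozen=True)
class ActionIntentionMessage:
    """Universal intermediate message format of Eq. (1)."""
    session_id: int   # <SessionID>: session identity
    event: str        # <Event>: service trigger event, e.g. 'Address_Analyzed'
    service_id: int   # <ServiceID>: invoked service identity
    msg_name: str     # <msgName>: name of the message
    msg_para: Dict[str, int] = field(default_factory=dict)  # <msgPara>: FI-related parameters
    times: int = 1    # <Times>: interactions with AIFIMS in this session

# Illustrative instance: a routing request triggered by address analysis
msg = ActionIntentionMessage(
    session_id=0x0001,
    event="Address_Analyzed",
    service_id=0x0003,
    msg_name="route",
    msg_para={"destAddress": 0x0002},
)
print(msg.event, hex(msg.msg_para["destAddress"]))
```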
where <SessionID>=N represents a session identity; <Event>={Address_Analyzed|tBusy|…} represents the service trigger event; <ServiceID>=N represents the invoked service identity; <msgName> represents the name of the message; <msgPara>={…|<destAddress>|…} represents the FI-related parameters of the message, whose value varies with different messages; and <Times>=N represents the number of times the session has interacted with AIFIMS.

Definition of Detection Rule Set

The whole Detection Rule Set B is composed of the Memorial Detection Rule Set Mb, the Mature Detection Rule Set Tb, and the Immature Detection Rule Set Immatureb. B is defined by Eq. (2):

B = {b | b = [<messages>][<data>][<constraint>]}   (2)

where <messages> is defined by Eq. (1); <data> represents the information necessary for feature interaction detection; and <constraint> represents the constraints of the feature interaction. Every element in B may own many message, data and constraint fields, or some fields may be empty. Two further attributes of each rule record how many times the rule has successfully detected FIs and the lifecycle of the rule.

Definition of Feature Behavior Set

The whole Feature Behavior Set Ag includes two subsets: Self and Nonself. Elements in Self represent feature collaborations; elements in Nonself represent undesired feature interactions. Ag is defined by Eq. (3), where <messages> and <data> have the same meanings as in the above definitions:

Ag = {ag | ag = [<messages>][<data>]}   (3)

4.2 Artificial Immune-based FI Recognition Mechanism
After feature behaviors are represented as Action Intention Messages, FI detection is performed by simulating the Antigen Recognition mechanism in AIS. The affinity between service behaviors and detection rules determines whether a series of service behaviors causes an FI. The recognition function is defined by Eq. (4), where the function aff(b, ag) calculates affinity, b represents a detection rule and ag represents service behaviors, defined by Eqs. (2) and (3) respectively. Many methods can serve as the function aff, such as Euclidean distance, Hamming distance, r-contiguous bits matching, etc.; t is the match threshold. If b and ag match, an FI is reported. By this means, the Artificial Immune-based FI Recognition method can detect FIs in a universal way. The function fmatch(b, ag) is the primary operation, used in the Special_FID module, the Universal_FID module and the Self Tolerance procedure.
fmatch(b, ag) = 1 if aff(b, ag) ≥ t (b ∈ Mb ∪ Tb), and 0 otherwise.   (4)
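A minimal sketch of the recognition function of Eq. (4), using a Hamming-style affinity over equal-length binary strings as the aff function (one of the options the text lists); the threshold value here is illustrative, not from the paper.

```python
def aff(b: str, ag: str) -> int:
    """Affinity between a genic-style rule b and behavior ag:
    number of positions where the two binary strings agree."""
    assert len(b) == len(ag)
    return sum(1 for x, y in zip(b, ag) if x == y)

def f_match(b: str, ag: str, t: int) -> int:
    """Eq. (4): 1 if the affinity reaches the threshold t, else 0
    (imperfect recognition -- no perfect match is required)."""
    return 1 if aff(b, ag) >= t else 0

rule = "110101"
behavior = "110001"   # differs from the rule in one position, so affinity is 5
print(f_match(rule, behavior, t=5))   # recognized
print(f_match(rule, behavior, t=6))   # not recognized
```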
In AID, detection rules and feature behaviors are defined in a representative style and a genic style simultaneously. For the detection rules, the representative style consists of multiple readable constraints relevant to FI detection. The representative-style rules, defined by Eq. (2), mainly represent the extractions of FIs and are stored in databases such as Mb and Tb. Similarly, the representative-style feature behaviors are defined by Eq. (3). The genic style is an abstract digital representation of the detection rules and feature behaviors; in AID, it may be considered the binary-coded representation of the corresponding representative style. Genic-style elements are used in many immune mechanisms such as Antigen Recognition and Clonal Selection. The genic style of the detection rules consists of three segments: the Message Gene Segment, the Data Gene Segment and the Constraint Gene Segment, as shown in Figure 5. The Message Gene Segment is the binary-coded representation of the <messages> part of Eq. (2); it includes multiple Message Genes, each of which encodes one Action Intention Message, arranged in the order of their occurrence. The Data Gene Segment and the Constraint Gene Segment correspond to the data and constraint parts of Eq. (2) respectively. The constituents of the genic-style feature behaviors are similar to those of the genic-style detection rules, except for the absence of the Constraint Gene Segment.

Figure 5. Genic style of the detection rules (a Message Gene Segment of multiple Message Genes, followed by a Data Gene Segment of Data Genes and a Constraint Gene Segment of Constraint Genes)
The algorithm named Qmatch performs the match procedure between the detection rules and the feature behaviors. A queue, also named Qmatch, is maintained during the matching operations. Every item in the Qmatch queue is referred to as an Active Detection Rule, structured as follows.

Class ActiveDetectionRule {
    Int ruleID;          // the unique number of the detection rule
    Int type;            // whether the rule belongs to Mb (1: yes, 0: no)
    Int lifetime;        // the remaining lifetime of the rule; when it expires,
                         // the rule is removed from the queue
    Float affinity;      // the accumulative affinity between the rule
                         // and the feature behavior
    Message expectedMsg; // the message expected to be received next
}

Two messages are used to trace the matching procedure: m0 represents the first Message Gene in the detection rule and mc represents the current Action Intention Message. The match procedure is depicted in Figure 6.
Procedure Qmatch
Begin
  For every rule Ri in the Detection Rule Set
  Begin
    If mc = Ri.m0 then
    Begin
      Produce a new ActiveDetectionRule item based on Ri;
      Add the new item to the Qmatch queue;
      Item.expectedMsg := Ri.m1;
      Set the other attributes of the item;
    End;
  End;
  For every item in the Qmatch queue
  Begin
    If mc = Item.expectedMsg then
    Begin
      Item.expectedMsg := the next Message Gene of the rule;
      Update the affinity of the item;
      If Item.affinity > t then
      Begin
        If Item.type = 1 then
          Report the detected FI to AIR
        Else
          Wait for the co-stimulation signal;
      End;
    End;
    Decrease Item.lifetime;
    If Item.lifetime <= 0 then
      Remove the item from the Qmatch queue;
  End;
End;
Figure 6. Match algorithm in AID
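A runnable sketch of the Qmatch procedure under simplifying assumptions: rules are plain message sequences, affinity simply counts matched messages, and the class and field names mirror the ActiveDetectionRule structure above. The rule names, messages and thresholds are illustrative, not from the paper.

```python
class ActiveItem:
    """Simplified Active Detection Rule for the Qmatch queue."""
    def __init__(self, rule_id, memorial, msgs, lifetime=10):
        self.rule_id = rule_id
        self.memorial = memorial   # True if the rule belongs to Mb
        self.msgs = msgs           # Message Gene sequence of the rule
        self.pos = 1               # index of the expected next message
        self.affinity = 1.0        # the first message already matched
        self.lifetime = lifetime

def qmatch_step(rules, queue, mc, t, reports):
    """Process one incoming message mc against the rule set and the queue."""
    # Activate every rule whose first Message Gene matches mc
    for rid, (memorial, msgs) in rules.items():
        if msgs and msgs[0] == mc:
            queue.append(ActiveItem(rid, memorial, msgs))
    for item in queue:
        if item.pos < len(item.msgs) and item.msgs[item.pos] == mc:
            item.pos += 1
            item.affinity += 1.0   # accumulate affinity on each match
            if item.affinity >= t:
                reports.append((item.rule_id,
                                "FI" if item.memorial else "await co-stimulation"))
        item.lifetime -= 1
    # Remove expired items from the queue
    queue[:] = [i for i in queue if i.lifetime > 0]

# A memorial rule expecting an OCS routing message followed by a CFN one
rules = {"mb1": (True, ["route_ocs", "route_cfn"])}
queue, reports = [], []
for mc in ["route_ocs", "route_cfn"]:
    qmatch_step(rules, queue, mc, t=2.0, reports=reports)
print(reports)   # the memorial rule mb1 reports an FI
```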
4.3 Artificial Immune-based FI Learning Mechanism
In order to realize FI Learning and Memorization, several immune principles are simulated in the Universal_FID module. By these means, not only can new FIs be recognized gradually, but the second response to the same FI is also much faster than the primary response.

4.3.1 Clonal Selection
In order to generate new detection rules and increase the variety of detection rules, Clonal Selection is simulated. The rules stored in Universal_Mb or Tb that have successfully detected FIs are selected to take part in Clonal Selection. First, the eligible rules are added to the Parent set. When the size of the Parent set reaches a threshold, or periodically, crossover and mutation are performed on it to produce the child generation. The crossover and mutation points may be selected according to a uniform distribution. Afterwards, the children are stored in Immatureb as immature detection rules. The operations are shown in Figure 7.
Figure 7. Crossover and mutation of detection rules (the gene segments of two parent rules, A1 B1 C1 and A2 B2 C2, are exchanged at a crossover point, and a mutation point alters one segment of a child, e.g. A2 to A2')
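The Clonal Selection operations of Figure 7 can be sketched as single-point crossover plus point mutation over lists of gene segments. This is a hedged sketch: the segment names mirror the figure, and the crossover and mutation points are fixed here for illustration rather than drawn from a uniform distribution.

```python
def crossover(parent1, parent2, point):
    """Single-point crossover: exchange the tails of two parent rules."""
    return parent1[:point] + parent2[point:], parent2[:point] + parent1[point:]

def mutate(child, point, new_segment):
    """Point mutation: replace one gene segment of a child rule."""
    mutated = list(child)
    mutated[point] = new_segment
    return mutated

p1 = ["A1", "B1", "C1"]
p2 = ["A2", "B2", "C2"]
c1, c2 = crossover(p1, p2, point=2)          # -> A1 B1 C2 and A2 B2 C1
c3 = mutate(c2, point=0, new_segment="A2'")  # -> A2' B2 C1
print(c1, c2, c3)
```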
4.3.2 Self Tolerance
Similar to Self Tolerance in AIS, an immature detection rule in Immatureb has to be removed if a match is found in subSelf. With Self Tolerance, the false positive ratio of new detection rules can be markedly decreased. If no match exists, the rule becomes mature and is moved from Immatureb to the mature database Tb. Eq. (4) can also be used to determine whether a rule and a self cell match. Self Tolerance is illustrated by Eq. (5).
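This negative-selection step can be sketched as filtering the immature rules against the subSelf set, assuming binary-string rules and a Hamming-style affinity; the strings and threshold below are illustrative, not from the paper.

```python
def aff(b, ag):
    """Positions where the rule and the self cell agree (Hamming-style affinity)."""
    return sum(1 for x, y in zip(b, ag) if x == y)

def self_tolerance(immature_b, sub_self, t):
    """Eq. (5): eliminate every immature rule that matches any known self cell;
    the survivors become mature detection rules (negative selection)."""
    return [ib for ib in immature_b
            if not any(aff(ib, ag) >= t for ag in sub_self)]

sub_self = ["1100", "0011"]            # known normal service behaviors
immature_b = ["1101", "1010", "0010"]  # newly generated immature rules
mature = self_tolerance(immature_b, sub_self, t=3)
print(mature)   # rules too close to a self behavior are removed
```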
fselfTolerance(Immatureb) = Immatureb − { ib | ib ∈ Immatureb ∧ ∃ ag ∈ Self : fmatch(ib, ag) = 1 }   (5)

4.4 Whole Process of AIUD

The whole process of AIUD is illustrated in Figure 8.
5 FI Resolution Mechanism
Successful FI resolution is the ultimate goal of FI research. In AIFIMS, all FIs are considered as antigens, and the resolution of FIs is realized by simulating the Antigen Cleanup mechanism in AIS.

5.1 Antigen Cleanup Mechanism
There are mainly three approaches to eliminating intruded antigens in the immune system. By inoculating a bacterin, an antigenic substance can be introduced into the body to produce immunity to a specific disease; this method applies to the prevention of already known diseases. Another approach is that the immune system may automatically generate an antibody in response to a specific antigen. If the immune system cannot generate valid antibodies, medicine must be taken to treat the disease.
Figure 8. Whole process of AID (the flowchart initializes Self, Mb, Tb, Immatureb and the parameters, preprocesses messages into feature behaviors, and matches each behavior successively against Special_Mb, Universal_Mb and Tb; a Tb match requires a co-stimulation signal, after which the rule is moved to Universal_Mb. A detected FI is resolved, the successful rule is inserted into the Parent set, children are generated when the conditions are met, added to Immatureb and subjected to Self Tolerance; behaviors matching no rule are moved to subSelf. Processing ends when all messages are handled)
5.2 FI Resolution Mechanism in AIFIMS

In AIFIMS, FI resolution is implemented in the AIR module. Similar to the Antigen Cleanup mechanism, FIs are resolved with the following methods. Like inoculating a bacterin, service users and administrators can predefine resolution policies for already known FIs. As for newly emerging FIs, new resolutions can be defined by administrators by analyzing FI phenomena in
system logs and performing experiments. AIR takes both system policies and user intentions into consideration when resolving FIs. Besides that, if no related resolution policy exists for a specific FI, AIR performs default resolutions to resolve it automatically, which is similar to the automatic antibody generation in AIS. This method is widely adopted by various online FI resolution methods; for instance, the Shared Trigger Interaction is automatically resolved without users or administrators predefining policies [13]. Finally, like taking medicine, if AIR cannot resolve an FI automatically, it asks the users to indicate how to resolve it. By simulating the Antigen Cleanup mechanism in AIS, AIR is capable of resolving the detected FIs in multiple ways, which contributes to its flexibility.
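The multi-way resolution strategy of the AIR module can be sketched as a policy lookup with a default fallback and a final escalation to the user. This is a hedged sketch: the policy names and FI identifiers below are illustrative, not from the paper.

```python
def resolve_fi(fi_type, policies, defaults):
    """Try a predefined policy ('inoculating a bacterin'), then an automatic
    default ('antibody generation'), and finally ask the user ('medicine')."""
    if fi_type in policies:
        return ("predefined", policies[fi_type])
    if fi_type in defaults:
        return ("automatic", defaults[fi_type])
    return ("ask_user", None)

policies = {"OCS/CFN": "abort forwarding leg"}      # administrator-defined
defaults = {"SharedTrigger": "serialize triggers"}  # built-in resolutions
print(resolve_fi("OCS/CFN", policies, defaults))
print(resolve_fi("SharedTrigger", policies, defaults))
print(resolve_fi("Unknown", policies, defaults))
```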
6 Case Study on FI Detection

6.1 A Sample of FI Detection
In this scenario, an interaction between Originating Call Screening (OCS) and Call Forwarding on No Answer (CFN) is detected by the proposed method. Terminal A has subscribed to OCS, and terminal C is in A's screening list; terminal B has subscribed to CFN, with terminal C as the forwarding destination. When A calls B and B does not answer, the call is forwarded to C. As a result, A is talking with C, which may violate the user's intention. This interaction can be efficiently detected by the rules in Universal_Mb. In AIFIMS, OCS is represented as message1:

<message1>=<SessionID=0x0001><Event='Address_Analyzed'><ServiceID=0x0003><msgName='route'>{<destAddress=0x0002>}<Times=0x0001>
<data1> = Subscriber(message1.ServiceID) = 0x0001
<data2> = ScreenList(data1)

The message parameters identify the caller, i.e. terminal A, and <destAddress=0x0002> is the address of the callee, i.e. terminal B; <data1> represents the subscriber of OCS; <data2> represents the ScreenList of the OCS service, and C is included in the list. CFN is represented as message2:

<message2>=<SessionID=0x0001><Event='tNoAnswer'><ServiceID=0x0004><msgName='route'>{<destAddress=0x0003>}<Times=0x0002>
<data1> = Subscriber(message2.ServiceID) = 0x0002
<data2> = message2.destAddress = 0x0003

Here <data1> is the address of the callee, and <destAddress=0x0003> is the forwarding destination when the callee does not answer; <data2> represents the forwarding destination set by the service subscriber. The following rule mb1 in Universal_Mb is used to detect whether an interaction occurs between OCS and CFN. According to it, a feature behavior consisting of <message1>, <message2> and the necessary data items is recognized as an FI.
mb1:
<messagei>=<SessionID><Event><ServiceID=0x0003><msgName='route'><Times>
<messagej>=<SessionID><Event><ServiceID=0x0004><msgName='route'><Times>
<data1> = Subscriber(messagei.ServiceID)
<data2> = Subscriber(messagej.ServiceID)
<data3> = ScreenList(data1)
<data4> = messagej.destAddress
<constraint> = <messagei.Event != messagej.Event><messagei.SessionID = messagej.SessionID><messagei.Times < messagej.Times>
6.2 Results of the Case Study
Eight common services were selected to validate the detection method. Table 2 shows the best results of the case study. The parameters of Qmatch were set as follows: the affinity threshold t was set to 5; the match rate between messages was set to 0.9; the lifetime of an active detection rule was set to 10. Under these conditions, all the interactions among the services were detected, with neither false negatives nor false positives. Under the same conditions but with higher values of t and the match rate, the false negative ratio increases; with lower values of the two parameters, the false positive ratio increases. System performance varies with different values of the AIUD parameters, and this will be studied carefully in the future.

Table 2. Results of the Case Study (an 8×8 matrix over the services listed below, with a mark for each detected pairwise interaction)

CW: Call Waiting; CFB: Call Forwarding on Busy; CFN: Call Forwarding on No Answer; TCS: Terminating Call Screening; OCS: Originating Call Screening; CLIP: Calling Line Identification Presentation; CLIR: Calling Line Identification Restriction.
7 Conclusion and Future Work
This paper explored the introduction of AIS principles into the FI domain of telecommunication networks. This brings some notable benefits: an efficient unified FI recognition mechanism, reinforced learning and memorization of newly emerging FIs, flexible FI resolution, and adaptability to the ever-changing environment. The work reported in this paper was carried out by the FI research team at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications.
The AIFIMS architecture was presented to illustrate how FIs are handled based on principles of AIS. As an FIM system, AIFIMS works independently of specific underlying protocols and does not require detailed feature knowledge. AIFIMS consists of three modules: the AID module, the AIR module and the Management module. The proposed AIUD method is capable of efficiently detecting FIs in a universal manner and of gradually recognizing and memorizing new kinds of FIs. Enhanced detection ratio and efficiency are also achieved by exploiting principles of AIS. By now, the core algorithms in the AID module, such as Qmatch, Clonal Selection and Self Tolerance, have been largely completed. Some FI resolution mechanisms have been designed, but their implementation is still at an early stage. A prototype of AIFIMS has been established in the lab. The early experimental results illustrate that the proposed AIUD method can efficiently detect various FIs in a universal way. The implementation of the FI resolution module is still in progress. There remain several tasks for the advancement of AIFIMS. Future work lies in: studying system performance with different parameter values; designing more effective representations of service behaviors and detection rules; improving the automation of the message handling process in the Information Preprocessing module and the automation of co-stimulation; and completing the implementation of FI resolution.
Acknowledgement

This work is jointly supported by: the National Natural Science Foundation of China (No. 60672121); the Program for Changjiang Scholars and Innovative Research Team in University; and the Joint Research Program of IBM China Research Lab and Beijing University of Posts and Telecommunications, named Artificial Immune-based Service Interaction Management in IMS.
References

[1] T.F. Bowen et al., The feature interaction problem in telecommunications systems, Seventh International Conference on Software Engineering for Telecommunication Switching Systems, Jul. 1989, pp. 59-62.
[2] M. Calder, M. Kolberg, et al., Feature interaction: a critical review and considered forecast, Computer Networks, 41(1), pp. 115-141, January 2003.
[3] D.O. Keck, P.J. Kuehn, The feature and service interaction problem in telecommunications systems: a survey, IEEE Transactions on Software Engineering, 24(10), 1998, pp. 779-796.
[4] S. Reiff-Marganiec, Runtime Resolution of Feature Interactions in Evolving Telecommunications Systems, PhD Dissertation, University of Glasgow, 2002.
[5] N.D. Griffeth, H. Velthuijsen, The negotiating agents approach to runtime feature interaction resolution, Feature Interactions in Telecommunications Systems, Amsterdam, 1994, pp. 217-235.
[6] M. Amer et al., An agent model for the resolution of feature conflicts in telephony, Journal of Network and Systems Management, 8(3), 2000, pp. 419-437.
[7] D. Dasgupta, N. Attoh-Okine, Immunity-based systems: a survey, IEEE International Conference on Systems, Man, and Cybernetics, Florida, 1997, pp. 369-374.
[8] L.N. de Castro, The construction of a Boolean competitive neural network using ideas from immunology, Neurocomputing, Vol. 50c, 2003.
[9] A.S. Perelson et al., Immunology for physicists, Reviews of Modern Physics, 69(4), 1997.
[10] J.F.A.P. Miller, Immune self-tolerance mechanisms, Transplantation, 72(8), 2001.
[11] W. Wei, F. Yang, An artificial immune-based feature interaction detection method, in: Proc. ICDT 2006, August 2006, France.
[12] X. Wenjian, Service interaction management system framework based on immune theory, Journal of Beijing University of Posts and Telecommunications, 28(6), 2005.
[13] W. Wei, F. Yang, A new method to resolve the shared trigger interaction, Journal of Beijing University of Posts and Telecommunications, 29(6), 2006.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
161
Model Inference Approach for Detecting Feature Interactions in Integrated Systems

Muzammil SHAHBAZ a,1, Benoît PARREAUX b and Francis KLAY b
a France Telecom R&D Meylan, France
b France Telecom R&D Lannion, France

Abstract. Many formal techniques have been devised for interaction detection in complex integrated solutions of hardware and software components. However, the applicability of these techniques is in question, given time-to-market delivery constraints and the scarcity of available resources. The situation is even more intractable when little or no knowledge of the components is provided. We introduce a novel approach of using model inference methods in this domain. We advocate that these methods can be used to detect feature interactions among components by putting the integrated system under a systematic testing effort and extracting only "context-relevant" models. Our technique allows us to detect those interactions in the system which are normally hidden while testing the components in isolation. We apply our approach to an active problem we are facing in furnishing mobile phone services.

Keywords. Feature Interaction, Model Inference, Mobile Services Framework, Mobile Phone Integration
1. Introduction

1.1. Context

The feature interaction problem [8,19,7] has been widely studied over the last decade. Basically, the problem is that modern software architectures cannot be monolithic and static; rather, they need to be modular and flexible for maintainability or for economic reasons such as time-to-market delivery. In order to support such a structure, the notion of feature is defined. A feature is a software piece that adds some functionality to a system. In general, a system includes a basic structure S and some features F1, F2, ..., and is defined as S × F1 × F2 × ..., where × is a composition operator. Ideally the behavior of each feature should be independent of the others, but unfortunately this is never the case, since cooperative behaviors are very often required. This means that there are side effects between features, which are called interactions. Some of them are desirable while others may lead to unexpected or unrequired system behaviors. The handling of interaction problems is the ability to detect, understand, classify and manage feature interactions in the system. Actually, software interactions arise more

1 Corresponding Author: [email protected]
162
M. Shahbaz et al. / Model Inference Approach for Detecting Feature Interactions
in fields like web services, plugin software architectures, home appliance automation, mobility, etc. There are mainly two approaches to solving this problem: on-line or off-line. With on-line techniques [22,24,14,4,6], the interactions are handled at system runtime, which is well suited to quick time-to-market delivery and opens up a multi-vendor environment. The main disadvantages of this approach are the processing overhead and the impossibility of obtaining finer interaction resolution. By contrast, off-line techniques are often a combination of software engineering methodologies [9,26,25] and formal methods [11,17,1,23,12,13]. The software engineering methodologies define a basic structure with a feature integration process. The goal of the formal methods is to reach an in-depth understanding of interactions by performing logical analysis on system specifications. The advantage of these techniques is to achieve finer interaction treatment, but in order to apply them efficiently, the problem space they deal with must be reduced [20]. In this paper, we present a new automated approach to handle the feature interaction problem, applying it to what we think is an original case study, i.e., mobile phones. Many features found in today's mobile phones offer far more capabilities than simple functions such as voice calls or text messaging. The main problem is that, for economic reasons, this critical system needs to be open in this context. For example, a phone manufacturer provides a phone device with basic features, but a phone distributor or a network operator would like to customize it with more features, taking advantage of its network services architecture. Finally, a third party, e.g., a game provider, would also like to embed some additional features. In this respect, the phone is an integrated system of multiple components providing the required features, plus many others for the purpose of reusability.
Different systems may contain similar functionalities, but their implementations and logics vary. Furthermore, the behaviors of components in the same system are not independent. They often communicate with each other and may invoke required or unrequired functions of each other when accessing critical resources such as private data (e.g., the address book) or communication functions (e.g., SMS). The real goal is that once the system is integrated, the integrator wants to know the possible interactions that could occur between components.

1.2. Summary of the Approach

Our problem domain deals with two kinds of scenarios. The components may come either with extensive documentation listing all implemented features and inputs to the components, or with no technical or formal specifications at all. In both cases, finding interactions between components in an integrated system is a difficult task. In the former case, it is not wise, and may not even be possible, to exhaustively analyze the whole specification, since only a subset of the functions is used; extracting only the relevant abstractions from a complex technical corpus is a hard job. In the latter case, the absence of formal specifications or of any substantial knowledge about the component is an obstacle to applying the majority of techniques. Therefore, we build our approach to address both issues, briefly described as follows. For each component to be integrated in the system, we extract only its context-relevant model, i.e., the model that specifies only the required features, and then use it to detect interactions in the system with systematic testing techniques. To extract the models, we draw on work in the machine inference domain [18,5,3,10] and apply
M. Shahbaz et al. / Model Inference Approach for Detecting Feature Interactions
methods that can actively learn models of black box machines with a systematic testing effort. Contrary to other feature learning work [24], we are not learning models through general artificial learning techniques such as neural networks or rule-based learning. Rather, we use an active machine learning approach to devise a testing strategy through which the model is inferred incrementally. Once the preliminary models of each component are extracted, we integrate the system to detect interactions. We take advantage of the testing performed on each component during its inference and apply those tests to the integrated system. If a component, or the system globally, does not behave in the same manner as in isolation, then an interaction between two or more components is detected. After that, the extracted models are refined iteratively so that they capture the new behaviors, and new tests are performed on the system. The procedure ends when the composite behavior of the system conforms to the inferred models of each component. Roughly speaking, our approach considers components as black boxes and uses an incremental learning method to infer partial models. Thus, we are introducing machine inference work into the domain of feature interaction detection. At the end of the approach, we have context-relevant models of each component representing the possible interactions in the system and projecting an up-to-date picture of the system behavior. The rest of the paper is organized as follows. First, we introduce as intuitively as possible the general framework of our application, i.e., the mobile phone system, in section 2. We then formally detail the inference technique of our method and illustrate it on our example in section 3. The application of our method to an instance of the mobile phone system interaction problem is explained in section 4. Finally, section 5 concludes the paper.
2. Mobile Services Framework

Our service framework creates customized systems by acquiring one general platform upon which components of choice are integrated to build a required solution. Consider a basic platform that helps in customizing mobile phone systems by plugging in an available set of components from other parties. Such components usually conform to some standard means of interfacing with other components for the purpose of enhancing the features of an integrated system. A typical example in mobile applications is a component that is developed under the J2ME environment [15] and provides an interfacing API adhering to some JSR specifications [16]. We describe an example of a system that is built up of such components in Figure 1. The system is a calling system that consists of four components: Call Screening CS, Address Book AB, Media Manager MM and Call Controller CC. These components offer a variety of functionalities in their respective areas and options to interface with other components. For this purpose, each of them is provided with a very large set of inputs, in many of which we may have no interest. We are interested only in certain functionalities offered by each component to be used in our customized system. Exhaustive testing of the system for detecting possible component interactions with all combinations of inputs is neither possible nor required. Therefore, the goal of the integrator is to detect unknown interactions while keeping the scope of the analysis relevant to the context.
Figure 1. Calling System
In this example, our context of using the components is for their prime tasks, which are described as follows with their context-relevant input (CRI) and output (CRO) details:
• Call Screening CS: keeps a blacklist of phone numbers from which calls are not accepted. The list is populated and managed by the user. CRI: an input no⟨#⟩ is given to check whether a number is blacklisted, where # is the caller's number. CRO: the component responds with OK if the specified number is not found in the list, or KO otherwise.
• Address Book AB: keeps contact records. The records are managed by the user. CRI: an input srch⟨query⟩ is given to search for a contact. A query can be either a name or a number. CRO: the component responds with the search result rslt⟨name, #, profile ∈ {public, private}⟩ if the record is found, or null otherwise.
• Media Manager MM: manages media files for two main purposes: i) to ring the phone by playing a default tone when a call arrives; ii) to play a specified media file. CRI: an input ring⟨#⟩ is given to invoke the default ring tone, where # is a number that is not concerned with the basic functionality. An input play⟨file⟩ is given to play a specified media file. CRO: the component responds by giving the command start⟨default.wav⟩ to the internal phone media player for playing the default tone on the input ring⟨#⟩, or by giving start⟨file⟩ on the input play⟨file⟩.
• Call Controller CC: controls call related operations by using other components. CRI: an input call⟨#⟩ is given to invoke the call delivery function of the component. CRO: the component responds by sending ring⟨#⟩ to invoke MM as a call incoming notification on receiving clearance of the caller's number from CS, or does nothing (silent) otherwise.
3. Modeling the Components

3.1. Finite State Machine

The use of a finite state machine to model the functional aspects of a component is well-known from the telecom and distributed systems communities. Therefore, we deal with systems that can be modeled by some finite state machine. Additionally, we see a component as reactive, i.e., it receives an input from the environment and reacts by providing an output and possibly changing its state. This vision leads us to consider a machine in which transitions are labelled with inputs and outputs, also known as a Mealy machine. Another assumption is that the states of the machine are stable, i.e., a machine cannot continue without a stimulus from its environment. The formal definition of the model we use in the approach follows.

Definition: A Finite State Machine M is a six-tuple M = {Q, I, O, σ, λ, q0}, where Q, I and O are finite sets of states, input symbols and output symbols, respectively. σ : Q × I → Q is the state transition function, λ : Q × I → O is the output function, and q0 is the initial state. When M is in the current (source) state q ∈ Q and receives i ∈ I, it moves to the target state specified by σ(q, i) and produces the output given by λ(q, i). In order to completely define our model, we require dom(σ) = dom(λ) = Q × I. For this purpose, we add a loop-back transition on each state where a given input is invalid, and add a special symbol (Ω) as the output.

3.2. Model Inference Method

We are interested in methods that can infer a component with no detailed knowledge beforehand. This is due to the applicability of our approach to components for which no formal specifications are available. An algorithm that can conjecture a model, as defined in section 3.1, from a black box component is given in [21], adapted from [2]. In the following, we provide a succinct description of the algorithm.
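As an illustration, the definition above can be transcribed directly into code. The sketch below is ours, not part of the paper; the class name is an assumption, and `OMEGA` plays the role of the special output added on loop-back transitions for invalid inputs.

```python
OMEGA = "omega"  # special output for invalid inputs (assumed symbol)

class MealyMachine:
    """A Mealy machine M = (Q, I, O, sigma, lambda, q0) as defined above."""

    def __init__(self, states, inputs, outputs, delta, lam, q0):
        self.states, self.inputs, self.outputs = states, inputs, outputs
        self.delta = delta   # sigma: (state, input) -> next state
        self.lam = lam       # lambda: (state, input) -> output
        self.q0 = q0

    def complete(self):
        """Make sigma and lambda total, as required by dom(sigma) = Q x I:
        invalid inputs loop back on the same state and emit OMEGA."""
        for q in self.states:
            for i in self.inputs:
                if (q, i) not in self.delta:
                    self.delta[(q, i)] = q
                    self.lam[(q, i)] = OMEGA

    def run(self, word):
        """Feed an input string from q0; return the produced output string."""
        q, out = self.q0, []
        for i in word:
            out.append(self.lam[(q, i)])
            q = self.delta[(q, i)]
        return out
```

For example, a two-state machine reacting to `call` with `no` and then to `OK`/`KO` with `ring`/`silent` (a simplified version of the Call Controller described later) can be built, completed, and run on input words.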
The basic requirement for the inference algorithm is to construct an input set through which the algorithm performs testing on the component. The assumption for performing tests on the component is access to its interfaces, i.e., an input interface from which an input can be sent and an output interface from which an output can be observed. It is also assumed that the component can be reset before each test. The algorithm starts by testing the component with different combinations of input symbols and conjectures a model when a certain condition is satisfied. This condition is helpful for elucidating conflicts in the conjecture. The test cases are constructed automatically from an observation table T, and the results of the test cases, i.e., the output strings of the component for the given input strings, are recorded back into T as a partial mapping from I∗ to O∗. The domain dom(T) of T is the set of input strings from which the test cases are constructed. To define the structure of the table, let S and E be two finite sets of finite strings from I∗; then S ∪ S · I makes the rows of the table and E makes the columns. Initially, S contains only the empty string and E = I, i.e., every input symbol makes one column in the table.
The test cases are constructed by concatenating s ∈ S ∪ S · I and e ∈ E, as s · e. The resulting output string of the test case is recorded as an entry in the table, as follows: if α is the output string of the test case s · e, then T(s, e) = α′, where α′ is the suffix of α and |α′| = |e|. Let s, t ∈ S ∪ S · I; then we define an equivalence relation ≡ over S ∪ S · I as follows: s ≡ t iff T(s, e) = T(t, e), ∀e ∈ E, i.e., when the rows s and t are the same. We denote by [s] the equivalence class of rows that includes s. The algorithm stops testing when the table is found closed². A table is closed iff for each t ∈ S · I, there exists s ∈ S such that s ≡ t. In other words, the stopping condition for testing occurs when no new output is observed from the component for longer sequences of test cases. Whenever the table is not closed, t is moved to S and T(t · i, e) is extended for all i ∈ I, e ∈ E. Once the table is closed, a conjecture M = {Q, I, O, σ, λ, q0} is constructed as follows:
• Q = {[s] | s ∈ S}
• q0 = [ε], the class of the empty string ε
• σ([s], i) = [s · i], ∀s ∈ S, i ∈ I
• λ([s], i) = T(s, i), ∀s ∈ S, i ∈ I
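The table-filling, closedness check, and conjecture construction described above can be sketched in code. This is our own minimal rendition under the stated assumptions (E stays fixed at I, and `query` abstracts a reset-then-test run on the black box); the function and variable names are ours, not the authors' implementation.

```python
def learn(inputs, query):
    """Infer a Mealy conjecture from a black box.

    `query(word)` must reset the black box, feed the input word, and return
    the observed output word. Returns (S, delta, lam): the access strings S
    and the conjectured transition and output functions keyed by (state, input),
    where each state is represented by its access string."""
    S = [()]                        # rows in S: starts with the empty string
    E = [(i,) for i in inputs]      # columns: one per input symbol
    T = {}                          # observation table: (row, column) -> suffix

    def rows():                     # S together with its one-input extensions
        return S + [s + (i,) for s in S for i in inputs]

    def fill():                     # run missing tests and record suffixes
        for s in rows():
            for e in E:
                if (s, e) not in T:
                    out = query(s + e)
                    T[(s, e)] = tuple(out[len(s):])   # suffix of length |e|

    def row(s):                     # the full row of s, used for equivalence
        return tuple(T[(s, e)] for e in E)

    while True:
        fill()
        # closedness: every extended row must match the row of some s in S
        unclosed = next((t for t in rows() if t not in S
                         and all(row(t) != row(s) for s in S)), None)
        if unclosed is None:
            break
        S.append(unclosed)          # move the offending row into S and refill

    rep = {row(s): s for s in S}    # one representative per row class
    delta = {(rep[row(s)], i): rep[row(s + (i,))] for s in S for i in inputs}
    lam = {(rep[row(s)], i): T[(s, (i,))][0] for s in S for i in inputs}
    return S, delta, lam
```

Driving this learner against a simulated component reproduces the two-iteration behaviour described for CC in section 3.3: the row of the access string `call` differs from the empty row, so it is moved into S, after which the table closes with two states.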
3.3. Inference of the Calling System

We described a method in section 3.2 to infer an FSM model, as defined in section 3.1, from a component. In this section, we illustrate how to infer the individual preliminary models of the components in Figure 1 using that method. The starting point of the algorithm is to construct a set of abstract inputs for each component. This is an important step in our approach, which aims to approximate or model only the interesting aspects of a component. Therefore, we construct the set of inputs that are relevant to our context. The same holds for the outputs, i.e., we record only the relevant outputs in the observation table and brush aside all others. For example, we are interested in modeling the behavior of the component CC when a call arrives in the presence of CS. The basic relevant input for CC is call⟨#⟩. Also, CC is supposed to communicate with CS to block/unblock a particular call. Therefore, we also include OK and KO (the responses of CS) in its input set, which finally becomes I_CC = {call⟨#⟩, OK, KO}. The relevant outputs of CC are no⟨#⟩ when communicating with CS, silent when the arriving call is blacklisted, and ring⟨#⟩ to invoke MM when the call is acceptable. The run of the inference method on CC takes two iterations, shown in Table 1 and Table 2 respectively, while Figure 2(a) shows the conjecture from the closed table, i.e., Table 2. Since we are interested in modeling only a single behavior of the component, we eliminate loops and unfold the conjecture as shown in Figure 2(b). Also, for the sake of simplicity, we do not show in the conjecture the transitions labelled with invalid inputs. This can be seen in the observation table, where the entries against these inputs show their invalidity in the respective states. We keep the representation of the test cases in the table and the i/o behaviors on the conjecture symbolic.
However, running the test cases on actual components requires concretization of the symbolic inputs, e.g., # must be replaced by some actual number in order to execute it on CC. If there is a behavioral difference between any two numbers, then the numbers can be represented as two separate symbolic inputs,

² For the reader who is familiar with the original algorithm [2], the other concept, called consistency, has been excluded in the optimized version of the algorithm (see [5]).
(a) Conjecture of CC from Table 2
(b) Conjecture of CC from Table 2 (unfolded)
Figure 2. Conjecture of CC from Table 2
i.e., #1 and #2, respectively, to maintain the deterministic property of the state machine. The criteria for selecting concrete values can be guided by certain domain specific policies, and hence are not part of the current approach.

Table 1. Not Closed Table for CC (First Iteration) (Ω marks invalid inputs)

            | call⟨#⟩ | OK      | KO
  ε         | no⟨#⟩   | Ω       | Ω
  call⟨#⟩   | Ω       | ring⟨#⟩ | silent
  OK        | no⟨#⟩   | Ω       | Ω
  KO        | no⟨#⟩   | Ω       | Ω
Table 2. Closed Table for CC (Second Iteration) (Ω marks invalid inputs)

                    | call⟨#⟩ | OK      | KO
  ε                 | no⟨#⟩   | Ω       | Ω
  call⟨#⟩           | Ω       | ring⟨#⟩ | silent
  OK                | no⟨#⟩   | Ω       | Ω
  KO                | no⟨#⟩   | Ω       | Ω
  call⟨#⟩ · call⟨#⟩ | Ω       | ring⟨#⟩ | silent
  call⟨#⟩ · OK      | no⟨#⟩   | Ω       | Ω
  call⟨#⟩ · KO      | no⟨#⟩   | Ω       | Ω
We construct the input set for CS as I_CS = {no⟨#⟩-cl, no⟨#⟩-bl}, i.e., the input when # is acceptable and the input when # is blacklisted, respectively. The relevant outputs are OK in the case when # is acceptable and KO if it is blacklisted. The closed observation table for CS is given in Table 3 and the (folded/unfolded) conjecture is shown in Figure 3.

Table 3. Closed Table for CS

            | no⟨#⟩-bl | no⟨#⟩-cl
  ε         | KO       | OK
  no⟨#⟩-bl  | KO       | OK
  no⟨#⟩-cl  | KO       | OK
Similarly, the input set for AB is simply I_AB = {srch⟨query⟩} and the relevant output is a query result. For MM, the input set is I_MM = {ring⟨#⟩, play⟨file⟩} and the relevant outputs are ringing a default tone and playing a media file. The preliminary (unfolded) models of AB and MM are given in Figures 4 and 5, respectively.
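For reference, the four preliminary models can be written down as plain transition tables. The encoding below is our sketch, not the paper's notation: state names and the `omega` invalid-input output are assumptions, and AB's parameterized response is collapsed to a single symbol.

```python
# Hypothetical encoding of the inferred context-relevant models as
# (state, input) -> (output, next_state) tables.
CS = {("s0", "no#-bl"): ("KO", "s0"),        # blacklisted number: reject
      ("s0", "no#-cl"): ("OK", "s0")}        # clean number: accept

AB = {("a0", "srch<query>"): ("rslt-or-null", "a0")}  # search result (collapsed)

MM = {("m0", "ring#"):      ("start<default.wav>", "m0"),
      ("m0", "play<file>"): ("start<file>", "m0")}

CC = {("c0", "call#"): ("no#", "c1"),        # ask CS for clearance
      ("c1", "OK"):    ("ring#", "c0"),      # cleared: ring via MM
      ("c1", "KO"):    ("silent", "c0")}     # blacklisted: stay silent

def step(model, state, inp):
    """One transition; invalid inputs loop back and output 'omega'."""
    return model.get((state, inp), ("omega", state))
```

With this encoding, a call delivery can be traced by hand: `step(CC, "c0", "call#")` yields the clearance request to CS, and CS's answer drives CC either to ring MM or to stay silent.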
Figure 3. Conjecture of CS from Table 3 (left) and the unfolded version (right)
4. Detecting Interactions

We explained in the previous section how preliminary context-relevant models can be inferred from the individual components. In this section, we focus on the method of detecting interactions between components after their integration into a system. Our understanding of the concept of detecting interactions stems from a known problem when composing a system from individual components: the assumption that a component behaves the same in an integrated system as in isolation is not valid. This is because components exchange data during operation, which may lead to some unexpected behaviors. This exchange of data is actually an underlying interaction between components, which we want to detect after the system is integrated. Therefore, we define the approach for detecting interactions as follows. The integrated system must exhibit the same behavior as prescribed by the inferred models of each component in the system, for all those test cases that were performed during their inference. Failure of this indicates an underlying interaction between components.
The collection of test cases performed during the inference of each component will be executed on the integrated system. The observed behaviors of the system as a result of these test cases will be compared to the expected ones, i.e., shown in the inferred models of the components. If there is any divergence found, we narrow down our focus to the i/o interfaces of the components which are involved in the test. If there is a component A whose output stimulates any other component B, the output will be treated as an interaction between A and B. This stimulus may not be seen in the preliminary inferred models of A and B as an output and an input respectively. Therefore, we update these models according to the new context by re-inferring them using the inference method. This can be done by recording the new observations in the observation tables of A and B, and then generating test cases until the tables are closed before making the new conjectures.
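The replay-and-compare step above can be sketched as follows. This is a hedged illustration with names of our own choosing; `system_run` stands for executing a test word on the integrated system and observing its outputs, which the paper leaves to the test harness.

```python
def expected_outputs(model, q0, word):
    """Outputs predicted by an inferred (state, input) -> (output, state) model;
    invalid inputs loop back and produce 'omega', as in the inference phase."""
    q, out = q0, []
    for i in word:
        o, q = model.get((q, i), ("omega", q))
        out.append(o)
    return out

def detect_interactions(model, q0, tests, system_run):
    """Replay the inference test cases on the integrated system and report
    every divergence between observed and expected behaviour."""
    findings = []
    for word in tests:
        expect = expected_outputs(model, q0, word)
        actual = system_run(word)
        if actual != expect:
            findings.append((word, expect, actual))
    return findings
```

Each reported finding carries the diverging test word together with the expected and observed output words, which is exactly the information needed to narrow the search down to the i/o interfaces of the components involved and to record the new observations in their tables.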
Figure 4. Conjecture of AB (unfolded)
Figure 5. Conjecture of MM (unfolded)
Table 4. Closed Table for CS after fixing the new observations (Ω marks invalid inputs)

                        | no⟨#⟩-bl    | no⟨#⟩-cl | rslt⟨...⟩ | null
  ε                     | srch⟨query⟩ | OK       | Ω         | Ω
  no⟨#⟩-bl              | Ω           | Ω        | OK        | KO
  no⟨#⟩-cl              | srch⟨query⟩ | OK       | Ω         | Ω
  rslt⟨...⟩             | srch⟨query⟩ | OK       | Ω         | Ω
  null                  | srch⟨query⟩ | OK       | Ω         | Ω
  no⟨#⟩-bl · no⟨#⟩-bl   | Ω           | Ω        | OK        | KO
  no⟨#⟩-bl · no⟨#⟩-cl   | Ω           | Ω        | OK        | KO
  no⟨#⟩-bl · rslt⟨...⟩  | srch⟨query⟩ | OK       | Ω         | Ω
  no⟨#⟩-bl · null       | srch⟨query⟩ | OK       | Ω         | Ω
We apply the procedure explained above to the system in Figure 1. The models of each component have already been inferred, as described in section 3. Now, we apply the test cases of each component to the system. Suppose the test case call⟨#⟩ has been executed on CC, where # is blacklisted. The expected behavior of CC is to remain silent (as seen in its inferred model in Figure 2(b)) after receiving KO from CS. When the same test case is executed on the integrated system, it starts playing a media file. This divergence leads us to investigate the involved components, i.e., CC and CS, according to the models. It is found that CS emits OK, which CC interprets as meaning the number is not blacklisted, and then sends ring⟨#⟩ to MM as a call incoming notification. This means that CC behaves as expected by its inferred model, whereas CS diverges. It is observed that when receiving no⟨#⟩³ from CC, CS emits an output srch⟨query⟩, where query is the number #, and stimulates another component, AB. The expected behavior of AB is to give out the result of the search query if the contact is found, or null otherwise. It turns out that # is found in the address book and hence AB responds with rslt⟨name, #, profile⟩, which changes the behavior of CS and generates OK instead of KO. This also discovers the underlying implementation of CS: if the number is found in AB, then it should not be blocked. The new observation is recorded in the observation table of CS, shown in Table 4, and the new (unfolded) conjecture is shown in Figure 6. The expected behavior of MM on receiving the input ring⟨#⟩ is to invoke the default tone (as seen in the model in Figure 5), whereas the system behavior is observed as playing a media file. This divergence reveals an interaction between MM and AB as follows. MM searches for the number (given in the ring command) in the address book.
If the contact is found, it picks the contact profile, public or private, and plays the respective media file configured for that specific profile. Since the contact # is found in this case, MM plays the media file configured for its profile by sending the command start⟨profile.wav⟩ to the media player. The new (unfolded) model of MM is shown in Figure 7.
³ The interpretation of this input in the inferred model of CS in Figure 3 is no⟨#⟩-bl, since # is blacklisted in this example.

5. Conclusion

We have presented a new approach for feature interaction detection in an integrated system of components using a machine inference method. We built our approach so that it
Figure 6. New Conjecture of CS (unfolded)

Figure 7. New Conjecture of MM (unfolded)
could be applied in both scenarios, i.e., when the components are provided with huge and complex formal specifications of the features but only a few of them are required in the system, and when the components are just seen as black boxes (no specifications are provided). We showed how the use of machine inference methods can cater for both scenarios by inferring only the context-relevant partial models of the components, and later how interactions can be detected automatically by comparing the behavior of the integrated system with the inferred models of the individual components. Our example lies in the framework of mobile phone system customization, in which the telecom industry faces problems in detecting feature interactions after the system is integrated. Several points are under discussion in connection with the improvement of the overall approach. We are studying techniques to incorporate domain knowledge into the inference method, which can guide more effective test cases. Also, the component integration requires some decision points about the way components are integrated. These decision points can help in improving the stopping criteria for testing. Regarding scalability, we are experimenting with our technique on systems that typically consist of a large number of components. Several questions seem interesting to us. For example, how efficient is our technique at comparing the behavior of such a system with the inferred partial models of the large-scale components? Secondly, how can the models be enriched with parametric details of the components so that they do not blow up in size? It also seems interesting to use constraints defined on these parameters. Thirdly, how can the test cases of individual components be combined as a basis for testing an integrated system? We believe that the proposed method is complementary to those currently being studied. This new approach changes the way in which the problem is tackled, since the specification of the component in use is limited to what is necessary.
References

[1] Marc Aiguier, Karim Berkani, and Pascale Le Gall. Feature specification and static analysis for interaction resolution. In Proceedings of the Formal Methods Symposium, LNCS, pages 364–379, 2006.
[2] Dana Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 75(2):87–106, 1987.
[3] Dana Angluin. Queries revisited. Theor. Comput. Sci., 313(2):175–194, 2004.
[4] M. Arango, L. Bahler, P. Bates, M. Cochinwala, D. Cohrs, R. Fish, G. Gopal, N. Griffeth, G. E. Herman, T. Hickey, K. C. Lee, W. E. Leland, C. Lowery, V. Mak, J. Patterson, L. Ruston, M. Segal, R. C. Sekar, M. P. Vecchi, A. Weinrib, and S.-Y. Wuu. The touring machine system. Commun. ACM, 36(1):69–77, 1993.
[5] Jose L. Balcazar, Josep Diaz, and Ricard Gavalda. Algorithms for learning finite automata from queries: A unified view. In Advances in Algorithms, Languages, and Complexity, pages 53–72, 1997.
[6] R. Buhr, M. Amyot, D. Elammari, T. Quesnel, and S. Gray. Feature-interaction visualization and resolution in an agent environment.
[7] Muffy Calder, Mario Kolberg, Evan H. Magill, and Stephan Reiff-Marganiec. Feature interaction: a critical review and considered forecast. Comput. Networks, 41(1):115–141, 2003.
[8] E. Cameron. A feature interaction benchmark for IN and beyond, 1994.
[9] Jane Cameron, Kong Cheng, Sean Gallagher, Fuchun Joseph Lin, Peter Russo, and Daniel Sobirk. Next generation service creation: Process, methodology, and tool integration. In Kristofer Kimbler and Wiet Bouma, editors, Proc. 5th Feature Interactions in Telecommunications and Software Systems, pages 299–304. IOS Press, Amsterdam, Netherlands, September 1998.
[10] Jonathan E. Cook and Alexander L. Wolf. Discovering models of software processes from event-based data. ACM Trans. Softw. Eng. Methodol., 7(3):215–249, 1998.
[11] Lydie du Bousquet and Olivier Gaudoin. Telephony feature validation against eventuality properties and interaction detection based on a statistical analysis of the time to service. In FIW, pages 78–95, 2005.
[12] Amy P. Felty and Kedar S. Namjoshi. Feature specification and automated conflict detection. In Feature Interactions Workshop. IOS Press, 2000.
[13] J. Paul Gibson. Towards a feature interaction algebra. In Kristofer Kimbler and Wiet Bouma, editors, Proc. 5th Feature Interactions in Telecommunications and Software Systems, pages 217–231. IOS Press, Amsterdam, Netherlands, September 1998.
[14] Seth Homayoon and Harmi Singh. Methods of addressing the interactions of intelligent network services with embedded switch services. IEEE Communications Magazine, pages 42–47, December 1988.
[15] Java 2 Platform, Micro Edition - J2ME. http://java.sun.com/javame/index.jsp.
[16] Java Specification Requests. http://jcp.org/en/jsr/all.
[17] Hélène Jouve, Pascale Le Gall, and Sophie Coudert. An automatic off-line feature interaction detection method by static analysis of specifications. In Proceedings of the 8th International Conference on Feature Interactions in Telecommunications and Software Systems (FIW'05), pages 131–146. IOS Press, 2005.
[18] Michael J. Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, USA, 1994.
[19] Dirk O. Keck and Paul J. Kuehn. The feature and service interaction problem in telecommunications systems: A survey. IEEE Trans. Softw. Eng., 24(10):779–796, 1998.
[20] Kristofer Kimbler, Carla Capellmann, and Hugo Velthuijsen. Comprehensive approach to service interaction handling. Comput. Netw. ISDN Syst., 30(15):1363–1387, 1998.
[21] Keqin Li, Roland Groz, and Muzammil Shahbaz. Integration testing of components guided by incremental state machine learning. In TAIC PART, pages 59–70. IEEE Computer Society, 2006.
[22] David Marples and Evan H. Magill. The use of rollback to prevent incorrect operation of features in Intelligent Network based systems. In Kristofer Kimbler and Wiet Bouma, editors, Proc. 5th Feature Interactions in Telecommunications and Software Systems, pages 115–134. IOS Press, Amsterdam, Netherlands, September 1998.
[23] Malte Plath and Mark D. Ryan. The feature construct for SMV: Semantics. In Muffy H. Calder and Evan H. Magill, editors, Proc. 6th Feature Interactions in Telecommunications and Software Systems, pages 129–144. IOS Press, Amsterdam, Netherlands, May 2000.
[24] S. Tsang and E. Magill. Behaviour based run-time feature interaction detection and resolution approaches for intelligent networks, 1997.
[25] Greg Utas. A pattern language of feature interaction. In Kristofer Kimbler and Wiet Bouma, editors, Proc. 5th Feature Interactions in Telecommunications and Software Systems, pages 98–114. IOS Press, Amsterdam, Netherlands, September 1998.
[26] Pamela Zave and Michael Jackson. A component-based approach to telecommunication software. IEEE Softw., 15(5):70–78, 1998.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Considering Side Effects in Service Interactions in Home Automation - an Online Approach Michael Wilson a Mario Kolberg b Evan H. Magill b a Sysnet Ltd., 457 Sauchiehall Street, Glasgow G2 3LG [email protected] b Department of Computing Science and Mathematics, University of Stirling, Stirling. FK9 4LA, {mko,ehm}@cs.stir.ac.uk Abstract. The feature or service interaction problem within home networks is an established topic for the FI community. Interactions between home appliances, services and their environment have been reported previously. Indeed, earlier work by the authors introduced a device centric approach which detects undesirable behaviour between appliances and their effects on the environment. However, this previous work did not address side-effects between components of the modelled environment. That is, some appliances do not only affect the environment through their main function, but may do so also in other ways. For instance, an air conditioner cools the air as its main function, but also decreases the humidity as a side-effect. Here we extend our earlier approach to handle such side effects effectively and discuss previously unreported results.
1. Introduction

The intelligent home is a reality. Increasingly, newly built homes are equipped with intelligent home equipment for improved audio visual experiences, security and comfort. These devices are controlled by software services provided by a number of vendors. Typically, these services are run from a central point within the home, a residential gateway, which may take the form of a set top box. The OSGi (Open Services Gateway Initiative) gateway [1] is one such gateway, able to run and manage services. Devices register with the gateway, allowing services to use a multitude of devices. OSGi becomes the glue which connects devices and services [2]. However, with such interworking between different services and devices, unexpected or undesirable outcomes are inevitable. Such outcomes are known as Service Interactions, or Feature Interactions [3]: the action of one service, or device, has an impact on another. The phenomenon is well understood and documented within the telephony domain, as it has been the focus of research for over a decade with work widely published. However, the problem has also been identified and investigated in other domains, such as E-mail [4] and web-services [5], and more recently, within home automation [6,7,8,9].
M. Wilson et al. / Considering Side Effects in Service Interactions in Home Automation
1.1. Service Interactions in the Home

Although the service interaction problem in the home is similar to the feature interaction problem in telephony, there are some differences. The main difference is that many more interactions happen indirectly. They happen through an additional level: the environment. Here, the environment can be room temperature or movement in the room, for example. Previous approaches in telephony do not use the environment to detect interactions. Offline, a-priori and captive environment approaches are not suitable because, in the home, a service can behave differently depending on the devices available and on how the service is configured. Further, the configuration of services and devices in the home can easily change, as home networking protocols have been designed specifically to support ad hoc networking (UPnP, for example). Services such as Heating, Ventilation and Air Conditioning, Security, and Power Control are able to use a number of devices to achieve their goals. However, since services automatically control several devices, each carrying out its own role, it is inevitable that some conflicts will occur. This section highlights the issue of negative interactions. Feature interactions can occur for a number of reasons. The most common are conflicting goals and broken assumptions [3]. Services pursue specific goals, e.g. a Power Control service switches off unnecessary appliances to save energy. However, a feature of a security service might act to switch on lights to pretend the home is occupied. Clearly, the goals of the two services conflict. Similarly, services need to make certain assumptions about their environment. For instance, one assumption of the security service is that when activated, nobody is at home, therefore appliances will not be used and there should be no movement. However, the climate control service may control the window blinds to prevent the sun from unnecessarily heating up the home.
Here the assumption of the security service that no appliances will be used is violated by the climate control service. The remainder of the paper is organised as follows. Section 2 discusses the employed runtime approach together with its constituent parts: the three layer model, environmental variables, service priorities, the device database and the service interaction manager. This section introduces the novel concept of side effects in feature interaction handling. It also contains a short description of the dynamic behaviour of the approach. Section 3 presents the working of the approach using a detailed example. Section 4 discusses results together with a description of the testbed, service examples and interaction scenarios. At the end of that section the results of the approach are compared against the taxonomy for service interactions in home networks by Kolberg et al [6]. This is followed by some concluding remarks.
2. A Runtime Environmental Approach

Traditional approaches to feature interaction have been service centric, concentrating on the actions, goals, and assumptions of the services. Further, many of these approaches are off-line. Work by Nakamura et al [8] and Metzger and Webel [9] does concentrate on the device and the environment, rather than just the service. However, these approaches are off-line. Although off-line approaches are useful for detecting some interactions, they only
174
M. Wilson et al. / Considering Side Effects in Service Interactions in Home Automation
work in a system where all services and devices are known. In the home this is unlikely, as services from many vendors will work together. Even if all services were known, the configuration of the devices would be unclear, as devices will join and leave the network. Also, the services which control devices may behave differently depending on the devices available at runtime. Indeed, the services themselves may change over time. This makes an offline approach unworkable within the home. Therefore, in this paper a runtime, feature manager based approach is advocated. While feature manager based approaches require central control, this is not a problem in the home: typically the residential gateway is centralised and all devices and services register on the same platform. The approach has to be transparent to users. If a negative interaction occurs, it should be avoided. However, if an interaction is positive, it should be allowed, as this is desirable.

2.1. Basic operation of the Feature Manager

Some interactions can be detected at the device level – two services try to use one device. However, others seem unconnected and cannot be detected by monitoring network messages. For instance, there is an interaction between the security service and the climate control service where the climate control service operates a fan while the security service is active. The movement of the fan triggers the security service. This conflict does not happen at the device level; it happens elsewhere – in the environment. The movement of the fan is detected by the sensors of the security service. For this reason, the environment layer is included and is central to this approach. The approach works by controlling access to the environment layer through locking. The approach assumes that there is a residential gateway in the home where services are managed and executed [10]. The services which run on the gateway send commands directly to the devices.
Since these messages are sent at runtime, a live manager is required. This manager must intercept messages which are sent from a service to a device and determine whether a particular command will cause a negative interaction. Crucially, the manager distinguishes between positive and negative interactions. Positive interactions result from two or more services or devices working together towards a common goal. In contrast, a negative interaction is one where the outcome is undesirable. After analysing an instruction sent to a device, if the manager decides the action will cause either no interaction or a positive interaction, the message is forwarded to the device. However, if the manager detects a negative interaction, the message is not allowed to proceed to the device. Instead, the manager adds an entry to its log and discards the message.

2.2. The three layer approach

The advocated approach uses three layers. The top layer is the service layer, which contains the services that automate the home. These services may use one or a combination of home appliances (devices), which are located in the second layer. The approach has been designed so that the underlying device communication protocol is not relevant, making the approach very flexible. The device layer can contain two types of devices: input devices and output devices. An input device, such as a thermometer, will only monitor an aspect of the environment
(e.g. room temperature) and return readings to services. An output device, such as a heater, will alter the environment in some way (increase temperature). Where a physical device contains both functions (input and output), it is treated as two devices in the approach, similarly to the UPnP approach, which splits devices in such a way. Finally, the bottom layer is the environmental layer, containing environmental variables. These variables are a representation of a room's environment. Examples of environment variables include: room movement, room temperature, room lighting levels, humidity, and smoke.

2.3. Locking devices and variables

Within the device and environment layers, controlling access to devices and environmental variables is achieved through locking. For devices, locking is only necessary for output devices, as input devices do not affect their environment in any way. Simply locking a variable is too crude for this approach. For example, when a heater is active the room temperature variable would be locked. Since the variable is locked, another heater would not be able to also heat the room. Although there is an interaction here, it is a positive one and should be allowed. On the other hand, it would not be correct to share the temperature variable between a heater and an air-conditioner, as the two devices have conflicting goals. Concepts were adapted from the Biased Protocol [11], in particular the concepts of Shared Locks and Exclusive Locks. One important aspect of home devices is side effects. Quite often a device affects the environment not only through its primary function, but also in other ways. It is not sufficient to treat side effects in the same way as primary effects. This approach decrees that each device can affect the environment through one primary effect and zero or more side effects.
If a device affects the environment in two equally important ways, the device should be split into two separate logical devices. For instance, a television affects the light level and noise variables. Both can be argued to be primary effects. Hence the television needs to be split into an audio device and a video device, the first affecting the noise variable and the second the light variable as primary effects. If a service wishes to lock a device, or a device wants to lock a variable, one of five options must be used:
• NS: Not Shared. The variable or device is locked and may not be altered by any other device or service. This lock is similar to the exclusive locks in the biased protocol.
• S+: Shared, but increase only. The variable is shared on the condition that anyone wishing to use the variable must increase it. Therefore, two devices may lock a variable with S+ if they both increase its value. This allows two heaters to operate.
• S–: Shared, but decrease only. Like the previous setting, the variable is shared on the condition that anyone wishing to alter the variable must decrease it.
• S±: Shared. The variable or device is shared and it is unknown whether the variable will be decreased or increased in value. This lock is not compatible with S– or S+ because S± could go either way (increase or decrease). This lock type can be likened to the shared lock type from the biased protocol.
• SE: Side Effect. This flag is placed on an environment variable by a device. Like the locks, SE must be placed on a variable before the device is allowed to run. A
device can place the SE flag on any environment variable, provided the variable is not already locked with NS. If a variable is locked with S+, S– or S±, SE can be added to the variable.
By using these locks, devices are able to cooperate and work towards a common goal while conflicting actions are avoided. Figure 1 summarises the locks above and shows which combinations of locks are allowed (✓).
Figure 1. Locking – allowed combinations.
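As a minimal illustration, the allowed combinations can be captured in a few lines of code. The enum and method names below are ours, not part of the described system; whether an existing SE flag blocks a later S+, S– or S± request is not stated explicitly in the text and is assumed symmetric here.

```java
// Sketch of the lock-compatibility rules summarised in Figure 1.
// Names are illustrative; the paper does not provide this code.
class LockRules {
    enum Lock { NS, S_PLUS, S_MINUS, S_EITHER, SE }

    // True if a new lock request may coexist with an existing lock.
    static boolean compatible(Lock existing, Lock requested) {
        if (existing == Lock.NS || requested == Lock.NS)
            return false;             // NS is exclusive in both directions
        if (existing == Lock.SE || requested == Lock.SE)
            return true;              // SE combines with S+, S- and S± (assumed symmetric)
        return existing == requested; // otherwise only matching S+/S-/S± may share
    }
}
```

Under these rules, two heaters locking the temperature variable with S+ are accepted, while a heater (S+) and an air-conditioner (S–) are not.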
Many services or devices may lock a device or variable with matching S+, S– or S±. However, only one service or one device has access when a device or variable is locked with NS. Once a lock has been set, it is sometimes not clear when the task is complete and the lock can be lifted. This approach assumes that a session begins when a service starts using a device and finishes when the service closes or switches the device off. Therefore, when a lock is placed on a device, the lock is valid until the service finishes using the device.

2.4. Service Priorities

By using the locks as described above, a strict first come, first served order is implied. However, in certain circumstances this is not adequate. For instance, suppose the home is being burgled while the VCR is in use by the entertainment service to record the owner's favourite show (locked with NS). Since the VCR is in use, the home security service is unable to access the device to record the intruder. Service priorities have been introduced to resolve such scenarios. Each service is assigned a priority number. A priority value may change as new services are added to the home, or as a user's preferences change. The priorities of services range from 1 to n, where n is the total number of services in the gateway. Priority 1 is the lowest and n is the highest. There is one special priority: -1 (no priority) for services that have not been assigned a priority.

2.5. The Device Database

In order to understand the effects of the actions of devices on the environment, the approach needs knowledge of what each device does. For instance, if a heater is turned on, it will produce heat, which in turn affects its environment and increases the room temperature. Note that this database contains general information on device types, e.g. a lamp produces light, an air-conditioner cools. It does not contain details on
which devices (or services) are deployed in a particular home. Adapting to the latter at runtime is the main advantage of online approaches; hence such information is not necessary for this approach. A database which holds device details is therefore required. The database of devices is likely to grow constantly as new devices come onto the market. There is an issue with the location of such a database. A local database would be rather large and difficult to manage and keep up to date with new devices. In contrast, if the database were hosted remotely, all homes would be able to share the same data, but a constant connection to the internet would be required. In the future, appliances may store information about their behaviour internally; however, this cannot be assumed at present. Consequently, for the purpose of this paper, a remote database is used.

When querying the remote database, the manager supplies a device type, such as heater, air-fan, or television. The type supplied is matched with the description within the database and the device specification is returned in an XML format. An example is shown in Figure 2. It shows that a heater is an output device, and that the default usage for a service is NS. The XML also shows the environmental variables this device affects: Temperature and Humidity. Clearly, increasing the temperature is the primary goal of using a heater, hence the change in temperature is the primary effect for the heater. The change in humidity levels is a side effect, as influencing humidity is usually not the primary goal of using a heater. The default value for Temperature is S+, as the temperature will be increased. For Humidity, the value is SE, to indicate the side effect. Various other attributes are included in the device description, such as the default lock values for the device. This makes it possible for the approach to work without any input from the service, i.e. fully on its own.

2.6.
The Service Interaction Manager (SIM)

The manager operates as a standard service within the service gateway in the home. The manager intercepts messages and authorises them after checks have been carried out. There are two possible outcomes: either the message is forwarded to the device or it is rejected. To make the decision, the manager analyses the state of the required devices and associated environmental variables based on an internal image of the state of all devices. The manager generates this view of the home by searching the gateway for all registered device objects and consulting the remote device database to obtain all associated environmental variables required for each device. If a device joins or leaves the network, this change is picked up by the manager, which updates its view. If the manager authorises a command to a device, the manager records the new device state, thus keeping itself up to date and consistent. If a device is controlled directly by the user, perhaps because the user has pressed the play or stop buttons on the VCR, this new state must be recorded in the manager's view. Importantly, even if the SIM notes that the new state causes an interaction, the new state still has to be recorded, as this is the current state of the device.

2.7. Dynamic Behaviour of the approach

A model which shows the home services as well as devices and their relationship with the environmental variables has been developed (Fig. 3). All services, devices and envi-
<Device>
    <Name>Heater</Name>
    <Type>Output</Type>
    <DefaultLock>NS</DefaultLock>
    <Action name="deviceOn">
        <SuggestedDeviceUsage use="NS" />
        <EnvironmentalVariable name="Temperature" defaultValue="S+" duration="3" />
        <EnvironmentalVariable name="Humidity" defaultValue="SE" duration="3" />
    </Action>
    <Action name="deviceOff">
        <SuggestedDeviceUsage use="" />
        <EnvironmentalVariable name="Temperature" defaultValue="" duration="0" />
        <EnvironmentalVariable name="Humidity" defaultValue="" duration="0" />
    </Action>
</Device>

Figure 2. Description of Device Type 'Heater'
ronmental variables would normally be included. However, for clarity, only one service, two devices (an input device and an output device) and one environmental variable have been included here. In this example, the service layer contains one service; the service name is shown (Figure 3(b)), along with the priority of the service (a). In the device layer, 'Sensor T' is surrounded by a double rectangle, whereas 'Device Y' has a single rectangle. The double rectangle represents an input device and the single rectangle represents an output device. Both primary and side effects are shown. The fact that this is an output device is shown by the direction of the arrows (f) and (g). The environmental layer shows the variable Z that is monitored by the sensor device, while the arrow (g) shows that the output device will also affect the environment variable Z. Figure 3(c) shows the lock for controlling access to the device itself. This lock is set by the service. Generally, a service will use NS, as it is unlikely to want another service using the device while it is in use. If a service does not know how to set the lock for the device, a default lock from the device description database, which is normally NS, is used. Once access to the device has been gained, the manager must ensure the device is able to gain access to all required environmental variables (i). The variables a device affects when it carries out specific actions are obtained from the remote device database. Figure 3(e) shows the proposed lock for the environmental variable. The arrow (g) points to the variable where the lock is to be set, and (h) shows the current lock of the environmental variable. Locks are shown both at the bottom of devices and on the environmental variables because the variable may already be locked by another device.
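The decision logic sketched in Sections 2.3, 2.4 and 2.6 can be put together as follows. This is a minimal sketch of our own (all class and method names are hypothetical), not the implementation used in the testbed:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the Service Interaction Manager's authorisation step.
// Only the lock and priority rules follow the text; everything else is ours.
class InteractionManager {
    enum Lock { NS, S_PLUS, S_MINUS, S_EITHER, SE }

    private static class Held {
        final Lock lock; final int priority;
        Held(Lock lock, int priority) { this.lock = lock; this.priority = priority; }
    }

    private final Map<String, Held> variables = new HashMap<>();

    // Forward the command (true) if the variable is free, the requested lock
    // is compatible with the current one, or the requesting service has a
    // strictly higher priority; otherwise reject the command (false).
    boolean authorise(String variable, Lock wanted, int priority) {
        Held held = variables.get(variable);
        if (held == null || priority > held.priority) {
            variables.put(variable, new Held(wanted, priority));
            return true;
        }
        return compatible(held.lock, wanted); // share without replacing the lock
    }

    private static boolean compatible(Lock a, Lock b) {
        if (a == Lock.NS || b == Lock.NS) return false; // NS is exclusive
        if (a == Lock.SE || b == Lock.SE) return true;  // side effects may join
        return a == b;                                  // matching S+/S-/S± only
    }
}
```

In the running example of Section 3, the security service locks the movement variable with NS at priority 2; a later SE request from the window at priority 1 is rejected, while a higher-priority service could still override the lock.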
Figure 3. Outline of Model
3. A Running Example – Interaction between Climate Control and Security Services

The security service monitors movement in the home through the use of PIR sensors. If movement is detected, an alarm is raised by ringing a bell. The climate control service keeps the house at a comfortable temperature by using air conditioning, heating, and by opening windows. Clearly, while the alarm is armed, it would not be appropriate for the climate control service to open windows in the home. Not only would opening the windows cause movement, triggering the alarm, but the two services also have conflicting goals: the security service aims to keep the home secure, while the climate control service aims to cool the home by opening the window, which renders the home insecure.

The climate control service uses a thermometer (input device) to get the room temperature. In this home, there is also an external thermometer, so the outside temperature can be obtained, but for clarity this is not shown in the figures. To control the temperature the service can use a heater to increase the temperature, an air-conditioner to decrease it, and a window. An open window will change the room temperature either up or down, depending on the outside temperature. While opening and closing, the window will create movement as a side effect. Similarly, the heater and air-conditioner will affect humidity as a side effect. Therefore, among the three devices, three environmental variables are required: temperature, movement, and humidity. The security service requires a motion sensor to detect movement, and two output devices: an alarm control panel, which is used to set or disable the alarm, and an alarm bell. When the alarm is triggered, the bell device is used to draw attention. These devices affect movement and sound as environmental variables. It is assumed that the service priorities have been set by the user (or a service provider on their behalf).
The climate control service has been set with the lowest priority, '1', and the security service with '2'. Therefore, the security service has priority over the climate control service. This setup is shown in Figure 4.
Figure 4. Arming the security service
When the security service is armed, it accesses the alarm device. Since this device is not in use, the service is able to gain access and lock it with NS, as it does not want anyone else using the device (Figure 4(a)). Using the default values from the device description database, the manager knows that for the 'arm' command, the movement variable should be locked using NS (Figure 4(b)). Since the variable is not currently locked, this value is set on the variable (Figure 4(c)). The security service is now armed and monitoring the home. The climate control service is also active, but it does not require any devices other than the thermometer notifying the service of a change in room temperature.

3.1. Handling the Interaction

Assume that the temperature within the home starts to rise and the climate control service (knowing that it is cooler outside) is to open a window to let the cool air in. If the service interaction manager were not in place, the climate control service would open the window, making the home insecure and also triggering the alarm. With an active manager, the command to open the window is checked by the manager. First, the climate control service must be able to access the window. Since the window is unused, the service is granted access and sets the device lock to NS (Figure 5(b)). The dashed lines indicate temporary links prior to authorisation by the manager. The manager can now attempt the device's proposed locks for the environmental variables. The two proposed lock boxes are shown in Figure 5(c) and (d). Since it has been determined that the temperature outside is colder than inside, the device needs to lock the temperature variable with S– (c) and (g). Also, as opening the window will cause movement as a side effect, the device needs to lock movement with SE (d). However, this fails, as movement is already locked with NS (f), and NS and SE are not compatible (cf. Figure 1).
Furthermore, the lock on the movement variable cannot be overridden, as it has been set by a service with a higher priority. Hence the window device is unable to open and the interaction has been successfully avoided. If the climate control service were truly a smart service, it would search for an alternative way of cooling the room and select the air-conditioner. This would be successful and both services would work in harmony.
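Such a smart fallback could look like the following sketch, in which a service tries its cooling devices in order of preference and skips any whose required variables are already held by another service. The device names, the table of required variables and the simplified conflict test are illustrative assumptions, not part of the paper:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical fallback for a climate control service: use the first
// cooling device whose required environmental variables are free.
class CoolingFallback {
    // Variables each device must lock, per the Section 3 example: the
    // window needs Temperature (S-) and Movement (SE); the air-conditioner
    // needs Temperature (S-) and Humidity (SE).
    static final Map<String, Set<String>> NEEDS = Map.of(
            "Window", Set.of("Temperature", "Movement"),
            "AirConditioner", Set.of("Temperature", "Humidity"));

    // Simplification: any variable already held by another service counts
    // as a conflict (the full rules would also allow compatible locks).
    static String pickDevice(List<String> preference, Set<String> lockedByOthers) {
        for (String device : preference)
            if (NEEDS.get(device).stream().noneMatch(lockedByOthers::contains))
                return device;
        return null; // no device can run without a negative interaction
    }
}
```

With the movement variable locked NS by the armed security service, the window is skipped and the air-conditioner is selected, so both services operate without conflict.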
Figure 5. Avoiding the interaction
4. Experimentation and Results

To show the effectiveness of the approach, experimentation on 11 scenarios was carried out. In the following, the testbed is introduced, followed by a summary of the obtained results.

4.1. Testbed

The testbed (cf. Fig. 6) includes a selection of devices (UPnP and X.10), the service management framework (OSGi) and the home control services. X.10 was chosen as one of the control protocols because it is currently widely used in homes. The protocol is popular because of the availability [12] and cost of the components. Also, typical household devices (e.g. lamps, fans, heaters) can be used and no new wiring is required, as it uses the power lines as the transport medium. For experimentation, a range of X.10 modules were used, including a CM11 gateway module, lamp modules (together with a lamp), appliance modules (with a desk fan) and motion sensors, as well as a virtual X.10 window opener.

UPnP is a newer home networking protocol and is growing, with more OEM companies becoming members. However, at present, only UPnP routers and internet gateways are available to buy off the shelf. Devices required for this testbed, such as heaters or air-conditioners, are not available yet. Therefore, virtual devices were used. Using virtual devices offers flexibility for both creating and controlling the device. To create a virtual UPnP device, a UPnP SDK, including a UPnP stack, was used. For this experiment, an open source UPnP stack was used – CyberLink [13]. As well as forming the base for the UPnP devices, it was also used to create the UPnP driver for the service management framework. Each UPnP device developed had a simple GUI for user input (e.g. set device on or off, or set channel, etc.). These are the kind of controls one would expect on simpler devices. The XML device descriptions and service definitions followed those published by the UPnP Forum. Where definitions for devices were not available, the basic device specification was used [14].
The following UPnP devices were created and used in the testbed:
Thermometer – a device which reads the room temperature. When queried, it returns the room temperature. Also, when the temperature changes, subscribed parties are no-
Figure 6. The testbed used for experimentation
tified. As this is not a real device, a slider bar is used to manually change the temperature.
Heater – a simple heater device with two options: on or off. Since this is a virtual device, and to mirror what would happen in reality, the heater finds all thermometers in the room and increases their temperature. This addition did not change the functioning of the device.
Air-conditioner – like the heater, this is a simple device which has two options: on or off. Similar to the heater, to simulate reality the device finds thermometers and decreases the temperature.
Television – a device which tunes into a channel and displays the picture sent by a TV station. The TV has a number of functions: on and off, change the channel, and volume up and down. For experimental purposes, a TV station had to be created. This is a Java server which devices can connect to; images are then sent by the server at regular intervals (one per second).
VCR – the VCR records to file any images it is sent. The user can then tune the TV to the VCR and play back the recorded images. The options available in this device are: on and off, play, record, and set channel to record.
Window blinds – this device simulates a small motor which can open and close the blinds accordingly.
Additionally, two support devices were used in the testbed: a USB web camera and a SIP server [15]. The IBM Service Management Framework (SMF) was used as the implementation of the OSGi gateway. X.10 and UPnP drivers had to be developed, as these are not included in the SMF. The UPnP driver is based on the Cybergarage UPnP stack [13] and follows the UPnP driver specification in the OSGi standard [16]. Similarly, the X.10 driver was developed using an existing open source Java X.10 API [17]. The Java Communications API [18] was used to access the communications port. The following services were implemented on the OSGi gateway.
Heating, Ventilation and Air-conditioning (HVAC) – This service monitors the temperature in the home and keeps it at a comfortable level, as defined by the user.
Home Security Service (HSS) – This service is used in conjunction with an alarm device. The service can be set, and configured, through the alarm device. The role of this service is to monitor the home for intruders and alert the owner of such an event. This service also has an away from home feature which makes the home look occupied when it is empty.
Power Control Service (PCS) – The aim of this service is to reduce the amount of power a home consumes. It does this by turning off devices when no one is home. The service can be set to turn on devices which consume a lot of energy when electricity is cheaper – a washing machine, for example.
Home Entertainment Service (HES) – This service controls entertainment devices, such as the TV, stereo, and VCR/DVD devices. One of the main features of this service is that it can in principle monitor viewing habits and automatically record certain television shows. The service also allows the user to manually set the time and date to record a television programme.
Communications Support Service (CSS) – This service supports the use of email and telephone. The user can be notified of arriving email through their television. Also, any incoming telephone calls can be displayed on the screen; the user can decide whether or not to accept the call.
Humidity Control Service (HCS) – Like the HVAC service, this service controls the humidity, by employing a humidifier and a dehumidifier. A humidistat is used as the input device.
Carbon Monoxide Safety Service (CMSS) – This service monitors the CO content in the air. If the levels are too high, it will open a window and alert the user.

4.2. The test cases

For testing, all of the above services were used. The security service and the carbon monoxide safety service have priorities set: 2 and 3 respectively. Through experimentation, it was found that a 'first come, first served' approach was adequate for all but safety services.
However, it must be noted that this depends on user preferences; a user may therefore wish to assign a priority to every service. The following list provides details of the individual interaction scenarios.

Scenario 1: HSS:AFH vs PCS – During the absence of the home owner, the security service's away from home feature turns appliances on to give the impression that someone is at home. However, the power control service turns appliances off to save energy. Assuming the security service is running first, it has locked the TV and lamp with NS, so no other service can access them. Consequently, when the power control service tries to gain access to turn the devices off, its request is denied. If, however, the power control service accesses and locks the devices first, then since the security service has a higher priority, the power control service will be overridden by the security service. Therefore the security service will be allowed access to both the lamp and the TV.

Scenario 2: HSS:Alarm vs HES – The security service is triggered while the home entertainment service is recording a TV programme on the VCR (locked NS). The HSS wants to record pictures from the security camera on the VCR. Because the HSS
has a higher priority than the HES, the HES has to give up control of the VCR. The VCR can then be reassigned to the security service. Thus, the picture of the burglar can be recorded on tape. If the HSS is active first and the HES tries to record a programme, the HSS:Alarm feature has already acquired the VCR and locked it with NS. The home entertainment service is then refused, as it has a lower priority.

Scenario 3: HSS:AFH vs HVAC – Here, the away from home feature of the security service is turning on lights and closing the blinds following a pattern to make the home appear occupied. However, the HVAC service wants to increase the room temperature by opening the blinds. The HVAC service cannot get access to the blinds, as they are locked (NS) and it has a lower priority than the security service. The blinds remain closed.

Scenario 4: HVAC vs HSS:Alarm – The HSS service monitors movement in the home. The HVAC service is set to cool the home. Realising it is cooler outside, the climate control service wants to open the window to allow cool air in, creating movement and triggering the alarm service. This interaction was discussed in detail in the previous section.

Scenario 5: Within HVAC – Issue 1 – The problem here is that the climate control service has been configured incorrectly and can potentially allow both the air-conditioner and heater to operate simultaneously. Since the heater increases the room temperature, the temperature variable is locked with S+. The climate control service then tries to turn on the air-conditioner. Since the air-conditioner device is not in use it gains access, but it cannot lock the room temperature variable with S–, as this is incompatible with S+. Both devices also have a side effect on humidity (SE). Both devices in this scenario can successfully lock the humidity variable with SE; for the interaction detection this is without consequence.
Scenario 6: Within HVAC – Issue 2 – This interaction is similar to the one above, but here the heater and air-conditioner are not used at the same time, but sequentially in a loop: the home is heated up, only to be cooled down again, then heated again, and so on. Unfortunately, this is one type of interaction which this approach cannot detect. This is because after the heater, or air-conditioner, has completed its task, it releases the locks on all its variables. These variables are then free to be locked by any other device.

Scenario 7: Within HSS – This interaction occurs when the alarm feature is monitoring the home for intruders and the away from home feature wants to lower the blinds, creating movement which in turn triggers the alarm. In this example, the alarm is armed and the movement variable is locked with NS. When the away from home feature tries to lower the blinds, it needs access to the blind device (ok) and to the movement and temperature variables. However, locking the movement variable is not permitted. Therefore, the blinds cannot be lowered and the interaction is avoided.

Scenario 8: Within HVAC – The first seven scenarios have shown how the approach avoids negative interactions. However, the approach must also allow devices to cooperate and work together to achieve a common goal. This example shows two heaters working together to heat a room quickly. The service locates two heaters. Since no other service is using the devices, access
M. Wilson et al. / Considering Side Effects in Service Interactions in Home Automation
185
is granted. Then the temperature variable needs to be locked with S+ by both heaters. The first heater is able to lock with this value successfully. When the second heater tries to lock with S+, the variable is already in use. However, this is allowed because its value of S+ is compatible with the locked value of S+. Furthermore, because of the side effect of both heaters in terms of decreasing the humidity, the humidity variable needs to be locked by both using SE. Again this is permitted. Both heaters can operate.

Scenario 9: HES and HVAC – This is a second positive interaction. The air conditioner is switched on by the HVAC service and the owner is watching a movie through the entertainment service. Now the entertainment service tries to close the blinds to prevent any glare on the TV. The air conditioner has locked the temperature variable with S–. The entertainment service gets access to the blind device (NS) and the movement variable (SE). The temperature variable is in use (S–) but the HES service also uses S– for access. This is allowed and the blinds can close.

Scenario 10: CMSS vs HSS – The security service is armed, but as the carbon monoxide levels exceed safe levels, the CMSS tries to open a window. This scenario requires priorities. HSS is assigned a priority of 2 and the CMSS has been set priority 3. If, while the security service is active, the carbon monoxide levels exceed a safe limit, the CMSS must open a window. Access to the window is granted (NS); however, the movement variable is locked by the HSS service (NS). But since CMSS has a higher priority than HSS, CMSS is able to get access to the variable. The window can be opened. A similar interaction may occur between the climate control service and CMSS. The climate control service may be trying to heat the home (using heaters); opening a window would let cold air in. The conflict is the same – the CMSS wants to open the window, whereas the other services (HVAC or HSS) want it to remain shut.
Scenario 11: HCS and HVAC – This scenario illustrates the side effect of the heater on the humidity variable (SE). Clearly, there is an issue between the humidity control service and the HVAC service on the humidity variable. However, as the heater accesses the humidity variable as a side effect (SE), this is compatible with the lock held by the HCS (S+), and both services are allowed to work simultaneously.

4.3. Results

Table 1 summarises the results obtained from experimentation. All positive interactions have been allowed by the approach, and all negative interactions except for the looping ones have been avoided. Hence the approach can successfully handle a very large proportion of interactions in the home. Table 2 shows the types of interactions which the approach avoids. Kolberg et al. [6] identified four types of interaction, with looping a special case of SAI. The approach is not able to handle the looping interaction type because the locked variables are released, and can then be locked by another device.
Scenario | pos./neg. | Description of Interaction            | Handled by approach
1        | negative  | HSS:AFH with PCS                      | √
2        | negative  | HSS:Alarm with HES                    | √
3        | negative  | HSS:AFH with HVAC                     | √
4        | negative  | HVAC with HSS:Alarm                   | √
5        | negative  | Within HVAC (issue 1): wasting energy | √
6        | negative  | Within HVAC (issue 2): looping        | ×
7        | negative  | Within HSS: AFH with Alarm features   | √
8        | positive  | Within HVAC                           | √
9        | positive  | HES with HVAC                         | √
10       | negative  | CMSS with HSS                         | √
11       | negative  | HCS with HVAC                         | √

Table 1. Summary of results

Interaction type                    | Handled by approach
Multiple action interaction (MAI)   | √
Sequential action interaction (SAI) | √
Looping (special case of SAI)       | ×
Shared trigger interaction (STI)    | √
Missed trigger interaction (MTI)    | √

Table 2. Interaction types handled by the approach
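The variable-locking rules exercised in these scenarios can be summarised in a small executable sketch. This is an illustrative model only, not the authors' implementation; the lock encodings (NS, S+, S-, SE) follow the scenario descriptions, and the priority-override rule reflects Scenarios 1-3 and 10.

```python
def compatible(held, requested):
    """Two lock values are compatible if both are the same shared lock
    (S+ with S+, S- with S-) or if either is a mere side effect (SE).
    An exclusive NS lock is never shared."""
    if held == "NS" or requested == "NS":
        return False
    if held == "SE" or requested == "SE":
        return True
    return held == requested

def may_lock(locks, variable, requested, priority):
    """A service may lock a variable if it is free, if its requested value
    is compatible with the held one, or if it outranks the lock holder."""
    held = locks.get(variable)
    if held is None:
        return True
    held_value, held_priority = held
    return compatible(held_value, requested) or priority > held_priority
```

With these rules, the air conditioner's S- request against a heater's S+ lock is refused (Scenario 5), a second heater's S+ request is granted (Scenario 8), and the higher-priority CMSS overrides the NS lock held by HSS (Scenario 10).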
5. Conclusions

In this paper we presented a runtime technique which prevents negative interactions while allowing positive interactions to occur. The approach considers incompatible accesses to devices and to aspects of the environment (environmental variables). This paper introduces side effects of devices on the environment as a novel concept in feature interaction, improving the accuracy of the approach. With side effects, minor influences of devices on the environment (e.g. a heater reduces humidity) can be captured in the model. A comprehensive case study with a wide variety of services and devices was presented, demonstrating the power of the approach. It was shown that, except for looping interactions, the approach can handle all other types of interactions identified in home networks. Because the approach builds on the OSGi gateway, it is protocol independent. That is, no restrictions are put on which communication protocols can be used by the devices. Interactions between services in the home are a real issue. Services from a number of sources will "meet" for the first time when deployed into the home. However, it cannot be expected that a technical expert will be at hand to sort out any incompatibilities between services and devices. It just has to work.
References

[1] OSGi: The Open Services Gateway Initiative. http://www.osgi.org.
[2] P. Dobrev, D. Famolari, C. Kurzke, and B.A. Miller. Device and service discovery in home networks with OSGi. IEEE Communications Magazine, 40(8), August 2002.
[3] E.J. Cameron, N. Griffeth, Y.-J. Lin, M.E. Nilson, W. Shnure, and H. Velthuijsen. Towards a feature interaction benchmark for IN and beyond. IEEE Communications Magazine, 31(3):64–69, March 1993.
[4] R. Hall. Feature interactions in electronic mail. In [19], pages 67–82, May 2000.
[5] M. Weiss. Feature interactions in web services. In [20], pages 149–156, June 2003.
[6] M. Kolberg, E. Magill, and M. Wilson. Compatibility issues between services supporting networked appliances. IEEE Communications Magazine, 41(11), November 2003.
[7] M. Wilson, E.H. Magill, and M. Kolberg. An online approach for the service interaction problem in home networks. In IEEE Consumer Communications and Networking Conference (CCNC-2005), Las Vegas, USA, January 2005.
[8] M. Nakamura, H. Igaki, and K. Matsumoto. Feature interactions in integrated services of networked home appliances. In [21], pages 236–251, June 2005.
[9] A. Metzger and C. Webel. Feature interaction detection in building control systems by means of a formal product model. In [20], pages 105–122, June 2003.
[10] F.T.H. den Hartog, M. Balm, C.M. de Jong, and J.J.B. Kwaaitaal. Convergence of residential gateway technology. IEEE Communications Magazine, 42(5), May 2004.
[11] A. Silberschatz and P.B. Galvin. Operating System Concepts. Addison Wesley, 5th edition, 1998.
[12] LetsAutomate.co.uk. http://www.letsautomate.co.uk/, viewed: 18/05/2007.
[13] CyberLink development package for UPnP devices. http://www.cybergarage.org/net/upnp/java, viewed: 18/05/2007.
[14] UPnP Basic Device Specification. http://www.upnp.org/standardizeddcps/basic.asp, viewed: 18/05/2007.
[15] SIP Express Router (SER). http://www.iptel.org/ser/, viewed: 18/05/2007.
[16] The Open Services Gateway Initiative. OSGi Service Platform, Release 3. IOS Press, 2003.
[17] Jesse Peterson X.10 API. http://www.jpeterson.com/category/software/, viewed: 18/05/2007.
[18] Java Communications API. http://java.sun.com/products/javacomm/index.jsp, viewed: 18/05/2007.
[19] M. Calder and E. Magill, editors. Feature Interactions in Telecommunications and Software Systems VI. IOS Press, Amsterdam, May 2000.
[20] D. Amyot and L. Logrippo, editors. Feature Interactions in Telecommunications and Software Systems VII. IOS Press, Amsterdam, June 2003.
[21] S. Reiff-Marganiec and M. Ryan, editors. Eighth International Conference on Feature Interactions in Telecommunications and Software Systems. IOS Press, June 2005.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Detecting and Resolving Undesired Component Interactions by Runtime Software Architecture

Gang HUANG 1
Key Laboratory of High Confidence Software Technologies, Ministry of Education
School of Electronics Engineering and Computer Science, Peking University
100871, Beijing, China
Abstract. Middleware enables distributed components to interact with each other in diverse and complex styles, which increases the occasions for undesired component interactions (UCIs). Such UCIs may cause serious problems, e.g. quality violation, function loss, and even system crash. In this position paper, we present an online approach for the detection and resolution of UCIs based on runtime software architecture (RSA), which makes three contributions. First, RSA is implemented by reflection, a powerful and popular mechanism to monitor and control component-based systems at runtime, so that UCIs can be detected and resolved online. Second, RSA is a type of software architecture, a comprehensive and commonly used model for the design and analysis of component-based systems, and thus helps to represent and analyze UCIs and plan corresponding resolutions. Third, RSA can monitor and adapt runtime systems automatically under the guidance of well-defined patterns, so that fully automatic detection and resolution of UCIs become possible and reusable.
1. Introduction

As a popular runtime infrastructure for distributed systems, middleware has to support much more diverse and complex interactions to cope with the increasing demand on information technology and the extremely open and dynamic nature of the Internet. Middleware utilizes the underlying system software (i.e. operating systems, network protocols and database management systems) to enable the interaction between distributed components, which may be developed in different programming languages and run on different machines. Basically, middleware-enabled interaction can be classified by the number of servers and the ordering of messages. If we consider functional or non-functional properties, middleware-enabled interaction becomes much more complex, because middleware usually enforces transaction, security, persistence and many other well-recognized non-functional properties by intercepting component interactions. These middleware capabilities facilitate the development, deployment and integration of
Corresponding Author: Gang HUANG; Email: [email protected], Tel: 86-10-62757670, Fax: 86-10-62751792.
G. Huang / Detecting and Resolving UCIs by Runtime Software Architecture
189
distributed components, but they also increase the occasions for distributed components to interact with each other in an undesired way, mainly because of a dilemma: on one side, more and more functions and qualities related to interactions are encapsulated by middleware and usually become invisible and uncontrollable for applications; on the other side, application developers have to gain enough knowledge and control of interactions to ensure that these interactions are performed in the way desired by customers. The undesired component interactions (UCIs) may cause serious problems, e.g. quality violation, function loss, and even system crash. In this position paper, we present an online approach to the detection and resolution of UCIs based on our previous work. This approach originates from studying feature interaction problems outside of telecommunications; most related work can be found in [1].
2. Online Approach based on Runtime Software Architecture

The nature of middleware and distributed components determines that the detection and resolution of UCIs have to be done at runtime. Usually, application developers cannot get enough details of middleware, e.g. source code and design documents. Even if they use open source middleware, they cannot gain enough knowledge, because it is very hard to completely study source code that usually runs to hundreds of thousands or even millions of lines. Further, the semantics of interactions are determined mainly by some critical runtime information. As a result, it is too difficult to detect UCIs only from the development artifacts of applications and middleware. Whatever the UCIs are, the corresponding resolution has to change the system more or less. Since a large-scale distributed system is usually operated in different locations by different parties, and 7 days x 24 hours high availability is a critical requirement for such systems, it is impractical to stop the system to perform resolutions.

The monitoring and control of runtime systems should be powerful enough, customizable and extensible. Interactions between distributed components usually traverse most parts of the middleware, whose details should be monitored and controlled for detecting and resolving UCIs. However, thorough monitoring and control will inevitably cause a significant performance penalty, while the detection of a given UCI may only take place for a while and the resolution only happens when a UCI is detected. Thus, the monitoring and control should be customizable to limit the performance penalty. Further, the monitoring and control provided by the middleware reflect the perspective of middleware vendors and cannot fit all conditions, since UCIs are usually seen from the perspective of application developers. Consequently, the monitoring and control should be extensible to application-specific mechanisms.
Since UCIs are always specific to application semantics, it is very difficult (in fact, impossible in many cases) to automatically detect and resolve all UCIs. This fact leads to the other two principles of our approach: a holistic and comprehensible model is necessary to represent and organize monitored results and plan resolutions, so that people or intelligent agents can understand the system completely, including runtime details and application semantics, from global views; and a mechanism to define executable actions for detection and resolution is needed, so that dealing with UCIs can be semi- or fully automatic, and experts' skills and experiences with UCIs can be reused across different applications.
According to the above rationales, we propose an online approach to detecting and resolving UCIs, as shown in Figure 1.
Figure 1. Online Approach for UCI Detection and Resolution
First of all, reflection is used to implement powerful, customizable and extensible monitoring and control of runtime systems. Reflection is a widely used programming technique which ensures that changes made to one set of software runtime entities immediately lead to corresponding changes in another set, and vice versa [4]. We employ reflective middleware to monitor and control the state and behavior of the whole middleware and some parts of the applications. We also employ a reflective component model to install and manage application-specific monitoring and control implemented by application developers.

Secondly, runtime software architecture (RSA) is used to build a holistic, understandable and operable model of the runtime system. RSA allows people or intelligent agents to perform real-time observation and adaptation of runtime systems from the perspective of software architecture [4]. There are two types of RSA: the platform RSA is a bottom-up view for middleware vendors and operators, while the application RSA is a top-down view for application developers and administrators. More importantly, RSA can enrich its semantics by automatically analyzing the design-time software architectures of middleware and applications.

Thirdly, antipatterns are used to define a series of monitoring and control actions for automating the detection and resolution of UCIs. An antipattern is a special type or extension of design pattern which defines a bad or poor implementation of a part of software and provides one or more refactoring plans to revise it. Unlike current antipattern detection and refactoring, which are applied to source code and design artifacts (e.g. UML models), we apply antipatterns to runtime systems via RSA [5]. For the detection and resolution of UCIs, the part of a runtime system containing UCIs is considered a bad implementation in an antipattern, while the resolutions are described as refactoring plans.
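The derivation of a refactoring plan from an antipattern and a good pattern can be sketched as below. The encoding of a pattern as a set of architectural links, and the names in the example, are assumptions for illustration; this is not the paper's actual model.

```python
def derive_refactoring(antipattern, good_pattern):
    """Derive refactoring operations as the set difference between two
    patterns, each modelled as a set of (source, connector, target) links.
    Removals are ordered before additions, a simple stand-in for the
    dependency analysis described in the text."""
    removals = [("remove", link) for link in sorted(antipattern - good_pattern)]
    additions = [("add", link) for link in sorted(good_pattern - antipattern)]
    return removals + additions
```

Diffing a hypothetical bad configuration against a desired one yields a remove operation for each link only in the antipattern and an add operation for each link only in the good pattern, which a runtime architecture layer could then apply in order.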
After such an antipattern is defined, it will be automatically executed by RSA. It should be noted that the goal of the above approach is to achieve self-adaptive middleware-based systems. We have already demonstrated this approach in the contexts of performance optimization, reliability improvement and system deployment, and we consider UCIs another attractive and important context.
3. Illustrative Sample

We have already found many UCIs in middleware-based systems [1]. Here, we illustrate the whole approach with a UCI which is quite similar to the feature interaction between call forwarding (CF) and terminating call screening (TCS).
Figure 2. Illustrative Sample of Automated UCI Detection and Resolution
Figure 2.A shows the pattern that abstracts the implementation of decentralized load balancing: (1) a client component sends a call to a server component; (2) after the server receives the call, it either processes the call if it is free or has enough resources, or (3) forwards the call to another server if it is busy or does not have enough resources. To prevent malicious or careless client requests, every server performs access control by authenticating and authorizing the principal embedded in a client call. In decentralized load balancing, each server is independent and does access control by itself. If a client has the access right to server X but not to server Y, the client call will be rejected when server X forwards the call to server Y. However, different mechanisms and configurations of decentralized load balancing middleware will cause non-deterministic results which may not be desired by some stakeholders. If the middleware only changes the target of the call from server X to server Y (Figure 2.C), the call will be rejected, as desired by server Y. If the middleware further changes the principal of the call to runAs (Figure 2.B, meaning the call is sent by server X instead of by the client), the call will be accepted, as desired by server X, finally satisfying the requirement of the client (i.e. processing the call). A UCI occurs! Just like the feature interaction between CF and TCS in telecom, the above UCI can be resolved by restraint: either restraining the access control of server Y by changing the principal of the call to runAs (Figure 2.B), or restraining the processing requirements of the client and server X by keeping the principal unchanged (Figure 2.C). Unlike the telecom case, the UCI may be better resolved by coordination. In decentralized load balancing, a server may reject a call for more than one reason, e.g. malicious calls, lack of resources, or preventing a client or a group from consuming too many resources.
This implies that a client call may be rejected only because it reaches the server at an improper time. Assume server Y rejects the client call because the group the client belongs to consumes too many resources (like the access control of the group account of the IEEE digital library). Then server X can resend the call to server Y after a while (Figure 2.D; more details can be found in [2]) and the call may be processed by server
Y after several resends. Under such coordination between the two servers, the UCI is resolved much better than by the restraining resolution. Though the above detection and resolution are manual and ad hoc, they would become automatic and reusable if people defined the four UML models in Figure 2, because such pattern-like models are general enough for different applications and could be automatically executed by RSA [5]. Briefly, Figure 2.A is regarded as an antipattern and can be automatically detected by analyzing the execution trace. Figures 2.B, C and D are regarded as solutions, or good patterns. By comparing the difference between the antipattern and a good pattern, a set of refactoring operations can be derived and then ordered by analyzing the dependencies between them. Finally, these ordered refactoring operations are translated to the adaptation mechanisms provided by the middleware, which executes the refactoring at runtime. In a J2EE performance benchmark, the detection mechanisms (which can be activated or deactivated on demand) increase the response time by 10% and have very little impact on throughput. The resolution mechanisms are activated only when the UCI occurs. RSA has already been applied to PKUAS (our J2EE application server), JOnAS (an open source J2EE application server) and Fractal (an open source reflective component model). In future work, we will study how to automatically evaluate the correctness, effectiveness and cost of a resolution, for example by architecture analysis and reasoning methods, before actual adaptation. We will also apply RSA more widely and study more UCI cases.
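The interaction and its resolutions can be mimicked in a few lines. The access-control lists, function names and busy flags below are hypothetical; the sketch only shows how rewriting the principal to the forwarding server (runAs, Figure 2.B) versus keeping the client principal (Figure 2.C) flips the outcome, and how resending (Figure 2.D) can eventually succeed when a rejection is transient.

```python
ACL = {"X": {"client", "X"}, "Y": {"X"}}  # principals each server accepts

def handle(server, principal, busy, run_as=False):
    """Process a call, forwarding between X and Y when the receiver is busy."""
    if principal not in ACL[server]:
        return "rejected"
    if busy.get(server):
        target = "Y" if server == "X" else "X"
        # Figure 2.B rewrites the principal to the forwarding server;
        # Figure 2.C leaves the client principal unchanged.
        return handle(target, server if run_as else principal, busy)
    return "processed"

def resend_until_accepted(accepts, call, max_retries=3):
    """Figure 2.D: server X re-sends a rejected call, assuming the
    rejection is transient (e.g. a temporary resource quota)."""
    for attempt in range(max_retries + 1):
        if accepts(call, attempt):
            return ("processed", attempt)
    return ("rejected", max_retries)
```

When server X is busy, forwarding with the client principal is rejected by server Y, while forwarding with run_as=True is processed: the divergence between these two outcomes is the UCI.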
Acknowledgements This work has been supported by the National Grand Fundamental Research 973 Program of China under Grant No.2005CB321805, the National Natural Science Foundation of China under Grant No. 90412011, 90612011 and 60403030.
References

[1] Huang, G., X. Liu, H. Mei. An Online Approach to Feature Interaction Problems in Middleware-based Systems. Science in China, Series F, Springer, to appear in 2007.
[2] Liu, T., G. Huang, G. Fan, H. Mei. The Coordinated Recovery of Data Service and Transaction Service in J2EE. 29th Annual International Computer Software and Applications Conference (COMPSAC), Edinburgh, Scotland, 2005, pp. 485-490.
[3] Liu, X., G. Huang, W. Zhang, H. Mei. Feature Interaction Problems in Middleware Services. 8th International Conference on Feature Interactions in Telecommunications and Software Systems (ICFI), UK, 2005, pp. 313-319.
[4] Huang, G., H. Mei, F. Yang. Runtime Recovery and Manipulation of Software Architecture of Component-based Systems. International Journal of Automated Software Engineering, Springer, 2006, Vol. 13, No. 2, pp. 251-278.
[5] Ling Lan, Gang Huang, Weihu Wang, Hong Mei. A Middleware-based Approach to Model Refactoring at Runtime. 14th Asia-Pacific Software Engineering Conference, 2007, pp. 246-253.
Doctoral Symposium
Sensor Network Policy Conflicts

Gavin A. Campbell
Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA, UK
e-mail: [email protected]

Abstract. Policy conflict is the equivalent of feature interaction in traditional services. Conflicts between the actions of policies occur at execution time. Potential conflicts must be detected and resolved. Using an established policy system and core language, a specialised policy language for wireless sensor network management has been defined. This short paper describes the policy language and discusses policy conflicts, external sensor network restrictions, and possible approaches to resolving these issues.

Keywords. Ontology, Policy, Policy Conflict, Sensor Network
1. Introduction

A good analogy can be drawn between feature interaction as understood in telephony and policy conflict as found in policy-based management. This paper describes a policy-based approach being developed for the PROSEN project (http://www.prosen.org.uk). The project is developing condition monitoring techniques using wireless sensor networks, with wind farms being used to validate the approach. Within PROSEN, policies are designed to monitor and manage the sensor network. When two policies become eligible for execution at the same time, their actions may conflict. Potential conflicts must be detected and resolved appropriately. Policy conflict detection and resolution are discussed for sensor networks. The ACCENT project developed a policy-based system for (Internet) call control [1]. The approach employs a policy language named APPEL [2] with a generic foundation that is extended for particular application domains. Using the core APPEL language as a base, a policy language for sensor network management has been defined. Section 2 describes some commonly anticipated conflicts in sensor network policies, while section 3 suggests methods of resolving such policy conflicts.
2. Sensor Network Policy Conflicts

2.1. Policy Language Structure

APPEL is defined by a core language schema that is extended for each application domain. In particular, extensions define triggers, condition parameters and actions specific to the domain. The call control language specialisation uses a large selection of triggers
196
G.A. Campbell / Sensor Network Policy Conflicts
and actions, each with a small set of parameters (between zero and two). The policy language for sensor network management is radically different in detail, reflecting the obvious distinction between these fields. Sensor network policies have a single external trigger or action for communicating with an external entity such as a sensor node, an operator console or a software agent. A trigger is carried by device_in, while an action is carried by device_out. These carry five parameters, all strings. The 'type' defines the nature of the trigger or action, such as 'sensor reading' or 'display alert'. The 'entity' identifies the external component, with the 'instance' being a unique identifier for it. The 'period' defines the duration reported in a trigger or the time for which an action should be performed. Finally, 'parameters' carries a string of values that qualify the trigger or action. The 'type' and 'entity' parameters are mandatory, while the others are optional. The result is a very simple but powerful language. The approach to the language design is discussed in [3], while a complete definition of the language and sample policies are detailed in [2].

As there is only one external sensor network policy action, conflicts may occur between the parameters of a pair of 'device_out' actions. Conflicts must be detected and resolved by the policy system. Situations that may result in policy conflict are described in the following subsections.

2.2. General Policy Conflicts

Conflict occurs when two policies attempt to set the same entity instance simultaneously. For example, one policy may wish to set the wind speed reporting interval on sensor node 53 to 10 minutes, while another policy wishes to set this to 20 minutes. Similar conflicts could arise when setting threshold values or any entity parameter. The 'parameters' string changes format depending on the 'type' of action, allowing detection of conflicting values.
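Under an assumed dictionary encoding of a device_out action with the five parameters described above, this parameter-level conflict check can be sketched as:

```python
def conflicts(a, b):
    """Two device_out actions conflict when they address the same entity
    instance with the same action type but disagree on the optional
    'period' or 'parameters' values."""
    same_target = (a["type"] == b["type"] and a["entity"] == b["entity"]
                   and a.get("instance") == b.get("instance"))
    return same_target and (a.get("period") != b.get("period")
                            or a.get("parameters") != b.get("parameters"))
```

For the wind speed example, two actions setting node 53's reporting interval to 10 and 20 minutes conflict, while identical actions or actions aimed at different nodes do not.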
A policy action might also conflict with a prior action and not just a concurrent action. The approach therefore allows conflicts to be detected between actions and states (which incorporate a history of actions).

2.3. Goal Conflicts

Conflicts can also occur between policies and goals. A goal is a high-level operational objective defined by a human operator (e.g. 'maximise sensor battery life'). A goal is realised by refinement into a set of suitable policies in order to achieve it (e.g. 'use compression to minimise transmission time', 'transmit routine data only every hour'). Issues could arise when a policy attempts to change a value which inadvertently conflicts with a goal. For example, the goal of conserving battery power conflicts indirectly with a policy that requires wind speeds over 20 m/s to be reported immediately. In addition, conflicts may occur between goals themselves (e.g. the battery life goal conflicts with the goal of reporting significant anomalies promptly).

2.4. External Resource Conflicts

The conflicting situations discussed so far consider conflicts that arise within goals and policies. A further kind of conflict can arise within the sensor network itself. Sensor nodes have limited memory, bandwidth, electrical power and processing capacity. Such
constraints affect the ability of a sensor to perform multiple actions simultaneously. For example, one policy might require a sensor node to compute a cross-correlation of received data while another policy might require all data to be checksummed using a CRC. Superficially these actions are conflict-free, but as both are processor-intensive they cannot be carried out simultaneously. While this type of conflict could be viewed as out of the scope of the policy system, it is nonetheless desirable to account for resource restrictions.
3. Policy Conflict Resolution

In the ACCENT approach, resolutions are defined by policies that share the same structure as regular policies, but differ in their triggers and actions. Resolutions may be generic or specific to a sensor network. The following subsections give examples of conflict resolution for sensor networks.

3.1. General Policy Conflict Resolutions

A generic resolution selects one of the conflicting actions based on some general attribute. For example, it may select the earlier-defined policy or the one that applies to a higher domain (e.g. all wind speed sensors rather than a particular one). The APPEL language allows a policy to have a preference (i.e. priority) as to how strongly the policy should be applied in the event of conflict. Possible preferences are 'must', 'should' and 'prefer', with negative forms of these plus 'don't care'. A generic resolution might select the more strongly preferred policy. Generic resolutions are useful for similar kinds of actions. New generic resolutions for sensor network management might include choosing the action which best aids the achievement of a goal. This would involve analysing the actions as part of the goal-refinement process to determine if one action is more effective. Specific resolutions for policy conflict are the kinds of actions that a sensor network can perform. For example, a human operator might be alerted to the conflict and asked to take action. If the conflict is deemed to be non-critical (say, backing up logs), it might be resolved by delaying one action.

3.2. Goal Conflict Resolution

The resolution of conflicts involving goals has not yet been implemented and is largely in the early stages of consideration. Much as for policies, a goal might be defined with a preference as to how strongly it must be adhered to. Resolution may be deferred to goal refinement when operational policies are derived.

3.3. External Resource Conflict Resolution

To avoid sending actions that could overload the sensor resources, it is necessary to note external resource conflicts within the policy system. One solution might be to model resource information in the ontology that has already been developed for sensor networks. Each resource could be described along with its constraints. For example, limitations
could include data (bandwidth, processing), parameterisation (memory, processing), and computation (processing, memory, power). This information could be used by the policy system to link with possible action/parameter combinations and detect situations which may cause resource conflicts.
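As a sketch of how such ontology-derived constraints might be checked, assume each action type declares a resource demand and each node a capacity (all names and figures below are hypothetical):

```python
CAPACITY = {"processing": 100, "memory": 100}  # per-node limits

DEMAND = {  # resource demand of each action type
    "cross_correlation": {"processing": 70, "memory": 40},
    "crc_checksum": {"processing": 60, "memory": 10},
    "log_backup": {"processing": 10, "memory": 20},
}

def resource_conflict(action_a, action_b):
    """True if the combined demand of two actions exceeds any capacity,
    i.e. two individually viable actions cannot run concurrently."""
    return any(DEMAND[action_a][r] + DEMAND[action_b][r] > CAPACITY[r]
               for r in CAPACITY)
```

This reproduces the example in section 2.4: cross-correlation and CRC checksumming are individually acceptable, but both are processor-intensive and so conflict when scheduled on the same node.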
4. Conclusion

The topic of conflicts among policies for sensor networks has been discussed, along with possible resolution methods. Conflicts occur among policy actions. For sensor network policies, there is a single external action, so conflicts arise among parameter combinations. A conflict occurs when two actions attempt to perform the same task on the same entity instance. Clashes between policies and goals (high-level system objectives), or among goals themselves, are also anticipated. Conflict resolution involves selecting one of the conflicting actions (generic resolution) or executing specific actions. Further conflicts may occur external to the policy system due to resource limitations such as memory, power, processor capacity and bandwidth. Such constraints mean that two seemingly viable policy actions might not be possible concurrently. New techniques to model such circumstances are required, making use of resource characteristics modelled in an ontology.
Acknowledgements

The work here on the PROSEN project was supported by the UK Engineering and Physical Sciences Research Council under grant C014804. The policy language for sensor network management was developed jointly with my supervisor, Prof. Ken Turner. The policy system and core policy language were initially developed by members of the ACCENT project. Thanks are also due to the PROSEN team at the Universities of Essex, Lancaster and Strathclyde.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Considering Safety and Feature Interactions for Integrated Services of Home Network System

Ben YAN
Graduate School of Information Science, Nara Institute of Science and Technology

Abstract. Assuring safety in the home network system (HNS) is a crucial issue in guaranteeing high quality of life. In this position paper, we first review our previous work, formulating three kinds of safety for HNS integrated services: local safety, global safety, and environment safety. We then present a method that validates safety for integrated services. Finally, we discuss a perspective on how safety can be assured when considering the feature interaction problem.

Keywords. Home Network System, Safety Property, Design by Contract, Java Modeling Language, Feature Interactions
1. Formalizing Safety in Home Network System

The home network system (HNS, for short) comprises networked home appliances and sensors, and is one of the most promising applications of the emerging ubiquitous computing technologies. The HNS enables flexible integration (or orchestration) of different home appliances and sensors through the network, which achieves value-added integrated services [5]. In developing and providing HNS integrated services, the service provider must guarantee that a service is safe for inhabitants, house properties and their surrounding environment. That is, a service must be free from any condition that can cause [injury or death to home users and neighbors], or [damage to or loss of home equipment and the surrounding environment].

Since a service is typically implemented as a software application, appliances are often operated automatically by the application, rather than by the human user. Also, one integrated service operates multiple appliances, which yields global dependencies among different appliances. Moreover, since multiple integrated services can be executed, unexpected functional conflicts may occur among the services. Thus, a single fault in the service application can cause serious accidents to the user.

In general, safety is characterized by properties to be satisfied by a user (or a system). In our previous work [6], we formulated three kinds of safety properties in the context of HNS integrated services.

Local Safety Property A safety property lp is called a local safety property iff lp is defined within a single appliance d in the HNS. Typically, lp is derived as a safety instruction for using d.
Let LocalProp(d) = {lp1, lp2, ..., lpm} be the set of all local safety properties with respect to the appliance d. For a given integrated service s, let App(s) = {d1, d2, ..., dn} be the set of networked appliances used by s. Then we define LocalProp(s) = ∪_{di ∈ App(s)} LocalProp(di), the set of local safety properties with respect to the service s. The following property is an example of a local safety property of an electric kettle:

[L1] Do not open the lid when the electric kettle is in the boiling mode.

Global Safety Property A safety property gp is called a global safety property iff gp is defined over multiple appliances d1, d2, ..., dn. Typically, gp is a safety instruction of an integrated service s that uses d1, d2, ..., dn. We denote by GlobalProp(s) = {gp1, gp2, ..., gpk} the set of all global safety properties for the service s. The following is an example of a global safety property for any integrated service that uses a gas valve and kitchen equipment:

[G1] While the gas valve is opened, the ventilator must be turned on.

Environment Safety Property A safety property ep is called an environment safety property iff ep is defined by environmental or residential constraints, which exist independently of any appliances or services. Let EnvProp = {ep1, ep2, ..., epl} denote the set of all environment properties given. The following environment property might be derived from the safety guideline of the house:

[E1] The total current used simultaneously must not exceed 30 A.

Safety of HNS Integrated Services Let P be a set of safety properties. For a service s, we write s ⊨ P iff s satisfies all properties contained in P. Then we define the safety of s as follows:

• s is locally safe iff s ⊨ LocalProp(s).
• s is globally safe iff s ⊨ GlobalProp(s).
• s is environmentally safe iff s ⊨ EnvProp.
• s is safe iff s is locally, globally and environmentally safe.

Thus, the safety validation problem is defined as follows:
Input: An integrated service s, LocalProp(s), GlobalProp(s), EnvProp.
Output: s is safe or not.
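As a hedged illustration (the class and method names are invented for this sketch, not taken from the paper), the environment property [E1] can be pictured as a simple predicate over the current draws of the appliances active at the same time:

```java
import java.util.List;

// Illustrative sketch of environment safety property [E1]:
// the total current drawn simultaneously must not exceed 30 A.
public class EnvSafety {

    public static final double MAX_CURRENT_AMPS = 30.0;

    // Each value is the current draw (in amperes) of one active appliance.
    public static boolean satisfiesE1(List<Double> applianceCurrents) {
        double total = applianceCurrents.stream().mapToDouble(Double::doubleValue).sum();
        return total <= MAX_CURRENT_AMPS;
    }
}
```

A service s is then environmentally safe, with respect to [E1] alone, iff this predicate holds in every state s can reach.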
2. Safety Validation by Design by Contract

Since any safety flaw in an integrated service s can lead to serious accidents, we consider it crucial to remove the flaws before s is actually deployed in the HNS. In [6], we proposed a framework of safety validation using object-oriented modeling and design by contract (DbC) [4], which is applied in the testing phase of s. The framework first introduces an object-oriented modeling technique of the HNS to clarify the relationships among the HNS components (i.e., appliances, services and the home) [5].

Figure 1. Object-oriented model of HNS

Fig. 1 shows an overview of the proposed model. The model mainly consists of three kinds of objects (classes): Appliance, Service, and Home. These classes form the following relationships to match the intuition of the HNS and integrated services: [R1: a Home has multiple Appliances], [R2: a Home has multiple Services], and [R3: a Service uses multiple Appliances]. Assuming that the HNS is implemented according to the model, we then embed the given safety properties into the source code of the appropriate objects. For this, we encode each safety property as a contract of DbC (i.e., a pre-condition, a post-condition, or a class invariant). More specifically, LocalProp(s), GlobalProp(s) and EnvProp are encoded as DbC contracts and embedded into the Appliance, Service and Home objects, respectively. The source code with the DbC contracts is compiled into instrumented target code which contains check routines for the contracts. Then we conduct testing of the instrumented code. While running the tests, if any DbC contract is broken, an exception is thrown and the safety flaw is thereby detected.

Figure 2. Safety validation with JML

Fig. 2 shows an overview of the safety validation, where the source code is written in the Java language and JML (the Java Modeling Language) [1,3] is used for writing the DbC contracts.
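As a concrete illustration of how a local safety property becomes a DbC contract, the fragment below encodes [L1] as a JML-style precondition on an electric kettle class. The class and method names follow Fig. 1, but the JML clause and the hand-written runtime check (which mimics what JML compiler instrumentation would enforce) are this author's hedged reconstruction, not code from the paper:

```java
// Sketch: local safety property [L1] ("do not open the lid when the electric
// kettle is in the boiling mode") encoded in the style of a JML precondition.
public class ElectricKettle {

    public static final int IDLE = 0;
    public static final int BOILING = 1;

    private int heatingMode = IDLE;
    private boolean lidOpen = false;

    public void setWorkingMode(int mode) {
        this.heatingMode = mode;
    }

    //@ requires heatingMode != BOILING;   // JML contract for [L1]
    public void openLid() {
        // Hand-written check emulating the instrumented precondition:
        if (heatingMode == BOILING) {
            throw new IllegalStateException("[L1] violated: lid opened while boiling");
        }
        lidOpen = true;
    }

    public boolean isLidOpen() {
        return lidOpen;
    }
}
```

During testing, a test case that calls openLid() while the kettle boils makes the contract check throw, exposing the safety flaw.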
3. Feature Interactions and Safety in HNS

The safety validation framework presented above is basically for each individual integrated service, and is executed before the service is deployed in the HNS. However, even if every service is proven to be safe, combined use of multiple services may violate some safety properties, due to feature interactions (FIs) among the services. The safety violation by the FIs can be formulated as follows. For a given pair of integrated services s1 and s2:

FI-(L) (Local Safety Violation): [s1 ⊨ LocalProp(s1)] ∧ [s2 ⊨ LocalProp(s2)] ⇒ [s1 + s2 ⊭ LocalProp(s1) ∪ LocalProp(s2)].

FI-(G) (Global Safety Violation): [s1 ⊨ GlobalProp(s1)] ∧ [s2 ⊨ GlobalProp(s2)] ⇒ [s1 + s2 ⊭ GlobalProp(s1) ∪ GlobalProp(s2)].

FI-(E) (Environment Safety Violation): [s1 ⊨ EnvProp] ∧ [s2 ⊨ EnvProp] ⇒ [s1 + s2 ⊭ EnvProp].
where the operator + denotes a composition operator of two services (the detailed semantics of the composition are not given here). The above definitions of the safety violation appear quite similar to the definition of conventional feature interactions in telephony. However, there are some domain-specific issues. Taking them into account, we are currently developing a method for safety validation with FIs.

• The three types of safety violation could be dealt with by different methods.
• The safety properties in the HNS must be assured at all costs. There are no desirable interactions with respect to the safety violation.
• The formalization of FIs in the HNS presented by Nakamura et al. [5] would contain some cases of the safety violations. However, not all interactions cause a safety violation.
• Our safety validation framework [6] can be used only if both s1 and s2 are provided by the same service provider. Otherwise, an alternative online approach is necessary.
• The online approach would be implemented on the central home server, which manages all appliances and environment properties.
• Sophisticated execution control of multiple services (e.g., resource locking [2]) would be promising to prevent FIs that lead to the safety violation. The execution control restricts the semantics of the service composition (i.e., the operator +).

Acknowledgment: This research was partially supported by the Comprehensive Development of e-Society Foundation Software program of the Ministry of Education, Culture, Sports, Science and Technology, by the Ministry of Education, Science, Sports and Culture through a Grant-in-Aid for Young Scientists (B) (No. 18700062) and Scientific Research (B) (No. 17300007), and by JSPS and MAE under the Japan-France Integrated Action Program (SAKURA). The author thanks Prof. Masahide Nakamura at Kobe University, Prof. Lydie du Bousquet at Joseph Fourier University, and Prof. Ken-ichi Matsumoto at Nara Institute of Science and Technology for the supervision of this research.

References

[1] L. du Bousquet, Y. Ledru, O. Maury, and P. Bontron, "A case study in JML-based software validation," Proc. of 19th Int. IEEE Conf. on Automated Software Engineering (ASE'04), Linz, pp. 294-297, IEEE Computer Society Press, Sep. 2004.
[2] M. Kolberg, E. H. Magill, and M. Wilson, "Compatibility issues between services supporting networked appliances," IEEE Communications Magazine, vol. 41, no. 11, pp. 136-147, Nov. 2003.
[3] G. T. Leavens and Y. Cheon, "Design by Contract with JML," Internet: http://www.jmlspecs.org, May 2006.
[4] B. Meyer, "Applying Design by Contract," IEEE Computer, vol. 25, no. 10, pp. 40-51, Oct. 1992.
[5] M. Nakamura, H. Igaki, and K. Matsumoto, "Feature Interactions in Integrated Services of Networked Home Appliances -An Object-Oriented Approach-," Proc. of Int'l. Conf. on Feature Interactions in Telecommunication Networks and Distributed Systems (ICFI'05), pp. 236-251, Jul. 2005.
[6] B. Yan, M. Nakamura, L. du Bousquet, and K. Matsumoto, "Characterizing Safety of Integrated Services in Home Network System," Proc. of 5th Int'l. Conf. on Smart Homes and Health Telematics (ICOST2007), pp. 130-140, Jun. 2007.
Problem-Oriented Feature Interaction Detection in Software Product Lines

Andreas CLASSEN
[email protected]
Computer Science Department, University of Namur, 5000 Namur, Belgium
Abstract. Feature interaction detection in the context of systems that are highly integrated into their environment, such as embedded or software-intensive systems, is different from classical feature interaction detection. The physical environment may be “the source of additional interactions” as interactions may be caused by, and occur in, the system’s physical environment. We thus propose an approach for automated detection of feature interactions in the environment which is based on feature diagrams capturing variability, problem diagrams describing the system in its context and event calculus formulae allowing for automated reasoning. Feasibility of the approach is demonstrated through a proof-of-concept tool implementation and an in-depth illustration. Keywords. Formal Verification, Feature Interactions, Software Product Lines, Requirements Engineering
1. Introduction Systems that are highly integrated into their environment are a focus of feature interaction detection research [5]. As Metzger points out, feature interaction detection in this context is different from classical feature interaction detection, because the physical environment may be “the source of additional interactions” [5]. This was already illustrated by Calder et al. [1] on a control system for automobiles. Another interesting example is the case of smart home control systems [4,5]. Interactions can actually pass through shared environment variables. The heating service of such a system, for instance, can start a fan, causing false alarms in the security service [4]. These features (or services) are often developed independently, either by different service providers or by different teams in a company offering a large product line [7] to their customers. Eventually, this means that feature interaction detection in this context has to take into account two different perspectives. On the one hand, the software product lines perspective is needed in order to determine the actual systems that have to be verified. The system description perspective, on the other hand, has to be considered because it provides the descriptions against which interaction-related properties will eventually be checked. In the context of our approach, we further structure this last perspective by adopting the Zave and Jackson requirements engineering reference model [9] which
divides system descriptions into three categories: requirements (R), domain assumptions (W) and specifications (S).
2. Feature Interaction Detection Feature interactions in embedded and environment-integrated systems can generally be seen as causal chains initiated by one service that interfere with other services, causing undesirable behaviour of the system. Depending on the system’s purpose and on its environment, these causal chains can be very complex, and it would probably be impossible to verify them by hand. We thus suggest an automatable approach based on formal verification. Basically, we mostly follow current off-line approaches by doing model-checking on feature descriptions in order to detect interactions.
Figure 1. An illustration of the feature interaction detection procedure, sketching the detection of the interaction between security and heating service in a smart home.
The basic idea of our approach is to verify each product of a product line, based on the first proof obligation of the Zave and Jackson framework [9], equation (1), which serves as a correctness proof for a system (including systems consisting of a single feature):

S, W ⊢ R    (1)

This proof obligation expresses the fact that the requirements have to be satisfied if both the specification and the assumptions about the world are satisfied. Now, given a set of features p = f1..fn, expressed as Si, Wi, Ri for i = 1..n and n ≥ 2, we say that the features f1..fn interact if the following holds:

• they satisfy their individual requirements in isolation,

∀fi ∈ p . Si, Wi ⊢ Ri    (2)
• they do not satisfy the conjunction of these requirements when put together,

⋀_{i=1..n} Si, ⋀_{i=1..n} Wi ⊬ ⋀_{i=1..n} Ri    (3)

• and removing any feature from p results in a set of features that do not interact,

∀fk ∈ p . ⋀_{i∈{1..k−1,k+1..n}} Si, ⋀_{i∈{1..k−1,k+1..n}} Wi ⊢ ⋀_{i∈{1..k−1,k+1..n}} Ri    (4)
A feature interaction in a system s = {f1..fq} is then any set p ⊆ s such that its features interact.1 In addition to equation (1), there are other proof obligations that need to be verified, namely to make sure that equation (1) is not trivially verified. The approach can then be described by four different algorithms covering the various verifications, the details of which are presented in [2]. Fig. 1 illustrates this by showing how the previously mentioned interaction between the security and heating services in a smart home is detected. Both features are first verified in isolation, and then in combination. The first verification is assumed to pass, the second to fail. This indicates that an interaction is present, which will then probably lead to changes in the feature diagram or in the system descriptions in order to avoid or correct the interaction. The algorithms defining our approach build on the two perspectives introduced in Section 1. Feature diagrams are used to model and describe the variability, and eventually to determine the different systems belonging to the product line, i.e. the valid configurations of the feature diagram [8]. In turn, each feature of this feature diagram is mapped to a problem diagram [3] providing its three constituent descriptions: R, W and S.
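Assuming an oracle that decides the S, W ⊢ R proof obligation for an arbitrary feature set (represented here by a placeholder predicate; in practice the check would be delegated to a reasoner such as Decreasoner), the three conditions above amount to a search for minimal failing combinations. A naive, purely illustrative sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Naive sketch of the interaction definition: a feature set p interacts iff
// (2) each feature passes in isolation, (3) the whole set fails together, and
// (4) removing any single feature yields a set that passes. The 'satisfies'
// oracle stands in for discharging the S, W |- R proof obligation.
public class InteractionDetector {

    public static boolean interacts(List<String> p, Predicate<List<String>> satisfies) {
        if (p.size() < 2) return false;
        // (2) each feature satisfies its requirements in isolation
        for (String f : p) {
            if (!satisfies.test(List.of(f))) return false;
        }
        // (3) the conjunction of the requirements fails when features are composed
        if (satisfies.test(p)) return false;
        // (4) every (n-1)-subset obtained by removing one feature passes
        for (int k = 0; k < p.size(); k++) {
            List<String> without = new ArrayList<>(p);
            without.remove(k);
            if (!satisfies.test(without)) return false;
        }
        return true;
    }
}
```

A full detector would enumerate the subsets p ⊆ s for every valid configuration s of the feature diagram; the sketch only classifies one candidate set.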
Figure 2. FIFramework screenshot and workflow.
1 Note that in classical logic, the satisfaction relation would be monotonic and systems satisfying this definition impossible. Depending on the formalism used, however, the relation may not be monotonic, hence its interest.

3. Automation

The procedure sketched above can be largely automated, provided an automatable formalism for expressing the R, W and S descriptions is chosen. In our approach we chose the event calculus, which is based on first-order predicate logic and allows intuitive expression of causal relations in the real world [6]. Furthermore, the event calculus comes with several implementations, of which we chose Mueller's discrete event calculus reasoner (Decreasoner) [6], which basically transforms a set of event calculus formulae into a SAT problem, passes it to a SAT solver and interprets the results. On top of Decreasoner we built an Eclipse plug-in, FIFramework,2 which is effectively a proof-of-concept implementation of the suggested approach. Fig. 2 gives a high-level overview of the workflow and of what the tool does. Essentially, the user specifies his product line in terms of features, each feature being represented by a file containing event calculus formulae. These files, as well as the list of products, serve as input for FIFramework, which composes the descriptions relevant to the proof, sends them to Decreasoner and interprets the result. The tool thus automates all verification steps of the approach.

4. Conclusion

The high degree of integration in the environment, as in the case of embedded control devices or software-intensive systems, leads to feature interactions that are not restricted to the software, but may be caused by, and occur in, the physical environment of the system. We thus propose an approach for automated detection of feature interactions in the environment which is based on feature diagrams capturing variability, problem diagrams describing the system in its context and event calculus formulae allowing for automated reasoning. A proof-of-concept tool implementation for the Eclipse platform demonstrates its feasibility. Furthermore, an in-depth illustration, based on the smart home example case, is provided in [2]. Benefits of this approach to feature interaction detection are (i) its foundations in a well-accepted requirements engineering reference model, which allows the approach to be very general; (ii) the ability to detect interactions exterior to the machine; and (iii) an approach that not only focuses on single-system development, but also covers the case of product line engineering.
References

[1] Calder, M., Kolberg, M., Magill, E.H., Reiff-Marganiec, S.: Feature interaction: a critical review and considered forecast. Computer Networks 41(1) (2003) 115–141
[2] Classen, A.: Problem-oriented modelling and verification of software product lines. Master's thesis, University of Namur, Belgium (June 2007)
[3] Jackson, M.A.: Problem frames: analyzing and structuring software development problems. Addison-Wesley Longman Publishing, Boston, MA, USA (2001)
[4] Kolberg, M., Magill, E.H., Wilson, M.: Compatibility issues between services supporting networked appliances. IEEE Comm. Magazine 41(11) (2003) 136–147
[5] Metzger, A.: Feature interactions in embedded control systems. Computer Networks 45 (2004) 625–644
[6] Mueller, E.T.: Commonsense Reasoning. Morgan Kaufmann (2006)
[7] Pohl, K., Bockle, G., van der Linden, F.: Software Product Line Engineering: Foundations, Principles and Techniques. Springer (July 2005)
[8] Schobbens, P.Y., Heymans, P., Trigaux, J.C., Bontemps, Y.: Feature Diagrams: A Survey and A Formal Semantics. In: Proceedings of the 14th IEEE International Requirements Engineering Conference (RE'06), Minneapolis, Minnesota, USA (September 2006) 139–148
[9] Zave, P., Jackson, M.A.: Four dark corners of requirements engineering. ACM Transactions on Software Engineering and Methodology 6(1) (1997) 1–30

2 Available online at www.classen.be/references/mscthesis.
How to Guarantee Service Cooperation in Dynamic Environments?

Lionel Touseau
University of Grenoble, France
[email protected]
Abstract. The rise of communicating devices has led to more and more machine-to-machine applications. In this context, devices can be modeled using service-oriented computing. Furthermore, service-oriented paradigms support the dynamic nature inherent in this kind of application. However, even though devices and their services can cooperate in the same application, they are likely to belong to independent organizations. As a consequence, each part of the application needs guarantees on service availability, and this can be achieved through service level agreements.
Introduction

In the past few years, machine-to-machine computing has been a subject of growing interest. The expansion of a wide variety of communicating devices, ranging from sensors to electronic household appliances, and the development of wireless technologies are partially responsible for that success. These devices can be modeled using a service-oriented approach, thanks to its effective support for heterogeneity and dynamism. Thus, remotely controllable devices can be used through simple method invocations, sensors can push events when a threshold is reached, etc. However, the degree of dynamism offered by service-oriented computing and machine-to-machine applications makes it difficult to anticipate the system's behaviour, for services can be introduced into and removed (temporarily or not) from the execution environment at runtime. In addition, a service requester cannot control the life-cycle of its service provider, because services are often managed by independent organizations. Consequently, services cooperating in the same application need guarantees on the availability of each one. A solution to this problem is the use of service level agreements in the machine-to-machine context.
1 Challenges in Machine-to-Machine Interactions

The rise of intelligent devices with communication capabilities will probably lead to the "Internet of Things" [1], also known as machine-to-machine (M2M) or ubiquitous and pervasive computing. The variety of such devices has widened considerably over the past few years, from increasingly complex mobile phones to Radio Frequency Identification (RFID) tags or sensor networks that perform measurements on the
physical world. Embedded devices and systems in vehicles, home and building automation are just some of the domains tackled by the next wave of computing. However, in such a context, devices that interact do not necessarily belong to the same organization. For instance, in home automation, an HVAC (Heating, Ventilating & Air Conditioning) system from one vendor can use information from thermometers or temperature sensors provided by another vendor. More generally, M2M applications tend to be composed of independent components managed by different organizations, which results in applications that cross organizational boundaries. In addition, the architecture of this kind of system is likely to evolve dynamically. This dynamic evolution may be the consequence of a maintenance operation (replacing an old device with a new one) or a substitution. For example, if a sensor runs out of battery, the application will use data from another available sensor that produces the same type of measurement, even though the latter may be managed by another provider.
2 Dynamic Service-Oriented Computing

Service-oriented architecture (SOA) [2] is an appropriate solution to tackle the complexity of ubiquitous computing, particularly with the takeoff of the Web Service standards that have contributed to the current success of SOA. SOA enhances reuse of COTS and legacy software thanks to loose coupling (only service contracts are published, not their implementations), and thus enables the composition of heterogeneous devices and protocols. Service-oriented computing (SOC) relies on the following pattern: service providers publish their services in a registry or through a specific protocol, so that service requesters can discover the services they require. In dynamic SOC, services can be registered or unregistered at any time, and service requesters can be notified of these changes through asynchronous events. Furthermore, since services can be introduced into or removed from the execution environment at runtime, the execution platform is able to support context-awareness [3] and the dynamic evolution of devices. An application can therefore be reconfigured at runtime in order to react to context changes. For instance, if a user starts watching a movie in the living-room, he should be able to pause it and resume it in his bedroom on another screen. The video stream is simply sent to another media renderer service depending on the user's location, which can be detected by a Bluetooth device, for example. However, a component of a service-based application cannot control the life-cycle of a service that does not belong to it. For entertainment applications like the one described above this may not be a serious problem, but as soon as economic or critical concerns are at stake, each part of the application needs actual guarantees on the behaviour of the other parts. Let's consider a fire detection system that uses data collected from all smoke level sensors installed in each apartment.
Those sensors can belong to different providers as long as they produce the same type of data: a smoke level. If a fire is detected, a report and an alert are sent to the fire station, and the fire doors are locked. The sensor data are aggregated at each floor before being transmitted to the fire detection system. Now, if the service responsible for the information aggregation on the last floor (or even the smoke level sensor of a flat) becomes unavailable, a fire could spread without being noticed by the fire detection system. That is a reason why service discontinuity should be defined in the service contract.
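The register/unregister-and-notify pattern underlying this example can be sketched as a minimal registry. This is purely illustrative (the names are invented); a real platform would use OSGi or a Web Service discovery protocol:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Minimal sketch of a dynamic service registry: providers publish services,
// and requesters are notified when a service appears or disappears
// (synchronously here, where a real platform would use asynchronous events).
public class ServiceRegistry {

    private final Map<String, Object> services = new HashMap<>();
    // Listeners receive (serviceName, event), event being "REGISTERED" or "UNREGISTERED".
    private final List<BiConsumer<String, String>> listeners = new ArrayList<>();

    public void addListener(BiConsumer<String, String> listener) {
        listeners.add(listener);
    }

    public void register(String name, Object service) {
        services.put(name, service);
        listeners.forEach(l -> l.accept(name, "REGISTERED"));
    }

    public void unregister(String name) {
        if (services.remove(name) != null) {
            listeners.forEach(l -> l.accept(name, "UNREGISTERED"));
        }
    }

    public Object lookup(String name) {
        return services.get(name);
    }
}
```

In the fire-detection scenario, the aggregation service would listen for the "UNREGISTERED" event of a smoke sensor; without an SLA, its only options are to substitute another sensor or to fail silently.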
3 Service Level Agreements and Service Discontinuity

To keep the flexibility of the dynamic service-oriented approach while retaining a certain level of guarantee that the system will behave as expected, the life-cycle of a service must be defined in a legal agreement, as well as the consequences of any breach of this agreement. In this way, if a service were to be removed temporarily (e.g. for maintenance of the physical device) and this interruption of service is specified by an agreement, then it should not be considered an error but rather a normal phenomenon. The service requester will consequently not try to perform a substitution with another provider of a similar service. A service level agreement (SLA) is an agreement negotiated and signed by the contracting parties in which the level of the provided service is formally defined. SLA models, like the one described in the WS-Agreement specification [4], generally include (non-exhaustively):

• The agreement context: signatory parties and possible third parties entrusted to enforce the agreement, an expiration date and any other relevant information.
• A description of the service, including functional and non-functional aspects such as quality of service.
• Guarantees and obligations of each party, which are mainly domain-specific.
• Penalties incurred if a term is not respected.

Then, at runtime, SLAs are enforced by service level management (SLM) [5] mechanisms: monitoring, assessment of compliance with the terms, application of predefined policies, etc. So, in order to cope with cross-organizational service interactions in a highly dynamic environment, the idea is firstly to add information related to service discontinuity to service level agreements, together with the corresponding actions to undertake in case of a breach of any term of the contract (e.g. blacklisting of the service provider, refund, blocking of the service requester's calls, etc.).
Second, the execution environment should be able to understand the content of the agreement, to monitor the interactions between the involved parties and, if necessary, to react to any breach of the contract. Beyond service discontinuity, service level agreements remain free to cover the other domain-specific non-functional properties usually found in such agreements (security, performance, etc.). Thus, organizations can cooperate in M2M applications with the guarantee that the services they use will not be unavailable long enough to disturb the functioning of the whole application. Even if that were to happen, the policies and penalties defined in the agreement would be enforced.
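A discontinuity-aware SLA term can be pictured with a minimal code fragment. The following Java sketch is ours, not part of WS-Agreement; the names (`DiscontinuityTerm`, `isBreach`) are hypothetical. The idea from the text is that an interruption shorter than the agreed window counts as normal service discontinuity, while a longer one is a breach that triggers the agreed penalties.

```java
import java.time.Duration;

// Hypothetical sketch of an SLA term tolerating bounded interruptions.
class DiscontinuityTerm {
    private final Duration maxOutage; // longest interruption the agreement tolerates

    DiscontinuityTerm(Duration maxOutage) {
        this.maxOutage = maxOutage;
    }

    // An outage within the agreed window is normal discontinuity;
    // a longer one is a breach that the SLM layer must react to.
    boolean isBreach(Duration observedOutage) {
        return observedOutage.compareTo(maxOutage) > 0;
    }
}
```

A service level management layer would evaluate such terms against monitored outages and apply the predefined policies (blacklisting, refund, etc.) only when `isBreach` holds.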
4 Summary and future work

Service-oriented computing, and more precisely dynamic service-based applications, match the needs of ubiquitous computing. However, such composite applications cannot belong to a single organization, which inevitably implies cross-organizational service communications. In this context there is a real need for guarantees on the service level, especially on service availability. For the time being, however, SLA solutions
generally focus on IT applications and address neither the M2M domain nor the problem of service discontinuity. The solution briefly described in this paper consists in adding information specific to the temporal availability of services to SLAs, and in taking this new information into account in service level management. Future work will define precisely the content of the guarantees and policies regarding service interruptions, and will try to validate this approach on different patterns of service interaction: request/response, publish/subscribe and producer/consumer. The latter is useful for implementing sensor-based applications [6], which are omnipresent in the "Internet of Things".
References
1. International Telecommunication Union, "The Internet of Things", Executive Summary, ITU Internet Reports 2005, November 2005.
2. H. Cervantes and R.S. Hall, "Service Oriented Concepts and Technologies", in Service-Oriented Software System Engineering: Challenges and Practices, Z. Stojanovic and A. Dahanayake (Eds.), Idea Group Publishing, 2005 (ISBN 1-59140-426-6).
3. J. Coutaz, J.L. Crowley, S. Dobson and D. Garlan, "Context is key", Communications of the ACM, Volume 48, ACM Press, 2005.
4. Grid Resource Allocation Agreement Protocol (GRAAP) WG, "WS-Agreement Specification", March 2007, http://forge.gridforum.org/sf/projects/graap-wg.
5. A. Keller and H. Ludwig, "Defining and Monitoring Service-Level Agreements for Dynamic eBusiness", 16th Conference on Systems Administration (LISA 2002).
6. C. Marin and M. Desertot, "SensorBean: A Component Platform for Sensor-Based Services", Proceedings of the International Workshop on Middleware for Pervasive and Ad-Hoc Computing (MPAC), Grenoble, France, November 2005.
Feature Interactions in Software and Communication Systems IX L. du Bousquet and J.-L. Richier (Eds.) IOS Press, 2008 © 2008 The authors and IOS Press. All rights reserved.
Impact of Aspect-Oriented Software Development on Test Cases Romain Delamare IRISA / INRIA Rennes, Campus Universitaire de Beaulieu Avenue du Général Leclerc, 35042 Rennes Cedex – France e-mail: [email protected]
1. Introduction

Aspect-oriented software development (AOSD) aims at developing cross-cutting concerns as separate units called aspects. An aspect is composed of a pointcut and an advice. The pointcut is a set of joinpoints (the points of the base program where the aspect can be woven) and is often described with a regular expression. The advice is the piece of code that is woven at every joinpoint of the pointcut. AspectJ is an aspect-oriented language that provides an efficient mechanism to compose aspects into Java programs at compile time or at runtime.

Blair and Pang [2] have proposed an architecture for feature-driven software development using aspect-oriented programming (AOP). The idea is to have two levels: a base layer written in an object-oriented language (e.g. Java) and a meta-layer written in an aspect-oriented language (e.g. AspectJ). The base code of the features is implemented on the base layer and the code dealing with feature interactions is implemented on the meta-layer. In this kind of architecture, especially when adding new features, testing AOP remains a challenge.

Most of the work on testing in AOSD focuses on the detection of errors introduced by AOSD. Alexander et al. [1] have proposed a fault model for AOSD, identifying the possible faults and their possible locations in an AOSD program. Other works [4,5] have proposed solutions for testing AOP, adapting either code-based or model-based testing techniques.

In this paper we focus on the impact of a new feature on the test cases. We assume that there is a base program to which the feature is added, and that this base program has test cases. In the case of AOP with AspectJ, we can know exactly where the new feature has been composed. On the other hand, it is possible to know what part of the program is tested by the existing test cases.
Thus, by combining this information, it is possible to identify the test cases that should still pass after introducing the feature and those that might have to be modified. Section 2 illustrates through examples the impact analysis we want to perform and briefly discusses the tool that has been developed in Eclipse. Section 3 concludes with future work.
Figure 1. Class diagram of the first example. [Diagram: a class A with methods m(), n() and z(); a class TestA with test cases testM(), testN() and testZ(), each calling the corresponding method; n also calls z.]
2. Impact of AOSD on Test Cases

To evaluate the impact of the aspects on the test cases of the base program, we propose a static analysis that checks whether the test cases are impacted. The main idea is that if a test case executes a method where an aspect has been woven, then it is impacted by this aspect. This means that the expected result may change. If this change is the expected behaviour of the new feature, then the only thing to do is to modify the test case. If it is not, then a feature interaction might have been detected. Section 2.1 describes two examples that illustrate this analysis. Section 2.2 details a tool that implements this analysis within the Eclipse IDE.

2.1. Examples

Figure 1 shows the class diagram of the first example. There is a class A which has three methods, m, n and z. The class TestA tests the class A, with one test case for each method. The following code is an AspectJ aspect that weaves an advice before each execution of the method z:

    aspect Asp1 {
        before() : execution(* A.z()) { ... }
    }
The information already provided to the user is that z is impacted by an aspect: for instance, in the Eclipse environment, z is annotated with an arrow. This kind of information is useful, but we want to go beyond it and provide more information about the test cases. We want to warn that the test cases testZ and testN are impacted by the aspect and may need to be modified: z is directly impacted by an aspect, so testZ, which calls it, is impacted; and n calls z too, so testN is also impacted. This information is useful because it helps the tester maintain the test cases as the system evolves. Moreover, a test case that is not impacted should still pass without modification after the weaving of the aspects. This analysis can be performed statically here only because the example does not involve polymorphism; otherwise, only an over-approximation of the impacted test cases can be computed. For instance, the example of Figure 2 involves inheritance and dynamic binding: n calls a method defined as abstract, and the actual method that is executed is only known at runtime. Suppose this aspect is added:
Figure 2. Class diagram of the second example, with polymorphism. [Diagram: class A with methods m() and n() and a field b; class TestA with test cases testM() and testN(); subclasses B, C and D each provide z(); n calls b.z().]
Figure 3. The implemented analysis process. [Diagram: the AspectJ compiler takes the classes, aspects and JUnit test cases and produces a weaving relations model; the Spoon analyser uses this model and reports warnings.]
    aspect Asp2 {
        before() : execution(* C.z()) { ... }
    }
It is not possible to know statically whether the test case testN is impacted before execution. We decide to warn the user anyway, as this guarantees that no impacted test case goes unreported, at the price of possibly warning the tester about test cases that are not impacted.

2.2. Implementation

A tool implementing this analysis has been implemented in the Eclipse IDE, by combining information provided by the AspectJ compiler with a static analysis of the source code using Spoon [3]. Figure 3 depicts the implemented process. The AspectJ compiler provides information at compile time; to obtain it, we have implemented a listener class that is registered at Eclipse start-up. Then, at every compilation of an AspectJ program, we build a model that records the methods and aspects and where the aspects are woven. Spoon is a static analyser for Java programs that provides an abstract tree view of the source code. It is well integrated within Eclipse and allows us to report errors, warnings and messages to the user. After compilation, using Spoon, we detect every JUnit test case and check whether it invokes a method impacted, directly or indirectly, by an aspect. When it does, we report a warning on the impacted test case. Performing this analysis statically allows it to be well integrated in the development environment, and it is almost costless for the tester. Moreover, as we do not execute the test cases, we can also analyse test cases that throw an exception.
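The core of the analysis described above can be pictured as a reachability check over a static call graph. The following Java sketch is our own illustration, not Delamare's tool: a test case is flagged as impacted when it can reach a woven method through the call graph. All names here (`ImpactAnalysis`, `impactedTests`) are hypothetical.

```java
import java.util.*;

// Illustrative sketch: flag a test case as impacted when it can reach
// a method where an aspect has been woven, via the static call graph.
class ImpactAnalysis {
    // callGraph maps each method name to the set of methods it calls.
    static Set<String> impactedTests(Map<String, Set<String>> callGraph,
                                     Set<String> wovenMethods,
                                     Set<String> testCases) {
        Set<String> impacted = new HashSet<>();
        for (String test : testCases) {
            if (reaches(test, callGraph, wovenMethods, new HashSet<>())) {
                impacted.add(test);
            }
        }
        return impacted;
    }

    // Depth-first search for a woven method reachable from m.
    private static boolean reaches(String m, Map<String, Set<String>> g,
                                   Set<String> targets, Set<String> seen) {
        if (!seen.add(m)) return false; // already visited, avoid cycles
        for (String callee : g.getOrDefault(m, Set.of())) {
            if (targets.contains(callee) || reaches(callee, g, targets, seen)) {
                return true;
            }
        }
        return false;
    }
}
```

On the first example, with n calling z and an aspect woven on z, this flags testN and testZ but not testM. With dynamic binding, as in the second example, the call graph would conservatively include every possible callee, yielding the over-approximation discussed above.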
3. Conclusion

In this paper we have presented an analysis that evaluates the impact of AOSD on test cases, and its integration within Eclipse. The next step is to actually evaluate the testability of AOSD with some case studies. The goal is to start with a base program without aspects and to generate unit test suites for it. Then we add new functionality to the core by adding aspects, so that we can measure the number of test cases impacted by the aspects and the number of test cases that actually have to be rewritten. These evaluations will provide useful information on the reusability of the test cases for the core system. Finally, future work will evaluate the effort needed to test the aspects.
References
[1] Roger T. Alexander, James M. Bieman, and Anneliese A. Andrews. Towards the systematic testing of aspect-oriented programs. Technical report, Colorado State University, 2004.
[2] Lynne Blair and Jianxiong Pang. Aspect-oriented solutions to feature interaction concerns using AspectJ. In Feature Interactions in Telecommunications Systems. IOS Press, 2003.
[3] Renaud Pawlak, Carlos Noguera, and Nicolas Petitprez. Spoon: Program analysis and transformation in Java. Technical Report 5901, INRIA, May 2006. http://spoon.gforge.inria.fr.
[4] Tao Xie and Jianjun Zhao. A framework and tool supports for generating test inputs of AspectJ programs. In Proceedings of the 5th International Conference on Aspect-Oriented Software Development, pages 190–201, New York, NY, USA, 2006. ACM Press.
[5] Dianxiang Xu and Weifeng Xu. State-based incremental testing of aspect-oriented programs. In Proceedings of the 5th International Conference on Aspect-Oriented Software Development, pages 180–189, New York, NY, USA, 2006. ACM Press.
Subject Index
3GPP IP Multimedia Subsystem (IMS) 13
Alloy 66
APPEL 66
artificial immune 145
business processes 99
call control 66, 83
conflict detection 66, 83
contextual component frameworks 33
data dependency 49
design by contract 199
event calculus 129
feature interaction management 13
feature interaction resolution 114
feature interaction(s) 1, 21, 33, 49, 66, 99, 114, 129, 145, 161, 199, 203
feature language extensions 114
FIM 145
formal reasoning 1
formal verification 203
home care system 54
home network system 199
information assets 21
java modeling language 199
license conflicts 21
logic model checking 66
middleware 49
mobile phone integration 161
mobile services framework 161
model inference 161
next generation networks 13
ontology 83, 195
OWL 83
pervasive software 129
policy conflict 54, 99, 195
policy 66, 83, 195
policy-based management 54
problem frames 129
product line documentation 33
program entanglement 114
publish/subscribe infrastructures 33
requirements engineering 203
reusable feature modules 114
safety property 199
sensor network 195
service broker 13
Service Capability Interaction Manager (SCIM) 13
service oriented architecture 99
software dependencies 33
software product line engineering 1
software product lines 33, 203
testing 1
Author Index
Campbell, G.A. 83, 195
Chavan, A. 114
Chen, X. 49
Cheng, K.E. 13
Classen, A. 203
D’Andrea, V. 21
Delamare, R. 211
du Bousquet, L. vii, viii
Esfandiari, B. 21
Gangadharan, G.R. 21
Gorton, S. 99
Huang, G. 49, 188
Jackson, M. 129
Klay, F. 161
Kolberg, M. 172
Laney, R. 129
Layouni, A.F. 66
Leung, W.H. 114
Lin, F.J. 13
Liu, H. 145
Liu, Z. 145
Logrippo, L. viii, 66
Magill, E.H. 172
Marples, D. viii
Mei, H. 49
Metzger, A. 1
Nuseibeh, B. 129
Ouabdesselam, F. vii
Parreaux, B. 161
Ramachandran, K. 114
Redmiles, D.F. 33
Reiff-Marganiec, S. 99
Shahbaz, M. 161
Silva Filho, R.S. 33
Teng, T. 49
Touseau, L. 207
Tun, T.T. 129
Turner, K.J. 54, 66, 83
Wang, F. 54
Weiss, M. 21
Wilson, M. 172
Yan, B. 199
Yang, F. 145
Yang, L. 114
Zhang, J. 145