Enterprise Interoperability IV
Keith Popplewell • Jenny Harding • Raul Poler • Ricardo Chalmeta
Editors
Enterprise Interoperability IV: Making the Internet of the Future for the Future of Enterprise
Editors

Professor Keith Popplewell
Future Manufacturing Applied Research Centre, Coventry University, CV1 5FB Coventry, UK
[email protected]

Doctor Jenny Harding
Mechanical and Manufacturing Engineering, Loughborough University, Leicestershire LE11 3TU, UK
[email protected]

Professor Raul Poler
Research Centre on Production Management and Engineering, Polytechnic University of Valencia, EPSA, Pza Ferrandiz y Carbonell 2, 03801 Alcoy, Spain
[email protected]

Doctor Ricardo Chalmeta
Grupo de Integración y Re-Ingeniería de Sistemas, DLSI, Universidad Jaume I, Av Sos Banyant s/n, 12071 Castellon, Spain
[email protected]
ISBN 978-1-84996-256-8
e-ISBN 978-1-84996-257-5
DOI 10.1007/978-1-84996-257-5
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2010923777

© Springer-Verlag London Limited 2010

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: eStudioCalamar, Figueres/Berlin
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface

Keith Popplewell (1), Jenny Harding (2), Raul Poler (3), Ricardo Chalmeta (4)

1. Future Manufacturing Applied Research Centre, Coventry University, CV1 5FB Coventry (United Kingdom). General Chairperson of I-ESA’10. [email protected]
2. Mechanical and Manufacturing Engineering, Loughborough University, Leicestershire LE11 3TU (United Kingdom). General co-Chairperson of I-ESA’10. [email protected]
3. Research Centre on Production Management and Engineering, Polytechnic University of Valencia, EPSA, Pza Ferrandiz y Carbonell 2, 03801 Alcoy (Spain). Chairperson of the I-ESA’10 International Programme Committee. [email protected]
4. Grupo de Integración y Re-Ingeniería de Sistemas, DLSI, Universidad Jaume I, Av Sos Banyant s/n, 12071 Castellon (Spain). co-Chairperson of the I-ESA’10 International Programme Committee. [email protected]
Enterprise Interoperability (EI) is the ability of an enterprise or more generally an organisation to work with other enterprises or organisations without special effort. The capability to interact and exchange information both internally and with external organisations (partners, suppliers, customers, citizens…) is a key issue in the economic and public sector. It is fundamental in order to produce goods and/or services quickly and at lower cost, while ensuring higher levels of quality, customisation, services and security. Today, Enterprises maximise flexibility and speed of response to changing external conditions. They develop knowledge of, and links with, other Enterprises or Organisations with which they can collaborate to provide the products and services that customers demand. The issue of interoperability within the Enterprise is therefore no longer limited to the interoperability between silos of systems within single companies, but has become one of interoperability throughout a Network of Enterprises. I-ESA’10 (Interoperability for Enterprise Software and Applications) is the sixth of a series of conferences, this time under the motto “Making the internet of the future for the future of enterprise”. As Europe and the world focus research on a Future Internet of Things, Services, Knowledge and People, the opportunities for enterprise collaboration are expanding. The focus of interoperability research and
application is therefore on these opportunities and the new technologies and methodologies needed to exploit the Future Internet to the full. I-ESA’10 reports on current research in this field, as well as the underlying developments in enterprise interoperability which have been the foundation for this domain. The I-ESA’10 Conference was organized by Coventry University and the Virtual Laboratory for Enterprise Interoperability (INTEROP-VLab) and sponsored by the International Federation for Information Processing (IFIP). The world's leading researchers and practitioners in the area of Enterprise Integration from government, industry and academia have contributed to this book. As a result, Enterprise Interoperability IV is a unique anthology presenting visions, ideas, research results, industrial experiences and problems on business interoperability. This book is organized in seven parts addressing the major research in the scope of Interoperability for Enterprise Software and Applications:

I. Business Interoperability
II. Enterprise Modeling for Enterprise Interoperability
III. Semantics for Enterprise Interoperability
IV. Architectures and Frameworks for Interoperability
V. Platforms for Enterprise Interoperability
VI. Interoperability Scenarios and Case Studies
VII. Standards for Interoperability
Coventry, Loughborough, Alcoy, Castellón
April 2010

Keith Popplewell, Jenny Harding, Raul Poler, Ricardo Chalmeta
Acknowledgements
We would like to thank all the authors, invited speakers, International Programme Committee members, International Senior Programme Committee members, Steering Committee members and participants of the conference who made this book a reality and the I-ESA’10 a success. We express our gratitude to all the organizations which supported the I-ESA’10 preparation, especially the Virtual Laboratory for Enterprise Interoperability (INTEROP-VLab) and its UK Pole, and the International Federation for Information Processing (IFIP). We are deeply thankful for the local organization support, notably Joanna Sychta, Muqi Wulan, Mohammad Al-Awamleh and Shagloof Dahkil, for their excellent work in the preparation and management of the conference.
Contents
Part I Business Interoperability

Ad-hoc Execution of Collaboration Patterns using Dynamic Orchestration
Jean-Pierre Lorre, Yiannis Verginadis, Nikos Papageorgiou and Nicolas Salatge ... 3

Alternative Process Notations for Mobile Information Systems
Sundar Gopalakrishnan and Guttorm Sindre ... 13

Conceptual Framework for the Interoperability Requirements of Collaborative Planning Process
María M.E. Alemany, Faustino Alarcón, Francisco C. Lario and Raul Poler ... 25

Improving Interoperability using a UML Profile for Enterprise Modelling
Reyes Grangel, Ricardo Chalmeta, Cristina Campos and Sergio Palomero ... 35

Towards Test Framework for Efficient and Reusable Global e-Business Test Beds
Nenad Ivezic, Jungyub Woo and Hyunbo Cho ... 47

Unified Reversible Life Cycle for Future Interoperable Enterprise Distributed Information Systems
Zhiying Tu, Gregory Zacharewicz and David Chen ... 57

Part II Enterprise Modeling for Enterprise Interoperability

A Meta-model for a Language for Business Process Characterizing Modelling
Shang Gao and John Krogstie ... 69
A Tool for Interoperability Analysis of Enterprise Architecture Models using PiOCL
Johan Ullberg, Ulrik Franke, Markus Buschle and Pontus Johnson ... 81

A UML-based System Integration Modeling Language for the Application System Design of Shipborne Combat System
Fan Zhiqiang, Gao Hui, Shen Jufang and Zhang Li ... 91

Contribution to Interoperability of Executive Information Systems Focusing on Data Storage System
Guillaume Vicien, Yves Ducq and Bruno Vallespir ... 101

GRAI-ICE Model Driven Interoperability Architecture for Developing Interoperable ESA
Lanshun Nie, Xiaofei Xu, David Chen, Gregory Zacharewicz and Dechen Zhan ... 111

Model for Trans-sector Digital Interoperability
António Madureira, Frank den Hartog, Eduardo Silva and Nico Baken ... 123

Transformation from a Collaborative Process to Multiple Interoperability Processes
Hui Liu and Jean-Pierre Bourey ... 135

Part III Semantics for Enterprise Interoperability

A Manufacturing Foundation Ontology for Product Life Cycle Interoperability
Zahid Usman, Robert I.M. Young, Keith Case and Jenny A. Harding ... 147

A Semantic Mediator for Data Integration in Autonomous Logistics Processes
Karl Hribernik, Christoph Kramer, Carl Hans and Klaus-Dieter Thoben ... 157

An Ontology Proposal for Resilient Multi-plant Networks
Rubén Darío Franco, Guillermo Prats and Rubén de Juan-Marín ... 169

Collaboration Knowledge Ontologies to Support Knowledge Management and Sharing in Virtual Organisations
Muqi Wulan, Xiaojun Dai and Keith Popplewell ... 179

Mediation Information System Engineering for Interoperability Support in Crisis Management
Sébastien Truptil, Frédérick Bénaben, Nicolas Salatgé, Chihab Hanachi, Vincent Chapurlat, Jean-Paul Pignon and Hervé Pingaud ... 187

Service Value Meta-Model: An Engineering Viewpoint
Zhongjie Wang, Xiaofei Xu, Chao Ma and Alice Liu ... 199
Part IV Architectures and Frameworks for Interoperability

An Interoperable Enterprise Architecture to Support Decentralized Collaborative Planning Processes in Supply Chain Networks
Jorge E. Hernández, Raul Poler and Josefa Mula ... 213

Business Cooperation-oriented Heterogeneous System Integration Framework and its Implementation
Yihua Ni, Yan Lu, Haibo Wang, Xinjian Gu and Zhengxiao Wang ... 225

Collaboration Knowledge Management and Sharing Services to Support a Virtual Organisation
Muqi Wulan, Xiaojun Dai and Keith Popplewell ... 235

Developing a Science Base for Enterprise Interoperability
Yannis Charalabidis, Ricardo J. Goncalves and Keith Popplewell ... 245

From Pipes-and-Filters to Workflows
Thorsten Scheibler, Dieter Roller and Frank Leymann ... 255

Risk Sources Identification in Virtual Organisation
Mohammad Alawamleh and Keith Popplewell ... 265

Service-based Model for Enterprise Interoperability: Context-Driven Approach
Alexander Smirnov, Tatiana Levashova and Nikolay Shilov ... 279

Transformation of UML Activity Diagram to YAWL
Zhaogang Han, Li Zhang and Jimin Ling ... 289

Part V Platforms for Enterprise Interoperability

Gap Analysis of Ontology Mapping Tools and Techniques
Najam Anjum, Jenny Harding, Bob Young and Keith Case ... 303

Networked Enterprise Transformation and Resource Management in Future Internet Enabled Innovation Clouds
Brian Elvesæter, Arne-Jørgen Berre, Henk de Man and Man-Sze Li ... 313

Opportunity Discovery Through Network Analysis
Alessandro Cucchiarelli and Fulvio D’Antonio ... 323

Reflections on Aspects of Enterprise Interoperability
Jane Hall and Klaus-Peter Eckert ... 331

Specification of SETU with WSMO
Wout Hofman, Menno Holtkamp and Michael van Bekkum ... 341
Part VI Interoperability Scenarios and Case Studies

A Success Story: Manufacturing Execution System Implementation
Albin Bajric, Kay Mertins, Markus Rabe and Frank-Walter Jaekel ... 357

Enabling Proactive Behaviour of Future Project Managers
Georgios Kapogiannis, Terrence Fernando, Mike Kagioglou, Gilles Gautier and Collin Piddington ... 367

Gaps to Fill Between Theoretical Interoperable Quality and Food Safety Environment and Enterprise Implementations
David Martínez-Simarro, Jose Miguel Pinazo Sánchez and Raquel Almarcha Vela ... 377

How to Develop a Questionnaire in Order to Measure Interoperability Levels in Enterprises
Noelia Palomares, Cristina Campos and Sergio Palomero ... 387

Knowledge Sharing and Communities of Practices for Intra-organizational Interoperability
Philippe Rauffet, Catherine Da Cunha and Alain Bernard ... 397

Part VII Standards for Interoperability

Aligning the UEML Ontology with SUMO
Andreas L. Opdahl ... 409

Emerging Interoperability Directions in Electronic Government
Yannis Charalabidis, Fenareti Lampathaki and Dimitris Askounis ... 419

Enterprise Collaboration Maturity Model (ECMM): Preliminary Definition and Future Challenges
Juncal Alonso, Iker Martínez de Soria, Leire Orue-Echevarria and Mikel Vergara ... 429

Towards a Conceptualisation of Interoperability Requirements
Sihem Mallek, Nicolas Daclin and Vincent Chapurlat ... 439

Use Profile Management for Standard Conformant Customisation
Arianna Brutti, Valentina Cerminara, Gianluca D’Agosta, Piero De Sabbata and Nicola Gessa ... 449

Index of Contributors ... 461
Part I
Business Interoperability
Ad-hoc Execution of Collaboration Patterns using Dynamic Orchestration

Jean-Pierre Lorre (1), Yiannis Verginadis (2), Nikos Papageorgiou (2) and Nicolas Salatge (1)

1. EBM WebSourcing, 4 rue Amélie, Toulouse, France
2. Institute of Communications and Computer Systems, National Technical University of Athens, 9 Iroon Polytechniou Str., Athens, Greece
Abstract. Industrial enterprises realize that in order to function and survive in constantly changing grounds, they need to collaborate dynamically by forming network-based collaborative alliances of a temporary nature. During such collaborations, several segments of work can often be identified as recurring and be reused. The exploitation of this repetition is considered critical for these environments, so approaches that propose the use of Collaboration Patterns (CPats), seen as loose workflows of services, can prove valuable. In this paper we present the use of CPats, executed in an ad hoc way, in virtual organizations. We propose a schema that combines the initiatives of CPats with a dynamic service orchestration engine based on an autonomic framework called Fractal. Such an approach allows reorganizing the service workflow at run-time in order to take collaboration network plasticity into account.

Keywords: Collaboration pattern, SOA, dynamic service orchestration, BPEL
1. Introduction

Modern industrial enterprises, as complex and dynamic management systems, can be considered as a myriad of concurrent processes executed by a huge set of technological or human entities. In a competitive market environment, these enterprises cope with critical challenges, in order to survive, by trying to establish efficient collaborative mechanisms that may even span their intra-organizational frontiers. The basic motivation behind the necessity of collaborating network enterprises is the intention of exploiting new business opportunities, delivering superior quality products and services, and at the same time keeping the production cost as low as possible [1] [2]. Enterprises often realize that the single enterprise model is no longer sufficient to cover all aspects of their needs and try to take part in enterprise collaborative formations referred to as virtual organizations (VOs) [3], [4], [5]. In a virtual
organization, the participating firm is no longer a physical entity with a stable mission. Instead it is a dynamic entity systematically involved in temporary alliances via network structures [6]. In such dynamic and virtual, but legally consolidated, schemas the arising collaboration issues are considered to be of critical importance. In these challenging environments there is a need for adapting the ways of collaboration in order to reflect the current conditions. A way to address this challenge is by identifying and appropriately reusing Collaboration Patterns (CPats). CPats are segments of collaboration work which can be identified as recurring and reused [7]. The reuse of CPats can constitute an advantage in collaborative environments such as VOs, as there is an increased need for modeling, executing, monitoring and supporting the dynamic nature of collaborations. The overall aim of using CPats is to enable adaptivity and ad hoc realization of collaborations with respect to changing circumstances. In the Service Oriented Architecture (SOA) context, the executable form of a CPat is formalised in BPEL (Business Process Execution Language, from the OASIS standardisation body), an orchestration language that describes how VO partners collaborate through a set of services. Consequently, in order to introduce an ad hoc nature in run-time collaboration, we propose to use the Maestro dynamic orchestration engine, which allows process instance re-organization. The main contributions of the paper are the following. First, section 2 introduces and presents the demanding nature of collaboration patterns, along with the proposed use of Maestro for adequately supporting the ad hoc nature that the solution of a CPat must present; related work is reviewed in section 2.4. In section 3 we focus on our CPat execution proposition, which involves the Fractal-based Maestro engine for supporting ad hoc workflow executions. In section 4, we give an illustrative scenario of the execution aspects of a specific CPat. Finally, section 5 concludes and discusses our research effort.
2. Collaboration Patterns

2.1. The Demanding Nature of Collaboration Patterns

The notion of collaboration involves the complex nature of a vast number of processes that may span organizational/geographical boundaries through the World Wide Web. The need for coping with iterative problems, or for reusing iterative segments of collaboration processes (Collaboration Patterns), seems more imperative than ever before. In general, the participants of a collaboration process need to be collocated either physically or virtually in order to make a decision or work together. During such collaborations, recurring segments of solutions to problems or individual actions can be identified and introduced as CPats for future use [7]. These patterns can also be regarded as prescriptive, providing means for modeling collaborative tasks and protocols of cooperation [8], while they can guide
the configuration of Collaborative Working Environments to meet the requirements of the participants [9], [10]. Among the important elements needed for describing a CPat are the triggering mechanisms and the actual solution that it proposes [7]. In terms of a system for handling knowledge-based collaborations inside VOs, services may produce many events that might be relevant for other services. It is clear that all these influences, due to their ad-hoc nature, cannot be defined in advance explicitly, but they can be categorized and used as triggering mechanisms of recurring segments of collaborative work. In addition, a CPat determines the basic structure of the solution to a particular collaborative problem, while it does not always specify a fully detailed solution. A pattern provides a scheme for a generic solution to a family of problems, rather than a prefabricated module that can be used ‘as is’. The implementation of this scheme must be done according to the specific needs of the design problem at hand. CPats are not only useful to users within the same domain as the pattern; frequently a pattern is useful in other domains as well. As CPat execution we define, first, the retrieval and instantiation of the appropriate CPat for the collaboration situation (e.g., problem, preconditions, triggering mechanisms) [7] and, second, the execution of the CPat by the appropriate “execution engine” (e.g., workflow engine, groupware tool, human tasks, etc.).

2.2. SYNERGY Collaborative Framework

CPats are used in the context of the SYNERGY FP7 research project, which defines a collaboration framework based on Service Oriented Architecture concepts. Even if it is not the aim of this paper to detail it, Figure 1 depicts its main building blocks. It consists of an Event Driven Architecture built on top of the Petals Enterprise Service Bus (ESB). Events are managed at the bus level, while a Complex Event Processing (CEP) engine allows identifying business events that match event patterns. CPats are triggered by those business events thanks to the CPat assistant, which is in charge of selecting the corresponding CPat from the CPat Knowledge Base. The CPat assistant also initializes the ad-hoc workflow engine, which is mainly in charge of executing the solution to the collaborative problem proposed by the CPat. The Collaboration Moderator Service is in charge of raising awareness of the priorities and requirements of other contributors, while the Partner Knowledge Management Service (PKMS) provides effective mechanisms to manage and utilise all the knowledge in the SYNERGY Knowledge Bases.
[Figure 1 shows the following components: CPat editor, CEP editor, CPat Knowledge Base, Event Type Knowledge Base, CPat assistant, Collaboration Moderator Service, Partner Knowledge Management Service, CEP engine, ad-hoc workflow engine and event manager (pub/sub), connected through the Enterprise Service Bus and the SYNERGY Portal.]

Fig. 1. SYNERGY technical architecture
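The event-driven triggering just described can be sketched in a few lines of code. The following Java fragment is only an illustration of the flow in Figure 1: every type and method name in it (BusinessEvent, CPatKnowledgeBase, AdHocWorkflowEngine, and so on) is a hypothetical stand-in, not the actual SYNERGY API.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the CPat assistant's role in Figure 1: the CEP
// engine reports a matched business event, and the assistant selects the
// corresponding CPats from the knowledge base and hands their BPEL
// solutions to the ad-hoc workflow engine.
public class CPatAssistant {

    public interface BusinessEvent {          // produced by the CEP engine
        String type();                        // e.g. "ProjectReviewCompleted"
        Map<String, Object> payload();
    }

    public interface CPatKnowledgeBase {      // CPat Knowledge Base in Figure 1
        List<CPat> findByTrigger(String eventType);
    }

    public interface AdHocWorkflowEngine {    // Maestro-based engine
        void startInstance(String bpelSolution, Map<String, Object> context);
    }

    public record CPat(String name, String bpelSolution) {}

    private final CPatKnowledgeBase kb;
    private final AdHocWorkflowEngine engine;

    public CPatAssistant(CPatKnowledgeBase kb, AdHocWorkflowEngine engine) {
        this.kb = kb;
        this.engine = engine;
    }

    // Called by the event manager (pub/sub) when an event pattern matches.
    public void onBusinessEvent(BusinessEvent event) {
        for (CPat pat : kb.findByTrigger(event.type())) {
            engine.startInstance(pat.bpelSolution(), event.payload());
        }
    }
}
```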
2.3. Maestro Workflow Engine for Collaboration Patterns

The ad-hoc workflow engine leverages the Maestro BPEL engine, which takes the dynamicity requirement stated previously into account. The Maestro engine is an Enterprise Service Bus (ESB) component able to orchestrate services based on the BPEL 2.0 language. An ESB is a standard-based integration platform that combines messaging, Web services, data transformation, and intelligent routing to reliably connect and coordinate the interaction of significant numbers of diverse services across extended enterprises with transactional integrity [11]. Once deployed, services are available inside the ESB, or they can be exported outside the ESB. When the BPEL service receives a request, it starts a new instance of the corresponding CPat solution. The Maestro engine is basically composed of two layered components (Figure 2):

• Maestro-core, which is in charge of creating and executing abstract Petri-net instances (e.g. an instance of a process). These instances are composed of the classical elements of the Petri-net framework (node, transition, etc.). Maestro-core is the basis for multiple process languages: native support for any process language can be built on top of such a Petri-net virtual machine. The runtime behaviour of each activity in the process graph is delegated to a dedicated API (Application Programming Interface). Each process language deals with the specialization of a set of activity types. An activity implements the runtime behaviour and corresponds to one activity type, so building a process language on top of Maestro-core is as easy as creating a set of activity implementations. Each activity is then inserted in a node of the object graph. This is realized by using the Maestro4BPEL interpreter as detailed below.

• Maestro4BPEL, an interpreter that translates the BPEL description (e.g. tag elements like <process/>, etc.) into objects managed by Maestro-core (e.g. Petri-net objects like process, node, transition). At the end of this step, an object graph is built, which contains a set of nodes. Each node (except the initial node and end nodes) is highly coupled with incoming and outgoing transitions. An execution object is also created. This object follows the step-by-step execution of the object graph. If a node has several outgoing transitions, the parent execution creates several child executions to follow each branch.
Fig. 2. Maestro Engine Framework
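To make the two-layer design more concrete, the sketch below shows what a Petri-net virtual machine with one pluggable behaviour per activity type might look like in Java. All class names are invented for this example; Maestro's actual interfaces are not given in this paper. The Sequence behaviour is simplified to run the children in creation order in one thread, while the Flow behaviour starts one thread per child, as described above.

```java
import java.util.ArrayList;
import java.util.List;

// Invented sketch of a Petri-net virtual machine with pluggable
// activity behaviours, in the spirit of Maestro-core.
interface ActivityBehaviour {
    void execute(Node node, Execution execution); // runtime behaviour of one activity type
}

class Node {
    final String name;
    ActivityBehaviour behaviour;                  // e.g. Flow, Sequence, Invoke...
    final List<Node> children = new ArrayList<>();
    final List<Transition> outgoing = new ArrayList<>(); // transitions to sibling nodes

    Node(String name, ActivityBehaviour behaviour) {
        this.name = name;
        this.behaviour = behaviour;
    }
}

class Transition {                                // may carry non-functional behaviour (log, persistence)
    final Node source, target;
    Transition(Node source, Node target) { this.source = source; this.target = target; }
}

class Execution {                                 // follows the step-by-step run of the graph
    void signal(Node node) { node.behaviour.execute(node, this); }
}

// A BPEL <sequence>: children run one after the other in the same thread.
class SequenceBehaviour implements ActivityBehaviour {
    public void execute(Node node, Execution ex) {
        for (Node child : node.children) ex.signal(child);
    }
}

// A BPEL <flow>: every child runs in its own thread ("child execution").
class FlowBehaviour implements ActivityBehaviour {
    public void execute(Node node, Execution ex) {
        List<Thread> threads = new ArrayList<>();
        for (Node child : node.children) {
            Thread t = new Thread(() -> ex.signal(child));
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            try {
                t.join();                         // the flow finishes when all branches do
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```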
Dynamicity of BPEL process instances is obtained by basing Maestro-core on the Fractal component model, which means that each Petri-net node and transition has been reified into a Fractal component. The Fractal component model [9], [12] is a programming-language-independent component model which has been introduced for the construction of highly configurable software systems. The Fractal model combines ideas from three main sources: software architecture, distributed configurable systems, and reflective systems. From software architecture, Fractal inherits basic concepts for the modular construction of software systems, encapsulated components, and explicit connections between them. From reflective systems, Fractal inherits the idea that components can exhibit meta-level activities and reify, through controller interfaces, part of their internal structure. From configurable distributed systems, Fractal inherits explicit component connections across multiple address spaces, and the ability to define meta-level activities for runtime reconfiguration.

2.4. Related Work

Within the domain of cross-organizational collaboration, patterns have been mainly used to facilitate the modeling and enactment of inter-organizational business processes [13]. In the domain of Communities of Practice (CoP), task patterns have
been proposed, mainly at a theoretical level, to define which information patterns are to be created in particular steps of collaboration [14]. Collaboration Engineering (CE) is a research-based approach that aims to provide repeatable, tool-based processes, consisting of elementary items called thinkLets, to model recurring collaborative tasks, but from the point of view of a meeting facilitator [15]. Petri nets have been used to model workflows for many years [16]. Others have proposed the π-calculus as a workflow model [17]. Control flow patterns [18], along with data flow [19] and resource patterns [20], have been proposed in order to assess the capabilities of the different workflow technologies, languages and models. In our opinion all of these approaches are too strict and not appropriate for collaborative processes, which incorporate knowledge-based, and thus very difficult to prescribe, tasks. On the other hand, the UAM method introduces activity patterns [21] in order to facilitate the reuse of ad-hoc collaborative activities and provides all the necessary underlying technologies to achieve that. Where it seems to be weak is in the modeling of complex, structured and event-based processes, since the flow of the actions is modeled only as a list or a tree of activities with no constraints. Other approaches focus mainly on the extraction of patterns from collaborative systems with ad-hoc characteristics [22]. Our goal is to address the specificity frontier of collaboration support systems [23], i.e. to support a range of collaboration types, from fixed to ad-hoc. In this respect, our work extends research on activity patterns [21] and proposes CPats as a type of design patterns applicable to event-based systems [24].
3. Maestro Reorganisation Patterns

As explained before, since Maestro-core is based on the Fractal component model, it enables reconfiguration of running process instances. To this aim, Maestro-core provides a factory containing several reorganization patterns, such as convert Flow to Sequence, convert If to Flow, etc. Figure 3 depicts the logic and the component representation of the reorganization pattern used to convert a flow control instruction into a sequence control instruction. In the component representation, each component is either a node with a functional behaviour like Flow, Sequence, Invoke, If, etc., or a transition with non-functional behaviours like log, persistence, etc. A node can have incoming or outgoing transitions, a parent node and child nodes. A Flow control instruction is represented in our component architecture by a component (or a node) with a Flow behaviour. This behaviour allows the execution in parallel of all child nodes of the flow component (nodes N2 and N3 in Figure 3). All child nodes in the Flow control instruction are executed by different threads. A Sequence control instruction is represented by a component with a Sequence behaviour. This behaviour starts the execution with the first child node of the sequence component (node N2 in Figure 3). All child nodes in the Sequence control instruction are executed in creation order in the same thread.
The algorithm used to achieve the conversion of the flow control instruction into a sequence control instruction is straightforward (read the component representation from right to left):

1. Replace the flow behaviour of the component with the sequence behaviour. Thus, instead of executing all child nodes of the component, only the first child node is executed.
2. Add a transition (T) between node N2 and node N3. Thus, when the activity of node N2 is finished, node N3 is executed. When node N3 is finished, the sequence component activity is finished, so node N4 is started.
Fig. 3. Reorganization patterns: Flow to Sequence
All these reorganization patterns are contained in the API of the Maestro factory. This factory can be used by the CPat run-time services to reorganize one or several running process instances.
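As an illustration, the two-step Flow-to-Sequence algorithm above could be coded against the invented Node/Transition classes from the earlier sketch; this is only an approximation of what the Maestro factory does, and the inverse Sequence-to-Flow pattern is sketched alongside it.

```java
// Illustrative sketch of two reorganization patterns from the factory,
// reusing the invented Node, Transition and behaviour classes above.
class ReorganizationFactory {

    // Flow to Sequence, following the two steps described for Fig. 3:
    // (1) swap the Flow behaviour for a Sequence behaviour, so that only
    //     the first child is started instead of all children in parallel;
    // (2) record chaining transitions between consecutive children,
    //     mirroring step 2 of the algorithm (the simplified
    //     SequenceBehaviour above already runs children in creation
    //     order, so here the transitions only document the new shape).
    static void convertFlowToSequence(Node flowNode) {
        flowNode.behaviour = new SequenceBehaviour();
        for (int i = 0; i + 1 < flowNode.children.size(); i++) {
            Node current = flowNode.children.get(i);
            Node next = flowNode.children.get(i + 1);
            current.outgoing.add(new Transition(current, next));
        }
    }

    // The inverse pattern, Sequence to Flow: swap the behaviours and drop
    // the chaining transitions so all children can run in parallel.
    static void convertSequenceToFlow(Node sequenceNode) {
        sequenceNode.behaviour = new FlowBehaviour();
        for (Node child : sequenceNode.children) {
            child.outgoing.clear();
        }
    }
}
```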
4. Scenario

In this section, we give an example of how we can use and execute CPats using the Maestro ad-hoc workflow engine, allowing the modification of the actions described by the CPat solution at the CPat instantiation phase (e.g., modify a sequence of actions from sequential to parallel, delete an action, add a new action, etc.). The CPat of our example deals with the case in which a VO member decides to run and support a meeting after a project review has taken place. In the following table we present the specific CPat along with the most important aspects of its description.
Table 1.

No.: CPat 1
Name:
Category: Business Pattern
Problem: Discuss the work progress and the results of a project review.
VO lifecycle phase: Any
Pre-Conditions: X (variable) number of VO members available
Triggers: (Meeting Organiser decides to schedule a meeting) AND (Project review completed)
Triggers of Exceptions: One or more VO members report unavailability at last moment (variable)
Roles: VO Meeting Organiser, VO members / Meeting Participants
Input Information: Project’s DoW (Description of work), VO members’ contact details, EU guidelines, Review Report
Solution:
1. Involves an action list:
   a. Meeting Organiser schedules the meeting (describe reason, propose date)
   b. Meeting Organiser selects meeting participants
   c. All meeting participants agree upon the meeting date and method (online or offline)
   d. Meeting Organiser prepares and sends a draft agenda
   e. …
2. Involves the usage of collaborative tools:
   - engage with a collaboration tool (e.g. www.doodle.ch) to find a date based on collaborators’ availabilities
   - virtual collaboration space…
Output Information: Meeting minutes document
Duration: X (variable) Days
Exception:
Post-Conditions: Meeting took place, Agreed meeting minutes stored in the system, Reason for having the meeting resolved
Related CPats: CPat
The proposed CPat solution involves a set of actions to be undertaken by the VO meeting organizer and the VO members that will participate. We note that the CPat solution constitutes only a “possible” sequence of actions. CPat 1 involves the sequential execution of several actions. This structure is flexible to change when the CPat is being instantiated, in order to satisfy the requirements of a particular situation. In our case, during the instantiation phase, the VO meeting organizer decides that it would be much more appropriate to start CPat 1 by having action “d” executed in parallel with action “a”, instead of executing the several actions defined in the CPat solution in sequence. The Maestro engine undertakes this flow transformation task by applying the “Sequence to Flow” reorganization pattern.
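In terms of the invented classes from the previous sketches, the organiser's instantiation-time change could look like the fragment below. It wraps actions “a” and “d” in a new flow node so that they run in parallel while the rest of the list stays sequential; the node indices and names are taken from the scenario, but the code itself remains a hypothetical sketch, not the actual Maestro API.

```java
// Hypothetical run-time reorganization of a CPat 1 instance: action (d),
// preparing the draft agenda, is moved to run in parallel with action (a),
// scheduling the meeting, by nesting both under a Flow node.
class CPat1Reorganizer {
    static void parallelizeAandD(Node cpat1Sequence) {
        Node actionA = cpat1Sequence.children.get(0);    // a: schedule the meeting
        Node actionD = cpat1Sequence.children.remove(3); // d: prepare and send draft agenda
        Node parallelStep = new Node("a||d", new FlowBehaviour());
        parallelStep.children.add(actionA);
        parallelStep.children.add(actionD);
        cpat1Sequence.children.set(0, parallelStep);     // b, c, e... stay sequential
    }
}
```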
5. Conclusion

This paper presents a framework for running ad-hoc collaborative patterns. Such patterns encapsulate reusable frames of collaboration activities. The approach uses a dynamic orchestration engine, Maestro, which leverages the Fractal component model in order to allow reorganization of the flow in the proposed CPat solutions. Modification is managed by the collaboration pattern assistant [7], which triggers the need for reorganization, and by Maestro-core, which provides a library of reorganization algorithms. Virtual organizations, as collaborative environments with increased needs, can benefit from the combined strength of Collaboration Patterns and the dynamic nature of the Maestro engine. We believe that this proposal can cope with the critical challenges that a VO environment poses. Further work will be carried out in terms of implementing this framework in event-driven environments, for validating it across real case scenarios in VO lifecycles. Our intention is also to use, test and validate CPats in an even broader environment, such as virtual breeding environments or plain collaboration environments situated outside the borders of virtual organizations. Finally, we believe that it will be very interesting to examine ways to use CPats in order to deal with unanticipated exceptions that may occur in dynamic collaborative environments.
6. Acknowledgements

This work has been partially funded by the European Commission regarding the project SYNERGY (Supporting highlY-adaptive Network Enterprise collaboration thRouGh semanticallY-enabled knowledge services), ICT No 63631. The authors would like to thank the project team for comments and suggestions.
7. References

[1] Fox, M.S., Issues in enterprise modeling, in Systems, Man and Cybernetics, ‘Systems Engineering in the Service of Humans’, Conference Proceedings, vol. 1, pp. 86-92, 1993.
[2] Matos, C. and Afsarmanesh, H., Collaborative Networked Organization: A research agenda for emerging business models, Springer Press, 2004.
[3] Davidow, W.H. and Malone, M.S., The Virtual Corporation, Harper-Collins, New York, 1992.
[4] Mowshowitz, A., Virtual Organization, Communications of the ACM 40, 9 (Sept. 1997), 30–37.
[5] Mowshowitz, A., Virtual organization: A vision of management in the information age, Inf. Soc. 10 (1994), 267–288.
[6] Ahuja, M.K. and Carley, K.M., Network Structure in Virtual Organizations, Organization Science, v.10 n.6, pp. 741-757, June 1999.
[7] Verginadis, Y., Apostolou, D., Papageorgiou, N. and Mentzas, G., Collaboration Patterns in Event-Driven Environment for Virtual Organisations, in Intelligent Event Processing – Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium 2009, Stanford, USA.
[8] Molina, A. and Bell, R., A Manufacturing Model representation of a flexible manufacturing facility, Journal of Engineering Manufacture: Proceedings of the Institution of Mechanical Engineers 213(B), 1999, pp. 225-246.
[9] Bruneton, E., Coupaye, T., Leclercq, M., Quema, V. and Stefani, J.B., The Fractal Component Model and its Support in Java, Software - Practice and Experience 2006; 36:11-12.
[10] Slagter, R., Biemans, M. and Jones, V.M., Service Orchestration for Collaboration Patterns, Technical Report TR-CTIT-05-71, Centre for Telematics and Information Technology, University of Twente, Enschede, ISSN 1381-3625, 2005.
[11] Chappell, D., Enterprise Service Bus: Theory in Practice, O'Reilly Media, 2004.
[12] Merle, P. and Stefani, J.B., A formal specification of the Fractal component model in Alloy, INRIA research report No. 6721, 2008.
[13] Norta, A., Hendrix, M. and Grefen, P., A Pattern-Knowledge Base Supported Establishment of Inter-organizational Business Processes, in R. Meersman, Z. Tari, P. Herrero et al. (Eds.): OTM Workshops 2006, LNCS 4277, 2006, pp. 834–843.
[14] De Moor, A., Community Memory Activation with Collaboration Patterns, in Proceedings of the 3rd International Community Informatics Conference (CIRN 2006), Prato, Italy, pp. 1.
[15] Briggs, R.O., Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems, Journal of Management Information Systems, 19, 31-64, 2003.
[16] van der Aalst, W.M.P., The Application of Petri Nets to Workflow Management, Journal of Circuits, Systems and Computers, 8, 21-66, 1998.
[17] Puhlmann, F. and Weske, M., Using the pi-Calculus for Formalizing Workflow Patterns, Lecture Notes in Computer Science, 3649, 153, 2005.
[18] van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B. and Barros, A.P., Workflow Patterns, Distributed and Parallel Databases, 14(3), 2003, pp. 5–51.
[19] Russell, N., ter Hofstede, A.H.M., Edmond, D. and van der Aalst, W.M.P., Workflow Data Patterns, Technical report (FIT-TR-2004-01), 2004a.
[20] Russell, N., ter Hofstede, A.H.M., Edmond, D. and van der Aalst, W.M.P., Workflow Resource Patterns, BETA Working Paper Series, WP 127, Eindhoven University of Technology, 2004b.
[21] Moody, P., Gruen, D., Muller, M.J., Tang, J. and Moran, T.P., Business activities patterns: a new model for collaborative business applications, IBM Systems Journal 45(4), 2006, pp. 693-694.
[22] Biuk-Aghai, R., Patterns of Virtual Collaboration in Online Collaboration Systems, in Proceedings of the IASTED International Conference on Knowledge Sharing and Collaborative Engineering, ACTA Press, 2004, pp. 57-62.
[23] Bernstein, A., How Can Cooperative Work Tools Support Dynamic Group Processes? Bridging the Specificity Frontier, CSCW'00, Philadelphia, PA, 2000.
[24] Paschke, A., Design Patterns for Complex Event Processing, 2nd International Conference on Distributed Event-Based Systems (DEBS'08), 2008.
Alternative Process Notations for Mobile Information Systems

Sundar Gopalakrishnan and Guttorm Sindre

Department of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Sem Sælands vei 7-9, NO-7491 Trondheim, Norway
Abstract. Model-based requirements specifications may be performed using a wide variety of notations, emphasizing different standards and information needs. Most of these notations have been developed with more traditional information systems development in mind, not specifically for mobile information systems. While many of the same modelling approaches will be relevant for mobile information systems, too, there are reasons to believe that the modelling needs in that domain might be somewhat different. In particular, the location and context of an information processing activity will be less fixed than for a desktop-based system, and this may also cause a greater need for modeling this location and context. This paper explores some alternative notation variants for UML activity diagrams to address these needs. Some notations are proposed and compared, using a case study from the home care domain.

Keywords: Requirements specifications, model-based, UML, use case diagrams, BPMN
1. Introduction

A large number of diagrammatic modeling techniques have been used in information systems analysis and requirements engineering, such as goal modeling [1], conceptual modeling [10], and process modeling [4], including UML, use cases [11] and BPMN. For requirements specification [5] [6], there exist different diagrammatic modeling languages, typically addressing a number of different aspects [7], such as:

• WHAT (information is supposed to be handled by the system, e.g., in terms of classes and relationships)
• HOW (is the system going to solve its tasks, e.g., in terms of process models)
• WHEN (are tasks going to be performed, e.g., shown in more precise process models including timing)
• WHY – in terms of goals or business rules
• WHO – in terms of agents / roles. Some languages typically combine WHY and WHO, such as i* or KAOS [3]
For mobile information systems another question emerges, namely WHERE? Most naively this can be considered in a purely geographical sense – where is a certain activity supposed to be performed or a certain service supposed to be offered? However, it can also be considered from a multi-channel [12] point of view – through which kind of equipment is the functionality going to be offered [13] [14]? Or from the point of view of the working context: in what situation is the user when needing the functionality, e.g., is it going to be used in a car, and is the car standing still or driving, and in the latter case: is it the person driving the car who actually needs to use the system at the same time? All this may have important bearings on the requirements and later design of the system, and would therefore be relevant to capture in early-stage system models. This paper is organised as follows: section 2 presents related work and section 3 the proposed diagram notations, followed by an evaluation in section 4 and the conclusion with possible future work in section 5.
2. Related Works

The Volere specification approach [8] provides a general template for all kinds of requirements but, as mentioned earlier, mobile requirements specification is particular, and the template does not cover the ‘where’ aspect. RE techniques mostly use diagrams for requirements, and visual notations play an important role in all engineering disciplines. The work carried out in the MPOWER project [17] uses UML notations extensively in its home care system and concludes that UML profiles can be used as a mechanism for toolchains based on OMG’s Model Driven Architecture (MDA) and UML standards [16]. Larsson [7] proposes How, What and Why as the three building blocks for understanding processes, adds Who for a use-oriented design approach, and omits the ‘Where’ concept. Veijalainen’s [15] work on mobile ontology development identifies the ‘where’ aspect, but excludes the ‘what’ aspect.
3. Proposed Diagram Notation

With the home care system [9] case study as background, consider the simple UML [2] activity diagram in Fig. 1. A brief summary of the home care system is given below: the shift leader is responsible for the work at the zone, answering to Trondheim municipality. Nurses typically have more somatic knowledge than the home care assistants, and are to support social needs. The home care assistants typically work more broadly, with personal care before lunch and then with home care (i.e. practical tasks in the home).

The diagram in Fig. 1 typically focuses on HOW the process is performed (the activity boxes and decision diamonds) and by WHOM (swimlanes, typically roles in the organization). WHAT could be included by rectangles representing information objects manipulated by the activities (not done here for the sake of simplicity). WHY is not so much included in this type of diagram, but could be addressed in another type of diagram, for instance a goal hierarchy. The WHERE dimension is typically not included in this type of diagram. Since the process is about home care and the loop indicates that the home care assistant is visiting multiple patients during the day, one could make the guess that the assistant might need to drive a car or use some other kind of transportation to get around to the various clients, but this is not explicit in the model. However, this could be interesting both for understanding the process and for discussing potential improvements. For instance, when it comes to understanding:

- Is the activity “Get preparatory information about patient” performed while in office (i.e., before going to see the patient), during transportation, or at the patient’s home?
- Similarly, is “Log info about visit” performed at the patient’s, upon returning to the car, or when returning to the office?
Fig. 1. Example UML activity diagram for a home care process
And, when it comes to possible improvement:

- Assuming the assistant currently returns to the office after each patient, which increases the drive time and makes for a less efficient process – how to distinguish this diagrammatically from the alternative of only returning to the office after visiting all patients?
- Assuming that the assistant currently makes temporary logs on paper, which then have to be transcribed electronically after the visiting round – how to distinguish this from the practice of logging electronically while at the patient’s or in the car, using mobile equipment?
- Assuming that the shift leader wants more flexibility, e.g., dynamically reallocating patients between different assistants, either based on assistant capacity (for instance, one assistant might end up using extra time with one patient due to some complications, thus not having time for all patients on her list, while another assistant might have available capacity) or based on patient availability (e.g., one patient is not available for home care for a certain period of the day because she needs to go to the dentist or whatever, and therefore wants to be rescheduled). This could be achieved by communicating interactively with the assistants while they are on their rounds, but this would require the use of mobile equipment and necessary documentation about patients being available through that equipment, rather than assistants having received the needed documentation on paper prior to their rounds.

For both these main purposes (process comprehension and process improvement) it could be worthwhile to look into notations which explicitly show WHERE a task is performed, in what context, and with what equipment. Here we will explore various ways to do this:

- attempt A: using existing notation as-is, just adding some explanatory annotation
- attempt B: redefining the usage of swimlanes
- attempt C: using color or other notation to indicate WHERE-aspects
In Fig. 2 the first approach is illustrated. The diagram is largely the same as the one just shown, only some annotation boxes have been added where appropriate. Also, it is clearly shown where the activities of the home care assistant are performed and under what conditions. This would be important input to further requirements work: for instance, knowing that preparatory info about the patient should be possible to acquire while driving the car, the system must be able to provide this info to the home care assistant as audio, either from mobile equipment or equipment belonging to the car, since if the assistant has to use his hands or eyes a lot to get the info, this could interfere with safe driving. A possible disadvantage of these annotations is that they might clutter up the diagram with a lot of extra nodes. In the next diagram another approach is shown, redefining the usage of swimlanes. Now, these are no longer used for indicating the organizational WHO but instead to show WHERE. For WHO, stick figures are used instead, since this is more consistent with some other diagram types in UML (e.g., use case diagrams). However, this diagram has another disadvantage, namely that a lot of edges are needed to indicate who performs which task, especially for the “Homecare assistant” who performs most of the tasks. Hence, Fig. 3 looks a lot messier and worse than Fig. 2 at this point. An attempt to mend it is shown in Fig. 4, where the parenthesis “all other activities” is added
to the name of the Homecare assistant actor, so that the connecting lines can be dropped. In addition to the mentioned parenthesis, the stick figure for the HCA is also made bigger to indicate that this is the prime actor of the process. However, this would only work nicely if there is one particular actor who is responsible for most of the activities in a diagram. If there are several actors each responsible for a number of activities, it would be impossible to avoid the mess of a lot of edges.
Fig. 2. Home care example with annotations showing place and context
Fig. 3. Home care example with redefined swimlanes
Fig. 4. Attempt to improve on Fig. 3
Another idea could be the use of colour, such as in Fig. 5 (a).
The disadvantage is that a colour legend has to be established, but if the same colours are then used consistently over a number of modelling projects, this will be a small investment, and gradually it might be less necessary to have the legend visible in every diagram. Alternatively there might just be a link to the legend so that it could be accessed if needed. Another advantage of the colour version, as illustrated in Fig. 5 (b), is that if several process design alternatives are put side by side, it becomes very easy to quickly spot differences in planned locations of activities. The reader would see almost immediately that the only difference between the two diagrams is that the prep info and logging activities are performed in different places (car vs. office, car vs. patients’ home). This notation might therefore be particularly useful for easily discussing alternative process designs by means of location. Also, since the use of colours means that one avoids adding extra nodes (contrary to the annotation variant) or extra edges (as with the stick figure actor variant), diagram complexity is not increased much here, except that the reader must of course understand the usage of the colours. Alternatively, patterns can be used instead of colours. It is of course possible to illustrate differences between process design alternatives with the other approaches, too. But for the annotation version in Fig. 2 in particular this would be a lot more subtle, the differences only being shown as different texts within annotation rectangles, therefore needing close inspection of the model to be discovered by the user. Another possible advantage of colour is if there are different options for how to perform an activity, e.g., if the worker has a personal choice whether to do the prep info activity in office or while driving the car. With the colour approach, this could easily be shown by having two different colours inside the node for that particular process step, again easy for the reader to spot quickly and not increasing diagram complexity that much, as shown in Fig. 6.
Fig. 5 (a) and (b). Example using color
Fig. 6. Including an optional choice of location for a process step
4. Evaluation

In this paper we used the home care system [9] in the city of Trondheim, Norway, as a case study. The UML diagram notations presented in section 3 are based on a simple task in the complete home care system. We regard it as more important to understand the alternative notations in view of mobile information systems than to present the whole scenario of the mobile care system. Before evaluating these notations it is necessary to understand some information about the mobile care system in Trondheim [9]. Home care involves services being offered in the home of the customer, including practical help and home nursing care. In the ‘Mobile Care’ project, it is planned to better support the mobile aspects of the home care service by providing the employees continuous access to Gerica and other relevant systems from wherever they are, using a PC/PDA solution. This is seen in connection with the ‘Wireless Trondheim’ project, which is currently building a mobile broadband network for Trondheim. In other parts of Trondheim, one so far has only UMTS or GPRS coverage (i.e. with lower bandwidth for data transfer).

The four alternative process notations presented in section 3 all reflect the same task performed by the home care assistant, who visits patients according to the list given by the shift leader. Using Gerica on a PDA, the home care assistant logs the info about patients on the go, communicating patient health status to the nurse at the hospital and receiving further info about the logs. With the notation in Fig. 1, the tasks performed can be understood, but the notation lacks information about where they are done. It is also hard to reflect a dynamic change of the schedule in this notation, for example if the shift leader wants to re-prioritise the patient list. The notation in Fig. 1 thus does not reflect the state-of-the-art use of mobile technologies in the home care system. The notation in Fig. 2 is comparatively better than Fig. 1 at showing where the tasks are carried out: at the cost of extra nodes, Fig. 2 shows where the activities take place and under what conditions. These extra nodes would, however, be a disadvantage if the same notation were used to represent the entire home care system. In Fig. 3, the swimlanes and the stick figures represent where and who, respectively. This notation has the disadvantage of a lot of edges showing who performs the tasks, which can be overcome by assigning the HCA to all other activities as in Fig. 4. The notation in Fig. 5 keeps a colour legend and assigns colours consistently through the modelling project. This notation seems simple and could handle a large project such as the whole home care system, provided a colour legend is established and used consistently throughout the project. The same notation is improved in Fig. 6, showing an optional choice of where tasks are performed. On using patterns instead of colour: patterns certainly have an advantage with respect to colour-blindness; on the other hand, patterns might be slightly more difficult to distinguish quickly, especially if there are many locations/contexts that must be differentiated, so that one has to use many different patterns, which would make them more subtle and therefore require closer inspection of the model. With the annotation approach an optional location could be described in text in the rectangle, e.g., “In office or while driving to patient’s home, according to worker’s preference” – easily understandable but needing closer inspection to spot (and also causing annotation texts to be longer, which might mean that rectangles would have to be bigger, thereby cluttering up the diagram more, etc.). With the swimlane + stick figure approach, it gets very hard to illustrate such optional choices. Of course, an activity could be put on the boundary between two swimlanes, but this will only work if the alternative locations are actually on adjacent swimlanes. In other cases, one would simply have to put occurrences of the same activity step in several swimlanes and perhaps indicate the choice with a decision diamond, which would increase the diagram complexity quite a lot. Also, it might overload the decision diamond if this is used both for decisions concerning what to do and decisions concerning where to do it.

Table 1. Evaluation of proposed notations with simple and large models
| Notation | Min. deviation from standard (Simple / Large) | Expressiveness (Simple / Large) | Intuitive / Easy to read (Simple / Large) | Less Complexity (Simple / Large) |
|---|---|---|---|---|
| Trad. UML | ++ / ++ | -- / -- | ++ / ++ | ++ / ++ |
| Annotated | + / + | + / + | + / - | + / - |
| Loc. Swimlanes | - / - | + / + | - / -- | + / -- |
| Loc. Swiml. with big actor | - / - | + / + | - / -- | + / - |
| Colours | - / - | ++ / ++ | + / + | ++ / ++ |
| Colours with optional choice | - / - | ++ / ++ | + / + | ++ / ++ |
Table 2. Evaluation of proposed notations with the SEQUAL framework. Each notation (traditional UML activity diagram, annotated, location swimlanes, location swimlanes with big actor, colours, and colours with optional choice) is rated from + to ++ on three language-quality criteria: organizational appropriateness, domain appropriateness and comprehensibility appropriateness; the colour-based notations receive the highest ratings.
In Table 1, the proposed notations are evaluated, for both small and large models, against four key features: minimal deviation from the standard, expressiveness, intuitiveness/ease of reading, and low model complexity. In Table 2, using the SEQUAL framework, the notations are evaluated with respect to language quality: organizational, domain and comprehensibility appropriateness.
5. Conclusion and Future Work
This paper has analysed all the stated notations, including the standard UML activity diagram, and has proposed alternative possible notations for specifying mobile information systems. This analysis was performed on a simple model from the home care system, starting with the UML activity diagram notation. The same task was then modelled with the different alternative notations, so that they can provide complete information on the what, who, how, why and where aspects, and the notations were evaluated in the section above. Although to some extent it can be seen that some diagrams are better or worse than others, e.g., that Fig. 3 is way too messy, empirical investigations are needed to draw strong conclusions about which ideas are worth pursuing. The evaluation results, tabulated in Tables 1 and 2, indicate that colours and colours with optional choice have more positive features than the other proposed notations. A wide range of different experiments can be envisioned in which competing notations are compared with each other; the notations would not have to be UML-based either, but could also be based on other modelling languages.
6. References
[1] Mylopoulos, J., Chung, L., Yu, E.: From Object-Oriented to Goal-Oriented Requirements Analysis. Communications of the ACM 42(1): 31-37, 1999.
[2] Zhu, L., Gorton, I.: UML Profiles for Design Decisions and Non-Functional Requirements. In: Proceedings of the Second Workshop on Sharing and Reusing Architectural Knowledge: Architecture, Rationale, and Design Intent. IEEE CS, 2007.
[3] Oliveira, A.d.P.A., et al.: Integrating scenarios, i*, and AspectT in the context of multi-agent systems. In: Proceedings of the 2006 Conference of the Center for Advanced Studies on Collaborative Research. ACM, Toronto, Ontario, Canada, 2006.
[4] Pavlovski, C.J., Zou, J.: Non-functional requirements in business process modeling. In: Proceedings of the Fifth Asia-Pacific Conference on Conceptual Modelling, Volume 79. Australian Computer Society, Inc., NSW, Australia, 2008.
[5] Ohnishi, A.: Software requirements specification database based on requirements frame model. In: Proceedings of the 2nd International Conference on Requirements Engineering, 1996.
[6] Mujumdar, S., et al.: A model-based design framework to achieve end-to-end QoS management. In: Proceedings of the 43rd Annual Southeast Regional Conference, Volume 1. ACM, Kennesaw, Georgia, 2005.
[7] Larsson, A.V.: Designing for Use in a Future Context: Five Case Studies in Retrospect. PhD thesis No. 1034, Institute of Technology, Linköping University, Sweden, 2003.
[8] URL: http://www.volere.co.uk/template.htm, accessed 7 Dec 2009.
[9] URL: http://research.idi.ntnu.no/trimaks, accessed 7 Dec 2009.
[10] Sølvberg, A.: Data and what they refer to. In: P.P. Chen et al. (eds.): Conceptual Modeling, pp. 211-226. Lecture Notes in Computer Science, Springer Verlag, 1999.
[11] Booch, G., Rumbaugh, J., Jacobson, I.: The Unified Modeling Language User Guide. Addison-Wesley, 1999.
[12] URL: http://m3w.idi.ntnu.no/, accessed 7 Dec 2009.
[13] Gopalakrishnan, S., Sindre, G.: A Taxonomy of Mobility-Related Requirements. IEEE CS (I-ESA'09), Beijing, China, 20-22 Apr 2009.
[14] Gopalakrishnan, S., Sindre, G.: A Revised Taxonomy of Mobility-Related Requirements. In: Proc. International Workshop on Management of Emerging Networks and Services (MENS'09), St. Petersburg, Russia, 12-14 Oct 2009.
[15] Veijalainen, J.: Developing Mobile Ontologies: who, why, where, and how? IEEE, 2007.
[16] Moody, D.L.: The "Physics" of Notations: Towards a Scientific Basis for Constructing Visual Notations in Software Engineering. IEEE Transactions on Software Engineering, 2009.
[17] Walderhaug, S., Stav, E., Mikalsen, M.: Experiences from Model-Driven Development of Homecare Services: UML Profiles and Domain Models. LNCS, Springer, 2009.
Conceptual Framework for the Interoperability Requirements of Collaborative Planning Process1 María M.E. Alemany1, Faustino Alarcón1, Francisco C. Lario1, Raul Poler1 1
Research Centre on Production Management and Engineering (CIGIP), Polytechnic University of Valencia, Camino de Vera s/n, Valencia 46022, Spain
Abstract. In order to achieve the multiple benefits of the collaborative planning (CP) process in a supply network (SN) context, some interoperability-related obstacles must be overcome. The design and characterization of the CP process is a critical point at which to identify not only the interoperability requirements (IR), but also the CP components they affect. This paper presents a conceptual framework for the simultaneous characterization of the CP process and the identification of the potential IR to be overcome when implementing collaboration. This early IR identification can help define the interoperability problem space that is essential for the definition of the solution space, thus saving subsequent costs and effort. Keywords: collaborative planning process, supply networks, conceptual framework, interoperability requirements
1. Introduction
Collaborative Planning (CP) is defined as a joint decision-making process for aligning the plans of individual Supply Network (SN) members for the purpose of achieving coordination (adapted from [1]). Coordination should be achieved between the plans of SN decision-makers at different temporal levels (temporal integration) and at the same temporal level (spatial integration). Different authors stress the importance of analysing collaboration from a process perspective [2; 3] and the necessity of making processes interoperable in order to achieve the benefits of collaboration [4]. [5] affirms that the first step in providing solutions
1 This research has been partially carried out in the framework of a subproject funded by the Science and Technology Ministry of the Spanish Government, entitled PSS-370500-2006-3 "SP3 Integración operativa de la Cadena de Suministro", within the project PSE-370500-2006-1.
for making a process interoperable is the identification of its interoperability requirements (IR). In this context, [6] propose a framework for the classification, and a methodology for the collection, of the requirements for enterprise modelling interoperability. [7] propose a framework able to handle the diversity of collaborative processes based on different computational and communication requirements. [8] describe how to connect the internal processes of two companies supported by a cross-organizational business process meta-model. [9] present a framework and general requirements for seamless interoperability in a collaborative network environment. Different tools, ICT platforms, reference models and frameworks [10; 11], and their evaluation for interoperability, have been proposed. However, to the best of our knowledge, there is no conceptual framework for CP that focuses on the process perspective and, additionally, addresses the interoperability issue. Although works that tackle the interoperability issue have been identified, they focus mainly on classifying the IR and analysing existing interoperability solutions, rather than on reporting when and where the potential IR could appear along the process. Moreover, no work has been found that links the IR with the specific CP process. Having identified this gap in the literature, this paper proposes a conceptual framework for the characterization of the CP process that allows the IR to be identified, in a structured way, during the design of the CP process. The process design and analysis phase is critical for the early identification of IR and the later search for interoperability solutions, thus saving subsequent costs and effort. The paper is structured as follows. The next section details the components of the CP conceptual framework and their relationship with the IR. The last section provides some conclusions concerning the applicability and benefits of the proposal.
2. Description of the CF Components for the CP Process: Identification of IR
The conceptual framework of this paper presents the same fundamental components as that proposed by [12], but with two main differences: 1) the contents of each element of the conceptual framework refer to the CP process and not to the collaborative order promising process, and 2) special attention has been paid to the IR in the description of each element. The IR are based on those proposed by [10; 9; 13] in a collaboration context and have been completed by the authors. They have been classified using the interoperability levels (data, services, processes and business) proposed by INTEROP NoE for the enterprise interoperability framework [10]. A process is defined when the answers to questions (a)-(g) are known (Table 1). The answer to each question corresponds to the characterization of a conceptual framework component, which is described and linked with the IR in this section (Table 2).
Table 1. Components of the conceptual framework

THE PROCESS ITSELF: (a) What activities are to be carried out? (b) Who is responsible, who has the authority to carry them out, and with what? (c) When and how are they to be carried out?
THE INFORMATION: (d) the process inputs; (e) the process outputs.
EVALUATION ASPECTS: (f) the process objectives; (g) the performance indicators.
Table 2. Relationship among each CF component and the IR. The CF components (the process itself: what activities?, who and with what?, when and how?; the information requirements: process inputs and outputs; the evaluation aspects: ex-ante objectives and ex-post performance indicators) are related to the following interoperability requirements, grouped by interoperability level:

BUSINESS: modes of decision-making [10]; alignment of conflicting objectives and goals; methods of work [10]; definition of the domain of decisions and actions; agreement on responsibility and authority; learning; compatibility of organizational structures; integrated evaluation of inter-organizational activities and processes; base-detailed level of business representation [13].
PROCESS: definition and specification of the inter-organizational collaborative activities [10]; business documents choreography [9]; definition of the interdependence information; achievement of the roles/tasks fulfilment [9]; identification of potential activities to be automated.
SERVICE: linking applications and services, addressing aspects related to interfaces and ICT platforms [9]; messaging (communication and message format) [9].
DATA: data integration [10]; information exchange [10]; semantics of the exchanged information [9]; format of the exchanged information [9]; information detail; accessibility to reliable and real-time information [9]; update of information; management of the stored information [9]; security (confidentiality, authentication, non-repudiation) [9]; business documents formats and contents [9].
Tools and methods used in the different companies in the four areas of interoperability concern (data, services, processes, businesses) should be identified and compared, and their compatibility assessed. The answers to the questions of Table 3 can be used as a validation principle for determining whether an IR can be omitted (a "yes" answer) or not (a "no" answer) [14].

Table 3. IR validation principles [14]

CONCEPTUAL COMPATIBILITY: Syntactic: is the information to be exchanged expressed with the same syntax? Semantic: does the information to be exchanged have the same meaning?
ORGANIZATIONAL COMPATIBILITY: Persons: are authorities/responsibilities clearly defined on both sides? Organization: are the organization structures compatible?
TECHNOLOGICAL COMPATIBILITY: Platform: are the IT platform technologies compatible? Communications: do the partners use the same exchange protocols?
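To illustrate, the validation principle of Table 3 can be read as a simple checklist: an IR may be omitted only when every compatibility question is answered "yes". The sketch below, in Python, is one possible encoding; the question keys are our own shorthand for the Table 3 entries, not names from [14].

```python
# The six compatibility questions of Table 3, grouped by compatibility type.
VALIDATION_QUESTIONS = {
    "conceptual": ["syntactic", "semantic"],
    "organizational": ["persons", "organization"],
    "technological": ["platform", "communications"],
}

def ir_can_be_omitted(answers: dict) -> bool:
    """An IR can be omitted only if every validation question is answered
    'yes' (True); any 'no' keeps the IR in the interoperability problem space."""
    return all(answers[q] for group in VALIDATION_QUESTIONS.values() for q in group)

# Example: partners share syntax and platforms but differ semantically,
# so the corresponding IR cannot be omitted.
answers = {"syntactic": True, "semantic": False, "persons": True,
           "organization": True, "platform": True, "communications": True}
assert ir_can_be_omitted(answers) is False
```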
2.1. The Collaborative Planning Process
What activities have to be carried out? The activities in such a process depend on the level of detail adopted for its representation; therefore, the answers to the subsequent questions (b)-(c) are conditioned by the first question (a). Indeed, to achieve the IR of inter-organizational process integration, the selection of a specific language for the business process description, and of methods to link the disjointed views of the processes, is essential, particularly in terms of the level of detail [13]. Therefore, to integrate the different planning sub-processes of each organization at the different CP temporal levels, we adopt a base-detail level of process representation, which considers a base-activity to be a set of activities at a specific temporal level of the CP process that: 1) have the same function, 2) have to be considered simultaneously (executed in the same planning period and at the same time point) given their strong interdependency, and 3) have the same responsible party. In what follows we refer to base-activities. The following classification of the main CP activities, based on the function performed, is proposed (a data-structure sketch of this classification follows the list):
• Initial and final activities: these determine when the process should be initiated and finalized, respectively.
• Planning activities: through their execution, a decision related to the SN planning function is taken (e.g., generating a production plan). To define them, the IR of decision-making modes, working methods, and the definition of the domain of decisions and actions need to be dealt with.
• Initial plan generation activities: these are in charge of generating an initial solution to start a negotiation process, should one exist. To define them, the same IR as for planning activities need to be dealt with.
• Call activities: these make the execution of other activities possible. At the base level of business representation adopted, a call activity usually links planning activities belonging to different responsible parties and/or organizations. They are essential in achieving all the IR related to messaging, such as communication, message format, business document format, security, etc.
• Information management activities: these refer to activities where either some type of information transformation, mainly due to spatial or temporal integration, or a calculation based on previous information is made:
  o spatial integration: at the same temporal level it may be necessary to pass information in the appropriate form to other SN members (e.g., to calculate the total supply quantity requested from each producer's facility from the distributor's distribution plan, expressed per vehicle and facility).
  o temporal integration: as one moves down the decision-making temporal hierarchy, planning horizons shorten and the granularity of the information used increases, so as to permit the explicit representation and timing of the key events [15]. It then becomes necessary to
aggregate/disaggregate the information relating to time, items and resources exchanged among the activities belonging to the different temporal levels. These types of activities are essential for solving the IR of data integration, semantics of the exchanged information, availability of information, and management of the stored information.
• Conditional activities: these check the fulfilment of certain conditions. Depending on whether the conditions are met, different subsequent activities are triggered. Conditional activities are essential for implementing the negotiation process and its finalization, and also for harmonizing the different IR of decision-making modes and working methods.
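The data-structure sketch referred to above encodes the activity types and the base-activity notion as plain structures. It is a minimal, hypothetical Python rendering: the names ActivityKind, BaseActivity and inter_organizational are our own, not constructs of the framework.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class ActivityKind(Enum):
    """The main CP activity types, classified by the function performed."""
    INITIAL = auto()
    FINAL = auto()
    PLANNING = auto()
    INITIAL_PLAN_GENERATION = auto()
    CALL = auto()
    INFORMATION_MANAGEMENT = auto()
    CONDITIONAL = auto()

@dataclass
class BaseActivity:
    """A base-activity: a set of activities at one temporal level that share the
    same function, are executed simultaneously and have one responsible party."""
    name: str
    kind: ActivityKind
    temporal_level: int          # 0 = highest temporal level; larger = more detailed
    responsible_party: str       # e.g. planning decision-maker, mediator, application
    organization: str            # SN member to which the responsible party belongs
    inputs_from: List[str] = field(default_factory=list)   # interdependent inputs
    outputs_to: List[str] = field(default_factory=list)    # interdependent outputs

def inter_organizational(a: BaseActivity, b: BaseActivity) -> bool:
    """Linked activities are inter-organizational (a source of IR) when their
    responsible parties belong to different organizations."""
    return a.organization != b.organization
```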
The above classification of activities can be useful for the definition and specification of the IR of inter-organizational collaborative activities. An inter-organizational activity may be considered to link activities whose responsible parties belong to different organizations, and/or whose parts of the physical SN under their control belong to different organizations. Furthermore, the above classification, linked with the answer to "with what?", can help in identifying the potential activities to be automated (IR).
Who is responsible for performing each activity, and with what? Once the activities to be carried out in the CP process have been clarified, the responsibility for their execution, the authorizations and the computer applications that could support the decision-making process should all be defined. We distinguish four main categories of responsible parties (Table 4). A mediator is an additional SN member who controls the rules of the negotiation process, e.g., controlling the (timing of) interactions among members. A mediator may have the capability to evaluate the SN members' plans, to generate plans and present them to all the SN members for their evaluation, or to propose the distribution of efficiency gains among SN members. For activities other than the planning type, another party or an application will be defined as the responsible party. The definition of responsibilities is strongly dependent on the SN organization and decision characteristics; however, it is possible to provide some guidelines (Table 4).
Table 4. Relationships among the different types of activities (planning, initial plan generation, call, information management, conditional) and the categories of responsible party (planning decision-maker, mediator, another party, application/automated).
At this point, the IR of compatibility of the organizational structures should be addressed. Furthermore, the definition of the responsible parties for the planning activities depends on the characteristics of the physical, decision and organization scope of these activities; this scope allows the definition of the IR of the domain of decisions and actions. The designation of responsibility for each activity contributes to achieving the IR of roles/tasks fulfilment. Furthermore, the specification of responsibility and
authority is essential in defining the IR of security (confidentiality, authentication, non-repudiation) and of accessibility to information (authorization). Conditional activities require the IR related to the negotiation specifications and the collaborative agreement to have been previously defined; therefore, the IR of available business documents for this type of activity has to be dealt with. Regarding the answer to the question "with what?", planning activities and initial plan generation activities are usually supported by applications based on simulation, heuristic or mathematical programming models, which are only partly automated because they require their responsible party's validation. The remaining CP activities can easily be automated. Therefore, in this step, not only the IR of linking applications and services (addressing aspects related to interfaces, ICT platforms, exchange and accessibility, security and standards), but also those related to harmonizing the different decision-making modes and working methods, will be addressed.
When must CP activities be done? The execution of an activity depends on whether it is period-driven or event-driven. Although planning activities are usually period-driven (i.e., executed at predefined time intervals, namely the re-planning period), due to temporal integration they require the previous execution of the planning activities belonging to higher temporal levels, which transmit shared information to them. Furthermore, when negotiation takes place to achieve spatial integration, planning activities can be executed several times within the same re-planning period until the negotiation stopping criteria are achieved. Therefore, when negotiations among different planning activities take place, their timing can be considered a mixture of period-driven and event-driven behaviour. It is recommended that all the planning activities belonging to the same temporal decision level have the same re-planning period; SN members must therefore agree on the most suitable re-planning period for the CP process (IR of decision-making modes and working methods). Every time an activity is performed, the IR of information exchange with other activities, of accessibility to reliable and real-time information, and of information updating must be met. Finally, the timing of the CP process activities is essential in specifying the sequence in which business documents and information are exchanged in each inter-organizational activity (IR of business documents choreography) and the instant when accurate information should be available (IR of accessibility).
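These timing rules can be made concrete with a small scheduling sketch: within one re-planning period, planning activities run top-down across temporal levels (period-driven), while negotiation rounds repeat within the period until a stopping criterion holds (event-driven). The sketch reuses the hypothetical BaseActivity structure above; solve and converged are assumed callbacks, not framework constructs.

```python
from typing import Callable, Dict, List

def run_replanning_period(
    planning_activities: List["BaseActivity"],
    solve: Callable[["BaseActivity"], Dict],   # executes one planning activity
    converged: Callable[[Dict], bool],         # negotiation stopping criterion
    max_rounds: int = 5,
) -> None:
    """One re-planning period of the CP process."""
    # Temporal integration: higher temporal levels are executed first, since
    # they transmit shared information to the levels below.
    for activity in sorted(planning_activities, key=lambda a: a.temporal_level):
        plan = solve(activity)
        rounds = 1
        # Spatial integration: event-driven negotiation within the period.
        while not converged(plan) and rounds < max_rounds:
            plan = solve(activity)  # re-plan with updated interdependent parameters
            rounds += 1
```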
2.2. Information Requirements
Information constitutes the input and output of the CP process. Additionally, for the CP process, knowing the local or private information (i.e., information not shared with other activities) and the global or interdependent information (i.e., information shared with other activities) of each activity becomes especially relevant for defining collaboration. The local information of an activity always constitutes an information input or output of the whole process, while part of the global information may remain only as an internal flow.
The information to be exchanged among the different CP process activities constitutes one of the pillars for constructing links among inter-organizational sub-processes and gives rise to the IR of the data level. The first task in solving the IR related to the data level therefore consists in identifying and defining this interdependent information. Furthermore, the information required and generated by an activity strongly depends on the previously specified decision-making modes and decision-aid tools, the working methods employed and the definition of the domain of decisions and actions; there is thus a bi-directional relationship between these aspects, which should be evaluated so as to adjust them simultaneously. Identifying the information to be shared can facilitate the later implementation of tightly coupled information technology and standards for information interchange. Furthermore, it is essential for establishing the following IR: communication, message format, business documents formats and contents, security, semantics of the exchanged information, management of the stored information, information availability, etc.
Information Requirements for Planning and Initial Plan Generation Activities
A planning activity attempts to obtain the values of its decision variables (output) that optimize or achieve a satisfactory result for an objective/criterion (output) while respecting different limitations, based on the available information (input). Table 5 presents a description of the input and output information of these planning activities.

Table 5. Input & output information of planning and initial plan generation activities

INPUT INFORMATION (parameters or data):
- Local parameters: private attributes of the planning elements belonging to the system of the planning activity (e.g. costs, capacity, times).
- Interdependent parameters: shared attributes of the planning elements belonging to the system of the planning activity (e.g. demand forecasts), known as information sharing [8]; and the output information of other activities: from planning activities, the values of the decisions and/or criteria of their planning problems (interdependent decision variables and criteria), known as joint decision-making [8], covering both spatial and temporal integration, and from other, non-planning activities.
OUTPUT INFORMATION (tentative or definitive plan):
- Local decision variables/criteria: the decision variable/criteria values, obtained once the planning activity has been executed and the planning problem solved, that are not transmitted to other activities.
- Interdependence decision variables/criteria: the decision variable/criteria values which, in some way, are passed to other activities; this output becomes input data, more specifically interdependent parameters, of other planning activities. Final decision variables/criteria: their values cannot be changed under any condition during the negotiation process or, simply, because there is no negotiation. Non-final decision variables/criteria: their values can be modified during the negotiation process; with temporal integration, decisions are disaggregated for implementation (e.g. the production volume of a family is disaggregated into articles); with spatial integration, their values are adjusted before the stopping criteria of the negotiation process are reached (e.g. ordered/supplied quantities).
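The distinctions of Table 5, between local and interdependent information and between final and non-final decision variables, can likewise be captured in a few type definitions. The following sketch is illustrative only; Scope, PlanningParameter and DecisionVariable are assumed names, not constructs of the framework.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

class Scope(Enum):
    LOCAL = auto()           # private to one planning activity
    INTERDEPENDENT = auto()  # shared with other activities

@dataclass
class PlanningParameter:
    """Input information: a known property of an SN entity or relationship."""
    name: str                               # e.g. "facility_capacity"
    scope: Scope
    source_activity: Optional[str] = None   # set when derived from another activity

@dataclass
class DecisionVariable:
    """Output information of a planning activity (tentative or definitive plan)."""
    name: str                # e.g. "production_volume"
    scope: Scope
    final: bool = False      # non-final values may still change during negotiation

def updatable_outputs(variables: List[DecisionVariable]) -> List[DecisionVariable]:
    """Non-final interdependent variables must be updated in every negotiation
    cycle until the collaborative agreement specifications are met."""
    return [v for v in variables if v.scope is Scope.INTERDEPENDENT and not v.final]
```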
The input information of a planning activity consists of parameters representing known characteristics or properties of SN entities (e.g., the capacity of a facility, the production cost of an item) or relationships among them (e.g., the processing time of an item on a facility). The interdependent parameters derived from the output decisions of planning activities belonging to upper or equal temporal levels constitute the links for achieving temporal and spatial integration, respectively. The output information of a planning activity usually comprises the values of its decision variables and criteria, and can be partly transmitted to
other activities, thus becoming the input data (interdependence parameters) for those activities. Identifying the IR of information updating is especially relevant for knowing whether the decision variables of an activity are final or non-final in each negotiation cycle. Non-final decision variables have to be updated in each negotiation cycle until the collaborative agreement specifications have been achieved. Clearly, the IR of accessibility to interdependence information should be satisfied for at least the responsible parties of all the activities involved in the interaction cycle.
Information Requirements for Management Activities
The input information for management activities relates to the level of information detail required by the other activities, the format and the IR of semantics. These activities collect the interdependence output information of other activities and transform it into the format required for the interdependence input information of other activities, thus achieving spatial and temporal integration.
Information Requirements for Conditional Activities
Input information for conditional activities mainly consists of the specifications of the collaborative agreement and the output information (interdependence variables and/or criteria values) of other activities. The output contains information about whether a condition is fulfilled, usually whether the negotiation stopping criteria have been reached. Depending on the result obtained, the subsequent sequence of activities to be triggered differs.
2.3. Evaluation Aspects
Objectives (ex-ante evaluation)
Objectives relate to the CP process goals. For collaborative processes, we can distinguish between the objectives of the CP process as a whole and the objectives of its activities. In this step, a consensus should be reached among the often conflicting objectives of the different SN members responsible for different activities, in order to achieve interoperability at the IR business level.
• The CP process objectives can be of three main types [1]: a) alignment of SN material flows; b) search for the SN optimum; and c) search for a fair solution, consisting in the evaluation of the SN solution as a whole with regard to the desired solution for each SN member.
• The objectives of activities depend on the type of activity being considered. Planning activity and initial plan generation activity objectives usually consist in simultaneously optimizing the local and interdependence criteria; if the responsible party is a mediator, the objectives of each SN member should be adequately integrated to accomplish the SN objectives as a whole. Conditional activity objectives relate to the accomplishment of a pre-defined criterion; the most usual criteria are reaching a pre-defined number of negotiation rounds, an aspiration deviation among the interdependence decision variables of the SN members (e.g., ordered and supplied quantities), and/or the fairness evaluation of the SN members.
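A conditional activity's objective, the accomplishment of a pre-defined criterion, can be expressed as a simple predicate over the usual criteria: a maximum number of negotiation rounds and an aspiration deviation between interdependent decision variables (here, ordered versus supplied quantities). The function below is a hedged sketch with assumed parameter names and thresholds.

```python
def negotiation_should_stop(
    round_number: int,
    ordered_qty: float,
    supplied_qty: float,
    max_rounds: int = 10,
    aspiration_deviation: float = 0.05,   # 5% relative deviation tolerated
) -> bool:
    """Stop when the round limit is reached or the relative deviation between
    the interdependent decision variables falls within the aspiration level."""
    if round_number >= max_rounds:
        return True
    if ordered_qty == 0:
        return supplied_qty == 0
    deviation = abs(ordered_qty - supplied_qty) / abs(ordered_qty)
    return deviation <= aspiration_deviation
```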
Performance Indicators (ex-post evaluation)
Performance indicators are essential for discovering to what extent the established objectives are being fulfilled, for each SN member and for the entire SN. The definition of proper SN performance indicators and their thresholds enables the integrated evaluation of inter-organizational activities and processes (IR). Once they have been defined, the information required to calculate them must be accessible and updated (IR of accessibility to reliable and real-time information). The most usual CP process performance indicators relate to [16]: a) costs (inventory, production, warehousing and transportation costs); b) times (production, operation and transport times); c) customer service level (stock-out quantities and backorder quantities); and d) volume flexibility.
3. Conclusions
This paper has presented a conceptual framework for simultaneously characterizing the CP process and the potential IR to be overcome for each component of the process. This research attempts to contribute to the alignment of the specific CP business process in networked organizations by guiding the task of making the different SN planning sub-processes interoperable. Interoperability requires the identification of the shared elements and the possible barriers between them [14], which is the main objective of this paper. In this sense, enterprise modelling can help identify the IR through the modelling of the interactions and information exchanges that occur in collaborations. The detection of IR can be used as a first step for models measuring interoperability, such as maturity models [14], and for determining the impact of interoperability investments on business [17]. The subsequent selection of the IR to be solved should help achieve the optimal level of interoperability [18]. At this moment, the framework is being validated through an application to a ceramic tile supply chain. Future research should focus on a more exhaustive IR identification and on the procurement of specific solutions for the different IR, supported by a methodology.
4. References
[1] Stadtler H. (2009) A framework for collaborative planning and state-of-the-art. OR Spectrum 31(1): 5-30
[2] Osório A.L., Camarinha-Matos L.M. (2008) Distributed process execution in collaborative networks. Robotics and Computer-Integrated Manufacturing 24: 647-655
[3] Camarinha-Matos L.M., Afsarmanesh H., Galeano N., Molina A. (2009) Collaborative networked organizations – Concepts and practice in manufacturing enterprises. Computers & Industrial Engineering 57: 46-60
[4] Xu H., Koh L., Parker D. (2009) Business processes inter-operation for supply network co-ordination. International Journal of Production Economics 122(1): 188-199
[5] Mykkänen J.A., Tuomainen M.P. (2008) An evaluation and selection framework for interoperability standards. Information and Software Technology 50: 176-197
[6] Ducq Y., Chen D., Vallespir B. (2004) Interoperability in enterprise modelling: requirements and roadmap. Advanced Engineering Informatics 18(4): 193-203
[7] Osório A.L., Camarinha-Matos L.M. (2008) Distributed process execution in collaborative networks. Robotics and Computer-Integrated Manufacturing 24(5): 647-655
[8] ATHENA (2005) D.A2.2: Specification of a Cross-Organisational Business Process Model, Version 1.0. ATHENA IP, Deliverable D.A2.2
[9] Chituc C.M., Azevedo A., Toscano C. (2009) A framework proposal for seamless interoperability in a collaborative networked environment. Computers in Industry 60(5): 317-338
[10] Chen D., Doumeingts G., Vernadat F. (2008) Architectures for enterprise integration and interoperability: Past, present and future. Computers in Industry 59(7): 647-659
[11] Chen D., Doumeingts G. (2003) European initiatives to develop interoperability of enterprise applications – basic concepts, framework and roadmap. Annual Reviews in Control 27: 153-162
[12] Alarcón F., Alemany M.M.E., Ortiz A. (2009) Conceptual framework for the characterization of the order promising process in a collaborative selling network context. International Journal of Production Economics 120(1): 100-114
[13] O'Brien W.J., Hammer J., Siddiqui M., Topsakal O. (2008) Challenges, approaches and architecture for distributed process integration in heterogeneous environments. Advanced Engineering Informatics 22(1): 28-44
[14] Chen D., Vallespir B., Daclin N. (2008) An Approach for Enterprise Interoperability Measurement. In: Proceedings of the International Workshop on Model Driven Information Systems Engineering: Enterprise, User and System Models (MoDISE-EUS'08), Montpellier, France, 1-12
[15] Muckstadt J.A., Murray D.H., Rappold J.A., Collins D.E. (2001) Guidelines for Collaborative Supply Chain System Design and Operation. Information Systems Frontiers 3(4): 427-453
[16] Beamon B.M., Chen V.C.P. (2001) Performance analysis of conjoined supply chains. International Journal of Production Research 39(14): 3195-3218
[17] Lebreton B., Legner C. (2007) Interoperability Impact Assessment Model – Framework and Application. In: Goncalves R.J., Müller J.P., Mertins K., Zelm M. (eds) Enterprise Interoperability II, Proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2007). Springer, Berlin, 725-728
[18] Legner C., Lebreton B. (2007) Business Interoperability Research: Present Achievements and Upcoming Challenges. Electronic Markets 17(3): 176-186
Improving Interoperability using a UML Profile for Enterprise Modelling Reyes Grangel1, Ricardo Chalmeta1, Cristina Campos1, Sergio Palomero1 1
Grupo de Investigación en Integración y Re-Ingeniería de Sistemas (IRIS), Dep. de Llenguatges i Sistemes Informàtics, Universitat Jaume I, 12071 Castelló, Spain
Abstract. Enterprise Modelling has been successfully used for modelling different enterprise dimensions, such as business processes or organisation. However, some problems, like the lack of interoperability, are not solved in this context, because the large number of different Enterprise Modelling Languages that exist makes it difficult to exchange enterprise models produced by different enterprises. To solve this problem, in this paper we present a Framework that can be used to model enterprise dimensions while making interoperability between the resulting models much easier to achieve. This Framework is based on UML, using UML Profiles, and on the existing Enterprise Modelling metamodels proposed to solve interoperability problems. This paper provides a general description of the Framework that was implemented, as well as a detailed explanation of the UML Profile that was developed to represent one of the enterprise dimensions that can be modelled with the Framework, in particular the Organisational Structure of an enterprise. Keywords: Interoperability, Enterprise Modelling, Enterprise Modelling Language, UML, UML Profile, Organisational Structure
1. Introduction One of the main problems in Enterprise Modelling is the huge number of proprietary Enterprise Modelling Languages (EMLs) that exist, since interoperability problems increase among systems that use different EMLs when they want to exchange enterprise models [1-4]. Initiatives such as UEML1 [6-8]
1 Unified Enterprise Modelling Language first developed by the UEML Thematic Network [5] and then worked on by INTEROP NoE [3].
and POP*2 [9, 10] provide common exchange formats to ease the exchange of enterprise models, in order to solve interoperability problems at the Enterprise Modelling level. These two initiatives were defined with the objective of making it easier to accomplish exchanges among enterprises that use different EMLs. For this reason, they propose common metamodels, based on the main EMLs, in which several enterprise dimensions, such as 'Process', 'Organisation' and 'Product', and the concepts related to each of them are defined. MDA, on the other hand, is an emerging paradigm defined by the OMG. In this context, a lot of research is being conducted in relation to the PIM (Platform Independent Model) and the PSM (Platform Specific Model), but the characterisation of the CIM (Computation Independent Model), and of the features that an enterprise model must satisfy in order to be considered a CIM and to generate appropriate software, is still in progress [11]. In fact, several working groups inside the OMG3 are working on this topic, and their main results (Business Motivation Model (BMM), Business Process Modeling Notation (BPMN), Semantics of Business Vocabulary and Business Rules (SBVR), etc.) highlight the interest in this topic. The objective of this paper is to present a Framework based on the Unified Modeling Language (UML) for modelling enterprises that takes these two contexts into account. In particular, one of the profiles of this Framework, the UML Profile for modelling the Organisational Structure of an enterprise, is detailed. The main reason why this is more than just another Enterprise Modelling Profile, different from the existing ones [12, 13], is that it is based on the accepted metamodels developed in UEML and POP*, in which interoperability problems have been addressed. These proposals provide metamodels that make it easier to exchange enterprise models between enterprises that use different EMLs; they do not, however, provide an implementation of these metamodels that engineers can use to model enterprises directly. The UML Profiles developed inside the Framework presented in this paper are based on these metamodels and can be used to model enterprises directly. Section 2 describes the main concepts related to UEML and POP* for modelling enterprises, especially the organisational dimension, and different approaches that use UML for Enterprise Modelling. In Section 3, the proposed UML-based Framework for Enterprise Modelling is presented. Section 4 details the UML Profile for modelling the Organisational Structure of an enterprise, and Section 5 outlines the conclusions and the main lessons learned from the application of the Framework.
2 Acronym of the different enterprise dimensions: Process, Organisation, Product, and so on (represented by a star), proposed by ATHENA IP [4].
3 This is the Business Modeling & Integration Domain Task Force, whose mission is to develop specifications of integrated models to support management of an enterprise.
2. Enterprise Modelling
2.1. Using UEML and POP*
In its latest version, UEML contains: (1) a construct template, which provides a common structured format for describing modelling constructs; (2) a common ontology, which maintains the classes, properties, states and events used to describe modelling constructs; and (3) the meta-metamodel, in which the top part manages the relationships between languages, their diagram types and their modelling constructs, the bottom part shows the structure of the common ontology, and the middle part breaks down modelling constructs and maps them onto the common ontology [8, 14]. The UEML approach to describing modelling constructs is sufficiently powerful to support the integrated use of a broad variety of languages and models. However, the resulting theory and tools still need to be empirically validated and evaluated in real case studies [8, 14]. The POP* specification, on the other hand, includes the POP* metamodel, which describes the set of basic modelling constructs that have been defined and their relationships, as well as some guidelines that describe the management and use of the POP* metamodel. A thorough explanation of the POP* metamodel and its corresponding methodology can be found in [9]. This work includes the description of the POP* metamodel with the dimensions that have been defined so far, i.e. Process, Organisation, Decision and Infrastructure. The Organisation dimension expresses the formal and informal organisational structures of an enterprise, as well as the different stakeholders and relationships that form part of this organisation. The metamodel for the Organisation dimension proposes the following concepts with which to model this dimension: Organisation Unit, Organisation Role, Role (general concept), Enterprise Object (general concept), and Person.
2.2. Using UML
The Unified Modeling Language (UML) has become a standard visual language for object-oriented modelling that has been successfully used for modelling information systems in very different domains [15]. However, UML is a general-purpose modelling language that can also be useful for modelling other types of systems such as, for example, an enterprise [12, 13]. Other works, such as [16], point out the possibility of using UML as a language for Enterprise Modelling; how and under which conditions this can be done are explained in [17]. However, the benefits of model-driven approaches and the new UML2 specification provided by the OMG suggest the need to provide more practical examples of Enterprise Modelling with UML based on these recent works [18], especially for Enterprise Knowledge Modelling.
Furthermore, despite the fact that the weakness of the stereotype mechanism is pointed out in [17], the new UML2 specification [15] provides profiles that are more complete than those in version 1.5 [19]. It will therefore be possible to customise UML in a better way [20, 21]. Moreover, taking into account the number of diagrams provided in UML2 and the fact that the previous works use mainly 'Class Diagrams', it would be interesting to clarify exactly which UML2 diagrams are useful at the CIM level and then to specify which part of CIM models must be transformed into PIM models, since according to [22] there must surely be degrees of CIMness.
3. Conceptual Framework for Enterprise Knowledge Modelling
The Framework for Enterprise Modelling presented in this paper is part of a bigger project dedicated to the development of Knowledge Management Systems (KMSs). The main results of this project are: (1) the KM-IRIS Methodology for the Implementation of KMSs [23]; and (2) the proposal presented in this paper to model enterprises, the aim of which is to represent Enterprise Knowledge at the CIM level in order to obtain a graphical model called an Enterprise Knowledge Map [24].
Table 1. Proposal for Enterprise Knowledge Modelling

CIM-Knowledge level: Knowledge metamodel; UML Profile for KM; Knowledge model; diagrams: Blocks, Ontological, Knowledge.
CIM-Business level: Organisation metamodel; UML Profile for GM and UML Profile for OSM; Organisation model; diagrams: Goals, Organisational Structure.
CIM-Business level: Structure metamodel; UML Profile for AM, UML Profile for BRM and UML Profile for SM; Structure model; diagrams: Analysis, Business Rules, Product, Resource.
CIM-Business level: Behaviour metamodel; UML Profile for BM; Behaviour model; diagrams: Process, Service.
In general terms, this proposal is based on MDA and defines a framework for developing conceptual models of KMSs at the CIM level. In other words, it allows enterprise knowledge to be modelled at the CIM level at two levels of abstraction, which are required due to the great complexity of this level (see Table 1):
• CIM-Knowledge: this corresponds to the top level of the model at the CIM level; the enterprise is represented from a holistic point of view, thus
providing a general vision of the enterprise focused on representing enterprise knowledge, which will later be detailed locally at successive lower levels.
• CIM-Business: here, the vision of enterprise knowledge is detailed by means of a representation of its business, according to three types of models, i.e. the Organisational Model, the Structure Model and the Behaviour Model.
The Framework (see Fig. 1) also adheres to the following principles: (1) it is focused on Enterprise Modelling, since it takes into account enterprise dimensions and the previous work leading to initiatives such as UEML and POP*; and (2) it is a user-oriented modelling framework, since it should be developed at the CIM level by domain experts. A summary of the Framework, showing its levels of abstraction, metamodels and profiles, as well as the models and diagrams proposed for each level, is shown in Table 1 [24].
Fig. 1. Models and diagrams defined within the Framework
4. Modelling Organisational Structure of an Enterprise The Framework for Enterprise Knowledge Modelling presented in this paper is capable of representing the Organisational Structure of an enterprise by means of different diagrams: • Blocks, Ontological and Knowledge Diagrams: these are able to represent the ontology defined for the human resources and the specific knowledge and instances defined for each category.
• Organisational Structure Diagram: this makes it possible to represent the organisation of the human resources in an enterprise, mainly by means of organisational units and the roles played by employees.
• Resource Diagram: this makes it possible to model material and human resources. In the latter case, the diagram represents properties such as competencies, skills, knowledge, curriculum, and so on.
This section shows an excerpt from the Framework in order to describe how the Organisational Structure can be modelled in enterprises. It therefore shows an excerpt from the Organisation Metamodel and the UML Profile for OSM.
4.1. Organisational Structure Metamodel
Figure 2 shows an excerpt from the Organisational Structure Metamodel. The constructs needed to represent human resources in an organisation in order to model the Organisational Structure are as follows (see Fig. 2; a code sketch of these constructs follows the list).
• Enterprise: it represents any kind of organisation. This construct makes it possible to represent both an individual enterprise and an entity made up of different enterprises with distinct legal personalities (such as an extended or virtual enterprise). The following attributes are defined:
  - collaboration: it specifies the type of cooperation that the enterprise maintains with other organisations in its environment. Its value can be any of those in the enumeration 'EnterpriseCollaborationType': single, extended or virtual.
  - legalStatus: it specifies the legal form of the enterprise.
  - legalName: it indicates the legal name assigned to the enterprise.
  - cif: it specifies the fiscal identifier of the enterprise.
• Unit: it represents each of the logical groups that exist in an enterprise to implement its organisation, and may be one of the following types: department, organisational unit, section or subsection. Furthermore, this construct makes it possible to design tree-shaped hierarchies that describe the organisational structure of the enterprise. The following attributes are defined for this class:
  - type: it specifies the category the unit belongs to, which may be one of the categories defined in the enumeration 'UnitType': department, organisationalUnit, section or subsection.
  - isLeaf: it indicates whether a unit can no longer be broken down into another level, that is to say, whether it is a leaf in the hierarchical tree of organisational units.
  - connection: it indicates which of the two possible types of connection defined in the enumeration 'UnitConnectionType' (internal or external) exists among specific units in the organisation.
  - location: it specifies the physical location of a unit.
Fig. 2. An excerpt from the Organisational Structure Metamodel
• JobProfile: it represents a set of tasks that are related and require complementary and specific competences in order to be performed. The job profile defines the tasks and roles that should be performed by whoever holds this job profile. The following attribute is defined for this class:
  - level: it indicates the hierarchical level at which the work position is defined; possible levels are those defined by the enumeration 'LevelType': collaborative, strategic, tactic or operative.
• Employee: it represents the people that carry out specific work in an enterprise, occupy a work position and have a specific role. The following attribute is defined for this class:
  - dni: it specifies the unique identifier of each employee.
• Role: it represents the attitudes and skills that are required for a particular job profile.
• Task: it represents the individual actions that are the responsibility of a single individual and that are assigned to a particular job profile.
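To make the metamodel excerpt concrete, the sketch referred to above transcribes these constructs and their attributes into plain Python classes. This is an illustrative rendering of Fig. 2 (Python standing in for UML), not the profile implementation itself; the enumeration literals are taken from the text.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class EnterpriseCollaborationType(Enum):
    SINGLE = "single"
    EXTENDED = "extended"
    VIRTUAL = "virtual"

class UnitType(Enum):
    DEPARTMENT = "department"
    ORGANISATIONAL_UNIT = "organisationalUnit"
    SECTION = "section"
    SUBSECTION = "subsection"

class UnitConnectionType(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class LevelType(Enum):
    COLLABORATIVE = "collaborative"
    STRATEGIC = "strategic"
    TACTIC = "tactic"
    OPERATIVE = "operative"

@dataclass
class Enterprise:
    legalName: str
    legalStatus: str
    cif: str                                   # fiscal identifier
    collaboration: EnterpriseCollaborationType

@dataclass
class Unit:
    name: str
    type: UnitType
    isLeaf: bool                               # true if not broken down further
    connection: UnitConnectionType
    location: str
    subunits: List["Unit"] = field(default_factory=list)  # tree-shaped hierarchy

@dataclass
class Role:
    name: str          # attitudes and skills required for a job profile

@dataclass
class Task:
    name: str          # individual action, responsibility of a single person

@dataclass
class JobProfile:
    name: str
    level: LevelType
    roles: List[Role] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Employee:
    name: str
    dni: str                                   # unique identifier
    profile: Optional[JobProfile] = None
```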
4.2. Implementation of the Proposal From a technological point of view, this Proposal was implemented using the capacity of UML2 to extend a metamodel, that is to say, using a UML2 Profile. The UML2 Profile was defined for Enterprise Knowledge Modelling at the CIM level, following an MDA approach and the principles detailed above, and it was implemented using the IBM Rational Software Modeler [25]. The Profile provides the constructs needed to create the models proposed earlier (see Table 1). The 'UML Profile for OSM' (see Fig. 3) is one of the profiles of this Framework (see Table 1). It allows the organisational structure of an enterprise to be represented, showing the division of work carried out in departments, sections, subsections, etc. as well as the different profiles of jobs existing in enterprises, related employees, roles played by these employees and associated tasks.
Fig. 3. UML Profile for Organisational Structure Modelling
This profile makes it possible to develop the 'Organisational Structure Diagram' (one of the components of the 'Organisation Model') in order to represent the organisational chart of an enterprise. This diagram can include both the organisational units and the job profiles, roles and employees in each of them. The main stereotypes of the 'UML Profile for OSM' that can be used to develop the Organisational Structure Diagram are shown in Table 2.
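As a usage illustration, a small organisational chart fragment could be instantiated with the classes sketched in Section 4.1. The enterprise and names below are invented for the example; in the real Framework this information would be captured as a stereotyped UML diagram rather than code.

```python
# A small organisational chart: one enterprise, two units, one employee.
acme = Enterprise(legalName="Acme S.L.", legalStatus="Ltd.", cif="B12345678",
                  collaboration=EnterpriseCollaborationType.SINGLE)

production = Unit(name="Production", type=UnitType.DEPARTMENT, isLeaf=False,
                  connection=UnitConnectionType.INTERNAL, location="Castellón")
assembly = Unit(name="Assembly", type=UnitType.SECTION, isLeaf=True,
                connection=UnitConnectionType.INTERNAL, location="Castellón")
production.subunits.append(assembly)   # tree-shaped organisational hierarchy

planner = JobProfile(name="Production Planner", level=LevelType.TACTIC,
                     roles=[Role(name="Scheduler")],
                     tasks=[Task(name="Release the weekly plan")])
employee = Employee(name="A. Pérez", dni="00000000X", profile=planner)
```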
5. Conclusion
The Framework outlined in this paper allows enterprises to be modelled using UML as an EML. The Framework was applied in two real cases, an audit enterprise
and a foundation, the results of which were used to iteratively improve the metamodels and the profiles presented in this paper. The proposal can be used to model any kind of enterprise although, as explained in the KM-IRIS Methodology [23], the first step is to define which kinds of models are suitable, bearing in mind the size of the enterprise, its strategic objectives, and so forth.

Table 2. Stereotypes and icons that can be used within the 'Organisational Structure Diagram'

<<Enterprise>>: individual or collaborative enterprise.
<<Unit>>: any of the organisational units of an enterprise, i.e. departments, organisational units, sections, subsections, etc.
<<JobProfile>>: job profiles.
<<Employee>>: employees in the enterprise.
<<Role>>: employees' roles.
<<Task>>: tasks in a particular work place.
(Each stereotype has an associated icon.)
The main lesson learned is the difficulty involved in explaining the meaning of each concept proposed in the Framework to final users, since, in a similar way to UML, it is necessary to use diagrams to explain what each concept is and how it can be used to represent the particular case of an enterprise. For this reason, future work is centred on providing a number of templates that can guide the modelling process and, on the other hand, on validating and improving the Framework with more real cases from different sectors and types of enterprises.
6. Acknowledgements This work was funded by DPI2006-14708 and the EC within the 6th FP, INTEROP NoE (IST-2003-508011 [3]).
7. References
[1] Chen, D., Doumeingts, G.: European initiatives to develop interoperability of enterprise applications – basic concepts, framework and roadmap. Annual Reviews in Control 27 (2003) 153–162
[2] Doumeingts, G., Chen, D.: Interoperability development for enterprise applications and software. In: Cunningham, P., Cunningham, M., Fatelnig, P. (eds.): Building the Knowledge Economy: Issues, Applications, Case Studies. eBusiness, IOS Press, Amsterdam (2003)
[3] INTEROP: Interoperability Research for Networked Enterprises Applications and Software NoE (IST-2003-508011). http://www.interop-noe.org (2010)
[4] ATHENA: Advanced Technologies for interoperability of Heterogeneous Enterprise Networks and their Applications IP (IST-2001-507849). http://www.athena-ip.org (2010)
[5] UEML: Unified Enterprise Modelling Language Thematic Network (IST-2001-34229). http://www.ueml.org (2010)
[6] UEML: Unified Enterprise Modelling Language Project (IST-2001-34229). http://www.ueml.org (2004) Deliverable 1.1
[7] Berio, G., Opdahl, A., Anaya, V., Dassisti, M.: Deliverable DEM 1: UEML 2.1. Technical report, INTEROP-DEM (2005)
[8] Opdahl, A., Berio, G.: A Roadmap for UEML. In: Enterprise Interoperability. Springer-Verlag (2007)
[9] Ohren, O.P.: Deliverable DA1.3.1. Report on Methodology Description and Guidelines Definition. Technical report, ATHENA Project (IST-2003-2004) (2005)
[10] Grangel, R., Chalmeta, R., Schuster, S., Peña, I.: Exchange of Business Process Models using the POP* Meta-model. In: Int. Workshop ENEI'2005. Vol. 3812/2005 of LNCS, Springer-Verlag (2005)
[11] Grangel, R., Chalmeta, R., Campos, C.: Enterprise Modelling, an overview focused on software generation. In: Panetto, H. (ed.): I-ESA Workshops of the INTEROP-ESA Int. Conf. EI2N, WSI, ISIDI and IEHENA 2005, Hermes Science Publishing (2005)
[12] Eriksson, H., Penker, M.: Business Modeling with UML: Business Patterns at Work. J. Wiley (2000)
[13] Marshall, C.: Enterprise Modeling with UML. Designing Successful Software Through Business Analysis. Addison-Wesley (2000)
[14] Opdahl, A.: The UEML Approach to Modelling Construct Description. In: Enterprise Interoperability. Springer-Verlag (2007)
[15] OMG: UML 2.1.2 Superstructure. Object Management Group. Version 2.1.2, formal/2007-11-04 edn. (2007)
[16] Panetto, H.: UML Semantics Representation of Enterprise Modelling Constructs. In: ICEIMT (2002) 381–387
[17] Berio, G., Petit, M.: Enterprise Modelling and the UML: (sometimes) a conflict without a case. In: Proc. of 10th ISPE International Conf. on Concurrent Engineering: Research and Applications (2003) 26–30
[18] Grangel, R., Bourey, J.P., Chalmeta, R., Bigand, M.: UML for Enterprise Modelling: a Model-Driven Approach. In: I-ESA'06 (2006)
[19] OMG: OMG Unified Modeling Language Specification, version 1.5. Object Management Group. formal/03-03-01 edn. (2003)
[20] Fuentes, L., Vallecillo, A., Troya, J.: Using UML Profiles for Documenting Web-Based Application Frameworks. Annals of Software Engineering 13 (2002) 249–264
[21] Noran, O.: UML vs. IDEF: An Ontology-Oriented Comparative Study in View of Business Modelling. In: ICEIS (3) (2004) 674–682
[22] Berrisford, G.: Why IT veterans are sceptical about MDA. In: Second European Workshop on Model Driven Architecture (MDA) with an emphasis on Methodologies and Transformations, Computing Laboratory, University of Kent (2004) 125–135. http://www.cs.kent.ac.uk/projects/kmf/mdaworkshop/submissions/Berrisford.pdf
[23] Chalmeta, R., Grangel, R.: Methodology for the Implementation of Knowledge Management Systems. Journal of the American Society for Information Science and Technology 59 (2008) 1–14. John Wiley & Sons
[24] Grangel, R., Chalmeta, R., Campos, C.: A Modelling Framework for Sharing Knowledge. In: Int. Conf. KES 2007. Vol. 4693/2007 of LNAI, Springer-Verlag (2007)
[25] IBM: IBM Rational Software Modeler Development Platform 6.0.1. http://www-306.ibm.com/software/rational/ (2010)
Towards Test Framework for Efficient and Reusable Global e-Business Test Beds

Nenad Ivezic1, Jungyub Woo1 and Hyunbo Cho2

1 National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, USA
2 Pohang University of Science and Technology, San 31, Hyoja, Pohang 790-784, Korea
Abstract. We introduce a novel Agile Test Framework (ATF) for eBusiness systems interoperability and conformance testing. Largely, existing test frameworks have been designed to consider only a limited collection of specific test requirements. This makes it difficult to apply these frameworks to other, similar standards specifications and to support cooperative work in building distributed global test systems. ATF addresses the core issues found in these traditional testing frameworks to alleviate the resulting inefficiencies and lack-of-reuse problems that arise in the development of eBusiness test beds.
Keywords: testing, eBusiness, interoperability, conformance, test framework, test bed, reusability
1. Introduction

There exists an increasing need to efficiently build test beds in support of interoperable e-Business applications. For example, a recent international activity was completed under the auspices of the European Committee for Standardization (CEN) to perform a feasibility analysis of the Global Interoperability Test Bed (GITB) for eBusiness systems [1]. The goal of this activity was to establish a foundation for development of a common test framework (i.e., the GITB) for efficient development of global, distributed test beds for eBusiness. Specifically, the activity envisions and recommends development of a shared architecture for the test framework. The objective of this paper is to contribute to a basis for architectural development of test frameworks that enable efficient building and re-use of test beds for interoperable eBusiness applications. Here, we introduce the Agile Test Framework (ATF), which represents a refinement of the lessons we have learned from developing test beds for e-Business systems over many years.
The rest of the paper is organized as follows: first, we summarize findings of the feasibility analysis of the Global Interoperability Test Bed (GITB) for eBusiness systems. Then, we identify the issues that disqualify certain test frameworks from becoming a solid basis for the GITB. Next, we present design decisions that resolve the previous issues and describe the conceptual architecture for the proposed ATF framework. Finally, we summarize the findings in the paper.
2. Recommendations for the Global Interoperability Test Bed

The following is a summary of the GITB project recommendations that directly impact the test framework architectural concerns [1]:

• Promote the development of an integrated test framework which (1) does not hard-code a specific standard at any layer and (2) is capable of handling testing activities at all layers of the interoperability stack.
• Support the implementation of the test framework in a decentralized approach, as a network of multiple test beds and test services. The essential foundations are (1) a standardized testing framework (covering test execution and the test case model), (2) a coherent architecture, (3) a common platform, and (4) an access method. Also, the functional requirements should be implemented as plug-and-play components to leverage existing standards. In addition, emphasis needs to be given to non-functional capabilities, such as modularity and extensibility.
• Ensure that industry consortia and standards development organizations (SDOs) address the needs for testing as part of the standardization effort, and enable them to use the test framework. Specifically, in order to allow testing, standards specifications need to (1) complement narrative or semi-formal specifications with formal representations wherever possible; (2) define conformance criteria for compliance with the standard; and (3) comprise implementation guidelines and outline the testing approach and test cases.
• Conceive the GITB architecture as a set of test services, provided by several test service providers, which may be plugged into the test infrastructure based on common interfaces. Focus on modularity and reusability of testing services and components. Provide for test service categorization in support of the development of standardized interfaces.
Clearly, a foremost concern is that the test framework architecture promotes efficient development and reusability of testing facilities across different domains and different standards.
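To make the plug-and-play recommendation above concrete, the following Java sketch shows one possible shape for a standardized, categorized test service interface. The type and method names (TestService, TestReport, and so on) are our own illustration under these recommendations, not part of any GITB specification.

import java.util.Map;

// Outcome of a single test step (illustrative).
record TestReport(boolean passed, String details) {}

// Hypothetical common interface that any pluggable test service would
// implement, so that a test bed can discover and invoke services uniformly.
interface TestService {

    // Category index used for service discovery (e.g. "messaging",
    // "document-validation"), per the recommendation on categorization.
    String category();

    // Technical capabilities advertised to the service registry,
    // e.g. supported protocols or document schemas.
    Map<String, String> capabilities();

    // Execute one testing step against a system under test and report
    // the outcome; the payload format is deliberately left generic.
    TestReport execute(byte[] payload, Map<String, String> configuration);
}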
3. Issues with Test Frameworks Most existing test frameworks are limited in terms of reuse. Even new frameworks do not aim to maximize reuse of testing facilities and materials, although at least
one recent test framework has invested significant effort to address dynamic, configurable, and fully automated execution of conformance and interoperability test scenarios [2]. Most test frameworks are proposed with the purpose of meeting the testing requirements of only a specific standard and a specific Test User's usage. Therefore, their scope is generally narrow, and the provided functionalities cannot be applied easily to similar testing requirements in which, for example, a related standard specification is the subject of testing. For instance, the IIC test framework focuses on the ebMS (ebXML Messaging Specification) [3], the RosettaNet Self-Test Kit only deals with conformance testing of RosettaNet PIP systems [4], and the WS-I tool only checks the data integrity of Web Service request/response messages [5]. That is, testing conformance with respect to each standard specification required the development of a different test framework, and easy adaptation of a framework to related standard specifications was not possible. For example, the IIC framework requires significant changes before it is applicable to related messaging standards, such as WS messaging. Even test frameworks such as TTCN [6] and eTSM [7], which have been developed to overcome such a narrow scope of application, cannot effectively deal with new, similar test requirements, because these test frameworks were not designed with sufficient modularity and extensibility.
Fig. 1. Activity Diagram for Standards-based Testing using Existing Test Frameworks
Figure 1 conceptually describes the typical usage of existing test frameworks – from test requirements to test bed execution. Once the Test User introduces their specific requirements for the intended usage of the standard specification of interest (e.g., by providing a specific business process or pattern of usage), the Test Case Developer composes test cases based on the grammar and structure of the test case scripts designed for that particular test framework. At this time, the composed
test case is machine-readable and tightly bound to the specific test requirements and the specific test platform. By this we mean that current test frameworks are designed for specific test requirements, specific user environments, and specific implemented testing tools – in short, a specific test platform. The figure indicates this situation by situating the ‘Compose Executable Test Case' and ‘Implement Test Bed' activities within the Implementation Phase, supported by the Platform Specific Model (PSM). On the other hand, as shown on the right side of Figure 1, the Test Bed Architect extracts the test bed functional requirements from the Test User's test requirements to implement a specific test bed based on these functional requirements. In the Execution Phase, the Specific Test Bed uses the Executable Test Cases and verifies the system under test. In summary, existing test frameworks have been designed to consider only a limited collection of specific test requirements, which makes it difficult to apply these frameworks to other standards specifications (regardless of similarity) or to support cooperative work in building distributed global test systems, because of the following problems:
• Problem I. Test case dependency on test bed and test case implementation decisions: The existing frameworks promote test case designs that depend on specific test bed design decisions. For example, an IIC test case cannot represent SUT activities during a test because IIC test beds, which follow the IIC test framework specification, do not utilize information from the SUT activities (i.e., the test beds cannot control the SUT). This obviously prevents reuse of the test cases in situations in which new test beds are developed according to a common test framework but with different implementation decisions. In the IIC test framework case, new test beds in which the test bed role changes (say, from a customer to a supplier) would not be able to reuse the previously developed test cases. Also, most test case designs are developed with consideration for specific test implementation requirements, such as a standard specification. For example, the WS-I tool's test case cannot represent non-XML content as test material, because it is designed only for XML-based web service testing. Therefore, the test case design does not support reusability, because application of the existing test case designs to other standards is often not feasible. Obviously, these two dependencies make test case development a time-consuming and costly process, because Test Case Developers are required to simultaneously consider both constraint types when developing test cases;
• Problem II. Low configurability of test beds: Most test bed designs only consider a specific testing purpose and take into account specific test bed design decisions. It is, therefore, difficult to adapt one test bed to another, similar, testing situation. Such test beds cannot be re-configured and do not allow new components to be introduced in a pluggable manner to address new functional requirements; and
• Problem III. Low modularity and extensibility of test cases: Most test cases contain both procedure instructions and verification instructions for the data. However, the existing test framework designs do not treat these two types of information separately. Consequently, it is difficult to extract verification data from a test case in order to reuse a specific part of the test case in a related, similar testing situation.
4. A Service Oriented Architecture for the Agile Test Framework

4.1. Overall Approach

Our purpose is to resolve the problems identified with the existing test frameworks by introducing new design decisions within a proposed test framework. Most existing test cases depend on both the specific test case implementation requirements and the test bed implementation (as described in Problem I). To resolve this problem, we remove these dependencies of the test cases by introducing a two-level test case design consisting of abstract and executable test case definitions.
• ATF_Design Decision I: Two-level test case design. The ATF test case structure consists of an abstract test case and an executable test case. The abstract test case is a human-readable, high-level description of a test case. To enhance the reusability of the test case, an abstract test case does not contain specific implementation-level information. On the other hand, an executable test case is a machine-readable, low-level description of a test case. The executable test case can be obtained by transformation from an abstract test case, when provided with additional specific testing environment information (e.g., the messaging protocol specification, a specific verification component, and a communication channel between the SUT and the test bed). The existing test frameworks promote test case designs that are executable test cases. The addition of abstract test case descriptions is expected to have three benefits: 1) the abstract test case can be developed early in the design process, without requiring details of the test bed systems, providing a clear specification of intent for the test case; 2) the abstract test case can be easily reused when similar test requirements arise, because it has no dependency on a specific test bed implementation; and 3) the abstract test case helps the Test User to understand the executable test case and its procedures and rules. That is, the abstract test case provides a kind of three-way bridge between the Standard Developer, who provides test requirements, the Test User, who verifies the system under test, and the Test Case Developer, who interprets the requirements to create the actual test case.
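As a minimal sketch of this two-level design, assuming nothing about the actual ATF data structures, the following Java fragment shows an abstract test case being bound to a concrete environment to yield an executable test case; all names here are illustrative.

import java.util.List;
import java.util.Map;

// A platform-independent (abstract) test case: human-readable steps and
// assertions, with no binding to a concrete test bed or protocol.
record AbstractTestCase(String id, List<String> steps, List<String> assertions) {}

// Environment details supplied later by the Test Bed Manager
// (messaging protocol, verification component, SUT channel, ...).
record TestEnvironment(Map<String, String> bindings) {}

// A machine-readable test case bound to one concrete test platform.
record ExecutableTestCase(String id, List<String> boundSteps) {}

final class TestCaseTransformer {
    // Derive an executable test case by binding each abstract step to the
    // concrete environment; a real transformation would be far richer.
    static ExecutableTestCase bind(AbstractTestCase atc, TestEnvironment env) {
        String protocol = env.bindings().getOrDefault("protocol", "unspecified");
        List<String> bound = atc.steps().stream()
                .map(step -> step + " via " + protocol)
                .toList();
        return new ExecutableTestCase(atc.id(), bound);
    }
}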
Most existing test bed implementations cannot be re-configured to easily accommodate varying and dynamic functional requirements (as captured in Problem II). This low re-configurability makes a test bed difficult to reuse and adapt for new but related functional requirements. To resolve this problem, we propose the use of pluggable test components and infrastructure designs for the test bed implementation.
• ATF_Design Decision II: Pluggable test components and infrastructure designs. The ATF test bed execution model will consist of a test infrastructure with pluggable test components. The test infrastructure will allow assembly of the test components into the implementation of a desired testing service. The test infrastructure is independent of any specific standard and/or Test User's environment. That is, it may be used for any type of testing without modification. In contrast, pluggable test components are designed and implemented by a test service provider to enable a specific function required for a specific standard and/or Test User's environment. Normally, the test infrastructure would not provide the functions of the pluggable test components.
• ATF_Design Decision III: Event-driven test execution design. In an ATF test bed, the pluggable test components are loosely coupled to the test infrastructure. Because it is impossible for the test infrastructure to control a potentially very large set of arbitrary pluggable test components, ATF relies on a generic transaction handler instead of allowing direct interfacing between the test components. When a component attempts to interact with another component, it sends data to an event board instead of to the target component. The event board stores the various types of interaction data as events so that every component can inquire about and retrieve a specific event. All of the activities of the pluggable components and the test infrastructure are coordinated via events (a sketch of such an event board follows the design decisions below).
When a Service Provider designs a pluggable test component, the design is registered with the Test Service Model Repository so that it can be discovered and reused in similar testing situations. The pluggable test component is automatically deployed (configured) into the test infrastructure prior to a test execution. Introduction of the pluggable test component design anticipates three benefits: 1) the test infrastructure may be consistently reused for many types of testing, 2) the pluggable test components may be conditionally reused for the same testing requirements, and 3) automatic configuration makes the test bed more extensible and reusable.
Most existing test cases are designed without separating the procedural definitions and the verification rules (as captured in Problem III). This makes the test case virtually impossible to reuse and difficult to manage for large-scale test suites (i.e., sets of test cases). To resolve this problem, we introduce two design decisions:
• ATF_Design Decision IV: Modular test case design. The abstract test case will consist of two design modules: an assertion module and a procedure module. Such a design yields an abstract test case that is not only easier to read but also easier to reuse, because the Test Bed Manager may reuse each module separately when the test case is adapted for different test requirements. In the executable test case, procedure and assertion content (scripts) will be separated for reuse in new executable test cases.
• ATF_Design Decision V: Event-centric test case design. Every type of event – from low-level protocol signals (e.g., acknowledgements) to business document receipts – may be captured by the event board and wrapped into a standard event envelope. Virtually all testing events are captured, stored, and may be used to trigger an arbitrary testing activity, such as a data extraction procedure or a verification action. Because every assertion or procedure module in the test case contains triggering conditions, each module can be independently introduced and managed. By introducing a modular and event-centric design for test cases, we expect two benefits: 1) each module, either procedure or assertion, may be independently reused by another test case; and 2) because a test case module is not affected by changes in other modules, a Test Case Developer may easily manage large-scale test suites.
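The following Java sketch illustrates the event board idea behind Design Decisions III and V, under our own simplifying assumptions (string-typed events, no event envelope): components post and query events rather than calling each other directly.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative event board: pluggable components never call each other
// directly; they post events here and react to the events they care about.
final class EventBoard {
    // Events are stored by type so that components can retrieve specific ones.
    private final Map<String, List<String>> events = new ConcurrentHashMap<>();

    // A component (e.g. a message listener) posts an event instead of
    // invoking the target component directly.
    void post(String eventType, String payload) {
        events.computeIfAbsent(eventType, t -> new CopyOnWriteArrayList<>())
              .add(payload);
    }

    // Any component whose triggering condition mentions this event type
    // (e.g. an assertion module) can retrieve the matching events.
    List<String> query(String eventType) {
        return events.getOrDefault(eventType, List.of());
    }
}

For example, a messaging component might call post("DocumentReceived", xml), and a verification module triggered by "DocumentReceived" would later call query("DocumentReceived") to obtain the material it must check.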
Fig. 2. Procedure Architecture of the Agile Test Framework
4.2. Service Oriented Architecture for the Agile Test Framework

This section provides a high-level view of the ATF architecture and the processes it supports, as illustrated in Figure 2. Because the ATF architecture is designed according to the ATF design decisions discussed previously, Figure 2 illustrates the major differences between ATF and the existing test frameworks shown in Figure 1. First, the design phase (corresponding to the Platform Independent Model (PIM)) is positioned between the conceptual and implementation phases. Actions in this phase are abstract and independent of a specific test bed implementation. The
previous Test Bed Architect role is now broken into three new roles: Test Service Provider, Test Bed Manager, and Test Framework Provider. The new roles and their activities reflect the drive towards greater modularity and reusability. Each test case has separate modules for verification rules and procedural data. The following are descriptions of the key processes supported by the ATF architecture:

• Analyze Test Requirements: The Test Case Developer investigates the requirements given by a standard specification and a specification for the intended usage of the standard. He/she determines the functions needed to verify the identified Test User's SUT.
• Compose Abstract Test Cases: An abstract test case consists of usage and assertion scripts. From the given test requirements, a collection of abstract test cases is developed (1) to represent the required scenarios as usage scripts that extract test items from the SUT, and (2) to provide predicates (i.e., true–false statements) as assertion scripts to determine whether a test item is true or false. The generated abstract test case is not machine-readable because it is generated without concern for a specific harness script. That is, an abstract test case does not describe a concrete test environment (e.g., message protocol) or a specific configuration (i.e., included test components) for a specific test bed. Therefore, an abstract test case is independent of a specific test bed implementation and is a kind of meta-model for an executable test case.
• Compose Executable Test Cases: An executable test case consists of verification and procedure scripts. Such an executable test case is generated from a corresponding abstract test case after considering a specific test harness specification (or script). In other words, after the test environment and configuration information regarding the partner(s) and target SUT(s), the pluggable test components to be used for the testing, the protocol and schema to be used for business documents, etc., are decided and specified within the test harness script by the Test Bed Manager, the Test Case Developer may generate a machine-readable executable test case from an abstract test case. This test environment and configuration information is encoded as a harness script by the Test Bed Manager. The generated executable test case is registered in the test case repository.
• Analyze Functional Non-stationary Requirements: The Test Bed Manager investigates functional non-stationary requirements based on the test requirements. A non-stationary requirement is a specific test service requirement for a specific testing situation. That is, the test service is required for the testing scenario at hand but may not be usable in other testing situations without additional adaptation.
• Design and Register Pluggable Test Components: A pluggable test component provides a specific test service. A Test Service Provider designs the pluggable test component according to the standard specification and the interface specification for pluggable test components. Once the pluggable test component is designed, it should be registered in a public service registry. At that time, the Test Service Provider registers information such as technical capabilities, the service interface, and the index of service categorization.
• Search and Select Pluggable Test Components: The Test Bed Manager searches for the relevant pluggable test components that are needed for the specific test bed and decides on the optimal set of pluggable test components among the alternative components.
• Generate Harness Script: The Test Bed Manager composes the harness script, which contains the test environment and configuration information describing the partner(s) and target SUT(s), the pluggable test components to be used for the testing, the protocol and schema to be used for business documents, and other information necessary to assemble the required test bed (a sketch of such a harness description follows this list).
• Implement Pluggable Test Component: The Test Service Provider implements and customizes a pluggable test component based on the pluggable test component design and the specific interface requirements described in the harness script provided by the Test Bed Manager.
• Assemble Test Bed and Execute Test Case: A test bed is automatically configured on the basis of the information contained in the harness script. That is, a test bed is assembled from the test infrastructure and the selected loosely coupled pluggable test components. Finally, the test bed interprets and executes the executable test case to verify the system under test.
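As an illustration of the last three processes, the following Java sketch shows a harness description driving automatic test bed assembly; the field and type names are our own and do not reflect an actual ATF harness script format.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative harness description: the information the infrastructure
// needs to assemble a concrete test bed.
record HarnessScript(
        List<String> systemsUnderTest,           // partner(s) and target SUT(s)
        List<String> componentIds,               // pluggable components to deploy
        Map<String, String> protocolBindings) {} // e.g. protocols and schemas

// Minimal stand-in for a deployed pluggable component.
record TestComponent(String id) {}

// Minimal stand-in for the standard-independent test infrastructure.
final class TestBed {
    private final List<TestComponent> plugged = new ArrayList<>();
    void plug(TestComponent c) { plugged.add(c); }
    int componentCount() { return plugged.size(); }
}

final class TestBedAssembler {
    // Automatic configuration: resolve each declared component id via the
    // registry (here reduced to a lookup function) and plug it into the bed.
    static TestBed assemble(HarnessScript script,
                            Function<String, TestComponent> registry) {
        TestBed bed = new TestBed();
        script.componentIds().forEach(id -> bed.plug(registry.apply(id)));
        return bed;
    }
}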
5. Conclusions

In this paper, we have presented a novel Agile Test Framework (ATF) for B2B testing, which addresses the core issues found in traditional testing frameworks to alleviate the resulting inefficiencies and lack-of-reuse problems that arise in the development of B2B test beds. First, traditional test case designs rely on specific test bed and test case implementation decisions. In the ATF architecture, this issue is addressed by introducing the concept of an abstract test case, which insulates the test case specifications from the implementation details and from the specific testing platform decisions. The executable test cases are obtained by way of a transformation, only when the specific test bed configuration is specified. Second, traditional test execution models rely on specific testing requirements and do not consider the reuse of testing modules in similar situations. In the ATF architecture, the issue is addressed by introducing the concepts of the test infrastructure, the pluggable test components, and event-driven execution. The test infrastructure contains readily reusable core functionalities that allow assembly of the test components into novel implementations of desired testing services. By way of event-driven execution, ATF provides a generic transaction handler, an event board, which coordinates and facilitates communication between the infrastructure and the pluggable components. Third, traditional test case designs do not recognize and separate the procedural and verification aspects of the test case, which makes test cases impossible to
reuse. In the ATF architecture, the issue is addressed by introducing modular and event-centric test case design. The modular design separates two types of information: assertions and procedural information. The event-centric design allows assertion and procedure modules in the test case to contain triggering conditions, which allows each module to be independently managed and reused. Following these ATF architectural decisions, we are presently completing the necessary details for the test case design and execution model. The detailed design will be followed by a comparison of the proposed architecture with other existing test framework architectures. Our goal is to assure that the global, distributed eBusiness test beds are efficiently developed by reusing existing testing services while successfully addressing complex e-Business testing situations.
6. Disclaimer

Certain commercial software products are identified in this paper. These products were used only for demonstration purposes. This use does not imply approval or endorsement by NIST, nor does it imply that these products are necessarily the best available for the purpose.
7. References

[1] GITB (Global eBusiness Interoperability Test Bed Methodologies), online at www.ebusiness-testbed.eu, accessed December 2009.
[2] TestBATN – Testing Business, Application, Transport, and Network Layers Web Site, online at www.srdc.com.tr/testBATN, accessed December 2009.
[3] IIC ("ebXML IIC Test Framework Version 1.0." OASIS), online at http://www.oasis-open.org/committees/download.php/1990/ebXML-TestFramework-10.zip, accessed December 2009.
[4] RosettaNet (RosettaNet Ready Self-Test Kit (STK) User's Guide Release Version 2.0.7). RosettaNet, 2004.
[5] WS-I tool site, online at http://ws-i.org, accessed December 2009.
[6] TTCN-3 site, online at http://www.ttcn-3.org, accessed December 2009.
[7] TaMIE site, online at http://www.oasis-open.org/committees/tamie, accessed December 2009.
Unified Reversible Life Cycle for Future Interoperable Enterprise Distributed Information Systems

Zhiying Tu1, Gregory Zacharewicz1 and David Chen1

1 IMS-LAPS / GRAI, Université de Bordeaux – CNRS, 351 Cours de la Libération, 33405 Talence cedex, France
Abstract. This paper aims at improving the re-implementation of existing information systems when they are called to be involved in a system of systems, i.e. a federation of enterprise information systems that interoperate. The idea is to reuse the local experience coming from the development of the original information system, together with a model discovery process and an ontological approach. We first give a review of ongoing research on Enterprise Interoperability. MDA can help to transform concepts and models from the conceptual level to the implementation. The HLA standard, initially designed for military M&S purposes, can be transposed to enterprise interoperability at the implementation level, reusing years of experience in distributed systems. From these postulates, we propose an MDA/HLA lifecycle to implement distributed enterprise models from the conceptual level of the federated enterprise interoperability approach. In addition to this classical development, we propose a model reversal methodology to help re-implement legacy information systems, in order to achieve interoperability with other systems.
Keywords: Interoperability, HLA, MDA, FEDEP, Information System
1. Introduction

In the globalised economic context, the competitiveness of an enterprise depends not only on its internal productivity and performance, but also on its ability to collaborate with others. This necessity led to the development of a new concept called interoperability that allows improving collaboration between enterprises. No doubt, in such a context where more and more networked enterprises are developed, enterprise interoperability is seen as a more suitable solution than total enterprise integration. Since the beginning of 2000, several European research projects have been launched to develop enterprise interoperability (IDEAS, ATHENA, INTEROP). Three main research themes or domains that address interoperability issues were identified, namely: (1) Enterprise Modelling (EM), dealing with the representation
of the inter-networked organization to establish interoperability requirements; (2) Architecture & Platform (A&P), defining the implementation solution to achieve interoperability; and (3) Ontologies (ON), addressing the semantics necessary to assure interoperability. This paper intends to propose an improvement of the re-implementation of existing information systems when they are called to be involved in a system of systems. Compared to the traditional IS and simulation development process, we involve a bottom-up process of model discovery and an ontological approach in our framework. In the framework, we first align MDA and HLA to propose a lifecycle to implement distributed enterprise models from the conceptual level of the federated enterprise interoperability approach. Besides that, a model reversal methodology is integrated with the MDA/HLA development lifecycle to assist in re-engineering legacy information systems, in order to achieve interoperability with other systems. The paper is structured as follows. We start out with a survey of related work in the area of HLA, MDA and their harmonization, as well as in the area of re-engineering to model-driven architectures (Section 2). Then, we describe our motivation for this work (Section 3). After that, we explain the two main parts of our methodology: the alignment of MDA and HLA FEDEP (Section 4) and Model Reversal (Section 5).
2. IS and Simulation Development Life Cycle: State-of-the-Art

2.1. HLA

The High Level Architecture (HLA) is a software architecture specification that defines how to create a global software execution composed of distributed simulations and software applications. This standard was originally introduced by the Defense Modeling and Simulation Office (DMSO) of the US Department of Defense (DOD). The original goal was reuse and interoperability of military applications, simulations and sensors. In HLA, every participating application is called a federate. A federate interacts with other federates within an HLA federation, which is in fact a group of federates. The HLA set of definitions brought about the creation of standard 1.3 in 1996, which evolved to HLA 1516 in 2000 [1]. The interface specification of HLA describes how to communicate within the federation through the implementation of the HLA specification: the Run Time Infrastructure (RTI). Federates interact using services proposed by the RTI. They can notably "Publish" to announce an intention to send information to the federation and "Subscribe" to reflect information created and updated by other federates. The information exchanged in HLA is represented as classes, in the style of classical object-oriented programming. The two kinds of object exchanged in HLA are the Object Class and the Interaction Class. An object class contains object-oriented data shared in the federation that persist during the run time; interaction class data are just information sent and received between federates. These objects are described in XML format. More details on RTI services and the information distributed in HLA are presented in [1].
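As a small illustration of these declaration services, the sketch below shows a federate creating and joining a federation and then declaring its publications and subscriptions. It is written against the IEEE 1516 Evolved (1516e) Java bindings (as implemented, for example, by open-source RTIs such as Portico); exact method names vary between HLA API versions, and the federation, object class and FOM file names are hypothetical.

import hla.rti1516e.*;
import hla.rti1516e.exceptions.FederationExecutionAlreadyExists;
import java.net.URL;

// Minimal federate sketch: create/join a federation, then publish and
// subscribe to an object class via the RTI.
public class MinimalFederate extends NullFederateAmbassador {

    public void run() throws Exception {
        RTIambassador rti = RtiFactoryFactory.getRtiFactory().getRtiAmbassador();
        rti.connect(this, CallbackModel.HLA_EVOKED);

        // Create the federation if it does not exist yet, then join it.
        try {
            rti.createFederationExecution("EnterpriseFederation",
                    new URL("file:enterprise-fom.xml")); // hypothetical FOM module (XML)
        } catch (FederationExecutionAlreadyExists ignored) { }
        rti.joinFederationExecution("EnterpriseA", "Enterprise",
                "EnterpriseFederation");

        // Publish: announce the intention to update Order attributes.
        ObjectClassHandle orderClass =
                rti.getObjectClassHandle("HLAobjectRoot.Order"); // hypothetical class
        AttributeHandleSet attrs = rti.getAttributeHandleSetFactory().create();
        attrs.add(rti.getAttributeHandle(orderClass, "quantity"));
        rti.publishObjectClassAttributes(orderClass, attrs);

        // Subscribe: reflect Order updates produced by other federates.
        rti.subscribeObjectClassAttributes(orderClass, attrs);
    }
}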
The FEDEP (Federation Development and Execution Process) describes a high-level framework for the development and execution of HLA federations. FEDEP uses a seven-step process to guide the spiral development of the simulation system through phases of requirements, conceptual modelling, design, software development, integration, and execution [2].

2.2. MDA

The first methodology studied is Model Driven Architecture (MDA). This methodology, defined and adopted by the Object Management Group (OMG) in 2001 (updated in [3]), is designed to promote the use of models and their transformations to consider and implement different systems. It is based on an architecture defining four levels, which go from general considerations to specific ones:

• CIM Level (Computation Independent Model): focusing on the whole system and its environment. Also named the « domain model », it describes all work field models (functional, organisational, decisional, process…) of the system with a vision independent from implementation.
• PIM Level (Platform Independent Model): modelling the sub-set of the system that will be implemented.
• PSM Level (Platform Specific Model): taking into account the specificities related to the development platform.
• Coding Level: the last level consists in coding enterprise applications (ESA: Enterprise Software Application).
To complete this description, a Platform Description Model, used for the transformation between the PIM level and the PSM level, is added to these four kinds of models corresponding to four abstraction levels.

2.3. MDA/HLA Harmonization

In [4], Andreas Tolk mentions that while HLA can help MDA to improve its principles concerning distributed simulation systems, MDA can help HLA implementers to improve their products based on the experience of the OMG partners and the related software industry. Meanwhile, [4] proposes to consider five aspects for merging HLA and MDA together: Domain Facilities, Pervasive Services, RTI as Middleware, Federation Development Tools, and Data Engineering. A solution for applying MDA to HLA is proposed in [5]. In this methodology, Component-Based Development (CBD) is underlined. The design goals and philosophy of CBD share many similarities with the goals of HLA federate development, including the promotion of reuse and interoperability. However, HLA currently has no standardized development approach to CBD, known as a
component model, analogous to the CORBA Component Model (CCM) or the Enterprise Java Bean (EJB) container. As a result, there is no commonality in design and implementation between federates, nor any formal way of separating and reusing the behavioural aspects of an HLA component. This results in an increase in development and maintenance costs, as well as impacting the potential for reuse outside of the FOM for which the federate was built. To address this, they introduce the SCM (simulation component model) into HLA (Fig. 1). The SCM is one potential candidate for the definition of a standardized model for component-based simulation development. This architecture employs a component model based on the OMG's CCM, and describes how the separation of integration logic and simulation behaviour can be achieved for an HLA federate. The SCM development model also provides a commonality of design between federates, allowing them to access both HLA and extended CBD services (such as data transformation services between federates and the FOM) in a consistent manner.
Fig. 1. Simulation component model
2.4. MDA/FEDEP

As mentioned in [6], FEDEP and MDA are each successful within their particular community. FEDEP and MDA are separate, but alignable, development life cycles. Each follows a basic systems engineering process in which analysis is done in the following areas:

• Definition of requirements
• Definition of attributes and behaviors through a functional analysis of the requirements
• Expression of requirements, functions and analysis in narrative and graphical form
• Design of the ‘product'
Based on the FEDEP MDA alignment shown in Fig. 2, [6] also proposes a development lifecycle, as illustrated in Fig. 3, which is built around the testing phase in order to implement VV&A.
Fig. 2. FEDEP MDA alignment
Fig. 3. FEDEP MDA alignment based lifecycle
2.5. MDA-Based Reverse Engineering

A framework for reverse engineering MDA models from object-oriented code is presented in [7]. This framework is based on the integration of compiler techniques, metamodeling and formal specification, and it distinguishes three different abstraction levels linked to models, metamodels and formal specifications, as Fig. 4 shows. In this framework, the model transformations are based on classical compiler construction techniques at the model level. At the metamodel level, MOF metamodels are used to describe the transformations from the model level. MOF metamodels describe families of ISMs, PSMs and PIMs. Every ISM, PSM and PIM conforms to a MOF metamodel. Metamodel transformations are specified as OCL contracts
between a source metamodel and a target metamodel. MOF metamodels "control" the consistency of these transformations. The formal specification level includes specifications of MOF metamodels and metamodel transformations in the metamodeling language NEREUS, which can be used to connect them with different formal and programming languages. The transformations are based on static and dynamic analysis at the model level [7]:

• Static analysis extracts static information that describes the software structure reflected in the software documentation (e.g., the text of the source code). Static information can be extracted by using techniques and tools based on compiler techniques, such as parsing and data-flow algorithms.
• Dynamic analysis extracts dynamic information, which describes the run-time behavioural structure, by using debuggers, event recorders and general tracer tools. Dynamic analysis is based on an execution model including the following components: a set of objects, a set of attributes for each object, a location and value of an object type for each object, and a set of messages. Additionally, types such as Integer, String, Real and Boolean are available for describing the types of attributes and parameters of methods or constructors.
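As a toy illustration of the static-analysis step only (and not of the actual framework in [7]), the following Java fragment recovers declared class names from source text; real tools use full parsers and data-flow analysis rather than regular expressions.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy static analysis: extract part of an implementation-specific model
// (here, just the declared class names) from Java source text.
final class ClassNameExtractor {
    private static final Pattern CLASS_DECL =
            Pattern.compile("\\bclass\\s+([A-Za-z_][A-Za-z0-9_]*)");

    static List<String> declaredClasses(String sourceCode) {
        List<String> names = new ArrayList<>();
        Matcher m = CLASS_DECL.matcher(sourceCode);
        while (m.find()) {
            names.add(m.group(1)); // the captured class name
        }
        return names;
    }
}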
Fig. 4. MDA-based reverse engineering framework
3. Studied Context

A multi-agent/HLA enterprise interoperability methodology is proposed in [9]. It considers several enterprises that all participate in a cooperative project and need to exchange various information at any time; the framework (Fig. 5) defined in the methodology therefore provides a platform based on HLA, where each enterprise acts as a federate and connects to the others.
Fig. 5. Interoperable System of Information-Systems
However, most of the enterprises will have their own IT systems, and the cooperation among these companies will not always last: some of them may join the group for a very short period of time and then quit. Consequently, it seems inadequate, and too expensive, to launch a totally new IT project for such short-term cooperation. As a result, how to build the federate for each enterprise merits consideration. Referring to the existing methodologies mentioned in the previous sections, we propose a new approach, as Fig. 6 shows, to solve this problem. This methodology involves the MDA and HLA FEDEP alignment in order to use both of their advantages to realize proper component reuse and rapid development. Besides that, in order to achieve rapid federate development based on the legacy IT system, model reverse engineering is integrated with this alignment.
4. Alignment of MDA and HLA FEDEP

In this section, we propose a new development lifecycle that reconstructs HLA FEDEP and MDA and generates a new five-step development framework (as shown in Fig. 6). This new methodology aims to adopt the strong points of both HLA FEDEP and MDA while overcoming their weak points, in order to achieve proper component reuse and rapid development.

Phase 1: Domain Requirement Definition

The main task of this phase is to collect clear and sufficient requirements from the customer in order to define the objective of the system and to describe the environment and the scenario of the system. At the same time, all these definitions and descriptions need to be reasonable and understandable for all the stakeholders. The CIM of MDA has a task similar to the combined Define Federation Objectives and Develop Federation Scenario steps of HLA FEDEP. As a result, we align them in this phase to convert the user requirements, which are mostly textual, into more visual models, such as UML use cases, in order to derive the federation requirements.
Fig. 6. IS and Simulation Development Life Cycle
Phase 2: Domain Scenario Systematization

The main task of this phase is to refine the domain scenario and business process defined in the first phase, and to identify and describe the entities involved in the scenario and business process. The relationships among the entities, and the behaviours and events for each entity, are then defined. This phase integrates the PIM of MDA, which describes the operation of the system but does not yet address detailed platform information, as well as the Perform Conceptual Analysis, Develop Federation Requirements and Select Federates steps of HLA FEDEP, which also define and select the general participants of the federation and then describe their relationships, behaviours and events in general.

Phase 3: System Model Specialization

In this phase, according to the technique chosen and the platform selected, the system needs to be refined, for instance, to refine the federation and federate structure, to allocate functions and attributes, etc. Detailed design is carried out at this time. This phase integrates the following parts of MDA and FEDEP. The PSM of MDA will be based on detailed platform models (which may exist in the form of software and hardware manuals or even in an architect's head), for example models expressed in UML and OCL and stored in a MOF-compliant repository. The Prepare Federation Design, Prepare Plan, Develop FOM, and Establish Federation Agreement steps of FEDEP will produce federate responsibilities, the federation architecture, supporting tools, the integration plan, the VV&A plan, the FOM, the FED/FDD, and time management, data management and distribution agreements, etc.

Phase 4: System Implementation

This phase's task is to transform the specific system model into code, to create the executable federation and runnable federates.
At this level, MDA has various transformation techniques from model to code. In FEDEP, the Implement Federate Designs step will provide modified and/or new federates and the supporting database; the Implement Federation Infrastructure step will provide the implemented federation infrastructure and modified RTI initialization data; and the Plan Execution and Integrate Federation steps will provide the execution environment description and the integrated federation.

Phase 5: Test

Throughout the previous steps of the MDA and HLA FEDEP alignment process, testing is essential to ensure the fidelity of the models. The testing phase includes the Test Federation, Execute Federation and Prepare Outputs, and Analyze Data and Evaluate Results steps of HLA FEDEP. Meanwhile, it also refers to the outputs of the previous steps, such as the original user requirements from the first phase and the federation test criteria from the second phase.
5. Model Reversal

This section describes a new process of model reverse engineering under different scenario constraints (see Fig. 6). The reverse process re-characterizes the legacy system in order to capitalize on the information and functions of the existing system and make them easy to reuse in a new HLA-compliant system. This methodology assists the HLA FEDEP / MDA alignment presented in the previous section, to fully achieve rapid development of federations and/or federates based on legacy IT systems. We distinguish two kinds of reversal scenarios, as follows.

1. First, when an enterprise intends to start exchanging information in a new cooperative project with other enterprises. In that case, the HLA federation has not been created yet, so we propose to reverse the code of the legacy information systems up to the first definition phase (Domain Requirement Definition). Then, from top to bottom, we generate the model for each phase; finally, we produce a federation and federate rapid development template.

2. Second, when an enterprise intends to participate in an existing cooperative project and exchange data with other heterogeneous ISs. In this case, we assume an HLA federation has already been created. Here, according to the HLA FEDEP, the federate starts to be considered from the second step (Perform Conceptual Analysis), as reversal scenario 2 in Fig. 6 shows. Therefore, it is not necessary to reverse to the first phase; the reversal can stop at the second phase (Domain Scenario Systematization). One will only reuse the model of the existing federation to create the model of the federate related to the legacy system of the new participant. Finally, the model of the existing federation and the new federate model are used to generate the code template of the new federate for rapid development.
6. Conclusion and Future Works

Based on the state-of-the-art, we have proposed a new systematic methodology, which is a valuable outcome of the combination of HLA FEDEP & MDA alignment and model reverse engineering. This methodology provides a new five-step process to develop simulation models starting from conceptual enterprise models. In addition, it also bridges the gap from concepts to implementation in the field of enterprise modelling by offering a new standardised and reversible approach. This methodology seems promising with regard to real enterprise information system requirements of distribution, federated interoperability and agility to adapt to a dynamic context. Compared with other techniques that can solve the interoperability problem, such as SOA, our methodology tries to provide a standard service API for all the participants, who can use this API to develop a communication agent that adapts to the existing IT system without changing it. Up to now, this work is still in progress. The methodology presented (each phase of the HLA/MDA alignment and the model reversal process) still needs to be refined and detailed. A case study will allow testing the proposed approach in an industrial context.
7. References

[1] IEEE std 1516.2-2000. IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Federate Interface Specification, Institute of Electrical and Electronic Engineers.
[2] IEEE std 1516.3-2003. IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Federation Development and Execution Process (FEDEP), Institute of Electrical and Electronic Engineers.
[3] OMG, 2003. MDA Guide Version 1.0.1. Object Management Group, Document number: omg/2003-06-01. Available from: www.omg.org/docs/omg/03-06-01.pdf [accessed 15 June 2009]
[4] Tolk A, (2002), Avoiding another Green Elephant – A Proposal for the Next Generation HLA based on the Model Driven Architecture. Proceedings of the 2002 Fall Simulation Interoperability Workshop: 02F-SIW-004
[5] Shawn P, (2003), The Next Step: Applying the Model Driven Architecture to HLA. Proceedings of the 2003 Spring Simulation Interoperability Workshop: 03S-SIW-123
[6] Trbovich S, Reading R, (2005), Simulation and Software Development for Capabilities Based Warfare: An Analysis of Harmonized Systems Engineering Processes. Proceedings of the 2005 Spring Simulation Interoperability Workshop: 05S-SIW-106
[7] Favre L, Martinez L, Pereira C, (2008), MDA-Based Reverse Engineering of Object Oriented Code. SERA'08: 153–160
[8] Favre L, (2008), Formalizing MDA-based Reverse Engineering Processes. SERA'08: 153–160
[9] Zacharewicz G, Chen D, Vallespir B, (2008), HLA Supported, Federation Oriented Enterprise Interoperability, Application to Aerospace Enterprises. Proceedings of the 2008 EURO International Simulation Multi-conference: 08E-SIW-074
[10] Meta Object Facility (MOF) Specification v1.4, OMG Document formal/02-04-03, http://www.omg.org/cgi-bin/apps/doc?formal/02-04-03.pdf
Part II
Enterprise Modeling for Enterprise Interoperability
A Meta-model for a Language for Business Process Characterizing Modelling

Shang Gao1 and John Krogstie1

1 Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Sælands Vei 7-9, NO-7491 Trondheim, Norway
Abstract. In this paper, a meta-model for the business process characterizing model (BPCM) is defined. The defined meta-model, mainly capturing the syntax and semantics of the business process characterizing model, is intended to guide the development of business process support systems. In addition, the relation between the SCOR model and the class Process in the BPCM meta-model is addressed. Furthermore, the mapping from the BPCM meta-model to the BPMN meta-model in a combined framework for developing business process support systems is depicted.
Keywords: Meta-model, Business Process Characterizing Model (BPCM), Process Modelling, SCOR
1. Introduction

The development of a business process support system is a complicated process that not only requires solving technical problems, but also needs to take the organizational problems related to the business domain into consideration. Different modelling techniques with various purposes and orientations, such as the agent-oriented modelling approach (i* [19]) and the process-oriented modelling approach (BPMN [17]), have been proposed and used in developing business process support systems. These approaches have been successfully used in providing some key concepts which are essential to developing business process support systems. However, these modelling techniques often fall short in providing a modelling approach which can facilitate communication among business stakeholders. Therefore, a good understanding of business domain knowledge is a prerequisite to succeed in developing an effective business process support system. Several articles (e.g. [14] [19] [8]) have pointed out the importance of capturing organizational knowledge and requirements before process modelling design and requirements elicitation.
Further, with the growing interest in mobile commerce [16], there is an increasing need for methods and techniques that help in supporting process system development in mobile settings. Consequently, the concept of context becomes essential when designing and developing business process support systems, especially in mobile settings. Advanced business support systems are supposed to be able to support users according to the specific context environment. It is believed that there is a tight complementary relationship between activities in process models and context variables. Moreover, modelling context information is not a standalone task; it needs to be integrated with high-level business modelling and analysis to design and elicit business process systems. Therefore, a new language needs to be developed to deal with the needs mentioned above. As illustrated in [7], one of the main ways of utilizing models is to describe some essential information of a business case informally in order to facilitate communication among stakeholders. In [9], taking inspiration from this idea, we proposed the business process characterizing model (BPCM), which can be seen as an important early, business-oriented model in a modelling-based project. A BPCM model is characterized by its ability to address the key concepts (actor, goal, context, business domain, etc.) involved in a business model development process. BPCM aims to provide an enhanced ability to understand and communicate business processes to all stakeholders involved in the development lifecycle. Such a model shows some essential elements of the business solution to be developed, anchored in business-oriented terminology. The main objective of this paper is to define a BPCM meta-model, mainly capturing the syntax and semantics of BPCM, which is intended to ensure traceability among the different models involved in the development of business process support systems through a combined framework [8]. A meta-model is a model which is used to specify certain features of another model. In this research work, the BPCM meta-model is used to describe the BPCM modelling constructs and the relationships between these constructs. The remainder of this paper is organized as follows. Section 2 briefly describes BPCM. Section 3 defines the BPCM meta-model. In Section 4, we illustrate the relation between the BPCM meta-model and the SCOR model. The mapping from the BPCM meta-model to the BPMN meta-model is depicted in Section 5. Finally, the paper is concluded in Section 6.
2. Business Process Characterizing Model (BPCM)

In the course of business process support system development, model developers focus on operational and procedural aspects of process systems, while various business stakeholders are more likely to express different concerns with regard to process models in terms of business-oriented concepts. Most process modelling languages, like BPMN [18] or EPC [1], do not offer the necessary support mechanisms to express business-oriented concerns in connection with process models. On the one hand, business stakeholders need a simplified model to express their concerns from the organizational perspective easily. On the other hand, process model developers would like to get the business
process system specification in a better modelling language to ease their communication with business stakeholders. A successful business process support system requires understanding the needs of the various stakeholders involved in the business process. Therefore, in order to tackle these problems, we proposed a language for business process characterizing modelling in [9]. BPCM modelling aims to provide an enhanced ability to understand and communicate business processes to all stakeholders involved in the business process development project. The main objective is to assist both business stakeholders and model developers in the development lifecycle. Additionally, the BPCM model can be integrated into a combined framework for the development of business process support systems [8], consisting of other modelling techniques, such as goal modelling and process modelling. Table 1 summarizes the general definition of the elements in a business process characterizing model.

Table 1. Definition of the elements in BPCM

Process: The business process people want to characterize. This element can be related to a common business process ontology such as SCOR [3].
Resource: This element is inspired by the resource concept in the REA framework [13]. It can clearly address what is consumed and what is gained in a business process.
Actor: This element describes the people and organizations with different roles involved in a business process. It can illustrate who is important to which business process.
Context: This element includes contextual characteristics in terms of locations and networks providing connections between the devices and others.
Business Domain: This element classifies the business domain. We tend to link this element to the North American Industry Classification System (NAICS).
Goal: This element can address what kinds of goals need to be fulfilled in the business process. The goals may be operational goals or strategic goals. Operational goals are related to hard-goals, usually covering functional requirements, while strategic goals are related to soft-goals, which set the basis for non-functional requirements.
Process Type: According to REA [11], REA does not model only exchanges but also conversions. Exchange and conversion can be seen as two typical process types.
In order to better illustrate the BPCM modelling approach, a small example is provided in Table 2. In the realm of organizing conferences, the main actor is the organizational unit set up to organize the conference. This actor liaises with a number of other organizations to deliver the conference to the participants; for example, the conference proceedings are to be published by a professional publisher a couple of days before the conference. The reason to use a publisher is that having a renowned publication will attract more submissions.
Table 2. BPCM model for producing conference proceedings

BPCM Name: Publish proceedings with professional publisher
Process: Publish proceedings (ref. SCOR S3 Source Engineer-to-order Product (Proceedings))
Resource: Proceedings proposal (including previous acceptance rate for the conference, acceptance rate of the current conference, conference organizer)
Actor: Organizing Committee, Publisher, Shipping company
Context: PC over LAN/WLAN, internet communication
Business Domain: 561920 Convention and Trade Show Organizers; 51113 Book Publishers (for publisher)
Goal: Hard goal: Proceedings physically delivered to the conference organization a couple of days before the conference starts (to be able to prepare delegate packages). Soft goal: Minimize cost of proceedings. Soft goal: Maximize the perceived status of the publication
Process Type: Conversion (money for published proceedings)
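To make the element structure concrete, the following minimal sketch shows how the BPCM model of Table 2 could be represented as a plain data structure. The Java type and field names are hypothetical illustrations, not part of the published BPCM tooling.

import java.util.List;

// Hypothetical container for a BPCM model; the field names mirror the
// seven BPCM elements of Table 1.
public final class BpcmModel {
    String name;                  // e.g. "Publish proceedings with professional publisher"
    String process;               // may reference a SCOR process element, e.g. "S3"
    List<String> resources;       // what is consumed or gained (REA-inspired)
    List<String> actors;          // e.g. "Organizing Committee", "Publisher"
    List<String> context;         // location and network characteristics
    List<String> businessDomains; // NAICS codes, e.g. "561920"
    List<String> hardGoals;       // operational goals (functional requirements)
    List<String> softGoals;       // strategic goals (non-functional requirements)
    String processType;           // "Exchange" or "Conversion"
}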
Since some elements in the BPCM model refer to other ontologies or systems, as presented in Table 1, a brief description of those concepts and the motivation for incorporating them is provided here. Concerning the element context, there is no universal or absolute definition of context. [5] describes context as “typically the location, identity, and state of people, groups and computational and physical objects”. Context is the reification of the environment, that is, whatever provides a surrounding in which the system operates. People may define context in different ways based on their own perceptions and understanding. In order to better design business process support systems, it is crucial to collect and deliver contextual information effectively. By including the context element in a BPCM model, the corresponding business process support system is able to serve people better in mobile computing settings. Recently, W3C has released a draft version of the Delivery Context Ontology (DCO). This ontology, constructed in OWL, provides a model of the characteristics of the environment in which a device interacts with the web or other services. In this research work, we incorporate some key entities of DCO into the element context of BPCM. Other research work has also started addressing the relationship between context and system development at the requirements level. For instance, [2] investigates the relation between context and requirements at the beginning of goal-oriented analysis, and [15] extends the application of the problem frames approach with context monitoring and switching problems. The SCOR model is developed by the Supply Chain Council and aims to describe the operations of various supply chain constructs. It classifies the operations of a supply chain as Plan, Source, Make, Deliver and Return. The main advantage of the SCOR model is that it facilitates knowledge sharing and cooperation among the various participants interested in that particular domain of knowledge. Moreover, having a set of standardized ontologies for the domain knowledge of business process support systems development will enhance the
interoperability between the various process support systems. Integrating SCOR with the element process of BPCM for the development of business process support systems will result in a reusable, easy-to-integrate knowledge repository. REA [11] was originally conceived as a framework for accounting systems, but it has subsequently broadened its scope and developed into an enterprise domain ontology. The duality of resource transfer is essential in commerce: it never happens that one agent simply gives away a resource to another without expecting another resource back as compensation. As illustrated in [8], the element resource of BPCM is important in identifying relevant tasks or activities for the construction of process models. Each resource in a BPCM should give rise to a message flow linking two associated tasks in a BPMN process model, whereby the source of the message flow is connected to the dependee's task and the destination of the message flow is connected to the depender's task. Last but not least, the North American Industry Classification System (NAICS) is a standard for the collection, tabulation, presentation, and analysis of statistical data describing the U.S. economy. NAICS is based on a production-oriented concept, meaning that it groups establishments into industries according to similarity in the processes used to produce goods or services. Each business process is labeled with a business domain. This helps model users to search for or retrieve business processes within a specific business domain.
3. BPCM Meta-model

It is important to note that the BPCM model aims to be an easy-to-understand modeling language for all business stakeholders and model developers. Most process modeling languages (e.g. BPMN) do not offer the mechanisms necessary to include organizational or contextual concerns associated with a process model. What distinguishes the BPCM modeling language from them is that it provides constructs to tackle those concerns. The aim of the BPCM meta-model is to help model developers address the various concerns of business stakeholders when designing the desired process models. The BPCM meta-model is described in Figure 1, which presents the major modeling concepts and the relationships between them. A UML class diagram is used to create the BPCM meta-model. The central class of the meta-model is process. One process may consist of zero or more sub-process(es), represented by UML composition. All other main classes can be associated with the central class process. Around the central class, business stakeholders can express different concerns, such as required actors, required resources, contextual information, and so on. We use UML generalizations for element extensions (e.g. context, process types). Process model developers can then take these concerns into consideration when designing process models accordingly. The Delivery Context Ontology mainly focuses on the following three entities: (a) environment, which includes information about the location and network; (b) software, which describes whether the delivery context supports certain APIs, document formats, operating systems, etc.; and (c) hardware, which provides information about various hardware
capabilities, including display, input, memory, etc. For the class context, we do not attempt to cover all the entities above; we focus on two major aspects: location and network information. For example, a user's device may be connected to an information system through a wireless or wired network, and mobile workers need to work in various locations (e.g. at home, in the office, on the way). A business process scenario is itself a complex, dynamic network that involves many business stakeholders. Integrating SCOR with the BPCM meta-model for the business process development process will facilitate knowledge sharing and communication among the various parties involved in the processes. As illustrated in Figure 1, we have incorporated process best practices and process levels from SCOR into the BPCM meta-model. The BPCM meta-model can also be transformed into an OWL-based ontology. The ontology is built by defining classes, subclasses, properties, and instances which represent the elements of the BPCM meta-model. The BPCM ontology classes in the class browser are shown in Figure 2. The ontology is developed using Protégé, a free open-source tool developed at Stanford University for building ontologies. Since the BPCM meta-model is centered on the class process, all other constructs are constructed relative to process. The key role of the created ontology is to form a repository storing various BPCM models. In this way, the existing BPCM models in an ontology repository may become reference models for various purposes. In addition, the ontology can also be seen as a reusable knowledge base for sharing and retrieval purposes in future development. For instance, the class business domain can be used as a key for querying: relevant information from similar areas can be retrieved by querying keywords of a specific business domain.
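As an illustration of such retrieval, the sketch below filters a repository of BPCM models by NAICS business domain code. It uses the hypothetical BpcmModel type introduced above and plain Java collections rather than the actual Protégé/OWL query machinery, which the paper does not detail.

import java.util.List;
import java.util.stream.Collectors;

public final class BpcmRepository {
    private final List<BpcmModel> models;

    public BpcmRepository(List<BpcmModel> models) { this.models = models; }

    // Retrieve all stored BPCM models labeled with the given NAICS code,
    // e.g. findByBusinessDomain("561920") for convention organizers.
    public List<BpcmModel> findByBusinessDomain(String naicsCode) {
        return models.stream()
                     .filter(m -> m.businessDomains.contains(naicsCode))
                     .collect(Collectors.toList());
    }
}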
Figure 1. BPCM Meta-model
Figure 2. BPCM Ontology Classes
4. Relation Between SCOR and Process in the BPCM Meta-model

SCOR is a process reference model designed for effective communication among supply chain partners. SCOR integrates concepts of business processes, benchmarking, and best practices into a cross-functional framework. From a business process reengineering perspective, the SCOR model builds a hierarchy of supply chain processes, which can be divided into three levels of detail [3]: process type, process category, and process element. The process element level provides and documents sufficiently detailed process specifications, consisting of input and output parameters, process performance metrics, and best practices. Consequently, these detailed specifications can convey a better understanding of the relevant processes in BPCM, which fits the purpose of including the BPCM model in the process support systems development lifecycle. The SCOR model does not follow any of the standardized or well-structured business process modeling techniques such as BPMN or the UML activity diagram. Instead, the SCOR model provides the capability for business stakeholders to extract existing supply chain knowledge, such as the resources required for a specific process. In addition, the SCOR model also provides a mechanism for various users to reuse and share concepts and knowledge in a supply chain process. Having the SCOR model integrated with the BPCM meta-model for business process support systems development will result in a reusable, easy-to-integrate knowledge base. The SCOR model can be represented as a hierarchy of supply chain processes as depicted in the upper layer of Figure 3, where levels 1 and 2 are for
representing more generic issues, such as planning and making, and level 3 is for representing specific processes. Since the SCOR model is based on business process reengineering, SCOR is a process-centric model. The processes in the SCOR model are those found in any supply chain. Concerning the relation between SCOR and the element process in the BPCM meta-model, each process element at level 3 of the SCOR model can be mapped to one process in the BPCM model, as presented in Figure 3. This mapping aims to transform the detailed process element information at level 3 of the SCOR model into the relevant elements of the BPCM meta-model. In addition, a process in a BPCM model may consist of several sub-processes. The process element information defined at level 3 of the SCOR model might not be directly useful for the construction of technical process models. However, having this information in place forms a good reusable knowledge base which facilitates communication between business stakeholders and model developers. Furthermore, the SCOR model ensures that the relevant level-3 process elements, from the organizational point of view, are appropriately placed in the BPCM model in terms of the various processes.
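The following sketch illustrates this level-3 mapping idea in code. The SCOR identifier style mirrors the S3 example from Table 2; the class and method names are hypothetical, since the paper specifies the mapping only at the conceptual level.

import java.util.ArrayList;
import java.util.List;

// Hypothetical SCOR level-3 process element: identifier, name, inputs,
// outputs and best practices, as documented at that level [3].
record ScorProcessElement(String id, String name, List<String> inputs,
                          List<String> outputs, List<String> bestPractices) {}

final class ScorToBpcmMapper {
    // Each level-3 SCOR process element is mapped to exactly one BPCM
    // process; its inputs and outputs become candidate BPCM resources.
    static BpcmModel map(ScorProcessElement e) {
        BpcmModel m = new BpcmModel();
        m.process = e.name() + " (ref. SCOR " + e.id() + ")";
        m.resources = new ArrayList<>(e.inputs());
        m.resources.addAll(e.outputs());
        return m;
    }
}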
Figure 3. Relation between SCOR and Process in BPCM meta-model
5. Mapping from the BPCM Meta-model to the BPMN Meta-model

In this section, we illustrate the relation between the elements of the BPCM meta-model and the operational-level process model in a combined framework [8] for the development of business process support systems. In this combined framework, we take a relatively informal model, more specifically BPCM, which can ease communication and cooperation between business stakeholders and technical model developers, as the starting point for developing a business process support system. The authors of [12] also argued that it is often beneficial to start a modeling-based project with an informal model and then develop the visual models. The BPCM models are then derived and developed into visual process models and executable models. Lastly, those models can be used as inputs for deriving a candidate IT system. The combined framework, covering coarse-grained business process characterizing modelling, goal modelling and process modelling, provides several benefits. It supports users in viewing the system from different perspectives, and it provides a mechanism for mapping correspondences between the meta-models of the different models. Since BPMN has relatively high expressiveness and the ability to map directly to executable process languages such as the business process execution language (BPEL) [10] and XPDL [6], compared to other process modeling notations, we select BPMN as the candidate notation for the operational-level process model in the combined framework. Based on the BPCM meta-model proposed in section 3 and the BPMN 1.0 meta-model proposed by the WSPER Group1, which is a class diagram that shows the elements of the BPMN language and their relationships, we analyze the mapping between these two models by considering the Curtis framework for process modelling [4], which is based on four basic aspects: the functional, behavioral (control), organizational, and informational aspects. BPCM: actor maps to BPMN: pool or lane, which covers the organizational aspect. Likewise, BPCM: resource maps to BPMN: data object, which represents the informational aspect. Concerning the functional aspect, BPMN: task can be derived from BPCM: resource and actor by means of the complementary requirement table approach defined in [8]. The aspect least covered by the BPCM meta-model is the behavioral (control) perspective for enacting business processes. The input and output parameters of the level-3 process elements of the SCOR model may implicitly affect the determination of the order of activities and control flows. A refined BPCM meta-model with some extra SCOR properties and entities may help resolve this issue, resulting in better mappings between the two meta-models. Further, we find that the traceability from the BPCM model to the BPMN model is improved by the identified overlaps between the two meta-models.
1 The BPMN meta-model is available at: http://www.ebpml.org/wsper/wsper/bpmn1.0.jpg
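A minimal sketch of these mapping rules is given below. It encodes only the two direct correspondences stated above (actor to pool/lane, resource to data object); the names BpmnPool and BpmnDataObject are hypothetical stand-ins for the corresponding BPMN 1.0 meta-model elements.

import java.util.ArrayList;
import java.util.List;

// Hypothetical, minimal stand-ins for the BPMN target elements.
record BpmnPool(String name) {}
record BpmnDataObject(String name) {}

final class BpcmToBpmnMapper {
    // Organizational aspect: each BPCM actor becomes a BPMN pool (or lane).
    static List<BpmnPool> mapActors(BpcmModel m) {
        List<BpmnPool> pools = new ArrayList<>();
        for (String actor : m.actors) pools.add(new BpmnPool(actor));
        return pools;
    }

    // Informational aspect: each BPCM resource becomes a BPMN data object.
    static List<BpmnDataObject> mapResources(BpcmModel m) {
        List<BpmnDataObject> objects = new ArrayList<>();
        for (String resource : m.resources) objects.add(new BpmnDataObject(resource));
        return objects;
    }
}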
6. Conclusion

In this paper, we have described the BPCM meta-model, which provides a foundation for building business process support systems, and built the BPCM ontology classes at the type level. In addition, the relation between the SCOR model and the class process in the BPCM meta-model has been addressed, and the mapping from the BPCM meta-model to the BPMN meta-model has been depicted. However, it must be admitted that the usage and evaluation of the BPCM meta-model in the combined framework is currently quite limited, since we have not yet tested it in case studies. Future research will apply the BPCM meta-model and the combined framework in connection with supporting a loosely organized conference series.
7. References

[1] Aalst, W.v.d.: Formalization and Verification of Event-driven Process Chains. Information and Software Technology 41 (10), 639-650 (1999)
[2] Ali, R., Dalpiaz, F. and Giorgini, P.: A Goal Modeling Framework for Self-contextualizable Software. LNBIP vol. 29, pp. 326-338. Springer (2009)
[3] Supply-chain Council: SCOR Model 8.0 Quick Reference Guide (2006)
[4] Curtis, B., Kellner, M.I. and Over, J.: Process modeling. Commun. ACM 35 (9), 75-90 (1992)
[5] Dey, A.K., Abowd, G.D. and Salber, D.: A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction 16 (2), 97-166 (2001)
[6] Fischer, L.: Workflow Handbook 2005. Workflow Management Coalition (WfMC) (2005)
[7] Fowler, M.: UML Distilled: A Brief Guide to the Standard Object Modeling Language. Addison-Wesley, Reading, MA (2003)
[8] Gao, S. and Krogstie, J.: A Combined Framework for Development of Business Process Support Systems. In: The Practice of Enterprise Modeling. LNBIP vol. 39, pp. 115-129. Springer (2009)
[9] Gao, S. and Krogstie, J.: Facilitating Business Process Development via a Process Characterizing Model. In: International Symposium on Knowledge Acquisition and Modeling 2008. IEEE CS (2008)
[10] Havey, M.: Essential Business Process Modeling. O'Reilly Media, CA (2005)
[11] Hruby, P.: Model-Driven Design Using Business Patterns. Springer-Verlag, New York (2006)
[12] Maiden, N.A.M., Jones, S.V., Manning, S., et al.: Model-Driven Requirements Engineering: Synchronising Models in an Air Traffic Management Case Study. In: Persson, A. and Stirna, J. (eds.) CAiSE 2004. LNCS vol. 3084, pp. 368-383. Springer, Heidelberg (2004)
[13] McCarthy, W.E.: The REA accounting model: a generalized framework for accounting systems in a shared data environment. The Accounting Review 57, 554-578 (1982)
[14] Regev, G. and Wegmann, A.: Defining Early IT System Requirements with Regulation Principles: The Lightswitch Approach. In: Proceedings of the 12th IEEE International Requirements Engineering Conference. IEEE Computer Society (2004)
[15] Salifu, M., Yu, Y. and Nuseibeh, B.: Specifying Monitoring and Switching Problems in Context. In: RE 2007. IEEE CS Press (2007)
[16] Siau, K., Lim, E.-P. and Shen, Z.: Mobile Commerce: Promises, Challenges, and Research Agenda. Journal of Database Management 12 (3), 4-13 (2001)
[17] White, S.: Business Process Modeling Notation Version 1.0. Business Process Modeling Initiative (BPMI.org) (2004)
[18] White, S.A.: Introduction to BPMN (2005)
[19] Yu, E.: Modelling strategic relationships for process reengineering. PhD Thesis, University of Toronto (1996)
A Tool for Interoperability Analysis of Enterprise Architecture Models using Pi-OCL

Johan Ullberg, Ulrik Franke, Markus Buschle and Pontus Johnson

Industrial Information and Control Systems, KTH Royal Institute of Technology, Osquldas v. 12, SE-10044 Stockholm, Sweden
Abstract. Decision-making on enterprise-wide information system issues can be furthered by the use of models, as advocated by the discipline of enterprise architecture. In order to provide decision-making support, enterprise architecture models should be amenable to analysis. This paper presents a software tool, currently under development, for interoperability analysis of enterprise architecture models. In particular, the ability to query models for structural information is the main focus of the paper. Both the tool architecture and its usage are described and exemplified.

Keywords: Enterprise Architecture, Probabilistic Relational Models, Software tool, Interoperability
1. Introduction

During the last decade, enterprise architecture has grown into an established approach for the holistic management of information systems in an organization. Enterprise architecture is model-based, in the sense that diagrammatic descriptions of the systems and their environment constitute the core of the approach. A number of enterprise architecture initiatives have been proposed, such as The Open Group Architecture Framework (TOGAF) [1], Enterprise Architecture Planning (EAP) [2], the Zachman Framework [3], and military architectural frameworks such as DoDAF [4] and NAF [5]. What constitutes a “good” enterprise architecture model has thus far not been clearly defined. The reason is that the “goodness” of a model is not an inherent property, but contingent on the purpose the model is intended to fulfil, i.e. what kind of analyses it will be subjected to. For instance, if one seeks to employ an enterprise architecture model for evaluating the performance of an information system, the information required from the model differs radically from the case when the model is used to evaluate system interoperability.
Enterprise architecture analysis is the application of property assessment criteria to enterprise architecture models. For instance, one investigated property might be the interoperability of a system, and a criterion for assessment of this property might be: “If the architectural model of the enterprise features a communication link with high availability, then this yields a higher level of interoperability than if the availability is low.” Criteria and properties such as these constitute the theory of the analysis and may be extracted from academic literature or from empirical measurements. This theory can be described using probabilistic relational models (PRMs) [6], which are further detailed in section 3.

Many interoperability concerns are related not only to various attributes of the architecture model but rather to the structure of the model, e.g. that two communicating actors need to share a language for their communication. With the actors and languages being entities in the metamodel, this property corresponds to ensuring that particular relationships exist in the architecture model instance. This paper presents a software tool under development for the analysis of enterprise architecture models. This tool extends previous versions of the tool [7][8], and the main contribution of this article is to illustrate how to query the architecture models for structural information, such as the language compatibility described above. This extension is based on the Object Constraint Language (OCL) [9] and is further detailed in section 3.1.

The enterprise architecture frameworks described above generally suffer from the lack of methods for evaluating the architectures [10]. A number of enterprise architecture software tools are available on the market, including System Architect [11] and Aris [12]. Although some of these provide possibilities to sum costs or propagate the strategic value of a set of modeled objects, none have significant capabilities for system quality analysis in the sense of ISO 9126 [13], i.e. analysis of non-functional qualities such as reliability, maintainability, etc. However, various analysis methods and tools do exist within the software architecture community, including the Architecture Tradeoff Analysis Method (ATAM) [14] and the approach of Abd-Allah and Gacek [15]. In particular, various approaches such as LISI [16] and i-Score [17] have been suggested for assessing interoperability. However, these are generally not suitable in the enterprise architecture domain.
Fig. 1. The process of enterprise architecture analysis with three main activities: (i) setting the goal, (ii) collecting evidence and (iii) performing the analysis
2. Process of Enterprise Architecture Analysis

Enterprise architecture models have several purposes. Kurpjuweit and Winter [18] identify three distinct modeling purposes with regard to information systems, viz. (i) documentation and communication, (ii) analysis and explanation and (iii) design. The present article focuses on analysis and explanation, since this is necessary to make rational decisions about information systems. An analysis-centric process of enterprise architecture is illustrated in Fig. 1 above. In the first step, assessment scoping, the problem is described in terms of one or a set of potential future scenarios of the enterprise and in terms of the assessment criteria (the PRM in the figure) to be used for scenario evaluation. In the second step, the scenarios are detailed by a process of evidence collection, resulting in a model (an instantiated PRM, in the figure) for each scenario. In the final step, analysis, quantitative values of the models' quality attributes are calculated, and the results are then visualized in the form of enterprise architecture diagrams.

More concretely, assume that a decision maker in an electric utility is contemplating changes related to the measurement of customer electricity consumption. The introduction of automatic meter reading would improve the quality of the billing process and allow the customers to get detailed information about their consumption. The question for the decision maker is whether this change is feasible or not. As mentioned, in the assessment scoping step, the decision maker identifies the available decision alternatives, i.e. the enterprise information system scenarios. Also in this step, the decision maker needs to decide how these scenarios should be evaluated, i.e. the assessment criteria, or the goal of the assessment. Oftentimes, several quality attributes are desirable, but we simplify the problem to the assessment of interoperability. During the next step, to identify the best alternative, the scenarios need to be detailed to facilitate their analysis. Much information about the involved systems and their organizational context may be required for a good understanding of their future interoperability. For instance, it is reasonable to believe that a reliable medium for passing messages would increase the probability that communication is successful. The availability of the message passing system is thus one factor that can affect interoperability and should therefore be recorded in the scenario model. The decision maker needs to understand what information to gather, and also ensure that this information is indeed collected and modeled. When the decision alternatives are detailed, they can be analyzed with respect to the desirable property or properties. The pros and cons of the scenarios then need to be traded against each other in order to determine which alternative should be preferred.
3. An Enterprise Architecture Analysis Formalism

A probabilistic relational model (PRM) [6] specifies a template for a probability distribution over an architecture model. The template describes the metamodel M for the architecture model, and the probabilistic dependencies between attributes of
the architecture objects. A PRM, together with an instantiated architecture model I of specific objects and relations, defines a probability distribution over the attributes of the objects. The probability distribution can be used to infer the values of unknown attributes. This inference can also take into account evidence on the state of observed attributes. A PRM Π specifies a probability distribution over all instantiations I of the metamodel M. As in a Bayesian network [6], it consists of a qualitative dependency structure and associated quantitative parameters. The qualitative dependency structure is defined by associating attributes A of class X (X.A) with a set of parents Pa(X.A), where each parent is an attribute, either from the same class or from another class in the metamodel related to X through the relationships of the metamodel. For example, the attribute satisfied of the class Communication Need may have as parent CommunicationNeed.associatedTo.communicatesOver.isAvailable, meaning that the probability that a certain communication need is satisfied depends on the probability that an appropriate message passing system is available. Note that a parent of an attribute may reference a set of attributes rather than a single one. In these cases, we let X.A depend probabilistically on an aggregated property over those attributes, constructed using operations such as AND, OR, MEAN, etc. Considering the quantitative part of the PRM, given a set of parents for an attribute, we can define a local probability model by associating a conditional probability distribution with the attribute, P(X.A | Pa(X.A)). For instance, P(CommunicationNeed.satisfied = True | MessagePassingSystem.isAvailable = False) = 10% specifies the probability that a communication need is satisfied, given that the message passing system is unavailable.

3.1. Pi-OCL

PRMs do not, however, provide any means to query the models for structural information, such as “given two actors with a need to communicate, do these actors have a common language (modeled as a separate object)?” The Object Constraint Language (OCL) is a formal language used to describe constraints on UML models. Writing these types of constraints in natural language would lead to ambiguities. In order to resolve this, OCL provides a means to specify such constraints in a formal language without the need for extensive mathematical knowledge. OCL expressions typically specify invariant conditions that must hold for the system being modeled, or queries over objects described in a model [9]. This ability to query models would be of great benefit to interoperability analysis. OCL is, however, a side-effect-free language, so the application of an OCL statement cannot change the content of the model. Furthermore, it is deterministic and thus incapable of allowing uncertainty in the analysis. This section briefly describes how these two aspects of OCL are addressed by the extension to probabilistic imperative OCL, or Pi-OCL for short. For a more comprehensive treatment see [19]. For the language to become imperative, i.e. to be able to change the state of the model, the introduction of an assignment operator is necessary. Turning OCL into a probabilistic language also requires a redefinition of all operations in OCL, which is accomplished by mapping each operation to a Bayesian network and then combining these networks into larger
networks corresponding to a complete Pi-OCL statement. For the sake of the analysis it is also necessary to introduce an existence attribute E in all classes and relationships, corresponding to the probability that the class or relationship exists. Each expression views the model from the perspective of a context object C. The probability that an instantiated object O and its attributes O.A exist is thus dependent not only on O.E but also on the existence of the relationships from the context object C to O. This context-dependent existence is a key factor in the definition of all other OCL operations. To provide an example of how the operations are defined, the following is a description of the equality operator. Since the original OCL equality operator is defined using the deterministic type Boolean, this operation needs to be refined. The probability that a given object o1 equals some other object o2 is calculated as follows:

P(o_1 = o_2) = \begin{cases} P(E_{C \ldots o_1}, E_{C \ldots o_2}) & \text{if } id(o_1) = id(o_2) \\ 0 & \text{otherwise} \end{cases}

where P(E_{C \ldots o_1}, E_{C \ldots o_2}) is the joint probability that there is a link from the context object to each of the objects o1 and o2, and id(object : OclAny) : Integer is a function that returns a unique identifier for each object. For a more comprehensive treatment of all OCL statements in the Pi-OCL setting see [19]. The probability model of an attribute in a PRM, expressed in terms of parent attributes, can be replaced by Pi-OCL statements. Such statements allow not only attributes but also various aspects pertaining to the structure of the model to constitute the parents of an attribute. A PRM coupled with Pi-OCL statements thus constitutes a formal machinery for calculating the probabilities of various architecture instantiations. This allows us to infer the probability that a certain attribute assumes a specific value, given some (possibly incomplete) evidence of the rest of the architecture instantiation.
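A minimal sketch of this operator's semantics, assuming a toy Java representation (the ModelObject type and the link-probability lookup are hypothetical simplifications of the Pi-OCL machinery):

// Toy model object with a unique identifier and the probability that a
// link from the current context object C to this object exists.
record ModelObject(int id, double pLinkFromContext) {}

final class PiOclEquality {
    // P(o1 = o2): the joint probability that links from the context object
    // to o1 and to o2 exist, if the identifiers match; 0 otherwise.
    // The two link existences are assumed independent for simplicity.
    static double probEquals(ModelObject o1, ModelObject o2) {
        if (o1.id() != o2.id()) return 0.0;
        return o1.pLinkFromContext() * o2.pLinkFromContext();
    }
}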
4. Design of the Tool

Here, a software tool supporting the enterprise architecture analysis process is outlined. The structural design of the tool is based upon the three basic process steps illustrated in Fig. 1. Relating to the previous section, the first process step concerns the identification of the decision alternatives and the goals (assessment criteria). An example of a goal is to maximize the interoperability of an IT landscape. Goals are codified in a PRM, which also represents how various attributes and the structure of the model affect the goals (through the probability model and Pi-OCL statements). For instance, the PRM might state that the availability of the message passing system will affect the interoperability of the scenario. The scenarios are not detailed in this step, but only given a name.

In the second process step, the scenarios are elaborated. An example of a scenario might be to choose vendor X for the automatic meter reading system. In order to assess whether this is a better scenario than one based on vendor Y,
more information is required. For instance, we might need to detail the communication mediums employed by the respective vendors. Collected information on matters like these is called evidence, and the process of evidence collection is supported by the tool. From the evidence, the Model Builder Function creates an instantiated PRM, which is a comprehensive enterprise architecture scenario model. The model thus typically contains instantiations of entities such as actor, message passing system, language, etc. Often, the collected evidence will not be sufficient to allow full certainty about the values of entity attributes (for instance, the attribute availability of the entity message passing system). They are therefore defined as random variables, allowing the representation of uncertainty. Recall that an important purpose of enterprise architecture models is to answer relevant questions, such as whether the interoperability of scenario A is higher than that of scenario B. Oftentimes, it is not possible to collect information about properties such as interoperability directly, but it is possible to collect evidence pointing in a certain direction (e.g. information on the availability of the message passing system) and create models in which the structure of the model can be used as such evidence. Based on the created architecture model, the Structural Analysis Function uses the Pi-OCL statements defined in the PRM to evaluate various structural concepts of the architecture. This is done by mapping the statements to Bayesian networks, evaluating these networks and writing the results back to attributes of the architecture model. Employing Bayesian theory once again, the Calculation Function calculates the values of those attributes that could not be credibly determined directly. The tool employs the collected evidence, the result of the structural analysis, and the causal relationships specified in the PRM as input to these calculations. Central artifacts in enterprise architecture frameworks and tools are graphical models. The instantiated PRMs containing the enterprise architecture scenarios are therefore employed for displaying the results, and the tool allows various views of the models.

4.1. Architecture of the Tool

The tool is implemented in Java and uses a Model-View-Controller architecture, thus having three main parts, illustrated in Fig. 2. The data model for PRMs and instantiated PRMs is specified in XSD files. The data binding tool Castor [20] allows simple marshalling and unmarshalling of XML models to and from Java objects (the Model). The model elements are contained by the widgets of the NetBeans Visual Library, which also update the view when the model changes. Model operations are controlled by a number of internal tools. This essentially corresponds to the functions of the design outlined above and a set of supporting tools. These internal tools, acting as controllers, provide uniform access to their functionality, as they are all implemented in a common way using the singleton pattern. One internal tool of particular interest here is the Pi-OCL evaluation tool that, based on the structure of the model, can perform various assessments of the
architecture. At the core of this tool is a Pi-OCL interpreter created using SableCC [21]. SableCC creates an abstract syntax tree representation of Pi-OCL statements based on the grammar, and provides a skeleton for traversal of the syntax tree using the visitor pattern. This skeleton is then refined to evaluate the result of the Pi-OCL statements. As the analysis is probabilistic, each node in the abstract syntax tree is evaluated using Bayesian networks, employing the SMILE library [22] for the calculations. The results are then written back to attributes of the model. The view is implemented as part of the JApplication framework. The view is in charge of handling all user interactions that are not concerned with the drag-and-drop modeling features of the scene. Several methods handle button and menu actions. Also, the loading and saving tools are called by the view.
Fig. 2. High level architecture of the tool, illustrating the main components. "iPRM" is shorthand notation for instantiated PRM
5. Employing the Tool for Interoperability Analysis

In this section, a short example of the employment of the tool is outlined. Interoperability is defined by IEEE as the ability of two or more systems or components to exchange information and to use the information that has been exchanged [23]. This can be seen as a communication need, and the interoperability analysis then corresponds to ensuring that this need is satisfied. Consider the following metamodel for interoperability analysis, where two or more actors share a communication need. The actors communicate over a message passing system, e.g. the internet, and encode the messages sent over the message passing system in a format, the language. On such a metamodel and interoperability definition, the following very simple theory of interoperability can be expressed: 1) for interoperation to take place, there needs to be a path for message exchange between the actors of a communication need; 2) the actors must share at least one language for expressing their communication; and finally 3) the message passing system
Fig. 3. A sample PRM for interoperability analysis detailing the important entities and relationships needed for the analysis
these actors use must be available. This theory can be expressed as a PRM using the tool, cf. Fig. 3, where the abovementioned entities are modeled. The Communication Need class contains three attributes: the satisfied attribute, corresponding to the goal of the analysis, and two attributes corresponding to statements 1 and 2 of the theory. Since the third statement is concerned with the availability of the message passing system, this is included as an attribute of the message passing system class. Turning to the probability model of the PRM, the first two statements in the theory are based on the structure of the architecture and can be expressed with the following Pi-OCL statements:

Context: CommunicationNeed
self.pathExist := self.associatedTo->forAll(a1 : Actor, a2 : Actor |
  a1 <> a2 implies a1.communicatesOver.communicatesOver-1->exists(a2))
self.languageMatch := self.associatedTo->forAll(a1 : Actor, a2 : Actor |
  a1 <> a2 implies a1.uses->exists(l : Language | a2.uses->exists(l)))

As can be seen from these statements, the result of the Pi-OCL evaluation is assigned to two of the attributes of the class communication need. These attributes need to be aggregated and combined with the third statement of the theory, the one pertaining to the availability of the message passing system. This can be expressed in terms of parent attributes of CommunicationNeed.satisfied, the goal of the analysis. The parents of this attribute thus are: CommunicationNeed.pathExist, CommunicationNeed.languageMatch and CommunicationNeed.associatedTo.communicatesOver.isAvailable, see the solid arrows in the PRM of Fig. 3. Turning to the quantitative part of the probability model, the theory states that all three criteria must be met for the communication need to be satisfied; an AND operation can thus be employed for aggregation.
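As an illustration of this AND aggregation, the following sketch computes P(satisfied) from the three parent attributes. It assumes, for simplicity, that the three parents are independent, so that the AND of the three is the product of their probabilities; the class name is hypothetical.

final class CommunicationNeedAnalysis {
    // P(satisfied) under an AND aggregation of the three parents, assuming
    // independence between path existence, language match and availability.
    static double probSatisfied(double pPathExist, double pLanguageMatch,
                                double pIsAvailable) {
        return pPathExist * pLanguageMatch * pIsAvailable;
    }
}

For example, probSatisfied(0.95, 1.0, 0.99) yields approximately 0.94.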
Fig. 4 depicts the concrete modeler, where the architectural model is created based on the PRM. This means that specific instances of the abstract concepts are created to reflect the scenario at hand. By entering evidence on the states of the attributes and using the Structural Analysis Function, it is possible to use PRM inference rules to update the model to reflect the impact of the architecture and the evidence. The user can thus find updated probability distributions for all attributes in the model.
Fig. 4. An instantiated PRM for the modeled scenario
In this example, we have a communication need, get meter reading, between the actors Billing System and Meter Reading System. These are connected using an X.25 leased line of type message passing system. The Meter Reading System uses two languages, the Common Information Model (CIM) and EDIEL, whereas the Billing System only uses EDIEL. Entering information regarding the availability of the message passing system and running both the structural analysis and the inference will result in the prediction of an interoperability value for the modeled scenario.
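The language-match part of this scenario can be sketched as follows; the actor and language names follow the example above, while the Java types are hypothetical simplifications.

import java.util.Set;

final class MeterReadingScenario {
    public static void main(String[] args) {
        // Languages used by the two actors of the "get meter reading" need.
        Set<String> meterReadingSystem = Set.of("CIM", "EDIEL");
        Set<String> billingSystem = Set.of("EDIEL");

        // languageMatch holds if the actors share at least one language;
        // here both use EDIEL, so the criterion is fulfilled.
        boolean languageMatch =
            meterReadingSystem.stream().anyMatch(billingSystem::contains);
        System.out.println("languageMatch = " + languageMatch); // prints true
    }
}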
6. Conclusions

In this paper, a tool for the analysis of enterprise architecture scenarios, with a particular focus on interoperability assessments, has been outlined. The tool, currently under development, allows specification of the assessment theory in terms of a metamodel and a probability model encoded as a PRM with Pi-OCL statements. The Pi-OCL statements are used for structural analysis of the architecture. Based on the metamodel, it is possible to model the architecture scenarios and, using the theory, assess the interoperability of each scenario. Information system decision making is supported by allowing quantitative comparisons of the qualities, such as interoperability, of possible future scenarios of the enterprise information system and its context.
7. References

[1] The Open Group: The Open Group Architecture Framework, Version 8 (2005)
[2] Spewak, S.H., Hill, S.C.: Enterprise Architecture Planning: Developing a Blueprint for Data, Applications and Technology. John Wiley and Sons (1992)
[3] Zachman, J.: A framework for information systems architecture. IBM Systems Journal 26 (3) (1987)
[4] DoD Architecture Framework Working Group: DoD Architecture Framework, Version 1.0 (2003)
[5] NAF: NATO C3 Technical Architecture, Volumes 1-5, Version 7.0, Allied Data Publication 34 (2005)
[6] Getoor, L., Friedman, N., Koller, D., Pfeffer, A. and Taskar, B.: Probabilistic relational models. MIT Press (2007)
[7] Johnson, P., Johansson, E., Sommestad, T. and Ullberg, J.: A Tool for Enterprise Architecture Analysis. In: Proceedings of the 11th IEEE International Enterprise Computing Conference (EDOC 2007) (2007)
[8] Ekstedt, M. et al.: A Tool for Enterprise Architecture Analysis of Maintainability. In: Proceedings of the 13th European Conference on Software Maintenance and Reengineering (2009)
[9] Object Management Group: Object Constraint Language specification, version 2.0, formal/06-05-01 (2006)
[10] Chen, D. et al.: Architecture for enterprise integration and interoperability: Past, present and future. Computers in Industry (2008)
[11] IBM: Rational System Architect, http://www-01.ibm.com/software/awdtools/systemarchitect/, accessed Dec 2009
[12] Scheer, A.-W.: Business Process Engineering: Reference Models for Industrial Enterprises, 2nd ed. Springer-Verlag (1994)
[13] International Organization for Standardization/International Electrotechnical Commission: International Standard ISO/IEC 9126-1: Software engineering – Product quality – Part 1: Quality model. ISO/IEC, Tech. Rep. (2001)
[14] Clements, P., Kazman, R. and Klein, M.: Evaluating Software Architectures: Methods and Case Studies. Addison-Wesley (2001)
[15] Gacek, C.: Detecting Architectural Mismatch During System Composition. PhD Thesis, University of Southern California (1998)
[16] Kasunic, M. and Anderson, W.: Measuring Systems Interoperability: Challenges and Opportunities. Technical Note CMU/SEI-2004-TN-003, Software Engineering Institute, Carnegie Mellon University, Pittsburgh (2004)
[17] Ford, T., Colombi, J., Graham, S. and Jacques, D.: The Interoperability Score. In: Proceedings of the Fifth Annual Conference on Systems Engineering Research, Hoboken, NJ (2007)
[18] Kurpjuweit, S. and Winter, R.: Viewpoint-based Meta Model Engineering. In: Proceedings of Enterprise Modelling and Information Systems Architectures (EMISA 2007), Bonn, Gesellschaft für Informatik, Köllen, pp. 143-161 (2007)
[19] Franke, U. et al.: π-OCL: A Language for Probabilistic Inference in Relational Models (2010), to be submitted
[20] ExoLab Group: The Castor Project, http://www.castor.org/, accessed Dec 2009
[21] Gagnon, É.: SableCC, an Object-Oriented Compiler Framework. Master Thesis, McGill University, Montreal (1998)
[22] Decision Systems Laboratory of the University of Pittsburgh: SMILE, http://genie.sis.pitt.edu/, accessed Dec 2009
[23] IEEE: Standard Glossary of Software Engineering Terminology. Std 610.12-1990, The Institute of Electrical and Electronics Engineers, New York (1990)
A UML-based System Integration Modeling Language for the Application System Design of Shipborne Combat System

Fan Zhiqiang1, Gao Hui2, Shen Jufang1, Zhang Li1

1 Dept. of Computer Science and Technology, BeiHang University, Beijing, P.R. China
2 Systems Engineering Research Institute, China State Shipbuilding Corporation, Beijing, P.R. China
Abstract. The shipborne combat system (SCS) includes many application systems. The integration of these application systems is necessary because they are distributed and need to interoperate with each other. For such integration, an integration model needs to be developed. Firstly, the integration framework of the application systems of SCS is discussed, which involves three layers: data integration, function integration and workflow integration. Secondly, an integration metamodel based on the framework is introduced; data, application system, function, component, workflow and their relationships are explained in detail. Finally, a system integration modeling language for application system design (SIML4ASD) based on the metamodel is defined by extending UML using the profile mechanism. SIML4ASD is easy for system designers to learn, use and understand, and can be used to describe integration models clearly and exactly in multiple views.

Keywords: integration framework; integration metamodel; SIML4ASD; UML
1. Introduction

The shipborne combat system (SCS) is a system of personnel, weapons and devices used to execute early warning, tracking, communications, navigation, target identification, data processing, threat assessment, and the control of various shipborne weapons against the enemy [1]. SCS includes hardware devices (e.g. computers and switches) and software systems (e.g. operating systems and databases). As software technology has matured, the number of application systems has grown. These application systems support the combat business and play an important role in SCS. Application systems are distributed on different hardware devices and work in collaboration to complete combat tasks. There is a need for interoperation among them. Therefore, the integration of application
systems is very important in their design. Three characteristics of application systems should be considered: 1) there are large amounts of data exchanged between application systems when they carry out a combat task collaboratively; 2) different application systems have different functions, but some of them share several identical functions, and sometimes the functions of a system may change dynamically. Hence, SCS is usually designed to consist of functional blocks, and an application system is the composition of those blocks. Functions are user-oriented and thus do not serve well as a guide for system development. SCS is usually designed and developed using component-based technology, and functions are compositions of components. Therefore, for system design and dynamic function change, the relationships between application system, function and component need to be established; 3) in order to improve efficiency and achieve automation, part of or whole business processes are designed as workflows which are executed by application systems automatically. The actual activities of a workflow are executed by components. Therefore, the relation between workflow and components needs to be established when designing a workflow. Besides that, the features of workflow in SCS should also be considered, e.g. some activities have time constraints and/or cannot be interrupted during execution. Because of these three characteristics of SCS, application systems are usually integrated via a three-layer (data layer, function layer and workflow layer) framework. Data, application system, function, component, workflow and the relationships between these layers need to be described clearly in the integration model. However, existing modeling languages are not fully capable of describing such an integration model. For example, IDEF can be used to describe systems and functions using IDEF0 [2], data using IDEF1X [3] and workflows using IDEF3 [4], but it cannot describe components. UML [5] can be used to describe workflows using activity diagrams, data using class diagrams and components using component diagrams, but the consumption or production relationship between data and components cannot be established. Moreover, the features of SCS (e.g. time constraints) cannot be reflected in a workflow model described using UML or IDEF. Thus, a new system integration modeling language (SIML) for the description of the application system integration model and the communication of designers is necessary.
2. Application System Integration: Framework and Layers

Generally speaking, system integration can be divided into several layers. Deng [6] and Wang [7] proposed a three-layer integration framework for information systems which consists of a network layer, a data layer and an application layer. Zhang [8] suggested a conceptual framework which consists of five layers, i.e. physical, data, application, business process/functionality and presentation. Edward and Jeffrey's framework for business integration divided integration into two layers, the system and organizational layers, and subdivided system integration into data, application and business process integration [9]. Giachetti gave a framework of enterprise information integration which involves network, data, application and process integration [10]. Drawing on these results, we propose a hierarchical integration
framework, presented in Fig. 1, for the integration of application systems based on the three features mentioned in section 1.
Fig. 1. An Integration Framework of Application Systems
Integration at the Data Layer: the data layer lays a strong basis for the integration of application systems. Data integration virtually associates the data of different systems and provides the application systems with data in a uniform format.

Integration at the Function Layer: integration at this layer helps to understand the functions of an application system and the components realizing specific functions. Function integration mainly aims at addressing the composition of components.

Integration at the Workflow Layer: workflow is the computerised facilitation or automation of a business process, in whole or part [11]. Its integration requires the cooperation of different application systems and their compliance with the workflow logic.

The framework presents not only horizontal integration, but also vertical integration. Integration between the data layer and the function layer occurs when a component produces or consumes data. Integration between the function layer and the workflow layer happens when a component executes the activities of a workflow.
3. Integration Metamodel of Application Systems

The main elements involved in the integration framework are: data, application system, function, component and workflow. The conceptual model is shown in Fig. 2. A workflow consists of activities. An activity is executed by a component with data consumption or production. An application system participating in the execution of a workflow includes functions implemented by components. Based on the integration framework, a detailed metamodel is introduced as follows.

3.1. Metamodel at the Data Layer

The main elements and relationships in the metamodel are described in Fig. 3. Data is composed of DataItems, each related to a Type, which can be generalized as ConstructedType, SimpleType and CollectionType. A SimpleType is a fixed-length type. A CollectionType is a collection of any type. A ConstructedType refers to a type constructed from other types, and Data itself is a kind of ConstructedType.
Data has zero or more DataInstances. The relationship between Data and DataInstance is Instantiation. The relationships of Data include Generalization, Aggregation and Composition, with the same definitions as in UML [5]. DataDomain is the classification of Data; a Data can only be in one DataDomain.
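A minimal sketch of this type hierarchy in code (the Java class names mirror the metamodel elements; the structure is an illustration, not the authors' implementation):

import java.util.List;

// Type hierarchy of the data-layer metamodel (illustrative Java mirror).
abstract class Type {}
final class SimpleType extends Type {}      // a fixed-length type
final class CollectionType extends Type {   // a collection of any type
    Type elementType;
}
class ConstructedType extends Type {}       // constructed from other types

// Data is itself a ConstructedType and is composed of DataItems.
final class DataItem { String name; Type type; }
final class Data extends ConstructedType {
    List<DataItem> items;
    String dataDomain;                      // a Data belongs to exactly one DataDomain
}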
Fig. 2. Conceptual Model of Application System Integration
3.2. Metamodel at the Function Layer

The model shown in Fig. 4 explains the relationships among application system, function, component and data. ApplicationSystem refers to the software systems supporting the combat business in SCS. Different ApplicationSystems may have several identical Functions. A Function is a decomposition of an application system from the users' view; it cannot be used as a system development guide, so the application system is decomposed into components from the designers' and developers' view. Component only refers to a single component rather than composed components, since a component composition is considered a Function in the metamodel. One Function can be implemented by one or more Components, and different Functions can be implemented by one Component. ComponentInstance and Interface, related to Component, have the same semantics as in UML [5]. FunctionalConstraint and NonfunctionalConstraint are textual descriptions of the functional and non-functional requirements of an ApplicationSystem, Function or Component. The vertical integration relationship between the data layer and the function layer is reflected by the consumption or production relationship between Data and BasicElement, which may be an ApplicationSystem, Function or Component.
Fig. 3. Metamodel at Data Layer
Fig. 4. Metamodel at Function Layer
3.3. Metamodel at the Workflow Layer

The main elements of workflow in SCS are Activity, DataPool, ControlFlow, ObjectFlow and RealtimeConstraint, as presented in Fig. 5a.

Activity: ExecuteActivity is an action which is executed by exactly one component instance. The detailed SignActivities and LogicActivities are shown in Fig. 5c. AndSplitActivity splits a control flow into multiple concurrent control flows. OrSplitActivity splits a control flow into multiple control flows, of which only one can be executed. AndJoinActivity joins multiple control flows into one control flow synchronously. OrJoinActivity joins multiple control flows into one control flow without synchronization.

DataPool: Data in a workflow is generalized as ApplicationData, related to the business, and ControlData, used to make decisions at an OrSplitActivity. DataPool is the element storing and managing one or more kinds of application data instances.

ControlFlow: a ControlFlow connects two activities with zero or more kinds of control data and a condition used for decision making at an OrSplitActivity.

ObjectFlow: ObjectFlows from a DataPool to an ExecutionActivity can be divided into two categories, represented in Fig. 5b: ConsumeFlow and ReferenceFlow. ConsumeFlow means the objects stored in the DataPool are consumed. ReferenceFlow means the objects are only read.

RealtimeConstraint: In SCS, some activities, control flows and application data instances have time constraints, represented by the RealtimeConstraint element. Li [12] identified six kinds of time constraints: delay constraint, flow delay, limited duration, deadline, time distance constraint and fixed-date. The delay constraint is suitable for the time constraints of workflows in SCS. Some activities and control flows cannot be interrupted, which is represented by the Interrupt attribute. The SCondition attribute of ExecutionActivity specifies the start condition of the activity.

The vertical integration relationship between the data layer and the workflow layer is reflected by the relationships between Data and DataPool, ControlFlow and ObjectFlow. The relationships among Workflow and ApplicationSystem, and between ExecuteActivity and ComponentInstance, reflect the integration relationship between the function layer and the workflow layer.
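The following sketch illustrates how an ExecutionActivity with its real-time attributes could be represented, using the attribute names that SIML4ASD later defines (ExeCmpt, SCondition, Tmin, Tmax, Trun, Unit); the Java types are hypothetical.

// Hypothetical delay constraint: minimum, maximum and actual duration
// plus the time unit, mirroring Tmin, Tmax, Trun and Unit in Table 1.1.
record RealtimeConstraint(int tMin, int tMax, int tRun, String unit) {
    // The constraint is violated if the actual duration falls outside [tMin, tMax].
    boolean violated() { return tRun < tMin || tRun > tMax; }
}

final class ExecutionActivity {
    String name;
    String exeCmpt;       // name of the executing component instance (ExeCmpt)
    String sCondition;    // activity whose completion starts this one (SCondition)
    boolean interrupt;    // whether the activity may be interrupted
    RealtimeConstraint constraint;
}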
a. main elements of workflow
b. ObjectFlow, ControlFlow and Data
c. SignActivity and LogicActivity
Fig. 5. Metamodel at Workflow Layer
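For illustration only (the class and attribute names below are our own, not the paper's notation), the split/join semantics and the delay-style RealtimeConstraint could be sketched as follows:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RealtimeConstraint:
    # Delay-style time constraint: minimum/maximum duration in a unit.
    t_min: float
    t_max: float
    unit: str = "s"

@dataclass
class Activity:
    name: str
    interruptible: bool = True                 # the Interrupt attribute
    constraint: Optional[RealtimeConstraint] = None

@dataclass
class ExecutionActivity(Activity):
    executing_component: str = ""              # exactly one component instance
    start_condition: str = ""                  # the SCondition attribute

@dataclass
class OrSplitActivity(Activity):
    # Control data evaluated here decides which single branch executes.
    branches: List[Activity] = field(default_factory=list)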
4. UML Based System Integration Modeling Language

To describe the integration model and support communication among designers, a modeling language is needed. In this section, a system integration modeling language for application system design (SIML4ASD), based on the metamodel, is defined by extending UML 2.0.

4.1. SIML4ASD

SIML4ASD reuses part of the elements of UML 2.0 through the <<merge>> relationship, and then extends some UML elements using the profile mechanism. SIML4ASD divides its elements into three packages: data, function and workflow.

Data package: the elements in the data package are ApplicationData, ControlData, DataInstance, DataDomain, Instantiation, Generalization, Composition, Aggregation and Class Diagram, where the first three are elements extended from UML and the others are elements of UML.

Function package: ApplicationSystem, Function, FunctionalConstraint, NonfunctionalConstraint, Component, ComponentInstance, Interface, Association, Realization, InterfaceRealization, Dependency, Component Diagram, UseCase Diagram and Sequence Diagram are in the function package, where the first four are extended from UML and the others are elements of UML. Association is the connection from ApplicationSystem to Function. Realization is the connection from Component to Function. InterfaceRealization and Dependency connect a Component to its provided and required Interfaces respectively. Besides this, the function package imports the data package, which means the elements of the data package are included in the function package. Thus, the Instantiation of the data package can be used to connect Component to ComponentInstance.

Workflow package: the elements in the workflow package are Workflow, ExecutionActivity, DataPool, ObjectFlow, ReferenceFlow, ConsumeFlow, ControlFlow, ExceptionFlow, OrSplitActivity, OrJoinActivity, AndJoinActivity, AndSplitActivity, InitialActivity and FinalActivity. All of these are extended from UML. The workflow package imports the function package.

The extended elements of SIML4ASD are listed in Table 1. The relationships among ApplicationSystem, Function, Component and Data are expressed by the CDataType and PDataType attributes. The relationship between DataInstance and ObjectFlow/ControlFlow is expressed by the DataType attribute. These three attributes use ";" to separate different DataTypes. The relationship between ExecutionActivity and ComponentInstance is expressed by the ExeCmpt attribute, which records the component name. The SCondition attribute, which records an activity name, expresses the start condition of an ExecutionActivity. A time constraint is expressed by four attributes: minimum duration (Tmin), maximum duration (Tmax), actual duration (Trun) and time unit (Unit).

4.2. Multi-view Modeling of SIML4ASD

The integration model involves many elements, and it is hard to describe all of them in one view. Because of the different concerns people have with the integration model, it is
better to describe the model in multiple views. SIML4ASD has three views based on UML diagrams: the data view, the function view and the workflow view. Data elements and the relationships between them are described in the data view using the class diagram. The function view has five sub-views. The application system view describes the application systems and the data exchange between them, using the component diagram. The application system function view describes the functions of each system, using the use case diagram. The component view describes a component and its interfaces, using the component diagram. The interaction of components is expressed in the component interaction view, using the sequence diagram. The function-to-component view describes the relationship between functions and components, i.e. which function is implemented by which components. The workflow view describes workflows using the activity diagram.

Table 1. Stereotypes of SIML4ASD (Modeling Concept / Stereotype / UML Element / Extended attributes)

DataDomain / ASD_DD / Package
ApplicationData / ASD_AD / Class
ControlData / ASD_CD / Class
DataInstance / ASD_DI / ClassInstance
ApplicationSystem / ASD_AS / Component / CDataType:string; PDataType:string
Function / ASD_F / UseCase / CDataType:string; PDataType:string
Component / ASD_C / Component / CDataType:string; PDataType:string
FunctionalConstraint / ASD_FC / Constraint
NonfunctionalConstraint / ASD_NFC / Constraint
Workflow / ASD_WF / Activity
ExecutionActivity / ASD_EA / Action / ExeCmpt:string; SCondition:string; Tmin:double; Tmax:double; Trun:double; Unit:enum
InitialActivity / ASD_IA / InitialNode
FinalActivity / ASD_FA / ActivityFinalNode
AndJoinActivity / ASD_AJA / JoinNode
OrJoinActivity / ASD_OJA / MergeNode
AndSplitActivity / ASD_ASA / ForkNode
OrSplitActivity / ASD_OSA / DecisionNode
ControlFlow / ASD_CF / ControlFlow / DataType:string; Tmin:double; Tmax:double; Trun:double; Unit:enum
ExceptionFlow / ASD_EF / ControlFlow
DataPool / ASD_DP / DataStoreNode
ObjectFlow / ASD_OF / ObjectFlow / DataType:string
ReferenceFlow / ASD_OFR / ObjectFlow
ConsumeFlow / ASD_OFC / ObjectFlow
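To illustrate the ";"-separated attribute convention described above, a small hypothetical helper (the function name and example values are ours) shows how a tool might read such attributes:

def split_data_types(attr):
    # Split a CDataType/PDataType/DataType value, which lists data
    # types separated by ';' (e.g. "TrackData;SensorReport").
    return [t.strip() for t in attr.split(";") if t.strip()]

# Hypothetical Function element consuming two data types, producing one.
consumed = split_data_types("TrackData;SensorReport")
produced = split_data_types("FireOrder")
assert consumed == ["TrackData", "SensorReport"]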
5. Related Work

Since system integration involves many areas, no single language is sufficient to describe all the elements involved in system integration. Balasubramanian [13] and Shetty [14], researchers at the Institute for Software Integrated Systems of Vanderbilt University, each provided a SIML by composing two kinds of Domain-Specific Modeling Languages. The SIMLs provided by Balasubramanian and Shetty are used to describe the integration model of two kinds of systems developed in different languages and styles. The integration of application systems involves three layers, i.e. the data layer, the function layer and the workflow layer. Modeling languages for data, function and workflow have made the following progress.

Data modeling: the metamodel is the core of a language. Currently, the well-known data metamodels are the CORBA [15], DDS [16], EXPRESS [17] and IDEF1X [3] data metamodels. CORBA supports fewer data types, especially collection types. DDS provides richer data types that are closer to the user; a UML profile for DDS has also been released by OMG. The data types supported by EXPRESS are fewer than those of DDS. IDEF1X is a data modeling language used to develop semantic data models. DDS supports the most data types among the four and is the main reference for the metamodel at the data layer, although the latter is more detailed than DDS.

Function modeling: the main function modeling methods are the Functional Flow Block Diagram (FFBD) [18], Hierarchical Input Process Output (HIPO) [19], the N2 Chart [20], the Structured Analysis and Design Technique (SADT) [21], IDEF0 [2] and UML [5]. Function modeling for the application system integration of SCS aims at describing application systems, functions, components and the relationships among them. These function modeling languages, unlike UML, cannot describe components; UML, however, does not define the relationship between functions and components.

Workflow modeling: there are mainly six workflow modeling methods: 1) the activity network diagram based method [22]; 2) the Petri Net based method [23]; 3) the state diagram based method [24]; 4) the dialog model based method [25]; 5) the UML activity diagram based method [5]; 6) the IDEF3 based method [4]. The methods based on Petri Nets, UML activity diagrams and IDEF3 are the most representative of the six. The first is commonly used for workflow simulation and analysis thanks to Petri Net theory, but it is hard to learn and understand. The latter two are widely used for workflow modeling because they are easy to learn and understand. However, the UML activity diagram is better than IDEF3 in extensibility, and activity diagrams are a recognized form of modeling real-time systems [26]. Since model description is the primary purpose of SIML4ASD, the method based on the UML activity diagram is chosen.

According to the survey above, UML and IDEF are suitable for integration modeling. Zhou [27] has used UML to describe a system integration model. Xiao [28] has used IDEF to describe a business integration model. Since components need to be described in the integration model at the function layer and IDEF does not support this requirement well, a UML-based system integration modeling language is defined for the application system design of SCS.
6. Conclusion

When designing the application systems of SCS, the data exchange, function assignment and workflow must be described clearly. However, no existing modeling language serves this purpose. Therefore, a SIML for the application system design of SCS is studied in this paper. There are three contributions: 1) an integration framework for the application systems of SCS is proposed, which consists of data, function and workflow; 2) based on this framework, the integration metamodel is introduced, explaining for the first time the data, application system, function, component, workflow and their relationships; 3) a UML-based SIML is defined, which is easy for system designers to learn, use and understand. This work is meaningful for two reasons: 1) as the integration metamodel is defined and a UML-based SIML4ASD is provided, system designers can describe the integration model of application systems easily and clearly in multiple views, which is helpful for communication among designers; 2) systems are usually evaluated after their design, and automatic evaluation methods have become more and more mature in recent years; a clear and unambiguous definition of the integration metamodel and SIML4ASD is the basis of automatic system evaluation. This work is supported by the National Natural Science Foundation of China under Grant No. 60773155 and the National Grand Fundamental Research 973 Program of China under Grant No. 2007CB310803.
7. References

[1] Mei Jiao, "Development Features and Prospects of Shipborne Combat System at Home and Abroad", Ship Command and Control System, vol. 5, July 1996, pp. 817. (in Chinese)
[2] Draft Federal Information Processing Standards Publication 183, Integration Definition for Function Modeling (IDEF0), 1993.
[3] NIST, Integration Definition for Information Modeling (IDEF1X), 21 Dec 1993. www.itl.nist.gov/fipspubs/idef1x.doc
[4] Richard J. Mayer et al., Information Integration for Concurrent Engineering (IICE): IDEF3 Process Description Capture Method Report, Logistics Research Division, Wright-Patterson AFB, OH 45433.
[5] OMG, OMG Unified Modeling Language (OMG UML), Superstructure, V2.1.2. http://www.omg.org/docs/formal/07-11-02.pdf
[6] Deng Su, Zhang Wei-Ming, Huang Hong-bin, Information System Integration Technology, 2nd ed., Publishing House of Electronics Industry: Beijing, 2004. (in Chinese)
[7] Wang Hui-Bin, Wang Jin-Yin, Information System Integration and Fusion Technology and Its Application, National Defense Industry Press: Beijing, 2006, pp. 21-75. (in Chinese)
[8] Zhang Xiao-Juan, "System Integration in the Contemporary Business Information Systems - Framework, Implementation and Case Study", 4th International Conference on Wireless Communications, Networking and Mobile Computing, Dalian, 2008, pp. 1-6.
[9] Edward A. Stohr, Jeffrey V. Nickerson, Enterprise Integration: Methods and Direction, Oxford University Press: New York, 2003, pp. 227-251.
[10] Giachetti R. E., "A Framework to Review the Information Integration of the Enterprise", International Journal of Production Research, 2004, Vol. 42, Iss. 6, pp. 1147-1166.
[11] Workflow Management Coalition, The Workflow Reference Model, TC00-1003 v1.1, 1995.
[12] Li Hui-fang, Fan Yu-shun, "Overview on Managing Time in Workflow Systems", Journal of Software, vol. 13(8), 2002, pp. 1552-1558. (in Chinese)
[13] Balasubramanian Krishnakumar, Schmidt Douglas C., Molnar Zoltan, Ledeczi Akos, "Component-based System Integration via (Meta)Model Composition", 14th Annual IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, Tucson, AZ, 2007, pp. 93-102.
[14] Shetty S., Nordstrom S., Ahuja S., Di Yao, Bapty T., Neema S., "Systems Integration of Large Scale Autonomic Systems using Multiple Domain Specific Modeling Languages", 12th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, 2005, pp. 481-489.
[15] OMG, Common Object Request Broker Architecture - for embedded. http://www.omg.org/cgi-bin/doc?ptc/2006-05-01.pdf
[16] OMG, Data-Distribution Service for Real-Time Systems, V1.2. http://www.omg.org/cgi-bin/doc?formal/07-01-01.pdf
[17] ISO 10303-11:2004, Industrial automation systems and integration - Product data representation and exchange - Part 11: Description methods: The EXPRESS language reference manual.
[18] Function model. http://en.wikipedia.org/wiki/Function_modeling#cite_note-11
[19] Sandia National Laboratories, Sandia Software Guidelines, Volume 5: Tools, Techniques, and Methodologies, 1992. http://www.prod.sandia.gov/cgi-bin/techlib/access-control.pl/1985/852348.pdf
[20] NASA, "Techniques of Functional Analysis", in: NASA Systems Engineering Handbook, June 1995, p. 142.
[21] John Mylopoulos, Conceptual Modelling III: Structured Analysis and Design Technique (SADT). Retrieved 21 Sep 2008.
[22] Sadiq S., Orlowska M., "On correctness issues in conceptual modeling of workflows", Proceedings of the 5th European Conference on Information Systems, Cork, Ireland: Cork Publishing Ltd, 1997.
[23] van der Aalst W. M. P., "The Application of Petri Nets to Workflow Management", The Journal of Circuits, Systems and Computers, 1998, 8(1), pp. 21-66.
[24] Georgakopoulos D., Hornick M., Sheth A., "An overview of workflow management: from process modeling to workflow automation infrastructure", Distributed and Parallel Databases, 1995, 3(2), pp. 119-152.
[25] Leymann F., Altenhuber W., "Managing business processes as an information resource", IBM Systems Journal, 1994, 33(2), pp. 326-348.
[26] Chang E., Gautama E., Dillon T. S., "Extended activity diagrams for adaptive workflow modelling", Fourth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, Magdeburg, 2001, pp. 413-419.
[27] Yonghua Zhou, Yuliu Chen, Huapu Lu, "UML-based Systems Integration Modeling Technique for the Design and Development of Intelligent Transportation Management System", IEEE International Conference on Systems, Man and Cybernetics, 2004, pp. 6061-6066.
[28] Xiao Yan-Ling, Xu Fu-Yuan, Hu Wen-Bo, "Business Process Reengineering Based on IDEF Methods", Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration, 2004, pp. 265-270.
Contribution to Interoperability of Executive Information Systems Focusing on Data Storage System

Guillaume Vicien1, Yves Ducq1, Bruno Vallespir1

1 IMS UMR 5218 CNRS, Dept LAPS / GRAI Group, 351, cours de la Libération, 33405 Talence Cedex, France
Abstract. Nowadays, performance improvement in order to reach economic, technical and social objectives has become essential for all enterprises that want to stay competitive. For this reason, enterprises should set up an executive information system, which gives them their level of performance at any moment through different performance indicators. The development of those systems is long and difficult, and various interoperability problems can occur, especially during the design of the data storage system. Those interoperability problems disturb the system and the aggregation of the different performance indicators. This article aims, in a first part, to propose a framework for the design and implementation of the data storage system. In a second part, we analyse the different possible architectures and compare them.

Keywords: executive information system, performance indicators, interoperability
1. Introduction

Ever since Johnson and Kaplan published in 1987 their seminal book entitled "Relevance Lost – The Rise and Fall of Management Accounting" [1], performance measurement has gained increasing popularity both in practice in enterprises and in research [2]. Today, enterprises have understood that they need performance measurement and performance management practices in order to be competitive and increase their development. In fact, managers are continually measuring or requesting measures, and management could hardly exist nowadays without measurement [3]. To measure their performance, enterprises have to define a performance indicator system that makes it possible to verify the accomplishment of organizational objectives and helps managers make the best decisions. To define this system, many methodologies have been developed over the years, such as the Balanced Scorecard [4], the Performance Prism [5] or ECOGRAI [6].
When all the performance indicators are defined, the enterprise has to set up an executive information system that is able to present the indicators in different forms (dashboards, OLAP analysis, data mining...). An executive information system uses different computing tools, such as ETL (Extract, Transform and Load) tools and a data warehouse, to aggregate, store and present the different performance indicators. The first applications of computing tools to help managers make decisions appeared at the beginning of the 1990s, and the development of data warehousing and OLAP technologies dates from the mid-1990s [7, 8]. Nowadays, the Business Intelligence market, which includes all these computing tools, is well developed and still growing very quickly. This article aims to show the importance of the choice of structure and modelling used for the data storage system of an executive information system, in order to aggregate the different performance indicators without interoperability problems. This paper is composed of two parts. The first one begins with the presentation of executive information systems in order to introduce the different interoperability problems that can be found in those systems. The second one focuses on the data storage process, proposing a framework for its design and comparing the different possible architectures.
2. Executive Information Systems

Nowadays, executive information systems are commonplace in all sectors of industry and commerce and in enterprises of all sizes. Indeed, we have observed in recent years an increasing interest of small and medium-sized enterprises in executive information systems, due to the decreasing price of computing applications. However, no methodology is found for the implementation of a performance indicator system within executive information systems, and several failures have been noticed in enterprises [9, 10].

2.1. Critical Success Factors of Executive Information Systems

Executive information system development and implementation, which usually takes between 6 and 12 months [11], is a risky task, because many elements take part in this process and are closely linked to one another. The risk that something goes wrong is very high [12]. A study realized by Poon [13] showed that failures occur in 70% of cases and can lead to the relinquishment of the project. Furthermore, human resources are highly involved in this kind of project and are very important to its success. Indeed, they have many responsibilities, including selecting hardware and software, identifying information requirements, locating and accessing needed information, designing screens and providing training [14]. Many articles in the literature deal with executive information system success factors; the work of Salmeron [11] has been selected here. This research work proposes three categories to study the success keys: human
resources, technical and information resources, and system operation. The study results are presented in Table 1 below.

Table 1. Keys to EIS success [11]

Human resources: user's interest (96.55%); competent and balanced EIS staff (65.52%); executive sponsor's support (62.07%); others (6.9%)
Technical and information resources: right information needs (96.55%); suitable software/hardware (68.96%); others (10.34%)
System operation: flexible and sensitive system (79.31%); speedy development of a prototype (48.28%); others (16%)
From this table we can conclude that there are three main success keys for an executive information system. The first one, "user's interest", requires enterprises to involve their employees from the beginning of the system design. The second one, "right information needs", is obvious if an EIS intends to satisfy the managers' information needs. The third one, "flexible and sensitive system", is inherent to the concept of executive information system: the final users want a complex, integrated and interoperable system that will evolve easily with their new expectations.

2.2. Executive Information System Process and Architecture

Further to these success keys, we now examine in more detail the process and the architecture of executive information systems.

2.2.1 Process

The executive information system is composed of three sub-processes. The first process, "data recovery", consists in the recovery of the operational data stored in the different operational data sources of the enterprise. This process is itself composed of three sub-processes: extraction, transformation and loading. Extraction is the process that extracts data from enterprise operational data sources using adapters. Those adapters consult metadata to determine which data to extract, how and when. Transformation is the process which transforms the extracted data into a consistent format, applying rules defined in the metadata repository. This process is also responsible for data validation and data accuracy. The load process is responsible for loading the transformed data into the data warehouse using target data adapters. Those adapters also consult metadata [18]. The second process, "indicators aggregation and storage", consists in storing the data and processing it in order to aggregate the different performance indicators. The data storage also allows the historical management of the indicators and facilitates access to the information.
Finally, the third process, "indicators restitution", allows final users to retrieve the performance indicators in different forms. The most widespread restitution forms are dashboards, OLAP analysis and data mining. Those restitution forms have to be developed according to the final users' expectations.

2.2.2 Architecture

The executive information system process is supported by a specific architecture composed of different computing applications (Figure 1).
Fig 1. Executive information system architecture
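To make the recovery process of Section 2.2.1 concrete, here is a minimal Python sketch (the metadata structure, table names and function bodies are our own assumptions, not part of the architecture described above):

# Each metadata entry tells the adapters what to extract,
# how to transform it, and where to load it.
metadata = [
    {
        "source": "sales_db.orders",         # hypothetical operational source
        "target": "dw.fact_orders",          # hypothetical warehouse table
        "rename": {"qty": "quantity"},       # a simple transformation rule
    },
]

def extract(source):
    # Stand-in for a source adapter consulting the operational store.
    return [{"qty": 3, "item": "A"}]         # dummy rows for illustration

def transform(rows, rename):
    # Apply the rules defined in the metadata repository.
    return [{rename.get(k, k): v for k, v in row.items()} for row in rows]

def load(rows, target):
    # Stand-in for a target data adapter.
    print("loading %d row(s) into %s" % (len(rows), target))

for entry in metadata:
    load(transform(extract(entry["source"]), entry["rename"]), entry["target"])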
This architecture is composed of three main parts, following the executive information system process. The first one, "recovery", is composed of the different operational data stores and the ETL (Extract, Transform and Load) tool, which performs the data transfer to the data warehouse. The "storage" part is composed of the different elements which allow the storage of the data and of the different performance indicators. We will see later in this article that the storage system can differ according to the enterprise's expectations, and those different architectures will be analysed. The "restitution" architecture is developed according to the final users' expectations. Indeed, many computing tools are on the market and are used to propose different restitution modes.

2.3. Interoperability in Executive Information Systems

Due to the complex architecture of executive information systems, various interoperability problems can occur. We use the enterprise interoperability framework developed by Chen [16], shown in Figure 2, to characterise the interoperability problems considered in this research work.
Fig 2. Enterprise interoperability framework
The research work presented in this paper focuses on the data barrier, which is the most important one to solve in the implementation of the executive information system (grey part). We have identified four major interoperability problems. For each problem, we indicate which solution has to be set up in order to solve it, and position it in the framework.

Interoperability problem 1: data transfer between operational data sources and the data warehouse. Enterprises nowadays use various operational computing tools, and the ETL has to be able to transfer data between those different applications. The ETL tools found on the market provide different adapters that allow data transfer. If a specific adapter does not exist, it is possible to develop it, but it is preferable for the enterprise to choose its ETL according to its operational data sources in order to avoid this development. The same kind of problem exists between the ETL and the data warehouse. Based on the previous description, this interoperability problem is positioned in the framework as Data, C, Federated.

Interoperability problem 2: when the data recovery process is launched, the ETL has to know which data to recover, where, when, which processing has to be done, and where the data has to be stored. During the recovery process design, it is necessary to specify all the information about the operations which have to be realized on the data. To do so, it is necessary to develop a metadata repository where all this information will be stored. Furthermore, setting up a metadata repository makes system evolution easier, because possible evolutions will be recorded directly in the metadata repository. This interoperability problem is positioned in the framework as Data, Conceptual, Unified.

Interoperability problem 3: the data that have to be stored in the data warehouse come from different and heterogeneous operational data sources, and semantic interoperability problems will certainly appear in the data warehouse. It is therefore necessary to define and adopt a common semantics for all the data that have to be stored in the data warehouse. This standardization of the data will be supported by a metadata repository. The interoperability problem is positioned in the framework as Data, Conceptual, Unified.
Interoperability problem 4: the setting up of OLAP analyses forces the designer to use a particular modelling. Indeed, OLAP analyses require a snowflake modelling of the data: the data are stored in two different kinds of tables, fact and dimension. The interoperability problem is classified in the framework as Data, Conceptual, Unified.

In conclusion, we see that most of the interoperability problems that occur in an executive information system can be solved using a unified approach. This result is consistent, given that the goal of an executive information system is to centralize data extracted from various and heterogeneous data sources. Furthermore, all those interoperability problems can be identified and solved during the process design in order to avoid system failure.
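As a small illustration of this unified, metadata-based approach (the repository structure and field names below are hypothetical):

# Hypothetical metadata repository entries mapping heterogeneous source
# fields onto one common semantics adopted in the data warehouse.
semantic_map = {
    ("crm.customers", "cust_name"): "customer_name",
    ("erp.clients", "name"): "customer_name",
    ("erp.clients", "tel"): "customer_phone",
}

def to_common_format(source, row):
    # Rename source-specific fields to the warehouse's common semantics.
    return {semantic_map.get((source, k), k): v for k, v in row.items()}

print(to_common_format("erp.clients", {"name": "Acme", "tel": "0555"}))
# -> {'customer_name': 'Acme', 'customer_phone': '0555'}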
3. Storage System

The previous section presented the interoperability problems in executive information systems; in this section we present our contribution on the data storage system. The section is composed of three parts. First, we propose a framework for setting up the storage system. Second, we choose the architecture of the storage system that allows interoperability problems to be avoided. Finally, we propose a formalism to model the different elements of the architecture.

3.1. A Five Stage Framework

Many articles in the literature propose a framework for building a storage system [17, 18]. We propose to use a five-stage framework.

3.1.1 Stage 1 – Business requirements

The first stage consists in the recovery of the different requirements of the system's final users. This part is one of the most important, as mentioned in Section 2.1. This step is generally realized through interviews of the system's final users, and all the information can be stored in specification sheets. A specification sheet should be made for each performance indicator: it centralizes the information and makes it easy to find.

3.1.2 Data sourcing

When all the information is recovered, it is necessary to make sure that all the data needed to aggregate the performance indicators are available, and to identify the processing to perform on the data in order to store it in the data warehouse. To model the data recovery process, we suggest using the model proposed in our previous work [19].

3.1.3 Target architecture

The choice of the architecture is very important, and different architectures are found in the literature. This step is detailed in the next section.
3.1.4 Tool selection

When the architecture is defined, designers have to choose the different tools which will compose it. This depends on the chosen architecture and the final users' requirements. Indeed, designers have to answer the users' requirements while ensuring that the chosen architecture will support the different computing applications.

3.1.5 Administration

One of the commonly neglected issues is the administration of the data warehouse after its implementation. Appropriate technical resources must be assigned to monitor query load and performance, to handle dynamic changes to the data structures, and to ensure platform scalability with increased user demands. Generally, data warehousing projects use the prototype approach to development, and the success of the prototype determines the overall success of the project.

3.2. Data Storage Architecture

The data warehousing literature provides discussions and examples of a variety of architectures [20]. Four main architectures are found: independent data marts, the data mart bus architecture with linked dimensional data marts, hub and spoke, and the centralized data warehouse. The hub and spoke architecture is considered the most complete, and we have chosen to use it. This architecture is detailed below. With this architecture, attention is focused on building a scalable and maintainable infrastructure. Data are stored in two different databases, the data warehouse and the data marts. The data warehouse stores all the operational data extracted from data sources which are necessary for the aggregation of the different performance indicators. Dependent data marts are developed for departmental, functional-area or specific purposes (e.g., data mining). The data marts store only the performance indicators and some atomic data for specific analyses (e.g., OLAP analysis). Data mart independence improves the speed of the system and, furthermore, makes it more easily scalable and maintainable. However, this architecture has two disadvantages. The first is cost: the system is more expensive due to the larger quantity of data to store. The second is the difficulty of development: indeed, two different modellings have to be used, one for the data warehouse and one for the data marts. When the architecture is chosen, it is necessary to carry out the modelling of the data warehouse and the data marts.

3.3. Data Storage System Modelling

As explained in the previous section, the data warehouse and the data marts have different objectives, so it is necessary to use a specific modelling for each one.
3.3.1 Data warehouse modelling

The goal of the data warehouse is to centralize all the operational data extracted from the different data sources. The first step of the data warehouse building is the creation of the metadata repository. The metadata repository makes it possible to solve interoperability problem 3, defined in Section 2.3, about the heterogeneity of the data. Designers have to develop a common format for data storage. This common format has to be developed according to the information recovered about the data, in order to limit the amount of processing which has to be done on the data during the extraction process. Then, it is necessary to model the data storage using the third normal form of relational databases; the main interest of the third normal form is to avoid data redundancy. This modelling consists in the definition of the different tables and the relations between the tables, and has to be done using an entity-relation model.

3.3.2 Data marts modelling

The modelling of the data marts is very important in order to avoid interoperability problem 4, i.e. the need for specific analyses or applications (e.g. OLAP analysis) to use a specific modelling. It is therefore important to have well defined the users' requirements during the first stage of the framework. In order to allow the aggregation of the different performance indicators and give the final users access to detailed data, it is necessary to organize the data in a star schema. A star schema is composed of two kinds of tables, and we have defined a formalism to model the different elements of the star schema. Fact tables store the different measures of the fact and are composed of different elements: (1) fact name, (2) key name, (3) list of entries into dimensions and (4) specific attributes of the fact. The fact table formalism is: Fact_name [(F), (Fact_Key), (Entries), (Attributes)]. Dimension tables allow analyses of the fact according to the dimension and are also composed of different elements: (1) dimension name, (2) key name, (3) list of the different members of the dimension (for example, the dimension Time could have as members: year, month, week and day), (4) specific attributes of the dimension. The dimension table formalism is: Dimension_name [(D), (Dimension_key), (Members), (Attributes)]. To illustrate the formalism, we take the example of the performance indicator "number of stock shortages per item and product".
Fact table: Stock_Shortages [(F), (Key_Stock_Shortage), (Key_Item, Key_Product, Key_Time), (Causes)].
Dimension tables:
Items [(D), (Key_Item), (Item_Number), (Item_Name)]
Product [(D), (Key_Product), (Product_number), (Product_name)]
Time [(D), (Key_Time), (Day, Week, Month, Year), ()]
Figure 3 illustrates the star schema of the example.
Fig 3. Example of star schema
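As an illustration, the Stock_Shortages star schema above can be rendered as relational tables. The following Python sketch prints plausible SQL DDL from the paper's formalism; the column types (INTEGER/VARCHAR) are our own assumptions:

def dimension_ddl(name, key, members, attrs):
    # One dimension table: surrogate key plus member/attribute columns.
    cols = ["%s INTEGER PRIMARY KEY" % key]
    cols += ["%s VARCHAR(50)" % c for c in list(members) + list(attrs)]
    return "CREATE TABLE %s (\n  %s\n);" % (name, ",\n  ".join(cols))

def fact_ddl(name, key, entries, attrs):
    # One fact table: its own key, entries into dimensions, attributes.
    cols = ["%s INTEGER PRIMARY KEY" % key]
    cols += ["%s INTEGER" % e for e in entries]    # foreign keys into dimensions
    cols += ["%s VARCHAR(200)" % a for a in attrs]
    return "CREATE TABLE %s (\n  %s\n);" % (name, ",\n  ".join(cols))

print(dimension_ddl("Items", "Key_Item", ["Item_Number"], ["Item_Name"]))
print(dimension_ddl("Time", "Key_Time", ["Day", "Week", "Month", "Year"], []))
print(fact_ddl("Stock_Shortages", "Key_Stock_Shortage",
               ["Key_Item", "Key_Product", "Key_Time"], ["Causes"]))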
When the star schemas are finished for all the performance indicators which have to be aggregated, it is necessary to find the common dimensions in order to build the complete model of each data mart. Figure 4 shows a global star schema after the grouping of dimensions.
Fig 4. Global star schema
4. Conclusion

This paper has identified the different interoperability problems which can occur during the design of the data storage system of an executive information system. Thanks to their positioning in the interoperability framework, designers can identify them quickly and propose a solution. In the second part of this paper, the framework proposed for the data storage system makes it possible to identify the different steps of the design and implementation of the system in order to ensure its success. We have then chosen to use the hub and spoke architecture and proposed a formalism to build the model of the data storage elements. Our future work will be devoted to the restitution process, and more particularly to dashboard design. It will be interesting to design the dashboards using different concepts, such as decision centres or the coherence analysis developed in [21]. Finally, we will have to apply these different results to a real case in order to verify their contribution and to adjust them according to the results obtained.
5. References

[1] Johnson H. T., Kaplan R. S. (1987), Relevance Lost – The Rise and Fall of Management Accounting, Harvard Business School Press, Boston, MA.
[2] Bittici U., Garengo P., Dörfler V., Nudurupati S. (2009), Performance measurement: Questions for Tomorrow, APMS 2009 conference, Bordeaux, France.
[3] Lebas M. J. (1995), Performance measurement and performance management, Production Economics, Volume 41, pp. 23-35.
[4] Kaplan R. S., Norton D. P. (1996), The balanced scorecard, Harvard Business Press.
[5] Neely A., Adams C., Kennerley M. (2002), The performance prism, Prentice Hall.
[6] Bitton M. (1990), ECOGRAI : Méthode de conception et d'implantation de systèmes de mesure de performance pour organisations industrielles, Thèse de doctorat, Université Bordeaux 1.
[7] Haouet C. (2008), Informatique décisionnelle et management de la performance de l'entreprise, Cahier de recherche numéro 2008-01.
[8] Cody W. F., Kreulen J. T., Krishna V., Spangler W. S. (2002), The integration of business intelligence and knowledge management, IBM Systems Journal, Volume 41, Number 4, pp. 697-713.
[9] Nord J., Nord G. (1995), Executive information system: a study and comparative analysis, Information & Management, Volume 29, pp. 95-106.
[10] Rainer R. K., Watson H. J. (1995), The keys to executive information system success, Journal of Management Information Systems, Volume 12, pp. 83-98.
[11] Salmeron J. L. (2003), EIS success: keys and difficulties in major companies, Technovation, Volume 23, pp. 35-38.
[12] Rainer R. K., Watson H. J. (1995), The keys to executive information systems success, Journal of Management Information Systems, Volume 84.
[13] Poon P., Wagner C. (2001), Critical success factors revisited: success and failure cases of information systems for senior executives, Decision Support Systems, Volume 30, pp. 393-418.
[14] Watson H. J., Aronson J. E., Hamilton R. H., Iyer L., Nagasundaram M., Nemati H., Suleiman J. (1996), Assessing EIS benefits: a survey of current practices, Journal of Information Technology Management.
[15] Zode M. (2007), The evolution of ETL, from hand-coded ETL to tool-based ETL, Cognizant Technology Solutions.
[16] Chen D. (2009), Framework for enterprise interoperability, CIGI 2009 Conference, Bagnères de Bigorre, France.
[17] Murtaza A. H. (1998), A Framework for Developing Enterprise Data Warehouses, Information Systems Management, Volume 15, pp. 21-26.
[18] Mollard D. (2006), Systèmes décisionnels et pilotage de la performance, Lavoisier.
[19] Vicien G., Ducq Y., Vallespir B. (2009), Contribution to Interoperability of Decision Support Systems focusing on the data recovery process, IESA 2009 Conference, Beijing, China.
[20] Watson H. J., Ariyachandra T. (2005), Data Warehouse Architectures: Factors in the Selection Decision and the Success of the Architectures, report available at http://www.terry.uga.edu/~hwatson/DW_Architecture_Report.pdf
[21] Ducq Y. (1999), Contribution à une méthodologie d'analyse de la cohérence des systèmes de production dans le cadre du modèle GRAI, Thèse de doctorat, Université Bordeaux 1.
GRAI-ICE Model Driven Interoperability Architecture for Developing Interoperable ESA

Lanshun Nie1, Xiaofei Xu1, David Chen2, Gregory Zacharewicz2, Dechen Zhan1

1 School of Computer Science and Technology, Harbin Institute of Technology, 150001 Harbin, China.
2 IMS/LAPS/GRAI, University Bordeaux 1, 351 Cours de la Liberation, 33405 Talence Cedex, France.
Abstract. This paper presents the GRAI-ICE Model Driven Interoperability (MDI) Architecture, which is developed based on the MDA (Model Driven Architecture) of OMG and some initial work performed in INTEROP NoE. This MDI architecture aims at supporting the development of changeable on-demand and interoperable ESA (Enterprise Software Applications). The architecture defines five modelling levels, i.e. Top CIM, Bottom CIM, object-oriented PIM, pattern-oriented PSM, and component- and configuration-oriented CODE. This paper presents in detail the core concepts and rationale of each modeling level. An application example in the nuclear equipment industry is outlined.

Keywords: MDI, MDA, ESA, Interoperability, Architecture, Model transformation
1. Introduction

Today, with the increasing pressure from a competitive market, enterprises, particularly SMEs, are obliged to work in networks in order to meet changeable and customized demand. The ESA (Enterprise Software Application) supporting the business should be adaptable and flexible to business change. In such a networking approach, and with the development of the globalization of the economy, enterprises have to cooperate with partners such as material suppliers, service suppliers, distributors, retailers and public administrations to provide high-quality products and services to customers. In order to cooperate, enterprises are required to have interoperability capabilities at the level of both business and IT/software systems, i.e. ESA. Changeable on-demand and interoperable ESA brings great challenges to software architecture, implementation and deployment, in a word, to software development [1, 2]. This paper demonstrates how MDI (Model Driven Interoperability) contributes to these developments.
1.1. Problems and Research Objectives

Code is the core artefact in the traditional software development process. This process is usually very long and its cost is usually very high; it is not rapid and adaptive enough to develop changeable on-demand interoperable ESA. For years there has been a shift from code-based software development to model-based software development, the best-known approach being MDA (Model Driven Architecture) [3]. With the objective of tackling interoperability at each level of life cycle development, the MDI (Model Driven Interoperability) architecture has been introduced by generalising the steps of MDA. It deals with interoperability between ESAs during their respective developments. In this new methodology, conceptual models of enterprise interoperability are elaborated to facilitate interoperability at the coding level. Software development starts from the business level, where a Computation Independent Model (CIM) is elaborated and then transformed into an executable ESA system by automatic/semi-automatic transformation and code generation tools. Some research questions remain, due to the fact that MDA, and by inheritance MDI, only defines the framework and some guiding rules, but neither a detailed method to follow nor modelling languages. For example, enterprise modelling techniques are not well considered at the CIM level. Should the CIM level be further divided in order to model both business and software requirements? The interoperability among adjacent levels is also not sufficiently studied. The research objective is to address the problems mentioned above through the elaboration of a Model Driven Interoperability Architecture (entitled GRAI-ICE MDI), taking into account MDA and some initial work performed in INTEROP NoE. This MDI architecture aims at defining both the modelling levels of the architecture and the model transformation methods between adjacent levels. The proposed GRAI-ICE MDI will support the development of interoperable, configurable and executable next-generation ESA.

1.2. Background and Related Works

The Model Driven Interoperability (MDI) architecture [2, 3] aims at decreasing the complexity and difficulty of interoperable software system development by decomposing a complex, abstract system into multiple levels. The basic ideas are: (1) decoupling the requirement specification and function specification from the implementation specification on a specific platform; (2) establishing a computation independent model (CIM) which describes the business requirements, a platform independent model (PIM) which describes the design of the software, and a platform specific model (PSM) which describes the implementation mechanism of the system; (3) transformation and mapping from upper-level models to lower-level models, refining the models and automatically generating software from them [4, 5]. Modelling is recommended using languages and technologies such as UML, MOF and CWM, and model transformation using technologies such as meta-models and patterns [6, 7]. However, MDI only defines some guiding rules, not details such as concrete modeling elements, modeling methodology and model transformation methods [8]. Consequently, it is necessary to select or refine modeling methods and organize them consistently.
Researchers have studied some of the best-known existing modeling methodologies, languages and tools, such as CIMOSA [9], GRAI [10] and IDEF [11]. UML (Unified Modeling Language) is the most widely used modeling language for software specification [11]. The former are suitable for the business level, i.e. the CIM level; the latter is suitable for software-oriented modelling, i.e. the PIM and PSM levels. However, there still exists a big semantic gap between them, and no existing approach shows how to integrate them into a consistent architecture. Therefore, the question of how to select adequate formalisms from the existing business modeling methods and software modeling methods, and how to refine, combine and integrate them into an MDI architecture, is still open. Motivated by the requirement of developing changeable on-demand and interoperable ESA, and based on some existing modeling languages, a Model Driven Interoperability architecture, entitled the GRAI-ICE MDI architecture, is proposed. GRAI refers to the GRAI methodology and is used at the CIM level to describe business-user-oriented requirements. ICE means Interoperable, Configurable and Executable. This approach is based on some initial work performed in INTEROP NoE and further developed within the frame of INTEROP VLab through a research collaboration between University Bordeaux 1 (IMS/LAPS), Harbin Institute of Technology (HIT) and the company GFI. It aims at defining concrete modelling elements (concepts) and the relations among them at all levels of the MDI architecture, so that the model of an ESA can be described precisely and consistently by these elements. This architecture supports top-down model transformation (CIM→PIM→PSM→CODE) and transformation from code to system deployment. Also within INTEROP NoE, R. Grangel et al. proposed a similar MDI framework. They broke the CIM level down into two sub-levels, i.e. top-level CIM and bottom-level CIM. Their research work focuses on the transformation from the top-level CIM model to the bottom-level CIM model, with an example of transforming GRAI extended actigrams into UML activity diagrams [12, 13].

1.3. Purpose and Structure of the Paper

The purpose of the paper is to present preliminary research results on a Model Driven Interoperability architecture for developing interoperable ESA. The focus of this paper is on the set of modelling methods and languages defined in the MDI; the related transformation methods will be presented in a forthcoming paper. After the introduction given in section 1, the global GRAI-ICE MDI architecture is presented and explained in section 2. Sections 3 to 5 then explain the problems, targets of modelling, core concepts and relations, as well as the modelling methods used at the three main levels (CIM, PIM and PSM). Section 6 presents the last level of the architecture (ESA component and system level) and outlines the principle of model transformation and the supporting tools. An application case is discussed in section 7. Finally, sections 8 and 9 respectively conclude the paper and discuss future perspectives.
2. Overview of the GRAI-ICE MDI Architecture

GRAI-ICE MDI follows the basic structure of the MDA proposed by OMG. It includes four modelling levels and one system level, as well as transformation methods between these levels. The global view of the GRAI-ICE MDI architecture is shown in Fig. 1. Note that the CIM level is divided into two sub-levels: Top CIM (business-oriented CIM) and Bottom CIM (ESA-oriented CIM).
Fig. 1. GRAI-ICE MDI architecture
Top CIM, which is business oriented and based on the GRAI methodology, is a set of models focusing on the description of the decision-making process, the business process and the physical product flow process. Bottom CIM, which is ESA oriented, is extracted from Top CIM and describes in more detail the business requirements which must be supported by the ESA, i.e. function/process, information, organization and decision-making structure. ICE-PIM is concerned with the object-oriented design of the ESA. It focuses on identifying and describing business objects (BO). It includes the BO model, the BO-based workflow model, the role model and the data model.
ICE-PSM, which describes the implementation of the ESA, is pattern oriented and XML based. It includes the business component (BC) model, the static configuration model, the executable workflow model and the relational data model. The ESA system is assembled from business components. It provides services to users by parsing the executable workflow model and scheduling the functions of business components, supported by a workflow engine.
3. Top CIM and Bottom CIM

There are two main modelling objectives at the CIM level. On the one hand, CIM supports business process modeling, analysis and optimization, with business people as the stakeholders. On the other hand, CIM also supports the modeling of ESA requirements, with business people and software developers as the stakeholders. Consequently, the CIM level of GRAI-ICE MDI is further divided into two sub-levels: Top CIM (business oriented) and Bottom CIM (ESA oriented). The business-oriented CIM aims at modeling three kinds of process (or flow). The first is the physical process and physical product flow, which supports the analysis and optimization of time, cost, quality and needed services. The second is the business process and document flow, which supports managing, controlling and recording the physical process. The third is the decision process and order/plan flow, which aims at controlling and coordinating the physical process and resources. Besides these, it is also necessary to describe the enterprise organization and the decision-making structure. The core modeling concepts/elements of the business-oriented CIM are: (1) Physical Product, (2) Information, (3) Function/Action, (4) Resource/Facility, (5) Organization, (6) QQV (Quality-Quantity-Value points), i.e. the point or location where input, output or support should be measured, verified, recorded, controlled, processed or reported, and (7) Connection Sign. The models at the business-oriented CIM level are composed of these elements and their relations. The meta-model of the business-oriented CIM is shown in Fig. 2.
Fig. 2. Metamodel of business oriented CIM
All elements can be further classified into sub-elements. For example, the element "Function/Action" is classified into "function", "business action", "decision action" and "physical action". The element "Information" is classified into "document", "order", "event" and "report". Each view of the CIM model can be exactly described by a combination of these elements and their sub-classes. The GRAI-ICE CIM model is composed of three kinds of process diagram and several UML class diagrams for information, organization, resources, etc. The physical process model describes the transformation and state transition of the physical product in manufacturing. It includes six kinds of basic elements, i.e. product, action, order, resource, organization and QQV. The business process controlling/recording the physical process describes the transformation and state transition of documents, such as receipts, orders and reports. It includes four kinds of basic elements, i.e. information/documents, order, action and organization. The decision process model is based on the GRID model of the GRAI methodology. It has two dimensions: one is the decision level, for example the strategic, tactical and operational levels; the other is the functions. The decision process model describes both the decision-making process and its results, which coordinate various functions and control other decision-making processes. Information entities and their relations can be identified by analyzing the information flow embedded in the business process, in order to build the information model. An organization model also needs to be constructed at this level. Relations inside each kind of element, such as information, organization, function, product and resource, are described using UML class diagrams. Analysis and simulation can be performed on the business-oriented CIM (AS-IS) of the existing system. Then, by optimization and re-engineering, the TO-BE CIM can be obtained. The business requirements for the ESA are analyzed on the basis of the TO-BE CIM. The result is the ESA-oriented CIM, which focuses on software requirement definition. For example, the physical process is not concerned, since the ESA does not control this process directly. We refine the models extracted from the TO-BE CIM and distinguish those activities (processes and information) which will be processed by machine or by humans with human-machine interaction. Due to limited space, the specification of the ESA-oriented CIM is not described in detail in this paper.
4. ICE-PIM

ICE-PIM is based on an object-oriented approach, UML and workflow technology. The core concept of BO is proposed to integrate the process-oriented model and the object-oriented software implementation. ICE-PIM is the bridge connecting the business model and the software implementation. At this level, information entities in the business process of the CIM are abstracted and formalized as Business Objects (BO) describing the data structure, data dependencies and relations. From the perspective of users, a BO is an information entity with its state transitions and operations. From the perspective of software implementation, a BO is an integrated object with a unique identity, a lifecycle, several data sets and several operation sets. For example, the abstraction of a purchasing order is a BO. This provides a good basis for component-based software development. As mentioned above, the CIM is more process oriented; in addition to
the BO, it is necessary to transform the business processes of the CIM into workflow models. In other words, the PIM mainly consists of the business object model and the workflow model. The meta-model of ICE-PIM is shown in Fig. 3.
Fig. 3. Metamodel of GRAI-ICE PIM model
Based on the meta-model, the composition of the PIM model is as follows. The business object model describes the BOs and the relationships among BOs. The BO integrated diagram describes the details of a BO in four dimensions. (1) The BO data diagram describes the relations among data sets in IDEF1X. (2) The BO class diagram describes the DPR (the controlling relation between operations and data sets), the relations among OP (the operations of the BO) and AT (the activity set of the BO). (3) The BO state diagram describes the states SS, the transitions ST, and the properties associated with the lifecycle. (4) The BO use case diagram describes the relations among roles and operations. Relations among several BOs are described by the BO-R model at two levels. The first level, i.e. the BO level, describes the data dependency relations among BOs, such as association and integration. The second level, i.e. the entity-relation level, considers the data sets inside a BO as entities and then describes the relations among them. The workflow model describes the sequence of activities of one BO or of multiple BOs. The business object model is the core of the PIM level. The workflow model assembles BOs into processes supporting business functions. The data, operations and states of BOs provide the input, output and trigger events for workflow execution. The data model describes the data schema of all BOs and provides a global business view for users.
The role model defines access to data and operations. ICE-PIM thus provides all essential design facets of the ESA.
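As an illustration of this characterisation of a BO (unique identity, lifecycle, data sets and operation sets), the following Python sketch models a purchase order BO. The state names, data sets and operations are invented for the example; the paper itself specifies BOs through UML and IDEF1x diagrams rather than code.

import uuid
from typing import Callable, Dict, Set

class BusinessObject:
    TRANSITIONS: Dict[str, Set[str]] = {}  # state diagram: state -> next states

    def __init__(self):
        self.oid = uuid.uuid4()                    # unique identity
        self.state = "created"                     # lifecycle state
        self.data_sets: Dict[str, dict] = {}       # several data sets
        self.operations: Dict[str, Callable] = {}  # several operation sets

    def fire(self, new_state: str):
        # Transition ST guarded by the BO state diagram.
        if new_state not in self.TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

class PurchaseOrderBO(BusinessObject):
    TRANSITIONS = {"created": {"approved"},
                   "approved": {"ordered"},
                   "ordered": {"received", "cancelled"}}

    def __init__(self, supplier: str):
        super().__init__()
        self.data_sets["header"] = {"supplier": supplier}
        self.data_sets["lines"] = {}
        self.operations["approve"] = lambda: self.fire("approved")

po = PurchaseOrderBO("ACME")
po.operations["approve"]()   # an operation OP drives the lifecycle
assert po.state == "approved"

In a workflow model, the data, operations and states of such objects would supply the inputs, outputs and trigger events described above.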
5. ICE-PSM

On the one hand, ICE-PSM follows and inherits the design information from the ICE-PIM level models and transforms them into a platform-specific implementation model; on the other hand, it is the basis for code generation. Software patterns and platform specifications are the two cornerstones of ICE-PSM. The Business Component (BC) is the largest-grained unit of software development and the realization body of a Business Object (BO), which is the finest-grained operational unit of the business system. The BC is an implementation and running pattern for business components and is the core modeling element in ICE-PSM. It is based on software patterns and XML, and can be transformed into code by a code generator. It is the link between the business object model and the ESA system. Because of limited space, the meta-model of the BC is omitted. The executable workflow model consists of the workflow model of the PIM level plus implementation details based on a scheduling pattern. It is described in the language supported by the workflow engine (e.g. a business process modelling engine, a state-transition event-driven engine, etc.). The static configurable model (SCM) is extracted from the relation between the business object and role models. Pattern-based ICE-PSM formalizes the software implementation model, simplifies the development of complex ESA and supports industrialized component development. The use of XML provides a standard data source for code generation, automatic system configuration and deployment, and interoperability.
6. Generation of Code: ESA Components and System

At this level, the PSM is transformed into business components and the ESA system. A business component passes through three phases. The first is the stem business component, in the software development phase: the properties related to roles and individuals are still parameterized. The second is the reused business component, in the deployment phase: the variable properties of the BC are finalized for a specific role/user, so the BC in this state is a role-related component. The third is the individualized business component, in the running phase: the component shows different interfaces and behaviours according to the configuration and requirements of its users. In this way, the required flexibility is distributed over several phases and the difficulty of development is decreased. ESA development is based on business components. Firstly, the business component models are transformed into stem business components by the code generator. Then, they are assembled by workflow, association mechanisms and extension mechanisms into complete business processes. The workflow engine manages the execution by invoking the different implemented BOs in the right order. Correctness can be verified by running a simulation with the workflow
engine before execution. After that, the components are configured and deployed based on the information in the PSM.
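The following sketch illustrates this generation chain on a hypothetical XML PSM fragment: a stem component is generated with its role- and individual-related properties left as open parameters, which are then bound in the deployment and running phases. The XML element and property names are assumptions made for illustration; the real PSM format is only exemplified in Fig. 5 (a).

import xml.etree.ElementTree as ET

# Invented PSM fragment for a business component; not the actual schema.
PSM_XML = """
<businessComponent name="PurchaseOrder">
  <property name="approvalLimit" phase="deployment"/>
  <property name="theme"         phase="running"/>
  <operation name="approve"/>
</businessComponent>"""

def generate_stem(psm: str) -> dict:
    # Development phase: role/individual properties stay as open parameters.
    bc = ET.fromstring(psm)
    return {"name": bc.get("name"),
            "operations": [op.get("name") for op in bc.findall("operation")],
            "parameters": {p.get("name"): None for p in bc.findall("property")}}

def bind_role(stem: dict, role_values: dict) -> dict:
    # Deployment phase: finalise variable properties for a specific role/user.
    reused = dict(stem)
    reused["parameters"] = {**stem["parameters"], **role_values}
    return reused

stem = generate_stem(PSM_XML)                        # stem business component
bc_buyer = bind_role(stem, {"approvalLimit": 5000})  # reused (role-related) BC
bc_buyer["parameters"]["theme"] = "compact"          # individualised at run time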
7. An Application: Purchase Management

The proposed architecture, methods, and the associated tools and framework have been applied in ESA (Enterprise Resource Planning system) development in several Chinese enterprises. The time for development and deployment has been shortened by more than 40%. The application in one of them, SHANDONG Nuclear Equipment Co. LTD, China, is outlined below. Based on the GRAI-ICE MDI architecture, a set of tools has been developed. Firstly, a MOF-, XML- and Eclipse-based extendable modeling tool supports defining modeling elements on demand and is used for modelling. A running framework with an embedded workflow engine has also been developed for the ESA. Some patterns in ESA have been extracted and formalized for business objects, and an XML- and pattern-based PSM model for BOs has been built. On top of this pattern-based PSM, we developed a PSM modeling tool and a code generation tool, which support software developers in modeling the PSM quickly, generating BC code, and configuring and deploying the components into the running framework. Fig. 4 (a) gives a segment of a business process model at the top CIM level in the purchase management domain. Fig. 4 (b) shows the BO-relation model at the PIM level in the same domain.
Fig. 4. Example of CIM model and PIM model in purchase management domain
Fig. 5 (a) shows a segment of the XML file that constitutes the PSM of the purchase order BO. Fig. 5 (b) shows the running purchase order component, which is generated from the PSM model.
Fig. 5. Example of PSM model and component of purchase order BO
8. Conclusions

On-demand changeability and interoperability are critical characteristics for ESA in today's dynamic economic environment. A concrete and operable model-driven interoperability architecture entitled GRAI-ICE MDI has been proposed for developing ESA. It integrates both business modeling methods and software modeling methods, and defines appropriate levels and the transformations between the models elaborated at these levels. The vertical transformation proceeds straightforwardly along the thread “information entity→business object→business component model→stem business component→reused business component”. Dynamic component/service assembly based on workflow technology and variable business components gives good support to ESA assembly, configuration and deployment. The originality of the proposed approach is as follows: (1) a nearly complete implementation of MDA/MDI is given in GRAI-ICE MDI, which appropriately integrates enterprise modelling, software modelling and software architecture; (2) the proposed architecture is interoperable, configurable and executable; (3) the CIM is divided into a business-oriented CIM and an ESA-oriented CIM to capture both business and software requirements; (4) the proposed architecture and associated tools have been successfully applied in on-demand changeable and interoperable ESA development. Last but not least, the time for development and deployment has been greatly shortened.
9. Perspectives

Considering technical trends such as service computing and SaaS (Software as a Service), the GRAI-ICE MDI architecture should be adapted into a service-oriented model-driven interoperability architecture. Business components will evolve into service components, and ESA will evolve into compositions of service components.
More and more components and services are available on the Internet. In order to incorporate, manage and reuse them in the GRAI-ICE MDI architecture, the architecture should not only be top-down, i.e. transformation and generation from business to IT, but also bottom-up, i.e. discovery, assembly and composition of existing IT elements to meet new business interoperability requirements [14]. Validation of the models by simulation can also be envisaged. For now, the GRAI-ICE MDI architecture focuses mainly on the functional requirements of ESA. However, non-functional requirements such as confidentiality, security and timing are crucial for the run-time system. How to extend the proposed architecture to model non-functional requirements and transform them into the ESA is therefore another important issue to consider in the future.
10. References

[1] C. Pautasso, T. Heinis, G. Alonso. Autonomic resource provisioning for software business processes. Information and Software Technology, 49(1): 65-80 (2007)
[2] J.P. Bourey, R. Grangel, G. Doumeingts, A.J. Berre. Report on Model Driven Interoperability. Deliverable DTG2.3, INTEROP NoE, April 2007, pp. 91
[3] OMG. MDA guide version 1.0.1. http://www.omg.com/, 12th June 2003
[4] J. Bezivin, O. Gerbe. Towards a precise definition of the OMG/MDA framework. Proceedings of the 16th Annual International Conference on Automated Software Engineering (ASE 2001), 26-29 Nov. 2001: 273-280
[5] T. Meservy, K.D. Fenstermacher. Transforming software development: an MDA road map. Computer, 38(9): 52-58 (2005)
[6] J. Bezivin. From object composition to model transformation with the MDA. Proceedings of the 39th International Conference and Exhibition on Technology of Object-Oriented Languages and Systems, Santa Barbara, August 2001
[7] R. Lemesle. Transformation rules based on meta-modeling. Proceedings of the Second International Enterprise Distributed Object Computing Workshop (EDOC'98), 3-5 Nov. 1998: 113-122
[8] W. Tan, L. Ma, J. Li, Z. Xiao. Application of MDA in a conception design environment. First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS'06): 702-704 (2006)
[9] K. Kosanke. CIMOSA - overview and status. Computers in Industry, 101-109 (1995)
[10] D. Chen, B. Vallespir, G. Doumeingts. GRAI integrated methodology and its mapping onto generic enterprise reference architecture and methodology. Computers in Industry, 387-394 (1997)
[11] D. Chen, G. Doumeingts, F. Vernadat. Architectures for Enterprise Integration and Interoperability: Past, Present and Future. International Journal of Computers in Industry, 59(5), May 2008
[12] R. Grangel, R. Ben Salem, J.P. Bourey, N. Daclin, Y. Ducq. Transforming GRAI extended actigrams into UML activity diagrams: a first step to Model Driven Interoperability. I-ESA 2007, Funchal (Madeira Island), Portugal, March 28-30, 2007
[13] R. Grangel, R. Chalmeta, C. Campos, R. Sommar, J.P. Bourey. A proposal for goal modelling using a UML profile. I-ESA 2008, Berlin, Germany, March 26-28, 2008
[14] G. Zacharewicz, D. Chen, B. Vallespir. Short-Lived Ontology Approach for Agent/HLA Federated Enterprise Interoperability. I-ESA China 2009, Beijing, 22-23 April 2009
Model for Trans-sector Digital Interoperability

António Madureira1, Frank den Hartog2, Eduardo Silva3 and Nico Baken1,4

1 Network Architectures and Services (NAS) group, Delft University of Technology, Delft, The Netherlands
2 TNO, Delft, The Netherlands
3 Centre for Telematics and Information Technology (CTIT), University of Twente, Enschede, The Netherlands
4 Corporate Strategy and Innovation department, Royal KPN, The Hague, The Netherlands
Abstract. Interoperability refers to the ability of two or more systems or components to exchange information and to use the information that has been exchanged. The importance of interoperability has grown together with the adoption of Digital Information Networks (DINs). DINs refer to information networks supported by telecommunication infrastructures and terminated by microprocessors. With an upcoming interest in services science and trans-sector business models, a stronger driver arises to further break the interoperability barriers across sectors. In this paper, we propose a novel model to address trans-sector digital interoperability, which by definition involves interoperability across different economic sectors connected by DINs. In particular, we specify how a well-known interoperability framework, the ATHENA framework, should be adapted for the economic sector plane. Based on data from the Eurostat survey on ICT usage and e-Commerce in enterprises, we illustrate how conclusions about trans-sector interoperability can be extracted and technological implications can be derived. Keywords: interoperability; MDA; sector; economic; model; productivity; service science; ATHENA; digital information network
1. Introduction

IT systems interoperability is a growing area of interest, mainly motivated by the need to integrate new, legacy and evolving systems [5]. Interoperability refers to the “ability of two or more systems or components to exchange information and to use the information that has been exchanged” [12]. The importance of interoperability has increased together with the adoption of Digital Information Networks (DINs). DINs refer to information networks supported by telecommunication infrastructures and terminated by microprocessors. DINs
support the digital economy: an economy based on digital goods and services in any of the production, distribution and consumption stages. According to the ATHENA framework [5], interoperability takes place at, at least, four levels: business, process, service and information. In this paper, we investigate trans-sector digital interoperability, which by definition involves interoperability across different economic sectors connected through DINs. We define a sector as a cluster of organisations performing homogeneous activities. Sectors such as Healthcare, Education or Transport have different societal roles [1]. Therefore, enterprises from different sectors are inherently not competing in the same market, since they provide different products or services. Apart from sectors operating in the same traditional value chains, the invisible hand of the market does not lead enterprises to look across the boundaries of their own sector. With an upcoming interest in services science [20] and novel trans-sector business models [2], sometimes with more intangible social or environmental outcomes (e.g. eGovernment), a stronger driver arises to break the interoperability barriers across sectors. Bastiaansen and Baken [3] conceptualise two approaches to trans-sector interoperability: 1) top-down interoperability: the functionality to be realised must lead the IT system development process; thus, the business requirements must be translated top-down into trans-sector implementations. A method for this approach is Model Driven Architecture (MDA) (e.g. the ATHENA framework). 2) bottom-up interoperability: trans-sector processes require the integration of the existing functions provided by the individual sectors. Such integration requires rich and unambiguous syntactic and semantic descriptions of the IT functions provided by the sectors. A method for this approach is Semantic Web Services (SWS). The ATHENA framework gives a good starting point for designing a top-down interoperability model, but it should be taken from the enterprise plane to the economic sector plane. In this paper, based upon the ATHENA framework, we propose a novel model to address trans-sector digital interoperability. In particular, we specify how the ATHENA framework should be adapted for the economic sector plane. With this model, we follow the top-down interoperability principle of identifying the business requirements before translating them into trans-sector implementations. Naturally, the model described is also useful in a bottom-up interoperability approach, because it helps to define syntactic and semantic descriptions of IT systems, potentially with direct business value. To validate the usefulness of the model, we describe how to derive technological implications from its application. This paper is organised as follows. In section 2, we describe the state of the art regarding trans-sector digital interoperability. In section 3, we introduce the set of mechanisms that serves as the base for the model proposed in this paper. In section 4, we provide an overview of the complete model. Finally, in section 5, we draw some conclusions and indicate directions for future work.
2. State-of-the-Art

Traditionally, scientific research on IT interoperability is a bottom-up process. Examples of this approach are semantic web service frameworks such as IRS-II [23], OWL-S [26] and WSMF [10]. Compared to IRS-II and OWL-S, WSMF takes a more business-oriented approach, focusing on a set of e-commerce requirements for Web services. In practice, however, it lacks a top-down view of business requirements that would make the components it describes (ontologies, goal repositories and mediators) more concrete for business purposes. Bottom-up semantic interoperability has already evolved into standardisation efforts [13]. Motivation for this research comes from Virtual Enterprises (VE), Virtual Organisations (VO), Computer Supported Collaborative Work (CSCW), Workflow Management Systems (WFMS), Process-Centred Environments (PCEs), etc. ATHENA [5], a recent European project on interoperability, attempted a top-down approach. The uniqueness of this project lies in its multidisciplinary character, merging three research areas: 1) enterprise modelling to identify business requirements; 2) architectures and platforms to define implementation frameworks; and 3) ontologies to identify interoperability semantics. Figure 1 presents the ATHENA reference model, indicating the required and provided artefacts of two connected enterprises. Interoperation can take place at various levels (enterprise/business, process, service and information/data). For each of these levels, a model-driven interoperability approach is prescribed in which models are used to formalise and exchange the provided and required artefacts that must be negotiated and agreed upon.
Fig. 1 ATHENA reference model

One ATHENA meta-model is particularly relevant to our work: the Cross-organisational Business Process (CBP) model. The CBP tries to capture the tasks and relationships of the different parties involved in cross-organisational business processes. One example of a process is a retailer-manufacturer cooperation: 1) a supplier sends a request for quotation to the manufacturer; 2) the manufacturer checks the stock; and 3) the manufacturer accepts the quotation and responds with an order. To our knowledge, no one has yet taken the effort of breaking the meta-concept of process into more refined terms, particularly at a general and complete, but necessarily abstract, economic plane. Here lies the value and originality of our work: breaking the concept of process at an economic trans-sector plane, to be used
in an interoperability framework such as ATHENA. The novelty of our trans-sector approach to interoperability can be verified by a conclusion drawn in [19]: the theoretical and empirical support for the relation between DINs and economic productivity is still inconclusive. Even economists focusing on understanding the economic importance of DINs, and thus DINs' business value, still struggle to clarify it. In [19], a framework contributing to clarifying this relation has been laid down, and it serves as the base for the trans-sector interoperability model described in this paper. Other projects have taken further action to meet the recognised need to clarify the concept of process for deriving technological requirements. Pratl et al. [27] introduce an eighth layer in the OSI model defining profiles for (manufacturing) control purposes. Bauer et al. [4] extend the OSI model with human factors, providing a conceptual tool to facilitate discussions in human-computer-interaction disciplines.
3. Capabilities

Productivity refers to a summarised measure of performance (P), based on the ratio of the total value of output (O) divided by the total value of input (I) (see [15]): P = O/I. In this section, we describe a set of mechanisms (defined as capabilities) which take an economic agent using DINs to a higher level of productivity. Any mechanism incrementing productivity leads to business value. Unlike mere operational functions (e.g. routing, forwarding, transferring, etc.), the mechanisms identified are part of the interface between business and service. These mechanisms are generally applicable to economic agents across all sectors. Thus, they are the mechanisms required to identify a model for trans-sector digital interoperability. Before describing this set of mechanisms, we start by referring to the concept of externality to introduce a definition of economic agent. From this follows a definition of capability, which is our conceptualisation of a direct causal mechanism linking DINs to sectoral productivity. A network externality can be defined as a change in productivity that an individual achieves when the number of other individuals using DINs changes. This allows us, in principle, to separate the productivity value into two distinct parts. One component, the autarky value, is the productivity value if there are no other individuals using DINs. The other component, the connection value, is the additional productivity value achieved when multiple other individuals are using DINs. The latter value is the essence of DINs' externality effects. Using the definition of connection value, we define an economic agent in the following way: an economic agent is any entity in an economic environment which may achieve an additional productivity value when multiple other individuals are using DINs. Examples of agents are researchers using DINs to search for knowledge and companies marketing their products on-line. An agent explores personal and intrinsic capabilities to become more productive within his economic environment. For example, consumer A meets supplier B to acquire a production input at a lower price; the capability of A and B to meet each other makes both more productive. From a thorough literature review on the relation between information,
digital infrastructures and productivity, we have repeatedly come across a relevant set of six capabilities of a productive economic agent which are directly dependent on DINs and impact productivity. We define capability as: a capability is a quality of the economic agent used for productive purposes and directly affected by DINs. In the following subsections we describe these six capabilities, which are generally applicable to agents across all economic sectors.

3.1. Coordinativity

Coordination is "the act of managing interdependencies between activities performed to achieve a goal" (see [21]). It arises, affecting productivity, when the agent has to choose between actions, the order of the actions matters and/or the time at which the actions are carried out matters. This leads to: coordinativity is the capability of an economic agent to manage interdependencies between activities with other agents to jointly achieve a common goal. Coordinativity prevents conflicts, wasted effort and squandered resources, and assures focus while trying to accomplish a common goal. The work of Kandori et al. [16] has triggered much interest in coordination games. Important research results concern the impact of different network structures on coordination. In a survey, 45% of the respondents identified DINs as a driver to reorganise work practices (see [14]). More specifically, on-line remote monitoring can be seen as a good example of an application of digital coordination.

3.2. Cooperativity

Cooperation can be defined as "acting together with a common purpose" (see [11]). Sharing information helps agents align their individual incentives with outcomes. Assuming proper behaviour, if absolute incentives are more advantageous than relative incentives, the agents cooperate. Both inter- and intra-organisational cooperation have been objects of study since the work of Marshall [22]; good examples are joint ventures. This leads to: cooperativity is the capability of an economic agent to align his personal goals with different individual goals of other agents for a common purpose. In practice, it is often hard to distinguish cooperativity from coordinativity. Conceptually, the main differences are: 1) in coordinativity the agents share exactly the same goals, while in cooperativity the agents share only partially aligned goals; and 2) in coordinativity the relation between the agents is critically dependent on time, while in cooperativity the agents relate to each other typically off-line. Although the experimental literature on cooperation is vast, only a few papers consider the role of networks in this process (see e.g. [31]). Supply and demand matching with on-line trading is an important practical example of the importance of DINs for cooperativity.

3.3. Adoptativity

Nelson and Winter [24] state that firms improve their productivity by adopting technological and organisational solutions from the most innovative firms.
Examples are informal associations and product advertisement. Important dimensions to be accounted for are the level of codification and the extent to which the knowledge fits in a set of interdependent elements. This leads to: adoptativity is the capability of an economic agent to adopt knowledge from other agents. There is a vast literature studying adoptativity using network analysis. It started with the rural sociologists Ryan and Gross [28] studying the adoption of hybrid seed corn, and Coleman et al. [7] studying the adoption of medicines. Many examples could be cited showing the value of digital networks for exchanging knowledge. A good example is e-learning between students.

3.4. Creativity

Agents can increase their productivity by creating new knowledge when collaborating with other agents to address operational inefficiencies. Their motivation to collaborate comes from their limited specialised knowledge and changes in their environment. Organisations that best address crucial information gaps through their information network structures may be more able to create novel knowledge. This leads to: creativity is the capability of an economic agent to create new knowledge, unknown to him before and to his collaborating agents. The relevance of DINs for collaborative research is well recognised (see [25]), and evidence has been found that organisations that use them more intensively innovate more (see [17]). A trade-off exists between the rate of information gathering and the rate of environmental change. A good example of creativity is research in universities.

3.5. Selectivity

Selection is the process of scanning for the unknown or generating courses of action that improve on known alternatives (see [6]). For maximal productivity, the agent has to decide on a stopping point in an uncertain environment, while keeping computational requirements within limits. This leads to: selectivity is the capability of an economic agent to scan and value information from other agents, generating courses of action that improve on known alternatives. The role of information networks has been extensively acknowledged in this process (see [32]). A practical proposal accounting for the value of networks in the process of selection has been made by Saaty [29]. This framework has been used for interdependent information system project selection. On-line job hunting and Google.com are good examples of selectivity using DINs.

3.6. Negotiability

Negotiability occurs when exchange happens between unfamiliar partners or when evaluating new courses of action. Negotiation grows in importance with the perception that potential downside effects of a wrong decision can be large and costly to reverse. Negotiability mechanisms include signalling (e.g. giving guarantees to buy) and screening (e.g. giving certificates to sell). The economic literature further distinguishes between one-shot and repeated contracts. This leads to: negotiability
is the capability of an economic agent to bargain with other agents for lower exchange costs. Kranton and Minehart [18] developed a model in which prices are determined by a bargaining process rather than an English auction. However, the precise influence of the network structure on negotiation processes has not been intensively studied yet. On-line stock trading activities are a good example of the importance of DINs for negotiability.
4. Trans-sector Interoperability Model

Figure 2 summarises our model for trans-sector interoperability. Naturally, it is based upon the ATHENA model of figure 1, but it specifies the general processes for the trans-sector (economic) plane. The processes are the capabilities referred to in the previous section. In this context, it makes more sense to talk about inter-sector interactions than about inter-enterprise interactions. Naturally, since sectors are clusters of enterprises, these capabilities are also applicable to inter-enterprise interactions.
Fig. 2 Trans-sector interoperability model

The scheme represents the four layers of sectors A and B: DINs and other IT (e.g. computer terminals), service, processes and business. The middle part shows the various capabilities/processes analysed in this paper, symbolising how exploiting these capabilities increases productivity when DINs are used for interaction between sectors. The capabilities are shown in random order. In the future, however, we would like to impose some structure on the order of the capabilities. Intuitively, we would expect productivity to be more sensitive to some of these capabilities. For example, creativity is a source of innovations, which might have a profound effect on productivity. Selectivity, on the other hand, seems to be an input to other capabilities (e.g. adoptativity); therefore, it might be more complicated to correlate data on selectivity with data on productivity. DINs are the infrastructure which enables the economic agents to use these capabilities to increase their productivity, regardless of the specific sectors they are dealing with on a case-by-case basis. These capabilities are generally applicable to agents across all economic sectors. Therefore, at an aggregated macro level, our model may help to manage and control trans-sector interoperability, productivity and innovation on a national or even global scale from a capabilities point of view.
Figure 3 is an attempt to quantify the importance of coordinativity and adoptativity for different economic sectors. The data source used is the Eurostat survey on ICT usage and e-Commerce in enterprises [9]. The amount of data is quite significant, spanning the years from 2002 to 2008 and various EU countries, with regional and sectoral breakdowns, for a large collection of different aspects related to the use of ICT in enterprises. For this paper, only data from 2007 for the Netherlands has been used. For both subfigures, the proxy variable used for DINs is “have access to Internet” (reference e_iacc). For coordinativity, the proxy variable used is “use of systems for managing production, logistics or service operations” (reference e_lnkpls). For adoptativity, the proxy variable is “purpose of the Internet (as a customer): training and education” (reference e_iedu). The numerical references in the figures (e.g. 10+:72) refer to more refined aggregations of enterprises (e.g. a particular type of manufacturing enterprises).
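As an illustration, the following Python sketch shows one plausible way of turning such survey variables into per-sector capability scores, normalising each proxy by Internet access (e_iacc). The variable codes are those of the Eurostat survey; the percentages and the normalisation choice are ours, for illustration only.

# Made-up shares of enterprises answering "yes", per sector and variable.
survey = {
    "manufacturing": {"e_iacc": 97, "e_lnkpls": 40, "e_iedu": 15},
    "real_estate":   {"e_iacc": 95, "e_lnkpls": 18, "e_iedu": 33},
}

def capability_score(sector: str, proxy: str) -> float:
    # Capability proxy conditioned on DIN availability (Internet access).
    row = survey[sector]
    return row[proxy] / row["e_iacc"]

for s in survey:
    print(s,
          "coordinativity:", round(capability_score(s, "e_lnkpls"), 2),
          "adoptativity:", round(capability_score(s, "e_iedu"), 2))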
Fig. 3 Importance of coordinativity (top) and adoptativity (bottom) per sector

The limitations of the data only allow us to draw preliminary and exemplifying conclusions. For example, coordinativity seems to be more important for the manufacturing sector than for the real estate, renting and business activities sector. The inverse could be said for adoptativity: the manufacturing sector performs less on-line training and education than the real estate, renting and business activities
sector. These figures could be considerably improved in at least three ways: 1) Eurostat only collects data for particular economic sectors (e.g. manufacturing); other sectors are not observed (e.g. the education sector). For a full overview of trans-sector interoperability, all sectors should be evaluated. 2) The sector classification used is the Classification of Economic Activities in the European Community (NACE). Some of the sectors classified are hardly understandable (e.g. the “production” sector). There is no worldwide consensus on any classification and, furthermore, classifications evolve over time as the relevance of specific activity clusters varies. A proper sector classification has to be used that discriminates sectors in a clear and relevant way; worldwide, the UN International Standard Industrial Classification (ISIC) [30] seems to be the preferred one. 3) The proxies used for the capabilities should be composite measures of various available data source variables; e.g. coordinativity could also have been measured with the variable “use invoicing and paying systems” (reference e_lnkpay). After understanding which capabilities are more relevant for the different sectors, a survey of the IT departments of representative enterprises in the different sectors could be conducted to investigate which IT applications are commonly used to address those capabilities (e.g. on-line agenda applications for coordinativity). Finally, these IT applications should be analysed concerning the fulfilment of interoperability requirements (e.g. the use of open standards). Hence, conclusions could be extracted concerning the status of trans-sector interoperability. Furthermore, technological requirements for those IT applications could be derived to increase trans-sector interoperability. This procedure will be used in our future work.
5. Conclusion and Future Work

Generally, we conclude that current interoperability architectural models do not take a full top-down approach when translating business requirements into technological implementations. The ATHENA model gives a good starting point for designing such a top-down interoperability model, but it should be taken from the enterprise plane to the economic sector plane. One of the model's levels requiring adaptation is the processes level. To address the economic plane of interoperability, we propose a trans-sector interoperability model based upon a set of mechanisms (defined as capabilities) which take an economic agent using DINs to a higher level of productivity. Any mechanism incrementing productivity leads to business value. Unlike mere operational functions (e.g. routing, forwarding, transferring, etc.), the mechanisms identified are part of the interface between business and service. These mechanisms are generally applicable to economic agents across all sectors. Thus, they are the mechanisms required to identify a model for trans-sector digital interoperability. Using data from the Eurostat survey on ICT usage and e-Commerce in enterprises, we have made a preliminary attempt to quantify the importance of two capabilities (coordinativity and adoptativity) for different economic sectors. Although their relative relevance and orthogonality should still
be investigated, from our thorough literature review we are confident that these capabilities most probably form a complete set. However, it should be noted that we only took economic capabilities into account. Human and social factors are also important, but are not yet included. Our future work goes along six directions: 1) the capabilities described in this paper are used by an economic agent to rationally navigate through a production space problem; however, other factors affected by DINs indirectly affect productivity, particularly human factors (e.g. limitations in information sensing) and social factors (e.g. trust), so further development of the model is required to include them; 2) we aim to functionally decompose the capabilities of the model to test whether they overlap, from which commonalities between the capabilities might be revealed; 3) further integration of the model with the top-down interoperability framework ATHENA; 4) performing a thorough analysis to understand which capabilities are more relevant for the different sectors; 5) performing a survey of representative enterprises from different sectors to understand which IT applications are commonly used to address the capabilities, and analysing these IT applications concerning the fulfilment of interoperability requirements; and 6) extracting conclusions about trans-sector interoperability and deriving technological requirements for IT applications to increase trans-sector interoperability.
6. References

[1] N.H.G. Baken, N. van Belleghem, E. van Boven, and A. de Korte. Unravelling 21st Century Riddles - Universal Network Visions from a Human Perspective. The Journal of The Communication Network, 5(4):11-20, 2007.
[2] N.H.G. Baken, E. van Boven, and A.J.P.S. Madureira. Renaissance of the Incumbents: Network Visions from a Human Perspective. eChallenges 2007 Conference, The Hague, The Netherlands, October 2007.
[3] H.J.M. Bastiaansen and N.H.G. Baken. Using Advanced Integration Concepts for Trans-sector Innovation - View and Status. In Proceedings of the FITCE Conference, 2007.
[4] J.M. Bauer, P. Gai, J. Kim, T.A. Muth, and S.S. Wildman. Broadband: Benefits and Policy Challenges. Technical report, Quello Center for Merit Network Inc., 2002.
[5] A. Berre, B. Elvesæter, N. Figay, C. Guglielmina, S. Johnsen, D. Karlsen, T. Knothe, and S. Lippe. The ATHENA Interoperability Framework. In Enterprise Interoperability II: New Challenges and Approaches, pages 569-580. Springer, 2007.
[6] N. Bulkley and M. Van Alstyne. Why Information Should Influence Productivity. Working Paper Series 202, MIT Sloan School of Management, 2004.
[7] J.S. Coleman, E. Katz, and H. Menzel. Medical Innovation: A Diffusion Study. New York: Bobbs-Merrill, 1966.
[8] F. den Hartog, M. Blom, C. Lageweg, M. Peeters, J. Schmidt, R. van der Veer, A. de Vries, M. van der Werff, Q. Tao, R. Veldhuis, N. Baken, and F. Selgert. First experiences with Personal Networks as an enabling platform for service providers. In Proceedings of the Second Workshop on Personalized Networks, 2007.
[9] Eurostat. ICT usage and e-Commerce in enterprises. http://epp.eurostat.ec.europa.eu/portal/page/portal/information, 2009.
[10] D. Fensel and C. Bussler. The Web Service Modeling Framework WSMF. Electronic Commerce Research and Applications, 1(2):113-137, 2002.
[11] Z. Hua. Study of Multi-Agent Cooperation. In Proceedings of the Third International Conference on Machine Learning and Cybernetics. IEEE, 2004.
[12] IEEE. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. IEEE Computer Society Press, 1991.
[13] ISO. Industrial automation systems and integration - Manufacturing software capability profiling for interoperability. Standard 16100, ISO, 2002.
[14] P. James and P. Hopkinson. Sustainable broadband? The Economic, Environmental and Social Impacts of Cornwall's actnow Project. Technical report, University of Bradford and SustainIT, 2005.
[15] D.W. Jorgenson and Z. Griliches. The Explanation of Productivity Change. Review of Economic Studies, 34:349-383, 1967.
[16] M. Kandori, G.J. Mailath, and R. Rob. Learning, Mutation, and Long Run Equilibria in Games. Econometrica, 61(1):29-56, 1993.
[17] P. Koellinger. Impact of ICT on corporate performance, productivity and employment dynamics. Technical Report, Special Report No. 01/2006, e-Business Watch, European Commission and DG Enterprise, 2006.
[18] R. Kranton and D.F. Minehart. A Theory of Buyer-Seller Networks. American Economic Review, 91(3):485-508, 2001.
[19] A. Madureira, N. Baken, and H. Bouwman. Towards a Framework to Analyze Causal Relations From Digital Information Networks To Micro Economic Productivity. World Congress on the Knowledge Society, Venice, Italy, 2009.
[20] P. Maglio and J. Spohrer. Fundamentals of service science. Journal of the Academy of Marketing Science, 36(1):18-20, 2008.
[21] T.W. Malone and K. Crowston. What is coordination theory and how can it help design cooperative work systems? In CSCW '90: Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, pages 357-370. ACM Press, 1990.
[22] A. Marshall. Principles of Economics. London: Macmillan, 1890.
[23] E. Motta, J. Domingue, L. Cabral, and M. Gaspari. IRS-II: A Framework and Infrastructure for Semantic Web Services. In The Semantic Web - ISWC 2003, pages 306-318. Springer Berlin/Heidelberg, 2003.
[24] R.R. Nelson and S.G. Winter. Evolutionary Theory of Economic Change. Belknap Press, 1985.
[25] OECD. OECD Broadband Portal. http://www.oecd.org/sti/ict/broadband, 2008.
[26] OWL-S Coalition. OWL-S 1.0 Release. Website, 2003. http://www.daml.org/services/owl-s/1.0.
[27] G. Pratl, D. Dietrich, G. Hancke, and W. Penzhorn. A New Model for Autonomous, Networked Control Systems. IEEE Transactions on Industrial Informatics, 3(1):21-32, 2007.
[28] B. Ryan and N.C. Gross. The Diffusion of Hybrid Seed Corn in Two Iowa Communities. Rural Sociology, 23:15-24, 1943.
[29] T.L. Saaty. The Analytic Network Process: Decision Making with Dependence and Feedback. RWS Publications, 2001.
[30] UN. International Standard Industrial Classification of all Economic Activities (ISIC), Revision 3.1. Statistical Paper ST/ESA/STAT/SER.M/4/Rev.3.1, United Nations, 2002.
[31] F. Vega-Redondo. Building Up Social Capital in a Changing World. Discussion paper, University of Alicante, 2002.
[32] D.J. Watts, P.S. Dodds, and M.E.J. Newman. Identity and Search in Social Networks. Science, 296:1302, 2002.
Transformation from a Collaborative Process to Multiple Interoperability Processes

Hui Liu1,2 and Jean-Pierre Bourey1,2

1 Université Lille Nord de France, F-59000 Lille, France
2 Ecole Centrale de Lille, Laboratoire de Modélisation et de Management des Organisations, Cité Scientifique - BP 48 - 59651 Villeneuve d'Ascq Cedex, France
Abstract. This paper defines some important concepts in the domain of enterprise interoperability and, based on these concepts, proposes a method to solve the enterprise interoperability problem. The paper elaborates in detail one of the steps included in the method: the transformation from a collaborative process to interoperability processes. The transformation is based on the definition of the concept of "rank of interoperability process", which makes it possible to analyse the relationships between three kinds of interoperability processes. Keywords: collaborative process, interoperability process, transformation, rank of interoperability process
1. Introduction

To meet their business objectives, enterprises need to collaborate with other enterprises, which raises their added value and greatly improves their competitiveness. How to support such collaboration through the information systems of all the related enterprises is a major problem, described as the enterprise interoperability problem. The information systems (IS) of the related enterprises are usually distributed and heterogeneous, so the foundation of this problem is how to make such information systems exchange information and understand and use the exchanged information. For the aspect of information exchange, protocols and standards describing communication methods and information formats have been proposed and are widely used, for example TCP/IP, HTTP, JMS, XML and SOAP; and to support information exchange at the enterprise level, middleware and architecture styles have also been proposed, such as EAI, CORBA, ESB, P2P, SOA (web services, RESTful services [7]) and SMDA (Service Model Driven
Architecture) [12]. For the aspect of information understanding, many ontology languages have been proposed, for example OKBC, OIL, OWL-S1, WSMO, WSDL-S, SAWSDL and PIF [10]; some of these are XML-based and some are not, and some are used to represent knowledge, some to describe Internet resources and some to describe business processes. For the further study of enterprise interoperability, [5] describes a roadmap which includes three key areas: 1) enterprise modelling, 2) architecture and platform, and 3) enterprise ontology. This roadmap is supported by the efforts of several European projects, including ATHENA, INTEROP, Ecolead2, etc. We can see that the above foundation of enterprise interoperability is included in the second and third points of the roadmap, and these have been deeply researched. Instead, this paper studies collaboration between enterprises from the viewpoint of enterprise modelling. The collaboration between enterprises is described in a collaborative business process, i.e. a process describing the collaboration requirements, whose activities are supported during implementation by at least two information systems. The implementation of the collaborative business process performed by each of these systems is called an interoperability process; when executing its interoperability process, each system communicates with the other systems. An interoperability process is thus defined as a process containing at least one information exchange activity, called an interoperability activity. In section 3, this paper proposes a method which describes how to transform a collaborative business process into an executable interoperability process. In section 4, it elaborates in detail an important transformation step of the proposed method. Finally, the conclusion summarizes the contributions and describes future work.
2. Terminology Definition

In section 1, the collaborative business process was used to model the collaboration requirements between enterprises; it is implemented by interoperability processes after some transformation steps. According to the MDA framework defined in [4], the collaborative process belongs to the CIM level and the interoperability process belongs to the PIM and PSM levels. The actors in a collaborative business process can be enterprises or departments; such actors are defined as collaborators. The actors in an interoperability process can be information systems, sub-systems, components or services; they are defined as participants. This gives Table 1.
1. [OWL-S] http://www.daml.org/services/owl-s/; [WSMO] http://www.wsmo.org/; [WSDL-S] http://www.w3.org/Submission/WSDL-S/; [SAWSDL] http://lsdis.cs.uga.edu/projects/meteor-s/SAWSDL/ 2. http://www.athena-ip.org; http://www.interop-vlab.eu/; http://ecolead.vtt.fi/
Table 1. Relationship of the concepts about the interoperability problem with MDA

MDA level   Actor          Business process
CIM         Collaborator   Collaborative process
PIM, PSM    Participant    Interoperability process
To further analyze the enterprise interoperability problem, we analyze the classification of business processes. [6] has proposed several criteria to classify business processes. This paper classifies business processes according to the following criterion: the quantitative relationship between the owners and the controllers of the business activities in a business process. The owner of a business activity is the actor responsible for performing this activity; the controller is the actor who starts the activity. Following this criterion, there are three kinds of processes in or between the enterprises' IS:

1. The internal process: it is composed of activities which all belong to the same IS of one enterprise;
2. The coordination process: it is composed of activities some of which take place between several IS and/or enterprises, but the process execution is owned and controlled by only one IS and/or enterprise;
3. The cooperation process: it is composed of activities some of which take place between several IS and/or enterprises, and the process execution is owned and controlled by several IS and/or enterprises, but each IS and/or enterprise can only control the execution of its own activities.

For the collaborative process: if it is an internal process, i.e. a process across different departments of one enterprise, the collaborator is the enterprise itself; if it is a coordination process, the collaborator who controls the process execution is named the coordinator (or mediator) and the other collaborators are named passive collaborators; if it is a cooperation process, the collaborator who controls the process is named the principal cooperator and a collaborator who controls its own activities but does not control the process is named a secondary cooperator. For each of the above three processes, if one of its activities is an interoperability activity, then the process is an interoperability process. At first sight, only the two latter types of processes may be interoperability processes. However, in the case of an internal process, it may be necessary to implement information exchanges between some modules of the enterprise's IS; these exchanges are carried out by activities that can be considered as internal interoperability activities. Hence, all three types of processes may be interoperability processes. For the interoperability process: if it is a coordination process, its participants can play the roles "requester" and "provider"; if it is a cooperation process, its participants can play the roles "requester" and "provider", "subscriber" and "publisher"; if it is an internal process, it can be executed as a coordination or cooperation process and its participants can be those of a coordination or cooperation process.
3. Proposed Method to Solve the Enterprise Interoperability Problem

According to the terminology defined in the preceding section, the following method is proposed to solve the interoperability problem; it is illustrated in Figure 1.

Figure 1. Proposed method to solve the enterprise interoperability problem (from top to bottom: Collaborative Process; Collaborative Process annotated with the information of collaborators; PoIM1 - Interoperability Process (fix part of the messages' types; map collaborators to participants; divide into N sub-processes); PoIM2 - Interoperability Process (messages; search the participants for activities); PoSM - Interoperability Process (implemented in a process specification language); execution engine of business processes (process language; algorithm); infrastructure (Cloud Computing); all levels supported by an ontology base)
At the first level, "Collaborative Process", the collaborative process must be constructed from two aspects, business flow and data (message) flow, which is inspired by [8]. In [8], the US Army proposed an expansion of the system architecture into three further sub-architectures: software architecture, data architecture and network architecture. The software architecture defines the functionality of each module, the data architecture is related to data definition, and the network architecture is related to software deployment requirements. Furthermore, all business requirements must be mapped onto a certain system architecture if they are to be implemented; the collaborative process is one kind of business requirement, so it must also be mapped onto the above three sub-architectures, that is to say, the collaborative process must have aspects that can be mapped onto each of them. However, since the network sub-architecture is determined by the concrete business requirements, we do not discuss it in this paper, although it remains very important. Finally, we construct the collaborative process from two aspects: the business flow describing the functionality of the collaborative process, and the data flow describing the data exchanged in the process. At the second level, the collaborative process is annotated with the information of the collaborators, i.e. every activity in the process must be charged to one collaborator. This task depends on the ontology base. When searching for the collaborator corresponding to a certain activity, the ontology base is inspected to determine which collaborator can perform that activity. If several candidates are selected, the target candidate is fixed according to the
Transformation from a Collaborative Process to Interoperability Processes 139
collaboration policy/requirements, or according to predefined conditions such as QoS, trust rank/belief value, etc. The ontology base must therefore contain such information for all the collaborators (for example, the collaborator's name, historical information about its service offering, what it is responsible for, etc.). At the third level, "PoIM1" (Protocol Independent Model 1), the types of some messages in the collaborative process can be determined according to the messages' context (the messages' sender and receiver, and the relevant business activities). The types of some messages may also be explicitly declared in the data flow of the collaborative process. These two cases of message type determination are based on the ontology base, so the ontology base must contain definitions of business messages together with context specifications of their usage. After that, the collaborators in the collaborative process are mapped onto participants. The key to mapping the collaborator of an activity onto a participant lies in the activity's functionality and context. After the mapping, the collaborators' information must still be kept in the process, because it carries semantics that are not implied by the participants; for example, there are several roles of collaborators whose semantics cannot be represented by participants' roles. This task also relies on the ontology base, which must therefore contain information about each collaborator's system architecture, because a participant is an element of the system architecture. Once some messages' types are fixed and the mapping from collaborators to participants is done, the collaborative process becomes an interoperability process, which is then transformed into a set of interoperability sub-processes according to the transformation method elaborated in detail in section 4.1. At the fourth level, "PoIM2" (Protocol Independent Model 2), the messages whose types have not yet been determined select their types, and each participant is mapped onto a concrete service. This step is also based on the ontology base, which must contain the definitions of messages and services. At the fifth level, "PoSM" (Protocol Specific Model), the interoperability process is implemented in an executable process specification language. We can see that the proposed method depends closely on the ontology base and SOA, and it also has one precondition: the realization of the interoperability process depends entirely on the original functions of each collaborator's information system. Of course, the method also relies on a process execution engine and a certain infrastructure, such as a cloud computing infrastructure.
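A minimal sketch of the annotation step of the second level, assigning a collaborator to each activity via the ontology base with a predefined condition (here a trust rank) as tie-breaker, is given below. The data structures and names are our own; a real ontology base would of course be far richer.

# Toy ontology base: capability -> candidate collaborators with a trust rank.
ontology = {
    "check_stock": [("manufacturer", 0.9)],
    "ship_goods":  [("carrier_A", 0.7), ("carrier_B", 0.8)],
}

def annotate(process: list) -> dict:
    """Charge every activity of the collaborative process to one collaborator."""
    assignment = {}
    for activity in process:
        candidates = ontology.get(activity)
        if not candidates:
            raise LookupError(f"no collaborator offers '{activity}'")
        # Several candidates: decide by the predefined condition (trust rank).
        assignment[activity] = max(candidates, key=lambda c: c[1])[0]
    return assignment

print(annotate(["check_stock", "ship_goods"]))
# {'check_stock': 'manufacturer', 'ship_goods': 'carrier_B'}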
4. Interoperability Process

The composition of interoperability activities constitutes the interoperability process, which is determined by the collaboration requirements between enterprises and generated from the collaborative process. The interoperability process is used to express the application-to-application business process [6] and can be implemented directly in a programming language (Java, C++, etc.) or in a process specification language (WS-CDL [9], WS-BPEL [2], workflow models, etc.). As the
development mode of information systems has shifted from "programming" to "assembly" and from "data-centric" to "process-oriented" [6], the interoperability process tends to be expressed in a process specification language. In addition, the use of workflow tools is limited to specific industries, such as banking and insurance [11]. Furthermore, Web services are interesting to software vendors and scientists for architecting and implementing business collaborations within and across organizational boundaries [1]. So it is a trend to adopt web service-related process specification languages to express the interoperability process, for example BPMN, which can be transformed into a WS-BPEL process using the mapping method proposed in the BPMN specification V1.2. In the following subsections, a method to generate the interoperability process is proposed. This method is based on annotated BPMN diagrams. As the execution of an internal business process or a coordination business process is controlled by only one enterprise and only involves a series of information exchanges with partners, it is easy to implement with the help of WS-BPEL or a workflow model. The following two subsections therefore focus on the cooperation process. In addition, in order to simplify the discussion, the collaborative and interoperability processes in the following subsections only contain their business flow (the data flow is omitted); in the interoperability process, the participants of all activities are the corresponding collaborators (for lack of sufficient information, the mapping cannot yet be done); and we assume that some messages' types in the collaborative process can be determined according to a certain ontology architecture, i.e. the cooperation process, as a collaborative process, can be transformed into an interoperability process.

4.1. Rank of Interoperability Process

In the cooperation process, it is assumed that: 1) there are N cooperators in the process, N>=2; 2) if the process is launched, all the cooperators follow the process to carry out the corresponding activities, so all cooperators must know clearly the state of the process execution; 3) the continuous activities of the same cooperator in the process can be merged into one activity node which delegates a sub-process to the corresponding cooperator; the cooperation process can then be changed into a new process with "sub-process" nodes having the following property: each two neighbouring activity nodes in the process belong to different cooperators (a minimal sketch of this merging step is given after Figure 2). A transformation example is given in Figure 2.b, which is obtained from Figure 2.a by merging B.T11 and B.T12 into B.T1 and merging C.T31 and C.T32 into C.T3. In Figure 2, the name of each activity has the format X.YZ, where X indicates the owner of the activity, Y indicates the activity type and Z is the identifier of the activity. Note that before the transformation from Figure 2.a to Figure 2.b, some messages' types in Figure 2.a have been fixed.
Figure 2. Cooperation process in BPMN: a. a collaborative process with continuous activities that belong to the same cooperator; b. an interoperability process without continuous activities that belong to the same cooperator
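As a minimal illustration of assumption 3 and of the step from Figure 2.a to Figure 2.b, the following Java sketch (all names hypothetical, not part of the original proposal) merges consecutive activities of the same cooperator in a purely sequential flow; handling gateways such as G1 would require a graph-based treatment.

import java.util.ArrayList;
import java.util.List;

class ActivityMerger {

    record Activity(String owner, String name) {}   // e.g. owner "B", name "T11"

    /** Fold runs of same-owner activities into one "sub-process" node,
     *  so that any two neighbouring nodes belong to different cooperators. */
    static List<Activity> merge(List<Activity> flow) {
        List<Activity> merged = new ArrayList<>();
        for (Activity a : flow) {
            int last = merged.size() - 1;
            if (last >= 0 && merged.get(last).owner().equals(a.owner())) {
                // Same owner as the previous node: fold both into one node
                Activity prev = merged.remove(last);
                merged.add(new Activity(a.owner(), prev.name() + "+" + a.name()));
            } else {
                merged.add(a);
            }
        }
        return merged;
    }
}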
To characterize the complexity of interoperability, we distinguish two essential characteristics: the number of owners of the interoperability activities (called the rank of the interoperability process, noted R) and the number of controllers of the interoperability process (noted NCP). As all normal activities in the interoperability process must belong to a collaborator, but the collaborators do not necessarily control the interoperability process, the number of controllers of the interoperability process is less than or equal to R. If the number of process controllers is 1, i.e., if only one enterprise is responsible for controlling the collaborative process, the process is actually an internal process or a coordination process: if the rank is also 1, it is an internal process; if the rank is greater than 1 and the NCP is 1, the process is a coordination process. Table 2 shows this relationship.

Table 2. Relationship between three kinds of processes and their rank

Business Process          Rank    NCP
Internal Process          R=1     NCP=1
Coordination Process      R>1     NCP=1
Cooperation Process       R>1     NCP>1 and NCP<=R
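The classification in Table 2 can be read as a simple decision rule over R and NCP. A minimal Java sketch (illustrative only, not part of the original proposal):

import java.util.Set;

class ProcessKind {

    /** owners: the cooperators owning at least one activity (|owners| = R);
     *  controllers: the enterprises controlling the process (|controllers| = NCP). */
    static String classify(Set<String> owners, Set<String> controllers) {
        int r = owners.size();
        int ncp = controllers.size();
        if (ncp > r) return "invalid: NCP cannot exceed R";
        if (r == 1) return "internal process";          // R=1, NCP=1
        if (ncp == 1) return "coordination process";    // R>1, NCP=1
        return "cooperation process";                   // R>1, 1<NCP<=R
    }
}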
The following question then becomes interesting: how can the rank of the interoperability process be reduced? In Figure 2.b, R is equal to 4, and the designer has defined A and B as principal cooperators and C and E as secondary cooperators. The two branches of the gateway G1 involve cooperators B and C. They can be replaced with two sub-processes (cf. Figure 3), and the 4 interoperability
sub-processes of Figure 2.b (i.e. B.T1→C.T2→B.T3/C.T4, C.T1→B.T2→C.T3, A.T1→E.T1 and E.T2→A.T2) are replaced with B.P1, B.P2, A.P1 and A.P2, so the rank of the obtained process is 2 (cf. Figure 3). Finally, the cooperators A and E have two sub-processes, corresponding to (A.T1→E.T1) and (E.T2→A.T2). Meanwhile, the cooperators B and C have two sub-processes, (B.T1→C.T2→B.T3/C.T4) and (C.T1→B.T2→C.T3). The cooperators B and C each have one internal process, (B.T11→B.T12) and (C.T31→C.T32) respectively. The cooperators A and B also have one cooperation process, illustrated in Figure 3.
Figure 3. Simplified cooperation process in BPMN
The interoperability process transformation from Figure 2.a to Figure 2.b and then to Figure 3 must respect the following principles:
1. The rank of the target interoperability process must be less than that of the source interoperability process;
2. The rank of any generated interoperability sub-process must be less than or equal to that of the target interoperability process;
3. The rank of the target interoperability process must be greater than or equal to 2.
Following the transformation described previously, we can see that the global cooperation process (Figure 3) becomes simpler, while at the same time new sub-processes are generated: the transformation simplifies the implementation of the cooperation process, but it increases the management complexity of the interoperability process. Does the transformation bring other benefits? To answer this question, we analyze the execution of the cooperation process between enterprises in the following section.

4.2. Execution of the Interoperability Process

To illustrate the execution of the cooperation process, consider the cooperation sub-process B.P1 in Figure 3, whose detail is given in Figure 4.
Figure 4. Cooperation sub-process - B.P1
As B.P1 is owned and controlled by cooperators B and C, this paper proposes the following execution procedure for the process B.P1:
1. When B.P1 is invoked, a participant, for example B, creates the instance of B.P1 and meanwhile informs all the other cooperators (cooperator C) to create the instance of P1 in their own IS;
2. After all the cooperators have completed the instantiation of P1, the following steps are executed;
3. Each cooperator checks which cooperator executes the next activity; if B finds that it is in charge of the execution of B.T1, it executes it, and all the other cooperators (C) wait for the notification from B;
4. When all the other cooperators (C) have received the notification from B, each cooperator checks which cooperator executes the next activity; if C finds that it is in charge of the execution of C.T2, it executes it, and all the other cooperators (B) wait for a notification from C;
5. When all the other cooperators (B) have received the notification from C, each cooperator checks which cooperator will execute the next activities (B.T3 and C.T4); if B finds that it will execute B.T3 and C finds that it will execute C.T4, then all the other cooperators (C and B) wait for notifications from B and C;
6. When all the other cooperators (C and B) have received the notifications from B and C, the execution of the process ends.
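A minimal sketch of this execution procedure for a strictly sequential sub-process is given below. The messaging interface and all names are hypothetical; the parallel step (B.T3 and C.T4) and the failure handling discussed next are omitted.

import java.util.List;

class CooperationExecutor {

    interface Messaging {                        // assumed message-oriented middleware
        void broadcast(String completedActivity);      // notify all other cooperators
        void awaitNotification(String activity);       // block until the owner notifies
    }

    private final String self;                   // this cooperator, e.g. "B"
    private final Messaging messaging;

    CooperationExecutor(String self, Messaging messaging) {
        this.self = self;
        this.messaging = messaging;
    }

    /** activities: the ordered activities of the sub-process, e.g. "B.T1", "C.T2". */
    void run(List<String> activities) {
        for (String activity : activities) {
            String owner = activity.substring(0, activity.indexOf('.'));
            if (owner.equals(self)) {
                execute(activity);                     // this cooperator is in charge
                messaging.broadcast(activity);         // all others are waiting
            } else {
                messaging.awaitNotification(activity); // wait for the owner's signal
            }
        }
    }

    private void execute(String activity) { /* invoke the local IS function */ }
}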
As described earlier, the executions of the cooperation process at the different cooperators are synchronized and collaborative. All the relevant cooperators follow the same method to execute the cooperation process, but in the IS of each cooperator the execution behaviour is different. If any cooperator retreats from the cooperation process, or if any notification is not received by a target cooperator, the execution of the process will be blocked or aborted. If any kind of failure occurs during process execution, measures must be taken to recover from the failure or to stop the process execution gracefully. The execution engine of the cooperation process should therefore be based on distributed computing and message-oriented computing [3]. In addition, as all cooperators have been determined before the design and implementation of the cooperation process, the cooperation process can satisfy the requirements of "static" collaboration between enterprises, i.e., all the collaborators have a fixed relationship. If the collaboration is dynamic, i.e., some collaborators can often be replaced by other candidates, the cooperation process in this paper is not able to meet such a requirement directly. However, the cooperation process can be extended to support dynamic collaboration. Firstly, at the level of business modelling, a cooperation activity belongs to a role, not to a cooperator, and a role can have several cooperator candidates; secondly, during the execution of the cooperation process, if a member quits, the execution promoter chooses another candidate whose role is the same as that of the quitting cooperator. Having introduced the execution of the interoperability process, we can see that the transformation which reduces the rank of the cooperation process also reduces the number of notifications between cooperators.
5. Conclusion

This article has proposed a method to align collaboration requirements with technology capability. In particular, it has illustrated in detail the transformation from a collaborative process to interoperability processes. This transformation is based on the definition of the concept "rank of interoperability process" and enables information exchanges between hierarchical processes. Future research will focus firstly on the semantic annotation of exchanged information and participants, and secondly on the formalization of the transformation algorithm (from the collaborative process to a set of interoperability processes), which will rely on the formalization of enterprise architecture and the formal expression of the collaborative process and the interoperability process.
6. References
[1] Alonso G., Casati F., Kuno H., Machiraju V., Web Services: Concepts, Architectures and Applications, Springer-Verlag, Berlin, 2004.
[2] Alves A., Arkin A., et al., OASIS Web Services Business Process Execution Language Version 2.0, 11 April 2007. http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html
[3] Berre A.-J., State of the art for interoperability architecture approaches, 19 November 2004. http://www.interop-noe.org
[4] Bourey J.-P., Grangel Seguer R., Doumeingts G., Berre A.-J., Deliverable DTG2.3: Report on Model-Driven Interoperability, 15 May 2007. http://interopvlab.eu/ei_public_deliverables/interop-noe-deliverables/dap-domain-architecture-andplatforms/D91/ (accessed 17 March 2009)
[5] Chen D., Doumeingts G., "European initiatives to develop interoperability of enterprise applications - basic concepts, framework and roadmap", Annual Reviews in Control, 27(3), 151-160, Elsevier, December 2003.
[6] Dumas M., van der Aalst W.M.P., ter Hofstede A.H.M., Process-Aware Information Systems: Bridging People and Software through Process Technology, Wiley & Sons, 2005.
[7] Fielding R.T., "Architectural Styles and the Design of Network-based Software Architectures", Ph.D. dissertation, 2000. http://portal.acm.org/citation.cfm?id=932295
[8] Hamilton J.A., "A practical application of enterprise architecture for interoperability", ISE '03, ISBN 1-56555-270-9. http://www.scs.org/getDoc.cfm?id=2227
[9] Kavantzas N., Burdett D., et al., W3C Web Services Choreography Description Language Version 1.0, 9 November 2005. http://www.w3.org/TR/ws-cdl-10/
[10] Polyak S., Lee J., Gruninger M., Menzel C., "Applying the process interchange format (PIF) to a supply chain process interoperability scenario", ECAI'98, Brighton, England, 1998.
[11] van der Aalst W.M.P., Benatallah B., Casati F., Curbera F., Verbeek E., "Business process management: Where business processes and web services meet", Data & Knowledge Engineering, 61(1), 1-5, 2007.
[12] Xu X.F., Mo T., Wang Z.J., "SMDA: A Service Model Driven Architecture", Enterprise Interoperability II - New Challenges and Approaches, Springer-Verlag London Limited, 2007, ISBN 9781846288579, pp. 291-302.
Part III
Semantics for Enterprise Interoperability
A Manufacturing Foundation Ontology for Product Life Cycle Interoperability
Zahid Usman1, Robert I.M. Young1, Keith Case1, Jenny A. Harding1
1 Loughborough University, Wolfson School of Mechanical & Manufacturing Engineering, Loughborough, Leicestershire, UK
Abstract. This paper presents the idea of a proposed Manufacturing Foundation Ontology (MFO) aimed at acting as a basis for Product Life Cycle (PLC) interoperability. The MFO is intended to provide for interoperability not only across departments but across organisations as well. The proposed idea shows the development of the MFO in several layers, with various levels within those layers. The foundation ontology will act as a basis for building interoperable knowledge bases, or 'World Models', from a library of formally defined concepts in a heavy weight ontology. The MFO must be flexible enough to allow organisations to model their own domains with the flexibility to use the terms they want. Rules and axioms governing each and every concept add rigour to the semantics of the MFO and restrict the use of concepts so as to facilitate interoperability with a minimum effect on modelling flexibility. Keywords: Business Interoperability requirements, meta-modelling for Interoperability, foundation Ontology, semantic mediation and enrichment, Product Life Cycle Interoperability
1. Introduction

Manufacturing is, and will remain, one of the top revenue and employment generators in Europe. According to a European Commission report, manufacturing in the European Union (EU) is responsible for nearly 22% of the GNP, about 75% of total GDP and 70% of employment [1]. In 2005, 2.3 million enterprises in the EU-27 had manufacturing (NACE Sections C, D, I and K, as quoted by [2]) as their main activity, generating EUR 6,323 billion of turnover and EUR 1,630 billion of value added production, and employing 34.6 million people [1,3]. The role of computers in industry has increased exponentially, and Information and Communication Technologies (ICT) have become an integral part of almost every organization. Organizations all around the world have entered a new era of ICT. Manufacturing organizations have moved from traditional
manual drawings and design to Computer Aided Technologies (CAx). Many of the most expensive tests have been replaced by computer simulations. Not only from the product perspective but also from the organizational point of view, tools and techniques such as Enterprise Resource Planning (ERP) (e.g. Oracle, SAP), Manufacturing and Materials Resource Planning (MRP) and several others that manage PLC activities, assisting with everything from minor to major tasks, are progressing rapidly. According to a 2005 report on ICT for Manufacturing [4], ICT are key to manufacturing competence, competitiveness and jobs in Europe. Interoperability is defined by Ray & Jones [5] as "the ability to share technical and business data, information and knowledge seamlessly across two or more software tools or application systems in an error free manner with minimal manual interventions". With numbers of software tools being developed in parallel in different companies around the globe, the problem of interoperability across them arises. Organizations need to interoperate internally as well as externally in order to gain competitive advantage. To highlight the importance of interoperability, a 1999 study by Brunnermeier and Martin [6] at NIST showed that one billion US dollars per year are spent by the U.S. automotive sector alone on solving interoperability problems. Multiplying this amount to include other sectors such as services, health care, electronics, logistics, telecommunications and aerospace, and going beyond the U.S. alone, would certainly mark this as a major problem. This highlights the need for an interoperability system which minimizes the cost incurred in solving interoperability problems. In the development of interoperability systems it is of extreme importance to formally capture and incorporate the semantics of concepts. A survey by Tan and Madnick [7] highlighted that almost 70% of the total cost of interoperability projects is spent on identifying and locating semantic mismatches and developing the code to map them. Semantics are important as the foundation of a well organised hierarchy of concepts, the relations between them and the rules governing their use. This highlights the need to incorporate the semantics, or formalized meanings, of concepts in ICT, including PLC Management (PLM). The need for formal semantics leads to the need for heavy weight ontologies. Semantic formalization converts a simple hierarchy and dictionary of concepts, or light weight ontology, into a heavy weight ontology. Several definitions of ontology, a term borrowed from philosophy, can be found in the literature. The most quoted definition is by Gruber in 1993 [8]; several others are given, e.g., by Uschold & Gruninger in 1996 [9], Guarino [10], Borst in 1997 [11], Studer et al. in 1998 [12], Schlenoff et al. [13], Roche in 2000 [14], Noy & McGuinness in 2002 [15] and Blomqvist & Öhgren in 2007 [16]. The one we prefer is given in the Process Specification Language (PSL) standard (ISO 18629) [17]: "a lexicon of the specialized terminology along with some specifications of the meanings of the terms involved."
2. The Requirement for a Manufacturing Foundation

In this section, business interoperability requirements are explored, first from a system's perspective and then from the perspective of design and manufacturing.
2.1. A System’s Perspective on Interoperability The present typical approach for developing multiple integrated systems can be represented in the context of Model Driven Architectures (MDA) as shown in fig.1 a. This starts from a computational independent model (CIM) which defines the requirements for a system. From the CIM a Platform Independent Model (PIM) is developed and then the PIM is transformed into several platform specific models (PSMs). An example of this can be seen in the use of STEP [18]standards as PIM models supporting multiple CAD specific PSMs. However in our approach we want to provide the flexibility of multiple PIMs but still support interoperability. We argue that this can be achieved as long as all PIMs which are to interoperate share a common set of foundation concepts in their development as illustrated in figure 1 b. Because this approach offers flexibility in the definition of the PIM there is a need for a verification mechanism across the PIMs to confirm the level of interoperability which is possible.
Fig. 1. a. MDA approach to Interoperability; b. Requirement of Foundation Concepts for MDA
2.2. Design and Manufacturing Perspectives on Interoperability

This research work is focussed on interoperability between design and manufacture and is undertaken in conjunction with manufacturing companies from the aerospace and automotive sectors. From our work with these companies we have identified three key types of interoperability between design and manufacture which we must accommodate. These are (i) interoperability between similar departments but across different organizations; (ii) interoperability between
different departments of the same organisation; and (iii) interoperability between different departments of different organisations. These are illustrated for different types of business in Fig. 2. Type (ii), especially between design and manufacture departments, is the most important to our work, as it is particularly important for designers to understand the manufacturing consequences of their decisions.
Fig. 2. World Model Layers & Modes of Interoperability
3. A Manufacturing Foundation Ontology Approach to Interoperability

While the concept of a foundation ontology is not new, with DOLCE, SUMO, OCHRE, OpenCyc and BFO [19] being examples, such ontologies tend to apply to very high-level, generic or "upper level" concepts which are not easily applied directly to the manufacturing domain. Some work has been done on interoperability approaches [20], manufacturing ontologies [21] and semantic interoperability frameworks [22]. Gunendran and Young [23] and Young et al. [24] highlight important questions for sharing knowledge across the PLC. Perhaps the most effective work is that of Borgo and Leitão [25], who used concepts from foundation ontologies to build a manufacturing domain. In our approach we accept the use of upper level concepts where appropriate, but believe that the range of concepts relevant to manufacturing is sufficiently broad as to warrant a manufacturing foundation ontology in its own right. Also, by targeting concepts in the
manufacturing domain, we believe that the concepts can be defined more tightly, which will aid in the evaluation of cross-domain interoperability across manufacturing systems. Working across design and manufacture, we can consider specific domain concepts for design and for manufacture. However, we want to consider the foundation concepts which apply across both of these domains and across the full product lifecycle. We also recognise that the concepts which apply to automotive manufacture will not be totally in line with those for aerospace manufacture. We consider these as different manufacturing "worlds", where the ability to represent knowledge of the particular "world" is critical. The actual design and manufacturing functions would then use "world" knowledge which has been constructed upon a formal foundation ontology. Hence we conceive of a knowledge framework as illustrated in Fig. 3. The MFO and the 'worlds' constitute meta-modelling for interoperability, though very specific worlds can be built from the framework.

3.1. Levels in a Manufacturing Foundation Ontology

In our investigation of concepts for manufacture we have identified that some are specific to key areas in the lifecycle such as design, manufacturing, operation or disposal; some concepts are applicable to a product across all phases of the lifecycle; and some concepts are generic to multiple product types which go well beyond the typically machined products with which we are mainly concerned. Illustrations of concepts from these three levels are described below, in particular the way in which some concepts apply at each level but with varying levels of specialisation.
Fig. 3. The Manufacturing Foundation Ontology & Interoperability Framework
3.1.1 Generic Concept Level

Generic concepts like activity, activity occurrence, feature, time point, dimension, tolerance, part, part family, etc., are applicable across all types of product. Concepts from the generic level can then be specialised into Product Life Cycle (PLC) generic concepts and then into PLC specific concepts, i.e., in our case, concepts specific to either design or manufacture.
3.1.2 PLC Generic Concept Level

Generic concepts are specialized into PLC generic concepts, which are applicable to any of the activities of the whole PLC (design, manufacture, operation and disposal) but not outside it. Concepts like product feature, product part family, geometric dimension and geometric tolerance are specializations of generic concepts at the PLC generic concept level.

3.1.3 PLC Specific Concept Level

The concepts at the PLC specific level are specific to each activity of the PLC and not outside it. Concepts for the design and manufacturing domains are under development. Concepts like manufacturing feature and manufacturing part family are specializations of PLC generic concepts into manufacturing specific concepts. Similarly, design feature and design part family are specialisations of PLC generic concepts into design specific concepts. Concepts specific to design or manufacture can only be used within their domain and not outside it. As the concepts are semantically enriched, the ability for semantic mediation, verification and mapping is available when interoperating across different domains.

3.1.4 An Example of Specialisation through the Concept Levels

This section uses the example of the concept of a feature to illustrate how a generic feature concept can be progressively specialised and constrained into a design feature and into various levels of manufacturing feature. This explains the first layer shown in Fig. 3. Fig. 4 illustrates the taxonomic breakdown of the feature concept.
Fig. 4. Screen captures of part of the MFO, describing concept specialization in the MFO
A Common Logic [26] formalization of Fig. 4 is explained below. First, all the concepts (like feature), their various sub-classes and their associative relations are defined in IODE as follows; ":Prop" is used for introducing a concept in IODE.

Declaration of a concept in IODE:

:Prop Feature
  :Inst Type
  :sup ConcreteEntity
  :name "Feature"
  :rem "The MFO Generic Concept"
Definition of a (binary) relation in IODE:

:Rel hasAttribute_of_Interest
  :Inst BinaryRel
  :Sig Feature Attribute_of_Interest
  :name "hasAttribute_of_Interest"
  :rem "hasAttribute_of_Interest ?feature ?AoI"
In a similar way, all other concepts and relations are defined in KFL. The behaviour of a concept is partially controlled in its declaration by defining its type through ":Inst" and its super classes through ":sup". The extensive rigour, integrity and semantic enrichment of concepts is achieved through the use of axioms or integrity constraints (ICs). ICs are mainly of two types, namely hard ICs and soft ICs: a hard IC disallows any use of a concept which violates the constraint, whereas a soft IC merely gives a warning message. The integrity constraints working behind Fig. 4 are given next, with an explanation of how to read the first two.

(=> (and (Feature ?feature)             ; if ?feature is a variable representing Feature
         (Attribute_of_Interest ?AoI))  ; and ?AoI is a variable representing an Attribute of Interest
    (hasAttribute_of_Interest ?feature ?AoI)) ; then ?feature has an Attribute_of_Interest
:IC hard "Every feature has an Attribute of Interest"

(=> (and (Shape_Feature ?s_feature)     ; if ?s_feature is a variable representing Shape_Feature
         (Shape ?shape))                ; and ?shape is a variable representing Shape
    (and (Feature ?s_feature)           ; then ?s_feature is a Feature
         (hasShape ?s_feature ?shape))) ; and ?s_feature has a ?shape
:IC hard "Every Shape feature is a feature and has a shape"

(=> (and (Product_Feature ?P_feature)
         (Product ?P))
    (and (Shape_Feature ?P_feature)
         (hasProduct ?P_feature ?P)))
:IC hard "Every Product feature is a Shape feature and has an associated product"

(=> (and (Manufacturing_Feature ?M_feature)
         (Manufacturing_Process ?M_Process))
    (and (Product_Feature ?M_feature)
         (hasMfg_Pro ?M_feature ?M_Process)))
:IC hard "Every Manufacturing feature is a Product feature and has an associated Manufacturing Process"

(=> (and (Design_Feature ?D_feature)
         (Function ?function))
    (and (Product_Feature ?D_feature)
         (hasFunction ?D_feature ?function)))
:IC hard "Every Design feature is a Product feature and has an associated function"
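The examples above are all hard ICs. As a purely hypothetical illustration of the soft variant described earlier, and assuming soft ICs share the syntax of the hard ICs shown, a warning-only constraint might be written as follows (this axiom is not part of the MFO as published):

(=> (Manufacturing_Feature ?M_feature)
    (exists (?M_Process)
        (and (Manufacturing_Process ?M_Process)
             (hasMfg_Pro ?M_feature ?M_Process))))
:IC soft "Warning: a Manufacturing feature should have an associated Manufacturing Process"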
4. Discussion and Conclusion

The manufacturing foundation ontology described focuses on the product lifecycle domain, with specializations in design and manufacture. Parts designed using the proposed framework should benefit from the semantic rigour captured in the foundation ontology and hence provide an improved level of interoperability. The MFO is built in multiple layers with increasing levels of specialization, which simplifies ontology building, concept selection and reuse. Three levels of
specialisation have been defined to suit our needs, but we anticipate that this number would increase if the approach were applied across a wider area of application. An industrial exploration of the concepts for the MFO is being undertaken at present in one of the partner aerospace companies. This will be used to develop and test the ideas against practical problems and requirements and to provide evidence for the use of the MFO. This will lead to a more practical system and provide a basis for evaluating and analyzing the framework and the effectiveness of the MFO in a real case scenario.
5. Acknowledgements

This research work is funded by the Innovative Manufacturing and Construction Research Centre (IMCRC) at Loughborough University under the Interoperable Manufacturing Knowledge Systems (IMKS) project (IMCRC project 253). The authors would also like to thank fellow researchers and colleagues in the IMKS project for their support and cooperation.
6. References
[1] European Commission, Manufuture (2004), A Vision for 2020: Assuring the Future of Manufacturing in Europe, Report of the High-Level Group. Available from: http://www.manufuture.org/documents/manufuture_vision_en[1].pdf
[2] Johanson, U. (2008), Industry, Trade and Services, Eurostat - Statistics in Focus, 37/2008. Available from: http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KSSF-08-037/EN/KS-SF-08-037-EN.PDF
[3] Trade Policy Review, Report by the European Communities (2009), World Trade Organization - Trade Policy Review Body, p. 16. Available from: http://www.wto.org/english/tratop_e/tpr_e/g214_e.doc
[4] ICT for Manufacturing (2005), Report of Meeting with Group of Representatives of Five Expert Panels, 15 March 2005, Brussels.
[5] Ray, S.R., Jones, A.T. (2003), "Manufacturing interoperability", Concurrent Engineering: Enhanced Interoperable Systems, Proceedings of the 10th ISPE International Conference, Madeira Island, Portugal, pp. 535-540.
[6] Brunnermeier, S.B., Martin, S.A. (1999), Interoperability Cost Analysis of the U.S. Automotive Supply Chain, Technical Report, NIST, Research Triangle Institute, Planning Report 99-1.
[7] Tan, P., Madnick, S.E., Tan, K.L. (2005), "Context Mediation in the Semantic Web: Handling OWL Ontology and Data Disparity Through Context Interchange", in Bussler, C.J., Tannen, V., Fundulaki, I. (eds.), SWDB 2004, LNCS vol. 3372, Springer, Heidelberg, pp. 140-154.
[8] Gruber, T. (1993), "Towards principles for the design of ontologies used for knowledge sharing", International Journal of Human-Computer Studies, 43(5/6), pp. 907-928.
[9] Uschold, M., Gruninger, M. (1996), "Ontologies: principles, methods and applications", The Knowledge Engineering Review, 11(2), pp. 93-155.
[10] Guarino, N. (1997), "Understanding, building and using ontologies", International Journal of Human-Computer Studies, 46, pp. 293-310.
[11] Borst, W.N. (1997), Construction of Engineering Ontologies, PhD Thesis, University of Twente, Enschede, NL, Centre for Telematics and Information Technology.
[12] Studer, R., Benjamins, V.R., Fensel, D. (1998), "Knowledge engineering: principles and methods", Data and Knowledge Engineering, 25, pp. 161-197.
[13] Schlenoff, C., Gruninger, M., Tissot, F., Valois, J., et al. (2000), "The Process Specification Language (PSL): Overview and Version 1.0 Specification", NISTIR 6459, National Institute of Standards and Technology, Gaithersburg, MD.
[14] Roche, C. (2000), "Corporate ontologies and concurrent engineering", Journal of Materials Processing Technology, 107, pp. 187-193.
[15] Noy, N.F., McGuinness, D.L. (2002), "Ontology Development 101: A Guide to Creating Your First Ontology", Stanford University, Stanford, CA.
[16] Blomqvist, E., Öhgren, A. (2008), "Constructing an enterprise ontology for an automotive supplier", Engineering Applications of Artificial Intelligence, 21, pp. 386-397.
[17] ISO 18629-1, ISO TC184/SC4/JWG8 (2004), Industrial Automation System and Integration - Process Specification Language: Part 1, Overview and Basic Principles.
[18] ISO/DIS 10303-224.3 (2003), Product data representation and exchange: Application protocol: Mechanical product definition for process planning using machining.
[19] Oberle, D., et al. (2007), "DOLCE ergo SUMO: On Foundational and Domain Models in the SmartWeb Integrated Ontology (SWIntO)", Web Semantics: Science, Services and Agents on the World Wide Web, 5(3), pp. 156-174.
[20] ISO (CEN/ISO) 11354, V8.3 (2008), Requirements for establishing manufacturing enterprise process interoperability, Part I: Framework for Enterprise Interoperability.
[21] Lagos, N., Setchi, R.M., A Manufacturing Ontology for E-learning.
[22] Chungoora, N., Young, R.I.M. (2009), "A Semantic Interoperability Framework for Manufacturing Knowledge Sharing", submitted to the International Conference on Knowledge Management and Information Sharing (KMIS), Madeira, Portugal, 6-8 October 2009.
[23] Gunendran, A.G., Young, R.I.M. (2008), "Methods for the Capture of Manufacture Best Practice in Product Lifecycle Management", International Conference on Product Lifecycle Management.
[24] Young, R.I.M., Gunendran, A.G., Cutting-Decelle, A.F., Gruninger, M. (2007), "Manufacturing knowledge sharing in PLM: a progression towards the use of heavy weight ontologies", International Journal of Production Research, 45(7), pp. 1505-1519.
[25] Borgo, S., Leitão, P. (2007), "Foundations for a core ontology of manufacturing", Laboratory for Applied Ontology, ISTC-CNR, Trento, Italy; Polytechnic Institute of Bragança, Bragança, Portugal.
[26] ISO/IEC 24707 (2007), Information technology - Common Logic (CL): a framework for a family of logic-based languages, International Standard, First edition, 2007-10-01.
A Semantic Mediator for Data Integration in Autonomous Logistics Processes
Karl Hribernik1, Christoph Kramer1, Carl Hans1, Klaus-Dieter Thoben1
1 BIBA - Bremer Institut für Produktion und Logistik GmbH, Hochschulring 20, 28359 Bremen
Abstract. Autonomous control in logistic systems is characterized by the ability of logistic objects to process information, to render and to execute decisions on their own. This paper investigates whether the concept of the semantic mediator is applicable to the data integration problems arising from an application scenario of autonomous control in the transport logistics sector. Initially, characteristics of autonomous logistics processes are presented, highlighting the need for decentralised data storage in such a paradigm. Subsequently, approaches towards data integration are examined. An application scenario exemplifying autonomous control in the field of transport logistics is presented and analysed, on the basis of which a concept, technical architecture and prototypical implementation of a semantic mediator are developed and described. A critical appraisal of the semantic mediator in the context of autonomous logistics processes concludes the paper, along with an outlook towards ongoing and future work. Keywords: semantic mediator, semantics, ontologies, data integration, autonomous logistics processes
1. Introduction

Autonomous control in logistic systems is characterized by the ability of logistic objects to process information, to render and to execute decisions on their own [1]. The requirements set by complex logistics systems towards the integration of data regarding the individual entities within them prove immensely challenging. In order to implement complex behaviour with regard to autonomous control, dynamism, reactivity and mobility, these entities, including objects such as cargo, transit equipment and transportation systems, but also software systems such as disposition, Enterprise Resource Planning (ERP) or Warehouse Management Systems (WMS), require the development of innovative concepts for the description of and access to data [2].
2. Related Work

The following sections review related work relevant to data integration in autonomous logistics processes. First of all, an understanding of the term "autonomous logistics processes" is developed, and the necessary properties of information systems supporting such processes are identified. Subsequently, data integration approaches are reviewed in order to identify one which provides adequate support for the properties identified in the first step.

2.1. Characteristics of Autonomous Logistics Processes

According to [1], "Autonomous Control in logistic systems is characterized by the ability of logistic objects to process information, to render and to execute decisions on their own." They furthermore define logistics objects in this context as "material items (e.g. part, machine and conveyor) or immaterial items (e.g. production order) of a networked logistic system, which have the ability to interact with other logistic objects of the considered system." In [3], the material items are further differentiated into commodities and all types of resources, whilst the immaterial logistics objects are constrained to orders. In [4], a catalogue of criteria for autonomous cooperating processes is suggested. Within this catalogue, three criteria explicitly address the "information system" layer, as illustrated in Fig. 1 below. Specifically, these criteria deal with the properties of data storage, data processing and the system's interaction ability. The first two criteria are directly related to the problem of data integration. The properties of these criteria make apparent that the less central the data storage and processing of an information system are, the higher the achievable level of autonomous control. The third criterion, interaction ability, relates implicitly to the data integration problem, in that the interaction ability of the information system in question is based upon its ability to access and process data stored according to the initial two criteria.
Fig. 1. Information System Layer Criteria for Autonomous Cooperating Processes [4]
Consequently, an IT system can be said to contribute to the autonomous control of a logistics system if it provides support for information processing and decision-making on the part of logistics objects, both material and immaterial. Furthermore, the information system is required to exhibit the properties of data storage, data processing and interaction ability with respect to these types of logistics objects. This paper concentrates on the first of the three properties, data storage, and investigates whether a semantic mediator can be applied to support a decentralised approach to the storage of autonomous logistics data.

2.2. Data Integration Approaches

The diversity of systems involved in complex, autonomous logistics processes defines challenging requirements towards an approach to data integration in this field. The variety of different applications, data sources, exchange formats and transport protocols demands a high level of flexibility and scalability. To address these challenges, the specific requirements complex logistics systems exhibit need to be considered. A number of different, traditional solutions to data integration may be taken into account, foremost amongst these being tightly coupled, loosely coupled and object-oriented approaches [5]. Whilst a tightly coupled approach can quickly be dismissed on grounds of its inflexibility, loosely coupled and object-oriented approaches cannot be adopted without critical analysis. An object-oriented approach generally provides good mechanisms for avoiding integration conflicts. However, when considering this approach, one must take into account that a single canonical model is required to describe the entire data model, which clearly restricts its flexibility and scalability. Each time a new stakeholder or data source enters the logistics system, the model needs to be extended. Depending on the dynamics of the logistics system, this may or may not be a disqualifying factor. As the fluctuation of data sources, stakeholders and systems in a complex logistics system with any degree of autonomous control can be assumed to be high, an object-oriented approach to data integration is likely to be unsuitable. A loosely coupled approach requires detailed knowledge of each of the heterogeneous data sources in order to be employed successfully. With regard to complex logistics systems, further analysis is required to determine whether this is feasible. The possible need for highly flexible, and thus not always pre-determinable, context data, for example from sensor networks, may prove to be an argument against this approach. Besides the aforementioned traditional approaches to data integration, a number of predominantly semantic approaches remain to be taken into account. Here, the main concepts constituting the architecture of such data integration systems are mediators [6]. In this approach, both syntactic and semantic descriptions of the data to be integrated are applied. The semantic mediator is capable of extracting knowledge regarding the data structures of the underlying data sources and subsequently transforming, decomposing and recomposing data requests according to that knowledge. The mediator relies on semantic descriptions of the data sources. In the case of autonomous logistics processes, this implies a wholly
semantic modeling of the relevant logistics information and data across the distributed, heterogeneous sources, for which a number of approaches, such as ontologies, may be chosen. Here, extensive research is required to determine whether such semantic descriptions of logistics data are feasible and adequate to address the requirements of autonomous logistics processes. However, the application of the semantic mediator to the problem area currently offers the most promising solution candidate; the following paragraphs sketch a technical concept to the problem on that basis and evaluate it through the appraisal of a prototypical implementation within a defined application scenario.
3. Research Approach

The aim of the research presented in this paper is to develop a concept for the integration of data in autonomous logistics processes, specifically in the field of transport logistics. The semantic mediator was identified as a promising concept based on the analysis of related work outlined above. In order to achieve a better understanding of the requirements autonomous control in transport logistics processes poses towards semantic mediation, an application scenario was defined and analysed. The scenario selected is based on simulation experiments used to demonstrate the principles of autonomous control using multi-agent systems [7]. It illustrates the application of autonomous control for the transport of fresh fruit products through Europe, taking into account the use of sensors [8] and shelf-life prediction [9] in combination with a multi-agent based system [10] to monitor and control the relevant logistics processes. Analysis of the scenario focused mainly on identifying the agents' data requirements in the autonomous logistics processes, and on the means by which that data may be communicated, taking into account standard data exchange formats and interfaces commonly used in the field of transport logistics. The following section describes the application scenario in detail.

3.1. Application Scenario

This scenario illustrates the application's processes as a storyboard exemplifying one possible simulation run. The application scenario's processes are illustrated in a BPMN (Business Process Modeling Notation) diagram in Fig. 2 below. To begin with, an order is placed with the fruit producer in Poznań, Poland, for, e.g., a crate of blueberries to be delivered to a restaurant in Paris, France. The order is placed electronically as a business-to-business transaction, from the customer's ERP system to an order management agent, which identifies the goods to be transported and initializes a shipping agent to start the autonomous transport logistics processes. The shipping agent's initial task is to identify a suitable means of transport for the delivery of the fruit it is responsible for. It does this by making use of the services of a transport brokerage agency. This agency provides an electronic market where freight forwarders can publish details about their available transport capacities, including their current routes, possible delivery times, cooling capabilities, special equipment such as temperature sensors, and so on. The
shipping agent can thus discover which means of transport come into question for the route to Paris, taking into account the requirements resulting from the type of goods to be transported. Fresh fruit requires cooling and temperature monitoring and needs to be delivered within a defined time frame. The shipping agent begins negotiations with the respective transport agents and chooses and books the most appropriate one. The selection is made by the agent on the basis of a number of weighted criteria such as pricing, delivery time guarantees, and quality of service.
Fig. 2. BPMN diagram of the application scenario
Once the selected truck has picked up the fruit, it is ready to begin the delivery process. The shipping agent first contacts the agent representing the sensor network in the truck and ensures the required sensors are present and operating satisfactorily. Once the sensor check has been passed, the shipping agent arranges for the deployment of a dedicated shelf life monitoring agent, developed and preconfigured by the food producer [11], which automatically connects to the sensor network. The shipping agent then acknowledges that everything is satisfactory and delivery may commence. The sensor network continuously monitors the shipping conditions and after some time detects an increase in temperature in the front part of the reefer where the crates with berries are located. The monitoring agent evaluates the situation as critical: its shelf life prediction based on the sensor readings suggests that unacceptable degradation of fruit quality will occur if the current transport plan is not changed. It contacts the shipping agent in order to find a replacement means of transport at a nearby transshipping point. The shipping agent renegotiates transport according to the process outlined above. A new means of transport is discovered and the handover negotiated. Alternatively, if no suitable transport can be found, the shipping agent may attempt to find a temporary storage facility providing cooling and temperature monitoring facilities. The process for discovering and negotiating warehouse storage space is analogous to that for acquiring a means of transport. The above autonomous logistics processes iterate until the goods are delivered at the customer's delivery address.
4. Findings

4.1. Scenario Analysis

On the basis of the application scenario, the data integration requirements were identified by first modeling the autonomous logistics processes exhibited by the scenario in BPMN (cf. Fig. 2), and then modeling and analyzing the communication which takes place between the individual actors. The next step was to map the communication between the individual actors to messages and events represented by standard data exchange formats used in transport logistics. The selection of exchange formats was guided by a market study which identified the most prominent exchange formats used in IT systems supporting transport logistics. Consequently, business-to-business communications and agent interactions were mapped to EDIFACT (Electronic Data Interchange For Administration, Commerce and Transport) EANCOM (EAN + Communication) messages, whilst information regarding the tracking and tracing of the material flow through the scenario was mapped to EPCIS (Electronic Product Code Information Service) events, as sketched below.
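By way of illustration, a minimal EPCIS object event for the crate of blueberries might look as follows. This is a hypothetical sketch: the identifiers and values are invented, and the enclosing EPCIS document structure is omitted.

<!-- Hypothetical EPCIS ObjectEvent; identifiers are illustrative only -->
<ObjectEvent>
  <eventTime>2009-07-15T10:30:00Z</eventTime>
  <eventTimeZoneOffset>+02:00</eventTimeZoneOffset>
  <epcList>
    <epc>urn:epc:id:sgtin:4012345.066666.123</epc>
  </epcList>
  <action>OBSERVE</action>
  <bizStep>urn:epcglobal:cbv:bizstep:shipping</bizStep>
  <disposition>urn:epcglobal:cbv:disp:in_transit</disposition>
  <readPoint>
    <id>urn:epc:id:sgln:4012345.00001.0</id>
  </readPoint>
</ObjectEvent>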
4.2. System Architecture

Fig. 3. Mediator Architecture Definition
The proposed system architecture illustrated in Fig. 3 follows the traditional pattern of a semantic mediator: besides the actual mediator component, which possesses a global ontology of the autonomous logistics processes of the application scenario, the wrapper components each contain extension ontologies which fully formalize
the data sources they are responsible for as semantic descriptions. Heterogeneity conflicts [21] are solved either by the mediator component itself or by the respective wrapper, depending on the type of conflict. Due to the application scenario and the selected data exchange formats, only very few of the conflicts described by [21] occur.

4.3. Semantic Data Descriptions: Ontology Specifications

The description of the application domain, as well as of the objects and data in this domain, was implemented as a global ontology. The system, however, is designed to support the use of multiple ontologies in order to leverage the modular flexibility and scalability of the mediator approach in further applications. The Web Ontology Language (OWL-DL) [16] was used for the specification of the ontology which describes the individual data exchange formats. OWL-DL was chosen for three reasons: first of all, it is used in the multi-agent system for the description of the domain of autonomously controlled logistics. Secondly, it was judged to be adequately expressive to cover the semantic description of both the standard exchange formats used in transport logistics and the overarching concepts of autonomous logistics processes. Finally, a number of Java libraries and reasoners are readily available for OWL-DL, which was expected to significantly accelerate the development of a prototypical implementation. The ontology used is a critical success factor of any semantic mediator. It has to reflect all the characteristics of the application domain and simultaneously has to be as simple and comprehensible as possible. Many existing ontologies might be taken into consideration for the semantic description of the entities in the given transport logistics scenario, such as those used in the fields of product lifecycle and data management, as exemplified by [18], [19] or [20]. However, none of these truly reflect the syntax and semantics of autonomous logistics processes whilst also encompassing the syntax and semantics of standard logistics data exchange formats. Consequently, as a first step, a new ontology was designed based on both the application scenario and the top-level ontology of the multi-agent system, which describes basic concepts of autonomous logistics processes. It can be extended by incorporating additional ontologies into the system. By using the OWL-DL features "owl:equivalentClass" and "owl:equivalentProperty", the possibility of linking the ontology to other existing ontologies is available.

4.4. Querying the Semantic Mediator

The query language SPARQL [12] is used at the query interface of the system. It was specifically developed for querying ontologies and thus provides an adequate basis for queries to the semantic mediator. However, SPARQL alone does not fulfil all of the requirements: it only offers the possibility to query the system, not to write to it, whereas agents representing autonomous objects also need to be able to create messages and data. To extend the functionality to support bidirectional queries, the SPARQL Update [13] language was used to extend the SPARQL query
language. This allows for the editing of ontologies with a syntax similar to SPARQL. A combination of both languages was specified and prototypically implemented as the query language of the semantic mediator.

4.5. Data Transformation in the Wrappers

The wrappers query data from the respective data sources and transform it via an internal format in order to enable the processing of data from heterogeneous data sources and formats. The transformation is carried out within the wrappers and is transparent to the actual mediator component. This allows for a complete abstraction from the data sources. Transformation in the wrappers is rule-based, described and implemented using the business rule management system Drools [14] (Drools - Business Logic Integration Platform). The use of Drools offers the possibility to react more quickly and flexibly to modifications of individual data sources. Should a change need to be made, only the rule files need to be updated. Modifications to the source code of the wrappers, with subsequent recompilation and deployment, can be avoided in this way.

4.6. System Test and Prototypical Implementation

In order to test the semantic mediator against the application scenario described above, all of the mediator components were prototypically implemented. The data sources described in the application scenario were also implemented. A repository was set up for both EPCIS and EANCOM. The Fosstrak EPCIS repository [15] was used to query and capture the EPCIS events. A minimal EANCOM messaging implementation was developed to relay the respective messages. These repositories formed the basis for the communication. Every message or event created is stored in the corresponding repository to support the validation of the system.
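The following sketches illustrate the mechanisms described in Sections 4.3 to 4.5; all IRIs, class, property and fact names are hypothetical and are not taken from the actual implementation. First, a linking axiom of the kind enabled by "owl:equivalentClass" and "owl:equivalentProperty" (Section 4.3), written in Turtle syntax:

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix med: <http://example.org/mediator#> .
@prefix ext: <http://example.org/external-plm#> .

# Align a mediator concept and property with an external ontology
med:TransportOrder owl:equivalentClass    ext:Order .
med:hasCarrier     owl:equivalentProperty ext:transportedBy .

A combined read/write interaction of the kind described in Section 4.4 might then look as follows (again a sketch with an invented vocabulary):

# Read access: retrieve shipments and their temperature readings
PREFIX med: <http://example.org/mediator#>
SELECT ?shipment ?temperature
WHERE {
  ?shipment a med:Shipment ;
            med:hasTemperatureReading ?temperature .
}

# Write access via SPARQL Update: record a new order message
PREFIX med: <http://example.org/mediator#>
INSERT DATA {
  med:order4711 a med:OrderMessage ;
                med:hasReceiver med:fruitProducer .
}

Finally, a wrapper transformation rule of the kind managed by Drools (Section 4.5) could be sketched as the following DRL rule, where the fact classes EancomMessage and InternalOrder are assumed helper types rather than part of any published API:

rule "Map EANCOM ORDERS message to internal order"
when
    $msg : EancomMessage( type == "ORDERS" )
then
    insert( new InternalOrder( $msg.getDocumentNumber(), $msg.getReceiver() ) );
end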
5. Conclusions and Outlook

Validation of the prototypical implementation against the application scenario demonstrated the applicability of the concept of the semantic mediator in the field of autonomous logistics processes. The semantic mediator proved capable of fulfilling the bidirectional data integration requirements set by the application scenario. For this scenario, the semantic mediator supports a decentralised approach to the storage of autonomous logistics data, consequently fulfilling the first criterion of the "information system" layer for autonomous cooperating processes [4] and thus contributing to the potential for autonomous control in the transport logistics sector. From a more practical point of view, one issue is the focus of current work: the extended SPARQL query interface of the semantic mediator is difficult to integrate with existing systems. A loosely-coupled approach is being
investigated which aims at providing a generic, service-based interface to the semantic mediator which can be easily accessed both by enterprise systems and by autonomous logistics objects represented, for example, by software agents. Also, it remains to be shown that the concept for a semantic mediator presented in this paper is adequate for more complex scenarios in autonomous logistics. First of all, the data expressed by the standard data exchange formats EPCIS and EDIFACT EANCOM are, by design, largely disjunct: the former deals mainly with tracking and tracing the logistics material flow, whilst the latter is concerned with communicating business-to-business messages. Consequently, the potential for heterogeneity conflicts in this scenario is quite limited. Future work will analyse whether the presented concept is suitable for more complex scenarios involving more data sources with a higher potential for conflict. Beyond the current focus on the domain of autonomous control in transport logistics, future work will be directed towards expanding the vocabulary of the mediator into other logistics domains, such as production and reverse logistics. This implies investigating existing ontologies in these areas and how they may be integrated into the current development. Here, one promising approach may be to adopt an ontology from the domain of product lifecycle management in order to link the individual autonomous transport logistics processes into the overall product lifecycle context. A further aspect to be taken into consideration in future work is the integration of dynamic data sources. More demanding scenarios of autonomous control in logistics rely heavily on the availability of reliable, real-time data from, for example, sensor networks. Whether and how the presented semantic mediator can be successfully applied to integrating such data will be the focus of future research.
6. Acknowledgements

This research was supported by the German Research Foundation (DFG) as part of the Collaborative Research Centre 637 "Autonomous Cooperating Logistic Processes".
7. References
[1] Böse, F., Windt, K., "Catalogue of Criteria for Autonomous Control in Logistics", in Hülsmann, M., Windt, K. (eds.), Understanding Autonomous Cooperation and Control in Logistics - The Impact on Management, Information and Communication and Material Flow, Springer, Berlin, 2007, pp. 57-72.
[2] Hans, C., Hribernik, K., Thoben, K.-D., "An Approach for the Integration of Data within Complex Logistics Systems", in Dynamics in Logistics: First International Conference, LDIC 2007, Proceedings, Springer, Heidelberg, 2007, pp. 381-389.
[3] Scholz-Reiter, B., Kolditz, J., Hildebrandt, T., "Specifying adaptive business processes within the production logistics domain - a new modelling concept and its challenges", in Hülsmann, M., Windt, K. (eds.), Understanding Autonomous Cooperation & Control in Logistics - The Impact on Management, Information and Communication and Material Flow, Springer, Berlin, 2007, pp. 275-301.
[4] Windt, K., Böse, F., Phillipp, T., "Criteria and Application of Autonomous Cooperating Logistic Processes", in Proceedings of the 3rd International Conference on Manufacturing Research - Advances in Manufacturing Technology and Management, Cranfield, 2005.
[5] Wache, H. (2003), Semantische Mediation für heterogene Informationsquellen, Akademische Verlagsgesellschaft Aka.
[6] Ullman, J.D., "Information integration using logical views", in Afrati, F.N., Kolaitis, P. (eds.), Proceedings of the 6th International Conference on Database Theory (ICDT '97), 1997.
[7] Jedermann, R., Gehrke, J.D., Becker, M., Behrens, C., Morales Kluge, E., Herzog, O., et al. (2007), "Transport Scenario for the Intelligent Container", in Hülsmann, M., Windt, K. (eds.), Understanding Autonomous Cooperation & Control in Logistics - The Impact on Management, Information and Communication and Material Flow, Springer, Berlin, pp. 393-404.
[8] Jedermann, R., Behrens, C., Laur, R., Lang, W., "Intelligent containers and sensor networks: approaches to apply autonomous cooperation on systems with limited resources", in Hülsmann, M., Windt, K. (eds.), Understanding Autonomous Cooperation & Control in Logistics - The Impact on Management, Information and Communication and Material Flow, Springer, Berlin, 2007, pp. 365-392.
[9] Jedermann, R., Edmond, J.P., Lang, W., "Shelf life prediction by intelligent RFID", in Haasis, H.D., Kreowski, H.J., Scholz-Reiter, B. (eds.), Dynamics in Logistics: First International Conference, LDIC 2007, Springer, Berlin Heidelberg, 2008, pp. 231-240.
[10] Gehrke, J.D., Ober-Blöbaum, C., "Multiagent-based Logistics Simulation with PlaSMA", in Koschke, R., Herzog, O., Rödiger, K.-H., Ronthaler, M. (eds.), Informatik 2007 - Informatik trifft Logistik, Band 1, Beiträge der 37. Jahrestagung der Gesellschaft für Informatik e.V. (GI), GI, Bonn, 2007, pp. 416-419.
[11] Jedermann, R., Behrens, C., Westphal, D., Lang, W., "Applying autonomous sensor systems in logistics: combining sensor networks, RFIDs and software agents", Sensors and Actuators A (Physical), 132(1), 2006, pp. 370-375.
[12] Prud'hommeaux, E., Seaborne, A. (2008, January 15), SPARQL Query Language for RDF. Retrieved 15 July 2009 from http://www.w3.org/TR/rdf-sparql-query/
[13] Seaborne, A., Manjunath, G., Bizer, C., Breslin, J., Das, S., Harris, S., et al. (2008, July 15), SPARQL Update: A language for updating RDF graphs. Retrieved 28 July 2009 from http://www.w3.org/Submission/SPARQL-Update/
[14] Drools Community Documentation, Drools Introduction and General User Guide 5.0.1 Final, JBoss Enterprise, 2009. Retrieved 07.12.09 from http://downloads.jboss.com/drools/docs/5.0.1.26597.FINAL/droolsintroduction/html_single/index.html
[15] Roduner, C., Steybe, M., Fosstrak EPCIS, ETH Zurich & University of St. Gallen. Retrieved 07.12.09 from http://www.fosstrak.org/epcis/index.html
[16] Smith, M.K., Welty, C., McGuinness, D.L. (2004, February 10), OWL Web Ontology Language Guide. Retrieved 15 July 2009 from http://www.w3.org/TR/owl-guide/
[17] Wiederhold, G. (1992), "Mediators in the Architecture of Future Information Systems", Computer, pp. 38-49.
[18] Terzi, S. (2005), Elements of Product Lifecycle Management: Definitions, Open Issues and Reference Models, PhD thesis, University Henri Poincaré Nancy 1 and Politecnico di Milano, May.
[19] Tursi, A. (2009), Ontology-approach for product-driven interoperability of enterprise production systems, PhD thesis, University Henri Poincaré Nancy 1 and Politecnico di Bari, November.
[20] Lee, J., Chae, H., Kim, C.-H., Kim, K. (2009), "Design of product ontology architecture for collaborative enterprises", Expert Systems with Applications, 36(2), Part 1, pp. 2300-2309.
[21] Wache, H., Vögele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., Hübner, S. (2001), "Ontology-Based Integration of Information - A Survey of Existing Approaches".
An Ontology Proposal for Resilient Multi-plant Networks

Rubén Darío Franco1, Guillermo Prats1, Rubén de Juan-Marín2

1 Centro de Investigación, Gestión e Ingeniería de Producción, Univ. Politécnica de Valencia, 46022 Valencia, Spain
2 Instituto Tecnológico de Informática, Univ. Politécnica de Valencia, 46022 Valencia, Spain
Abstract. This paper presents an ontology proposal for the REMPLANET FP7 project, which aims at developing methods, guidelines and tools for the implementation of Resilient Multi-Plant Networks in non-hierarchical manufacturing networks characterized by non-centralized decision making. Due to the structural heterogeneity of the REMPLANET integration scenarios, an ontological approach is a first key challenge to be addressed. Consequently, one of the main contributions of this paper is to provide an initial ontological approach intended to harmonize domain concepts and their relationships. Keywords: Ontology engineering, Resilient multi-plant networks, Interoperability.
1. Introduction
Nowadays, collaboration among organizations is increasing and becoming critical to profiting from ever-evolving market needs. In manufacturing and distribution networks, the traditional concept of static Supply Chains is shifting towards a broader one, in which they are defined as the operations management of heterogeneous, dynamic and geographically distributed partners. This new environment is making enterprises change the way they manage their trading relationships. Interoperability is a critical issue to take into consideration when such collaboration needs appear, due to the importance of information exchange and distributed coordination. The European project REMPLANET is intended to create methods and tools supporting resilient organizational models. In doing so, a multi-disciplinary approach will be followed, and defining a unified and commonly agreed understanding of the main
concepts and their relationships is considered a first critical step for project partners. Collaboration among project members is crucial to achieving the REMPLANET objectives. Since entities from several countries will participate in the project, it is necessary to define, in a unique way, the different concepts that will be used in the project domain, in order to facilitate communication between them. Data interoperability between members will be essential to the success of the project objectives, as it will facilitate understanding between members from different countries. This paper presents an ontology proposal, understood as a unique form of representing knowledge applied in various domains, in order to provide a global view of the project scenario. Its main objective is to unify the REMPLANET concepts and their relations for all the partners involved in the project. In this way, interoperability problems identified in inter-organizational environments, such as conceptual barriers, can be avoided. The ontology will help partners understand how concepts are interrelated and will also facilitate information sharing. The document is structured as follows. In Section 2, the REMPLANET project is introduced. In Section 3, a short introduction to basic concepts and definitions is given. In Section 4, the ontology for the REMPLANET project is described. In Section 5, conclusions and next steps are envisioned.
2. The REMPLANET Project
The REMPLANET project is funded by the European Union under the Seventh Framework Programme. The main concept of the project is to develop methods, guidelines and tools for the implementation of the Resilient Multi-Plant Networks Model in non-hierarchical manufacturing networks characterized by non-centralized decision making. Resilience is defined as the capacity to adapt to major internal disruptions. The project defines a resilient organization by its capacity to respond rapidly to unforeseen change, even chaotic disruption: the ability to bounce back, and in fact to bounce forward, with speed, determination and precision. A resilient organization effectively aligns its strategy, operations, management systems, governance structure and decision-support capabilities so that it can uncover and adjust to continually changing risks, endure disruptions to its primary earnings drivers, and create advantages over less adaptive competitors. A resilient organization can be understood from two points of view: operational resilience and strategic resilience. Operational resilience is defined as the ability to respond to the ups and downs of the business cycle, or to quickly rebalance the product-service mix, processes and supply chain, by bolstering enterprise agility and flexibility in the face of changing environments. Strategic resilience, on the other hand, is defined as the ability to dynamically reinvent business models and strategies as circumstances change. It is not about responding to a one-time crisis, or just having a flexible supply chain; it is about continuously anticipating and adjusting to discontinuities that can permanently impair the value proposition of a core business. Strategic resilience refers, therefore,
to a capacity for continuous reconstruction. It requires innovation with respect to those organizational values, processes and behaviors that systematically favor perpetuation over innovation, renewal being the natural consequence of an organization's innate strategic resilience. In terms of operational resilience, a key issue faced by organizations today is the challenge of delivering products matching the needs of individual customers, in different geographical markets, at any time, preferably individually customized, and as cheaply and as quickly as possible. The mass customization strategy has been suggested as a way to address the challenge of providing individual products with mass production efficiency. In terms of strategic resilience, on the other hand, the project will take the perspective of open innovation [1]. It has been shown that both the efficiency and the effectiveness of innovation can be increased dramatically when innovation is seen not as a closed process conducted within one firm, but as an open activity within a network of loosely coupled actors, including the users of the product or service.
3. Basic Concepts and Definitions
The information exchange needs of the REMPLANET actors will be critical to achieving the resilience objectives. Given that the REMPLANET integration scenarios will be geographically distributed and socio-cultural divergences may appear, removing such conceptual barriers will be an important problem to deal with. In doing so, ontologies have been identified in IDEAS [2] as one of the three domains that tackle interoperability problems. As stated before, the main objective of this paper is to provide an ontology proposal in order to avoid data interoperability problems between REMPLANET partners.
3.1. Interoperability
Interoperability goes far beyond the simple technical problems of computer hardware and software; it encompasses the broad but precise identification of barriers that concern not only data and services but also processes and business [3]. In the context of Multi-Plant Networks, interoperability between partners plays an important role in achieving success. Interoperability is defined as the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application and data compatibility layers [4]. Other authors state that an interoperability problem appears when two or more heterogeneous resources are put together; interoperability per se is the paradigm within which such problems occur [5]. Finally, definitions of interoperability have been reviewed in [6], [7]. A group of barriers that prevent interoperability among enterprises has been identified. These barriers appear at diverse enterprise levels and can be understood as incompatibilities that obstruct information sharing and prevent the exchange of services. They are common to all enterprises, whatever their sector of activity and size. Developing interoperability means developing knowledge and solutions to remove these incompatibilities [8].
Fig. 1 shows the framework developed within the INTEROP Network of Excellence in order to better understand the problem and the vision of interoperability. As shown, three groups of interoperability barriers have been identified [9]:
• Conceptual barriers,
• Technological barriers, and
• Organizational barriers.
Fig. 1. Enterprise interoperability framework (three basic dimensions)
Generally, conceptual barriers are defined as syntactic and semantic differences in the information exchanged. One solution to conceptual problems is to develop an ontology that defines the terminology to be used between partners. This paper centers its efforts on solving the conceptual barriers of the REMPLANET project.
3.2. Ontology
As mentioned above, interoperability barriers can be divided into three main groups. One solution identified in the literature to overcome conceptual barriers is to develop an ontology for the domain being considered. An ontology is a unique form of representing knowledge applied in various domains; formally, an ontology is an explicit specification of a shared conceptualization [10]. The main advantage of having such a formal specification is that it facilitates knowledge sharing and re-use among the various parties interested in that particular domain of knowledge. In the context of a European project, having a standardized ontology will enhance understanding between partners. Using the ontology to define the project concepts will result in reusable, easy-to-integrate knowledge bases.
3.3. Ontology Construction Methodology
The ontology was developed using the Protégé software. Protégé is a free, open-source tool developed at Stanford University for building ontologies and knowledge-based systems. To define the domain and scope of the ontology, several basic questions were answered:
• What is the domain that the ontology will cover?
• For what are we going to use the ontology?
• For what types of questions should the information in the ontology provide answers?
• Who will use and maintain the ontology?
One way to determine the scope of the ontology is to sketch a list of questions that a knowledge base built on the ontology should be able to answer: competency questions [11]. To develop the ontology, a list of terms was first written down. Initially, it was important to get a comprehensive list of terms without worrying about overlap between the concepts they represent, relations among the terms, any properties the concepts may have, or whether the concepts are classes or slots. The next two steps were developing the class hierarchy and defining the properties of the concepts. There are several possible approaches to developing a class hierarchy [12]: top-down, bottom-up and a combination of both. The ontology was built by defining classes, subclasses, properties and instances representing the REMPLANET environment using the combination approach; an illustrative sketch of this construction style is given below.
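For illustration only, the following minimal sketch shows how such a class hierarchy and a property could be encoded programmatically, assuming the Python library owlready2; the actual ontology was built interactively in Protégé, and the IRI, class and property names below are assumptions made for the example, not the project's agreed vocabulary.

# A minimal sketch, assuming owlready2; every IRI and name is illustrative.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/remplanet.owl")  # hypothetical IRI

with onto:
    # Top-down: broad classes first...
    class Organization(Thing): pass
    class Network(Thing): pass

    # ...bottom-up: specific terms from the term list are then attached
    # to the hierarchy (the combination approach described above).
    class ResilientOrganization(Organization): pass
    class CompanyNetwork(Network): pass
    class GroupNetwork(Network): pass
    class ExternalNetwork(Network): pass

    # A property ("slot") relating concepts across the hierarchy.
    class isFormedBy(ObjectProperty):
        domain = [ResilientOrganization]
        range = [Network]

onto.save(file="remplanet.owl", format="rdfxml")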
4. REMPLANET Ontology Proposal
In dynamic environments, where conditions are constantly changing, rejecting business opportunities due to a lack of flexibility could mean the end of the business. To avoid this scenario, REMPLANET will develop methods, guidelines and tools for the implementation of the Resilient Multi-Plant Networks Model in non-hierarchical manufacturing networks characterized by non-centralized decision making. Next, the ontology proposal is shown and briefly described. The ontology has been divided into three simpler views in order to facilitate understanding: the resilient organization, the members and the ICT Platform. Fig. 2 shows the view of a Resilient Organization. It can be formed by the company network, the group network (in case the company is a member of a group) and the required external networks without current links with the company, for the materialization of the new product-service/process collaborative co-design. From an organizational point of view, the "resilience" concept has two perspectives: operational resilience and strategic resilience.
All members of the REMPLANET scenario will be connected to an ICT Platform, with the aim of reacting to business opportunities faster than the competition and of supporting automated processes.
Fig. 2. Resilient Organization View
Each individual network can be formed by several organizations forming their own supply chain. These members can be manufacturing plants, distributor companies and logistics operators, among others. Organizations have to be resilient in the sense that they must adapt their capabilities not only in episodic or crisis-driven situations but continuously, creating more advantages than their competitors. Given this flexibility, more customers and business opportunities can be satisfied. Each member can play different roles in the REMPLANET scenario, such as coordinator, partner and supplier. Fig. 3 shows the relation between the members and the resilient organization.
Fig. 3. Members of REMPLANET Scenario
The REMPLANET scenario will offer its network the possibility of using an ICT Platform in order to facilitate communication. This platform is composed of various modules: EBPM (Extended Business Process Management), SOP (Service Oriented
Platform), a Decision Support System, an Open Innovation module and a Simulation Tool. As stated before, Information and Communication Technologies (ICT) play an important role in distributed scenarios. In this context, the ICT Platform should be a basis for non-centralized decision making and has to integrate simulation and optimization decision-support software to evaluate alternative product-process-supply network resilient configurations for several customized demand scenarios and global manufacturing network conditions. Every future supply network member is interconnected through the ICT platform for efficient real-time collaborative planning/scheduling execution. The ICT platform (Fig. 4) incorporates interoperability functionalities to facilitate the integration of the supply network members' systems and to allow each new member a fast connection to the network. The Service-Oriented Platform for Extended Business Process Management (SOP4EBPM) will support collaboration activities between REMPLANET scenario members. Service-Oriented Architecture (SOA) [13] is a software architecture concept that defines the use of services as the basic constructs for application development. This architecture allows enterprises to improve their flexibility by decoupling business process logic from IT implementations.
Fig. 4. ICT Platform View
Simulation and optimization decision-support software will be used to evaluate alternative product-process-supply network resilient configurations for several customised demand scenarios and global manufacturing network conditions. Relevant Key Performance Indicators (KPIs), such as delivery time, cost, agility and flexibility metrics, will be calculated to support the final decision on the future network configuration responsible for placing the new product-service on the market.
The ICT Platform will allow partners to achieve strategies such as mass customization, which customers are demanding more every day. The Open Innovation Platform will support members in achieving open innovation: as noted in Section 2, both the efficiency and the effectiveness of innovation can be increased dramatically when innovation is seen not as a closed process conducted within one firm, but as an open activity within a network of loosely coupled actors, including the users of the product or service. This platform is intended to enhance the implementation and continuous improvement of a mass customization system, utilizing inputs for these processes from a wide non-hierarchical network of external partners. Finally, Fig. 5 shows the global view of the proposed ontology. Concepts are logically related in order to facilitate communication between project partners.
Fig. 5. Global Ontology View
As shown, the REMPLANET scenario can be formed by a company network, a group network (in case the company is a member of a group) and external networks. These networks are made up of different classes of enterprises from different sectors (manufacturing, logistics…), but all of them have in common that they are connected to the ICT Platform that the REMPLANET scenario provides, in order to achieve flexibility and preparedness for environmental changes. Customers will also be connected to the ICT Platform, through its Open Innovation Platform, due to the importance of integrating customers in the innovation process. This module also supports the concept of mass customization. Enterprises need
specific competencies in order to succeed at mass customization. Open innovation, also supported by the Open Innovation Platform, will enable companies to co-create and learn in a horizontal network of other companies facing the same task.
4.1. Glossary
Next, important concepts taken into account in this paper are defined:
EBPM: an extended business process management model supports business activity integration across enterprises irrespective of the technical implementation of the business activity, the electronic data standards and the partner contracts used.
Open Innovation: a system where innovation is not solely performed internally within a firm, but in a cooperative mode with other external actors. Open innovation is characterized by cooperation for innovation within wide horizontal and vertical networks of customers, start-ups, suppliers and competitors.
Decision Support Systems: a class of computer-based information systems, including knowledge-based systems, that support decision-making activities.
Mass Customization: producing goods and services to meet individual customers' needs with near mass production efficiency.
Open Innovation Platform: empowers innovation and increases its effectiveness throughout the supply chain.
Service Oriented Platform: the SOP concept offers an application platform as a network service (inside or outside an enterprise firewall), providing a centralized runtime to execute and manage distributed or centralized applications.
Business Opportunity: a time or occasion with a favorable combination of circumstances that is suitable for starting a business.
Market Opportunity: a newly identified need, want or demand trend that a firm can exploit because it is not being addressed by competitors.
Company Network: provides an inter-firm relationship among suppliers and users which gives them the advantages associated with size while remaining small.
External Network: the group of companies without current links with the company.
Group Network: the group of companies to which the specific company belongs.
An illustrative instantiation of a few of these concepts is sketched below.
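Purely as an illustration of how such glossary concepts could be instantiated and interrogated (recalling the competency questions of Section 3.3), the following sketch, again assuming owlready2, creates a few hypothetical individuals; all names are invented for the example and are not part of the project ontology.

# Illustrative only: hypothetical individuals and a competency-question check.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/remplanet-instances.owl")  # hypothetical

with onto:
    class Network(Thing): pass
    class ICTPlatform(Thing): pass
    class isConnectedTo(ObjectProperty):
        domain = [Network]
        range = [ICTPlatform]

platform = onto.ICTPlatform("remplanet_platform")
company_net = onto.Network("company_network_A")
group_net = onto.Network("group_network_B")
company_net.isConnectedTo = [platform]
group_net.isConnectedTo = [platform]

# Competency question: which networks are connected to the ICT Platform?
connected = [n.name for n in onto.Network.instances()
             if platform in n.isConnectedTo]
print(connected)  # ['company_network_A', 'group_network_B']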
5. Conclusions
In this paper we have introduced an ontology proposal for understanding the interrelations of concepts within the REMPLANET scope. As we pointed out, it is a useful approach for capturing and modelling, in a comprehensive way, the concepts that will appear in the project domain. It also allows the knowledge to be reused, shared and enriched with more knowledge using templates and automated procedures. At this level of development, three main ontological concepts can be identified: the Resilient Organization, the ICT Platform, and the Members and their related processes. Despite this initial contribution, we still believe that some concepts should be further discussed. For example, the concept of customer can be understood as an
external customer; but can a member be an internal customer? In the same way, can an order be an internal order, or can it only be an external order? Next, the REMPLANET members will need to discuss this ontology proposal. It will be transferred to each member, and the ontology will be detailed with the additional concepts relevant to the project's development. Finally, we mention that the ontology was encoded in OWL, using the Protégé framework.
6. Acknowledgements The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° NMP2-SL-2009-22933
7. References
[1] H. Chesbrough, Open Innovation: The New Imperative for Creating and Profiting from Technology, Harvard Business School Press, 2003.
[2] IDEAS, "Interoperability Developments for Enterprise Application and Software roadmaps," http://www.ideas-roadmap.net, 2005.
[3] INTEROP, "Deliverable DI.1b: Interoperability Knowledge corpus," IST-508011: Interoperability Research for Networked Enterprises Applications and Software (INTEROP), 2006.
[4] R. Poler, J.V. Tomás and P. Velardi, "Interoperability Glossary," Deliverable 10.1, INTEROP Network of Excellence, 2005.
[5] Y. Naudet, T. Latour, K. Hausmann, S. Abels, A. Hahn and P. Johannesson, "Describing Interoperability: the OoI Ontology," in Workshop on Enterprise Modelling and Ontologies for Interoperability (EMOI-INTEROP), 2006.
[6] D. Chen and F. Vernadat, "Enterprise Interoperability: A Standardisation View," in Enterprise Inter- and Intra-Organizational Integration, K. Kosanke et al. (eds.), Kluwer Academic Publishers, 2002, pp. 273-282.
[7] D. Chen and F. Vernadat, "Standards on enterprise integration and engineering - A state of the art," International Journal of Computer Integrated Manufacturing (IJCIM), vol. 17(3), 2004, pp. 235-253.
[8] D. Chen and N. Daclin, "Framework for Enterprise Interoperability," IFAC EI2N, Bordeaux, 2006.
[9] D. Chen, "Enterprise Interoperability Framework," INTEROP Network of Excellence, 2006.
[10] T.R. Gruber, "A Translation Approach to Portable Ontology Specifications," Knowledge Acquisition, vol. 5, 1993, pp. 199-220.
[11] M. Gruninger and M.S. Fox, "Methodology for the design and evaluation of ontologies," in Proceedings of the Workshop on Basic Ontological Issues in Knowledge Sharing, IJCAI-95, 1995.
[12] M. Uschold and M. Gruninger, "Ontologies: Principles, methods and applications," Knowledge Engineering Review, vol. 11, 1996.
[13] N. Bieberstein, S. Bose, M. Fiammante, K. Jones and R. Shah, Service-Oriented Architecture (SOA) Compass, IBM Press, 2006.
Collaboration Knowledge Ontologies to Support Knowledge Management and Sharing in Virtual Organisations

Muqi Wulan1, Xiaojun Dai2, Keith Popplewell1

1 Faculty of Engineering and Computing, Coventry University, Coventry, CV1 5FB, UK
2 Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
Abstract. This paper focuses on representing collaboration knowledge in the context of a Virtual Organisation (VO). Knowledge representation embodies the characteristics of a VO through its incubation, formation, operation and termination. It is necessary to scope and identify all the collaboration knowledge utilised in a VO's life cycle. The classification of collaboration knowledge, and of knowledge repositories, is described in terms of three different types of organisation: individual enterprises, collaboration pools and VOs. Collaboration knowledge therefore falls into three main categories: enterprise knowledge, collaboration pool knowledge and VO knowledge. These are later elucidated in an ontology, in which the terms and concepts are defined so as to be consistent with each other. Keywords: knowledge management, knowledge sharing, ontology, collaboration, VO
1. Introduction
Virtual Organisations (VOs) emerge as a response to fast-changing markets. A VO is a temporary business alliance of independent enterprises based on network technology [1]. These enterprises collaborate to complement each other's core competences and resources in order to generate new solutions for a common business objective which may not be achievable by a single enterprise. Being virtual provides a vision appropriate not only for large businesses but also for small and medium enterprises (SMEs). Knowledge management has already become one of the most indispensable strategies for an organisation or an enterprise to maintain its competitive advantages and core competences over its rivals [2]. Being a new organisational form, VOs have similar requirements and face even more complicated issues in knowledge management. Compared to that in a single
enterprise, knowledge management within a VO involves more perspectives and more levels, since several different forms and sizes of organisation act and interact during a VO's life cycle. At the same time, being virtual brings new challenges in how to effectively manage virtual resources along with physical and human capital [3]. On the other hand, knowledge management in a VO is typified by the need to both share and protect knowledge. Sharing of knowledge between enterprise partners is critical to the success of any network organisation, whilst commercially valuable expertise and IPR possessed by the individual enterprise should be protected in a secure way. From the individual enterprise's point of view, it may share knowledge either within the enterprise or with external collaborating enterprises [4]. Understandably, there are notable differences between approaches to internal and external knowledge sharing. This paper addresses the requirements for an ontology of the VO and related knowledge:
(1) Firstly, at the macro level it is essential to identify the categories of organisation that occur in VO collaboration, and how these organisations behave in the process of forming, operating and terminating the VO.
(2) Through analysis of each organisation category's characteristics, the knowledge which needs to be utilised and managed in a VO can be linked to the category of organisation. In this context the knowledge is called collaboration knowledge. In order to share collaboration knowledge in the activities of virtual collaboration, it is structured and represented in an ontology.
(3) A related paper [5] addresses the issues of a service architecture to support knowledge management and sharing in the VO.
2. Evolution of Collaboration
We summarise a VO's life cycle into four stages: pre-creation, creation, operation and termination. Since a VO is founded by individual enterprises to address a specific business opportunity (BO), initially there must be a number of candidate enterprises joining together to explore potential collaboration. This is the pre-creation stage, where the infrastructure is set up for enterprises to register, publish their capabilities, look for possible collaborative partners and find BOs. Having identified a BO worth further development, in the second stage (creation) the enterprise partners start to investigate and assess the possibility of establishing a VO from financial and technical perspectives. As a result, a legal agreement for the VO may be reached, with a clear definition of collaboration objectives and business partners. The VO is now brought into being and its life cycle enters the operation stage. During this phase the identified partners contribute to the actual execution of the VO tasks by executing pre-defined business processes. This phase also features monitoring of performance and enforcement of policies in the VO. When the objectives of the VO have been fulfilled it must be dissolved: closing the infrastructure, archiving VO-related documents and information, evaluating the collaboration for the evolution of new knowledge, and rating all
participating enterprises. These key activities are carried out in the last phase of the VO's life cycle: termination. As to the categories of organisation in a VO, the VO itself is clearly one category. It is also not difficult to identify that the individual enterprises constitute another distinct category: they are physical entities having their own capabilities and core competences relevant to a collaboration. Although highly developed IT infrastructures make the realisation of VOs possible, the "traditional" issues of trust, security and profit sharing among partners still remain at the heart of collaboration. Trust has been found to be the main inhibiting factor in creating a collaborative environment like the VO, and trust building between partners is an essential requirement for the development and smooth working of a VO [6]. The VO breeding environment (VBE) was proposed to foster the successful generation of VOs, especially for long-term collaboration [7]. Hence, in the early phases of a VO's life cycle, we can identify a third category of organisation working towards creating a VO. In the context of this research we name it a Collaboration Pool (CPool): a setting in which the activities, culture and policies of different organisations can be identified and which helps these organisations build up common understanding. The concept of a collaboration pool certainly includes the VBE, but also embraces a wider range of possibly less formal associations, with or without formal brokers or managers. Compared to VOs and individual enterprises, a collaboration pool is loosely coupled.
3. Collaboration Knowledge and Knowledge Repositories
Collaboration knowledge relating to a VO is also structured by these organisational categories, and can be structured and implemented in a knowledge repository. The knowledge repository, or organisational memory, is an ICT system that systematically captures, organises and analyses the knowledge assets of an organisation. It is a collaborative system in which people can search, browse, retrieve and preserve organisational knowledge in order to facilitate collaborative working inside an enterprise or with enterprise partners. Frequently the structure of one organisation bears almost no resemblance to the others' in vision, activity, human resources, management strategy and communication methods. Because we have identified three different categories of organisation (individual enterprises, VOs and CPools), collaboration knowledge in a VO consists respectively of three main parts: enterprise knowledge (EK), collaboration pool knowledge (CPK) and VO knowledge (VOK). The collaboration knowledge in each category of organisation is collected and extracted by analysing business behaviours and activities, and modelled into each organisation's knowledge repository. Ontology has been widely accepted as a formal and explicit representation of a shared conceptualisation, which can be used and reused in order to facilitate comprehension of concepts and relationships in a given domain, and communication between different domain actors, by making the domain assumptions explicit [8]. Since, for a VO, collaboration partners might come from different fields or follow different philosophies, it is necessary to introduce a mechanism to share common understanding of knowledge and to agree on a
controlled vocabulary used to communicate. In the following subsections the three categories of collaboration knowledge are described.
3.1. Approach Adopted in Knowledge Representation
In developing an ontology of collaboration knowledge on which to base web services for knowledge management and sharing in collaboration [5], the ontology language OWL DL (RDFS) [9] and the standard OWL 1.0 [10] are used to represent the ontologies in the three main categories: Enterprise Knowledge (EK), Collaboration Pool Knowledge (CPK) and Virtual Organisation Knowledge (VOK). This assures consistency in terminology. The ontology development tool Protégé and the plugin Graphviz are used to define and develop the knowledge ontology. A four-phase approach is employed: a preparatory phase, an anchoring phase, iterative improvement and an application phase.
3.2. Enterprise Knowledge Ontology
By describing shared concepts and relationships within an enterprise, the enterprise ontology (EO) provides an overview of an enterprise, characterising the resources/knowledge possessed and how they are used. Previously developed EOs, such as AIAI [11] and TOVE [12], focus on generic models and enterprises' general business activity, independent of a particular application domain or sector. However, from examination of these enterprise ontology proposals, it is difficult to define a common ontology for enterprises which can accommodate the terminology of different industry sectors. In the EO developed here, the top level is generic and applicable to more than one domain, while the detail level deals with domain-specific terminology, for example for the manufacturing sector. This EO is adapted from the Factory Data Model (FDM) [13], which defined the terms and their relationships within the manufacturing sector. The top-level ontology of this EO for the manufacturing domain is depicted in Fig. 1, where six basic classes are defined. Resource includes the human resources, manufacturing machinery and tooling, computing facilities etc. needed to carry out a process. Strategy includes all the knowledge and methods used to make decisions throughout the enterprise. Process includes both manufacturing and business processes. Facility includes organisational units such as factory, shop and cell, as well as administrative, commercial and decisional units. Flow objects connect independent processes into a system with a purpose. Token represents business objects (products, information, documents etc.) created and used by processes and transferred through flows.
Fig. 1. Top level of an Enterprise Ontology
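As a purely illustrative sketch, the six top-level classes of Fig. 1 and the relations described in the following paragraph might be encoded as follows (owlready2 assumed); the property names are ours and are not the FDM's actual terms.

# Minimal sketch of the six top-level EO classes of Fig. 1; names illustrative.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/enterprise-ontology.owl")  # hypothetical

with onto:
    class Resource(Thing): pass
    class Strategy(Thing): pass
    class Process(Thing): pass
    class Facility(Thing): pass
    class Flow(Thing): pass
    class Token(Thing): pass

    # Strategy controls Process and allocates Resource; Flow links
    # Processes and carries Tokens (business objects).
    class controls(ObjectProperty):
        domain = [Strategy]; range = [Process]
    class allocates(ObjectProperty):
        domain = [Strategy]; range = [Resource]
    class links(ObjectProperty):
        domain = [Flow]; range = [Process]
    class carries(ObjectProperty):
        domain = [Flow]; range = [Token]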
The Facility can be an enterprise having Strategy, Resource and Process to plan and implement its business objective. The enterprise's Strategy controls Process and allocates Resource. Token represents the enterprise's business objects, which are subject to the Flow class linking the Processes.
3.3. Collaboration Pool Knowledge Ontology
A collaboration pool (CPool) consists of potential collaboration partners: enterprises willing to cooperate with each other and exploit a promising business opportunity. Inside a CPool, the members exchange a certain amount of knowledge and information, which may be a part of each enterprise's own knowledge together with CP Knowledge, relevant only to the CPool as a whole rather than to individual members. The purpose of a CPool is to let each member's core competences be known within the CPool and to help build up better trust among members. In order to maintain a common understanding of CP Knowledge we define a CP ontology. The basic classes and their relationships in the CP ontology are illustrated in Fig. 2. A CollaborationPool consists of CPMembers who intend to identify a BusinessOpportunity. The CollaborationPool has CPAsset for managing and running the CollaborationPool. Each CPMember has a CPMemberProfile and has at least a CPRole in order to be included in a CPTask. A CollaborationPool aims to breed a VirtualOrganisation in response to a certain BusinessOpportunity. A VirtualOrganisation has VOPartners. Both CPMember and VOPartner can be Enterprises.
Fig. 2. Basic classes and relationships in a Collaboration Pool Ontology
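For illustration, the Fig. 2 classes and relationships might be encoded as follows (owlready2 assumed); the property names are guesses from the text, and modelling CPMember and VOPartner as subclasses of Enterprise is a simplification of "can be", since the original ontologies were authored in Protégé using OWL DL.

# Illustrative encoding of the Fig. 2 classes; all names are assumptions.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/cpool.owl")  # hypothetical IRI

with onto:
    class Enterprise(Thing): pass
    class CollaborationPool(Thing): pass
    class CPMember(Enterprise): pass    # simplification of "can be an Enterprise"
    class CPMemberProfile(Thing): pass
    class CPRole(Thing): pass
    class CPAsset(Thing): pass
    class BusinessOpportunity(Thing): pass
    class VirtualOrganisation(Thing): pass
    class VOPartner(Enterprise): pass   # simplification of "can be an Enterprise"

    class consistsOf(ObjectProperty):
        domain = [CollaborationPool]; range = [CPMember]
    class hasAsset(ObjectProperty):
        domain = [CollaborationPool]; range = [CPAsset]
    class hasProfile(ObjectProperty):
        domain = [CPMember]; range = [CPMemberProfile]
    class hasRole(ObjectProperty):
        domain = [CPMember]; range = [CPRole]
    class breeds(ObjectProperty):
        domain = [CollaborationPool]; range = [VirtualOrganisation]
    class respondsTo(ObjectProperty):
        domain = [VirtualOrganisation]; range = [BusinessOpportunity]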
3.4. VO Knowledge Ontology
Fig. 3. Basic classes and their relations in a VO Ontology
A VO is formed when members of a CPool have identified a BO and respond to it. A VO is a short-term association with the specific goal of acquiring and fulfilling a BO [8]. However, despite the short-term nature of the VO, to operate successfully the business partners in a VO must share knowledge and information to a significant degree. Knowledge about each individual partner is enterprise knowledge. This may be available to other partners in the VO, subject to access control restrictions protecting confidentiality and critical IPR. In addition there is a body of knowledge
which is relevant to the VO as a whole, regarding the structural and operational aspects of the VO. The VO ontology therefore describes the terminology of VO Knowledge. The basic classes and their relations of a VO are presented in Fig. 3.
4. Conclusions
The paper has discussed the issues of knowledge management for VOs from the perspective of knowledge ontologies for the evolving categories of organisation involved in VO collaboration. In the analysis of a VO's life cycle, three different types of organisation were identified as working within the context of a VO: individual enterprises, collaboration pools and virtual organisations. Thus the collaboration knowledge contributing to a VO can be classified into three categories and represented in an ontology. This ontology can in turn be used to structure a knowledge repository comprising knowledge bases for individual enterprises (EKB), collaboration pools (CPKB) and virtual organisations (VOKB). This forms the foundation for the development of a software service architecture supporting the population, application and evolution of knowledge in these knowledge bases, reflecting the developing experience of partners in collaboration, as reported in a related paper [5].
5. Acknowledgements This work is fully supported by the European Union funded 7th framework research project – Supporting Highly Adaptive Network Enterprise Collaboration Through Semantically-Enabled Knowledge Services (SYNERGY).
6. References
[1] Luczak H, Hauser A, (2005) Knowledge management in virtual organizations, Proceedings of the International Conference on Services Systems and Services Management, vol. 2, 898-902
[2] Geisler E, Wickramasinghe N, (2009) Principles of knowledge management: theory, practices and cases, M.E. Sharpe
[3] Klobas J, Jackson P, (2008) Being virtual: knowledge management and transformation of the distributed organisation, Physica-Verlag
[4] Kess P, Phusavat K, Torkko M, Takala J, (2008) External knowledge: sharing and transfer for outsourcing relationships, International Journal of Business and Systems Research, vol. 2, no. 2, 196-213
[5] Wulan M, Dai X, Popplewell K, (2010) Collaboration knowledge management and sharing services to support a Virtual Organisation, accepted by the International Conference on Interoperability for Enterprise Software and Applications, Coventry, UK
[6] Ellman S, Eschenbaecher J, (2005) Collaborative network models: Overview and functional requirements, in Virtual Enterprise Integration: Technological and Organizational Perspectives, eds. G. Putnik, M.M. Cunha, Idea Group Inc., 102-103
[7] Camarinha-Matos L, (ed), (2004) Virtual Enterprise and Collaborative Networks, Kluwer Academic Publishers, USA
[8] Plisson J, Ljubic P, Mozetic I, Lavrac N, (2007) An ontology for virtual organisation breeding environments, IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, vol. 37, no. 6, 1327-1341
[9] W3C, (2004a) OWL/RDFS: Section 1.4 of OWL Web Ontology Language Guide, available at http://www.w3.org/TR/2004/REC-owl-guide-20040210/
[10] W3C, (2004b) OWL 1.0 Standard: OWL Web Ontology Language Guide, available at http://www.w3.org/TR/2004/REC-owl-guide-20040210/
[11] Uschold M, King M, Moralee S, Zorgios Y, (1998) The Enterprise Ontology, The Knowledge Engineering Review, vol. 13, no. 1, 31-89
[12] Fox MS, (1992) The TOVE Project: A Common-sense Model of the Enterprise, in Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Belli F. and Radermacher F.J. (eds.), Lecture Notes in Artificial Intelligence 604, Berlin: Springer-Verlag, 25-34
[13] Harding J, Yu B, Popplewell K, (1999) Organisation and functional views of a small manufacturing enterprise, International Journal of Business Performance Management, vol. 1, no. 3, 338-352
Mediation Information System Engineering for Interoperability Support in Crisis Management

Sébastien Truptil1, Frédérick Bénaben1, Nicolas Salatgé2, Chihab Hanachi3, Vincent Chapurlat4, Jean-Paul Pignon5 and Hervé Pingaud1

1 Centre de Génie Industriel – Université de Toulouse, Mines Albi – Campus Jarlard, Route de Teillet, 81013 ALBI Cedex 09 (FR)
2 PEtALS Link (formerly EBM WebSourcing) – 4, Rue Amélie, 31000 TOULOUSE (FR)
3 Institut de Recherche en Informatique de Toulouse – Université de Toulouse 1 – 2, Rue Doyen Gabriel Marty, 31042 TOULOUSE Cedex (FR)
4 Laboratoire de Génie Informatique et d'Ingénierie de Production – Ecole des Mines d'Alès – Parc scientifique Georges Besse, 30035 NIMES Cedex 1 (FR)
5 THALES Communications – 160, Boulevard de Valmy, BP82, 92705 COLOMBES cedex (FR)
Abstract. One objective of the French-funded (ANR-2006-SECU-006) ISyCri project (ISyCri stands for Interoperability of Systems in Crisis situations) is to provide the crisis cell in charge of managing the situation with an Information System (IS) able to support the interoperability of the partners involved in this collaborative situation. Such a system is called a Mediation Information System (MIS). This system must be in charge of (i) information exchange, (ii) service sharing and (iii) behavior orchestration. This article first proposes an approach to MIS engineering in a crisis context. This design method is model-driven and uses model morphism principles and techniques. Furthermore, due to the intrinsically evolutionary nature of the crisis phenomenon, the MIS must remain adapted to the situation and to the panel of partners involved. This paper therefore also presents on-the-fly adaptation of the MIS, likewise based on a model-driven approach, in order to provide the MIS with agility. Keywords: Information System, Interoperability, Mediation, Ontology, Crisis, Model-driven engineering
1. Introduction
Within the ISyCri project, a crisis is an abnormal situation [1] which occurs suddenly and impacts an ecosystem with unacceptable consequences. Such a break in the continuous state of the considered (eco)system requires the crisis to be managed through a dedicated cell of actors in charge of the crisis
response. First, there are actors in the field, using their specific abilities to perform business activities (such as evacuating injured persons, fixing a road, etc.). Second, there are the heads of these field actors, grouped into a crisis cell (and potentially directed by a single authority). The ISyCri project [2] focuses on this crisis cell and on the question of the coordination of the partners inside it. This objective may be separated into two questions. The first is the following: given that the partners in crisis reduction have the competences and the business procedures to act as suitably as possible in the field, how is it possible to help manage, coordinate and synchronize this set of operational experts from the crisis cell? The second part of this article presents the proposed conceptual solution, based on mediation considerations (especially between Information Systems). The third part presents more specifically the developed approach, which aims to (help to) define, at the beginning of the crisis, (i) the appropriate collaborative behavior through a collaborative process model and (ii) the dedicated Mediation Information System (MIS) able to support the previously identified collaborative process with a service-oriented technology. The second question is the following: considering crisis management, there are three main instability factors:
• a crisis situation is, by nature, potentially unstable (the system is damaged, and replicas or over-crises may occur at any time);
• the crisis cell itself may be a very weak collaborative network, which might be unstable too (sporadic collaboration, partners unused to working together, often from heterogeneous business fields or cultures);
• the knowledge concerning the situation is certainly incomplete and constantly evolving, introducing instability into the management system.
The question is thus: how can we ensure that the solution proposed for the first question (the MIS) remains adapted to the evolving crisis situation? The fourth part of this article explains the conceptual and technical solutions proposed to support this agility requirement of the MIS. Finally, the fifth section concludes and presents some perspectives for this work.
2. Management of Heterogeneous Actors of the Crisis Cell
2.1. The Main Question and its Refinements
When a crisis occurs, several actors are involved simultaneously to solve (or at least to reduce) the situation. The question is then: "How can this set of heterogeneous actors be provided with the ability to organize and coordinate their actions in an optimal manner?" Depending on the type, the size and the location of the crisis, this organization may be more or less formalized. The first hypothesis of this work is that this collaborative organization is based on a crisis management cell, which may be seen as a decision center. The question then becomes: "how
to provide the crisis cell with the ability to organize and coordinate the actions of partners in an optimal manner". Interoperability is defined in [3] by the European network of excellence InterOp as "the ability of a system or a product to work with other systems or products without special effort from the customer or user". It is also defined in [4] as "the ability of systems, natively independent, to interact in order to build harmonious and intentional collaborative behaviors without modifying deeply their individual structure or behavior". Considering these definitions, we propose that the question is, at this point, the following: "how to support the interoperability of actors within the crisis cell". We believe that the crisis cell can be seen as a system of systems (SoS). According to [5], there are five criteria that define a SoS:
1. Operational independence of the elements (systems)
2. Managerial independence of the elements (systems)
3. Evolutionary development of the system of systems
4. Emergent behavior of the system of systems
5. Geographical distribution of elements
The crisis cell concerns elements (actors) which are, by definition, independent at the operational and managerial levels: they are ISs from different organizations with different competencies (on the operational level) and different heads (on the managerial level). The crisis cell is, furthermore, an evolving entity: new actors may join the cell while others may quit it. Moreover, the behavior of the crisis cell is adapted to the crisis to be solved and follows the evolution of the situation. Finally, even if actors may be in the same room, the crisis cell may be distributed over several places, and members of the crisis management cell "exchange only information and not substantial quantities of mass or energy" (which is the sub-definition of the fifth criterion). The crisis cell can therefore be considered a system of systems, and the question becomes: "how to support interoperability among the SoS of the crisis cell".
2.2. Ideas and Tracks to Answer the Main Question
An important point is that the members of the crisis cell are, on the one hand, communicating with their operational resources in the crisis field through their own channels. Another assumption of this article is that the members of the crisis cell are able to use these specific means of communication to receive incoming reports and to send outgoing instructions. However, the central issue is that the members of the crisis cell are, on the other hand, not used to exchanging information with each other. If they expect to organize and coordinate their actions, they need a way to support their exchange of information within the crisis cell. Information systems are thus at the center of the topic of this article, and the system of systems of the crisis cell will be considered from the information system point of view. Considering [6], we believe that the interoperability of ISs within the crisis cell cannot be ensured through a peer-to-peer architecture. Such a solution would imply too many interfaces to ensure efficient collaboration (each partner's IS would need to be able to interact with too many other ISs, with an
unavoidably exploding number of potential connections: with n partners, a full peer-to-peer mesh requires n(n-1)/2 pairwise interfaces, whereas a mediated architecture requires only n). Furthermore, since we stated in the previous section that the crisis cell may be seen as a SoS, the contributing systems will be considered as "function providers" for the crisis cell (functions that they are able to deliver in the operational field by activating their operational resources through their own channels) and will be seen as split into two parts: a public part (involved in the collaborative network of the crisis cell) and a private part (dedicated to each system's own behavior, in line with SoS considerations). These characteristics match the framework proposed in [6], and the proposed conceptual solution is therefore a Mediation Information System (MIS) dedicated to supporting the interoperability of the ISs of the SoS of the crisis cell.
2.3. Architecture of the MIS
We propose to consider the MIS according to a three-layer model:
1. Business layer: presents the exchange of information, the management of services (which use and deal with information) and the orchestration of the collaborative process (which uses and deals with services and information). The definition of this layer is strongly motivated by the mediation conclusion of the previous section.
2. Logical layer: presents the service-oriented architecture (SOA), which is strongly motivated by the business layer. This layer includes three different views, perfectly coherent with the previous layer (information view, service view and process view). PIM4SOA results and SoaML inspire this layer [7].
3. Technical layer: concerns the choice of an enterprise service bus (ESB), which is one of the best platforms on which to deploy a SOA solution and is thus strongly motivated by the logical layer. Such a technical solution provides a middleware and a workflow engine to coordinate services.
One important remark: another assumption of this article is that the partners of the crisis cell use SOA principles.
3. MIS Engineering in a Crisis Context: Theory and Example
The three-level model presented in the previous section corresponds exactly to the model-driven architecture defined by the OMG [8]. The business level matches the CIM layer (Computation Independent Model), the logical level corresponds to the PIM layer (Platform Independent Model), while the technical level fits the PSM layer (Platform Specific Model).
3.1. Model Transformation in a Model-driven Approach
Considering this model-driven issue, the crucial point of the presented approach is model transformation, which may be synthesized as follows:
Fig. 1. Model transformation principle
A source model is used (built according to a source meta-model (MM)). The key point is that the source MM shares part of its concepts with the target MM (the two spaces, source and target, have to be partially overlapping in order to allow model morphism). As a consequence, the source model embeds a shared part and a specific part. The shared part provides the extracted knowledge, which may be used for the model transformation, while the specific part should be saved as capitalized knowledge in order not to be lost. Then, mapping rules (built according to the overlapping conceptual area of the MMs) can be applied to the extracted knowledge in order to provide the transformed knowledge. This transformed knowledge, plus additional knowledge (to fill the lack of knowledge concerning the non-shared concepts of the target MM), may finally be used to create the shared part and the specific part of the target model.
3.2. Example of MIS Engineering in Crisis Context
February 27th, 10 AM: the police are informed of an accident between a tank-truck and a train. Both contain unknown, potentially dangerous materials, which are currently escaping into the atmosphere. Policemen sent to the area and railway-station employees are unconscious, while children at the nearest school are feeling sick. This is an overview of the scenario used to run the whole MIS engineering approach (originally a CBRN exercise). The following presents the different steps of the design and the associated concrete results. First of all, based on the crisis situation meta-model presented in [2], a modeling tool was designed (using the GMF environment [9]). This modeling tool allows the user to characterize the situation in terms of (i) the people, goods and natural sites impacted, (ii) the characteristics of the crisis (trigger event, known facts, existing risks, potential dangers, aggravating factors) and (iii) the available resources and rescuers. The following picture presents the obtained model.
Fig. 2. CBRN characterization model
Another tool, presented in [10], is dedicated to describing the partners' services in order to define (before or at the beginning of the crisis) what functions are available from each partner and what the technical specificities of these services are. The second step of the engineering process is to use an ontology framework to extract all the damaging facts and all the risks of the situation in order to rank them according to the user's point of view (the tool assists the user, but the head of the crisis cell has to decide which risk or impact should be treated first). In our example, we propose to sort these elements as follows (they are the main identified facts/risks; others exist but, for the moment, are not considered priorities): (1) prevent the risk of explosion, (2) prevent the risk of contamination, (3) care for sick people, (4) prevent the risk of panic. Then, according to the defined services, a first version of the collaborative process is deduced (in BPMN [11]) in the third step and proposed to the crisis cell, which can modify, improve or validate this model. Furthermore, another tool, not described here but fully presented in [12], is used at this level to check and validate the proposed model through an anticipative effect-based approach. This step improves the robustness of the deduced collaborative process model and enriches the approach with a verification/validation stage. The result of the deduction and improvement phases is presented in the following figure:
Fig. 3. Collaborative process model (BPMN) of the CBRN example – CIM level
Regarding Section 3.1, this CIM model has been obtained through a model transformation mechanism: the source meta-model is the crisis situation meta-model used in the characterization tool, the target meta-model is the collaborative process meta-model described in [13], and the additional knowledge is the ranking of critical risks and facts. The mapping rules are described in [2]. The fourth step concerns the PIM level and proposes to obtain the logical architecture of the MIS from the previous collaborative process model. This model transformation provides a UML diagram; the following extract corresponds to the service of actor setSecurityPerimeter:
Fig. 4. Extract from the SOA MIS model (UML) of the CBRN example – PIM level
Regarding section 1.3.1, this PIM model has been obtained through a model transformation mechanism: the source meta-model is the previously used collaborative process meta-model, the target meta-model is the SOA meta-model described in [14], and the additional knowledge is the added profile for services. The mapping rules are also described in [14].
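To make this recurring mechanism concrete, the following Java sketch mimics one transformation step: the source concepts covered by the mapping rules yield the transformed knowledge, the uncovered (specific) part is capitalized so that it is not lost, and additional knowledge completes the target model. The concept names and the string-based rule table are purely illustrative assumptions; the actual MISE transformations operate on full meta-models, not on string pairs.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the extract/transform/complete principle of section 3.1.
class ModelTransformation {
    // Mapping rules defined over the overlapping area of the two meta-models
    // (hypothetical concept names, for illustration only).
    private final Map<String, String> mappingRules = new HashMap<>();

    ModelTransformation() {
        mappingRules.put("Risk", "ProcessGoal");
        mappingRules.put("Resource", "ServiceTask");
    }

    Map<String, String> transform(Map<String, String> sourceModel,
                                  Map<String, String> additionalKnowledge,
                                  Map<String, String> capitalizedOut) {
        Map<String, String> target = new HashMap<>();
        for (Map.Entry<String, String> e : sourceModel.entrySet()) {
            String targetConcept = mappingRules.get(e.getKey());
            if (targetConcept != null) {
                // Shared part: transformed knowledge goes into the target model.
                target.put(targetConcept, e.getValue());
            } else {
                // Specific part: saved as capitalized knowledge, not lost.
                capitalizedOut.put(e.getKey(), e.getValue());
            }
        }
        // Additional knowledge fills the target-specific concepts.
        target.putAll(additionalKnowledge);
        return target;
    }
}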
The fifth step concerns the PSM level and proposes to obtain the technical structure of the MIS from the previous logical architecture model. This model transformation provides a UML diagram. The following UML diagram is an extract corresponding to the setSecurityPerimeter actor service:
Fig. 5. Extract from MIS technical model (UML) of the CBRN example – PSM level
Regarding section 1.3.1, this PSM model has been obtained through a model transformation mechanism: the source meta-model is the previously used logical SOA architecture meta-model, the target meta-model is the ESB meta-model described in [15], and the additional knowledge is the added profile for services. The mapping rules are also described in [15]. Finally, the deployment of the MIS on the ESB PEtALS [16] is presented in the following figure:
Fig. 6. ESB deployment (PEtALS) on the CBRN example – PSM level
4. MIS Agility in a Crisis Context

Due to the obvious instability of crisis situations, agility, as the ability to adapt to change, is a crucial requirement for a crisis cell and for its mediation information system. According to [17], [18] and [19], agility may be seen as "an ability to satisfy a change in a short time". In the context of the ISyCri project, the notion of agility is considered as the combination of reactivity (the ability to react in a short time) and flexibility (the ability to follow the needs). The main factor of MIS agility is the MIS engineering approach, and especially the MIS re-engineering approach. Indeed, this design method has been implemented on an ESB where each tool of each step has been plugged in as a service. The native plasticity of the workflow orchestration of that platform may then be used to reconfigure the MIS. The following figure presents an overview of the "design-ESB" with the tools presented in section 1.3.2 as services:
Fig. 7. ESB (PEtALS) dedicated to support the engineering/reengineering of MIS
According to this figure, and depending on the need for change, three levels of agility are available (a schematic dispatch is sketched after the list):
1. If the crisis evolves in a significant manner (the characterization model is no longer acceptable at all), the obvious solution is to restart the whole process. This is the large loop.
2. If the crisis does not evolve but the collaborative network changes (one actor leaves, another joins the group, etc.), then the crisis characterization remains acceptable but the services available to manage the situation are no longer the same. The process should redefine the services and restart the deduction chain (with the same crisis model). This is the medium loop.
3. If the process needs to be improved on the fly, then an interruption during the orchestration may allow the crisis cell to use late binding to choose the most appropriate service at the right moment, as explained in [20]. This is the small loop.
5. Conclusion and Perspectives

The presented work aims at providing a crisis management cell with a set of structured tools, based on model-driven principles, to build a mediation information system able to support interoperability between ISs in a crisis context. These tools run model transformations to dive into abstraction layers covering the business level, the logical level and the technological level. The assumptions are the following: (i) crisis management is assumed by a crisis cell, (ii) each partner of the crisis cell is able to ensure communication with its own resources on the operational field and, finally, (iii) each partner of the crisis cell is able to propose a set of services reflecting its concrete operational abilities. The first two cannot be changed for the moment, but concerning the third one, an interesting perspective may be the support of service design. This track concerns the assisted conception of technical services, according to interface patterns and business service analysis. One obvious limit of this work is that, for the moment, there is no data translation in the workflow: all the partners have to use the same information models. The current work on mediation services embedded in the bus, and a starting PhD on semantic reconciliation, should provide a promising track for this issue. Finally, it is also possible to claim that a single collaborative process may be too poor a model to describe the business level. Another PhD is also starting to enrich the business and logical layers in order to provide more complete knowledge for model transformation.
6. References

[1] Bénaben F, Pignon JP, Hanachi C, Lorré JP, Chapurlat V, (2007) Interopérabilité des systèmes en situation de crise. WISG'07, France.
[2] Truptil S, Bénaben F, Pingaud H, (2009) Collaborative process design for Mediation Information System Engineering. ISCRAM'09, Sweden.
[3] Konstantas D, Bourrières JP, Léonard M, Boudjlida N, (2005) Interoperability of Enterprise Software and Applications. IESA'05, Springer-Verlag, Switzerland.
[4] Pingaud H, (2009) Prospective de recherches en interopérabilité : vers un art de la médiation ? CIGI'09, France.
[5] Maier MW, (1998) Architecting principles for systems of systems. Systems Engineering, Vol 1, n°4.
[6] Wiederhold G, (1992) Mediators in the Architecture of Future Information Systems. IEEE Computer Magazine, Vol 25, N 3: 38-49.
[7] Benguria G, Larrucea X, Elveseater B, Neple T, Beardsmore A, Friess M, (2006) A Platform Independent Model for Service Oriented Architectures. IESA'06.
[8] OMG, (2003) MDA guide version 1.0.1, document number: omg/2003-06-01.
[9] Graphical Modeling Framework: www.eclipse.org/gmf
[10] Truptil S, Bénaben F, Couget P, Lauras M, Chapurlat V, Pingaud H, (2008) Interoperability of Information Systems in Crisis Management: Crisis Modeling and Metamodeling. IESA'08, Germany.
[11] BPMN: Business Process Modeling Notation. www.bpmn.org
[12] Daclin N, Chapurlat V, Bénaben F, (2009) A verification approach applied for analyzing collaborative processes: the Anticipative Effects-Driven Approach. INCOM'09, Moscow, Russia.
[13] Rajsiri V, Lorré JP, Bénaben F, Pingaud H, (2009) Knowledge-based system for collaborative process specification. Special issue of Computers in Industry, Elsevier.
[14] Touzi J, Bénaben F, Pingaud H, Lorré JP, (2008) A Model-Driven approach for Collaborative Service-Oriented Architecture design. Special issue of International Journal of Production Economics, Elsevier.
[15] Wenxin M, (2009) Model Transformation Research from Logical to Technical Model in MISE project. Master Report, France.
[16] PEtALS: http://petals.objectweb.org
[17] Kidd TP, (1994) Agile Manufacturing: Forging New Frontiers. Addison-Wesley.
[18] Lindberg P, (1990) Strategic manufacturing management: a proactive approach. International Journal of Operations and Production Management, Vol 10, n°2.
[19] Sharifi H, Zhang Z, (1999) A methodology for achieving agility in manufacturing organizations: An introduction. International Journal of Production Economics, 62.
[20] Van der Aalst WMP, Basten T, (1999) Inheritance of Workflows: An approach to tackling problems related to change. Computing Science Reports 99/06.
Service Value Meta-model: An Engineering Viewpoint

Zhongjie Wang1, Xiaofei Xu1, Chao Ma1, Alice Liu2

1 Research Centre of Intelligent Computing for Enterprises and Services (ICES), Harbin Institute of Technology, Harbin, China 150001
2 IBM China Research Laboratory, Beijing, China 100094
Abstract. Various tangible or intangible values are the ultimate outputs of service systems. In service engineering, the comprehensive description of service value is a key step that helps service engineers design and develop high-quality interoperable service systems that will deliver expected values to customers and providers. In this paper, a service value metamodel is put forward from an engineering point of view. First, the objective of service value modeling is extensively analyzed. Then, the various essential aspects of service value that have significance for service system design are presented and each aspect is illustrated in detail by a set of concepts. Based on this meta-model, a multi-level service value model and the corresponding value-oriented service engineering process are briefly introduced. Keywords: service engineering, service modeling, service value, meta-model, value-aware
1. Introduction

Service modeling is a critical step in service engineering [1], and the quality of service models determines, to a great extent, the quality of service systems. The core components of a service model are the interactive co-production behaviors and processes between customers, enablers, and providers. In addition, people, resources, shared information, techniques, environment, etc., are essential constituents [2]. At present, numerous service model specifications and modeling methods are widely applied in practice, e.g., BPMN [3]. Besides the above-mentioned functional service elements, "value", as the ultimate goal of a service, should occupy an important place in service models [4]. As service is defined as a co-production process to create and share value [2], various tangible and intangible values are regarded as the output of a service system. The quality of a service system depends on whether, and to what degree, it can provide expected values to customers and providers. From this viewpoint, "value" needs to be accurately characterized in service models and connected with functional elements, thereby helping service engineers evaluate and optimize models for future high-quality service systems.
Researchers have put forward various value models, e.g., the value network and e3-value, to describe value flows and exchange relationships. A value network is a complex set of social and technical resources which work together via relationships to create social goods (public goods) or economic value [5]. It is mainly used as a tool to plan, organize, control and harmonize the co-production process between organizations, helping enterprises optimize the service process and achieve optimal business performance. The e3-value model is an ontology-based methodology for "identifying and analyzing how value is created, exchanged and consumed within a multi-actor network, hence, taking the economic value perspective and visualizing what is exchanged and by whom" [6]. Although such value-oriented models exist, they emphasize the value exchange relationship from a financial point of view; from an engineering viewpoint on service system design, they are quite insufficient. To help service engineers explicitly describe values from all essential aspects, in this paper we present a service value meta-model composed of a set of basic concepts to facilitate value-aware service engineering and methodology (VASEM). It is worthwhile to note that, as a service system is a socio-technological system composed of "any number of elements, interconnections, attributes, and stakeholders interacting to satisfy the request of a known client and create value" [7], it is usually considered as an interconnected system (or "system of systems"). When designing such a complex system, interoperability issues must be elaborately addressed to facilitate value co-production between the systems of different service participants. It is therefore necessary to reach a consensus on how to describe service values uniformly, which is of great significance for developing the system from an engineering viewpoint.
2. Objective of Service Value Modeling

The objective of service value modeling is to describe the values that are to be delivered by service systems and to connect these values with functional elements, thus helping service designers analyze, with the aid of the service models, whether and to what degree the expected values are implemented. To meet this objective, service value models should be able to answer the following questions: (1) For each customer and provider, what are his value expectations? (2) How are these values co-produced, transferred, transformed, decomposed, composed, and allocated among participants? (3) How are values measured? (4) How are values inter-dependent with each other? (5) What is the lifecycle of each value? (6) How, and to what degree, are these values to be implemented under the support of functional service elements?
3. Service Value Meta-model

3.1. Top Level of the Meta-model

Fig. 1 shows the top level of the meta-model. When a service system (ServiceSystem) is to be designed and developed to realize a specific service (Service), designers need to create not only functional models (FunctionModel) but also value models (ValueModel). Functional models focus on describing how a service is actualized by a set of functional elements (FunctionalElement) and the relations between them (ElementRelation). Value models (ValueModel) emphasize the service's tangible/intangible output expected by customers and providers.
Fig. 1. Top level of service value meta-model
A ValueModel is composed of a set of values (ServiceValue) and the relationships between them (ValueRelation). Each ValueRelation connects one value with one or multiple other values. When a service is designed, the values to be co-produced and delivered are first identified and the relations between them are clarified. Afterwards, each value is connected with one or multiple functional elements to validate whether, and to what degree, the expected value could be implemented under the support of the functional elements; ValueMapping is introduced for this purpose. Going into the details of ServiceValue, the following four aspects must be elaborated (a compact sketch of these concepts follows the list):
• ValueTaxonomy, which describes the classification of service values.
• ValueLifecycle, which describes the process and phases a service value goes through before it is delivered to and consumed by its receivers.
• DeliveryMeans, which describes the temporal features of a value's delivery to its receivers.
• ValueIndicatorSet, which describes a set of indicators used for measuring the implementation degree (quality and quantity) of the value.
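As a minimal, purely illustrative Java rendering of these top-level concepts (class and attribute names follow Fig. 1; the container choices and placeholder types are assumptions):

import java.util.ArrayList;
import java.util.List;

// Sketch of the top level of the meta-model (Fig. 1); illustrative only.
class ServiceValue {
    String valueName;
    ValueTaxonomy valueType;       // classification (Sect. 3.2)
    ValueLifecycle valueLifecycle; // phases (Sect. 3.3)
    DeliveryMeans deliveryMeans;   // temporal delivery features (Sect. 3.4)
    List<ValueIndicator> valueIndicatorSet = new ArrayList<>(); // Sect. 3.5
}

class ValueRelation {              // connects one value with 1..n others
    ServiceValue source;
    List<ServiceValue> targets = new ArrayList<>();
}

class ValueMapping {               // links a value to functional elements
    ServiceValue value;
    List<FunctionalElement> elements = new ArrayList<>();
}

class ValueModel {
    List<ServiceValue> values = new ArrayList<>();
    List<ValueRelation> relations = new ArrayList<>();
}

// Placeholders for the concepts detailed in the following subsections.
class ValueTaxonomy {}
class ValueLifecycle {}
class DeliveryMeans {}
class ValueIndicator {}
class FunctionalElement {}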
3.2. Service Value Taxonomy

Values are co-produced by customers and providers together. No matter whether a value is tangible or intangible, it must lead to some improvement in specific aspects of its receivers. According to the different types of improvement, we classify values into two categories:
(a) ThingTransferingValue refers to those values generated by transferring some tangible or intangible "things" (e.g., information, a product, the right to use a resource) from value producers to value receivers. Such values improve the possessions of the receivers. It is refined into 4 sub-categories: (a.1) Financial Value (FIV): monetary benefits transferred from value producers to value receivers; (a.2) Product Value (PRV): tangible products transferred from producers to receivers; (a.3) Information Value (INV): information transferred from producers to receivers; (a.4) Resource Usage Value (RUV): a resource's "right to use" transferred from producers to receivers.
(b) StateImprovementValue refers to those values generated by improving certain physical or spiritual states of participants. It is refined into 5 sub-categories: (b.1) Experience Value (EXV): the value receiver's experience in engaging in specific services increases, so that in the future he could provide higher-quality services which bring him higher financial benefits. (b.2) Knowledge and Skill Value (KSV): the value receiver's knowledge or professional skills in some specific domains are upgraded. (b.3) Market & Social Impact Value (MIV): the value receiver's social impact in some social or marketing areas is expanded, e.g., an increase in market share or an improvement in word of mouth. (b.4) (Thing's) State Improvement Value (SIV): the states of certain physical entities of the value receiver are changed by service activities like transporting, repairing or maintenance. (b.5) (People's) Enjoyment Value (EJV): the physical or spiritual states of the value receiver are improved.
Fig. 2. Meta-model of service value taxonomy
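As a rough illustration, this two-level taxonomy can be encoded as a Java enumeration; the grouping mechanism below is an assumption, only the abbreviations are those defined above.

// Two-level taxonomy of Sect. 3.2; illustrative encoding.
enum ValueCategory { THING_TRANSFERRING, STATE_IMPROVEMENT }

enum ValueType {
    FIV(ValueCategory.THING_TRANSFERRING), // Financial Value
    PRV(ValueCategory.THING_TRANSFERRING), // Product Value
    INV(ValueCategory.THING_TRANSFERRING), // Information Value
    RUV(ValueCategory.THING_TRANSFERRING), // Resource Usage Value
    EXV(ValueCategory.STATE_IMPROVEMENT),  // Experience Value
    KSV(ValueCategory.STATE_IMPROVEMENT),  // Knowledge and Skill Value
    MIV(ValueCategory.STATE_IMPROVEMENT),  // Market & Social Impact Value
    SIV(ValueCategory.STATE_IMPROVEMENT),  // (Thing's) State Improvement Value
    EJV(ValueCategory.STATE_IMPROVEMENT);  // (People's) Enjoyment Value

    final ValueCategory category;
    ValueType(ValueCategory category) { this.category = category; }
}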
When we model a value, we need to identify its name (attr: ValueName), type (attr: ValueType), producers (the participants who generate the value, denoted as attr: ValueProducer), receivers (the participants who obtain and consume the value, denoted as attr: ValueReceiver), and the object that the value is attached to, whether the object is transferred from producers to receivers or its state is improved (denoted as attr: AttachedObject). The meta-model concerning value taxonomy is shown in Fig. 2.

3.3. Service Value Lifecycle

Most coarse-grained values cannot come into being all at once and need a comparatively long period during which they are gradually implemented. We call this the service value lifecycle (ValueLifecycle) [8], which is decomposed into 7 phases:
(P1) BiLateralSearching. Before value co-production, a channel is established to connect unacquainted customers and providers and form a "supply-demand" relationship between them.
(P2) BiLateralNegotiation. The customer contacts and negotiates with candidate providers and reaches a consensus on the Service Level Agreement (SLA) of the expected values with specific providers.
(P3) UniLateralPreparation. Before value co-production, some preparation work has to be done. For instance, the provider should prepare the necessary resources, or he might contact other confederate service providers to outsource part of the service if he does not have sufficient capabilities to handle the service himself.
(P4) CoProduction. This is the core phase of the value lifecycle. As all required participants have been specified and all necessary resources have been obtained, it is time to "produce" the expected values through collaboration between both sides.
(P5) Transferring. A simple phase in which produced values are transferred from providers to the customer.
(P6) Usage. After the customer receives his expected value, he may further utilize it in other services, where the obtained value is transformed into other forms.
(P7) Payment. The customer needs to pay the providers for the value he gains. In this phase, money is transferred from the customer to the providers.
Besides, the sequences (PhaseSequence) between phases need to be modeled. In most services, all 7 phases exist and follow a specific sequence. Sometimes there are exceptional circumstances: e.g., if a customer knows which providers best suit his requirements based on his historical experience or "word of mouth", he can directly contact the providers and come to an agreement, so P1 and P2 are omitted. Another example is that the sequences between P4, P5 and P6 are flexible, i.e., they may be separate from each other (denoted as "4-5-6") or may occur concurrently (e.g., "456", "45-6", "4-56"). P7 is independent of the other phases and may occur at any time during the service. We use three terms to model the possible sequences: PhaseLockstep, meaning that two phases occur in lockstep; PhaseMerged, meaning that two phases occur concurrently; and NoLimit, meaning that there is no specific sequence between two phases. As each phase makes a distinct contribution to value implementation, distinct value indicators are attached to each phase. To improve the value produced, service designers look for innovative design decisions in each phase to improve the corresponding indicators as far as possible.
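The phases and sequencing terms can be sketched as follows; the two concrete constraints returned (the merged "45-6" case and the unconstrained P7) are examples taken from the text above, while the rule method itself is an illustrative assumption.

// Illustrative encoding of the 7 lifecycle phases and 3 sequencing terms.
enum Phase {
    BILATERAL_SEARCHING, BILATERAL_NEGOTIATION, UNILATERAL_PREPARATION,
    COPRODUCTION, TRANSFERRING, USAGE, PAYMENT
}

enum PhaseSequence { PHASE_LOCKSTEP, PHASE_MERGED, NO_LIMIT }

class ValueLifecycle {
    PhaseSequence sequenceOf(Phase a, Phase b) {
        if (a == Phase.PAYMENT || b == Phase.PAYMENT) {
            return PhaseSequence.NO_LIMIT;     // P7 may occur at any time
        }
        if (a == Phase.COPRODUCTION && b == Phase.TRANSFERRING) {
            return PhaseSequence.PHASE_MERGED; // the "45-6" case
        }
        return PhaseSequence.PHASE_LOCKSTEP;   // default strict ordering
    }
}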
The meta-model of the service value lifecycle is shown in Fig. 3.
Fig. 3. Meta-model of service value lifecycle
3.4. Means of Value Delivery

The value lifecycle describes how a value is produced and transferred, from non-existence to consumption. If we examine the transferring phase (P5) according to the time, period and frequency of the transfer, different values follow one of the following means of delivery:
(1) Time-Discrete Delivery (TDD) [9]. The transfer takes place at one instant of time of zero duration and stops at the point in time at which it started. When such a value is modeled, the concrete time spot (attr: Time) at which the transfer occurs is central to our concern.
(2) Time-Continuous Delivery (TCD) [9]. The transfer takes place over a period of time and stops some time after it has started. The delivery duration (attr: Duration) states how long a delivery takes.
Fig. 4. Meta-model of value delivery means
(3) Time-Periodical Delivery (TPD). Value is transferred periodically and repeatedly from one side to another through either time-discrete or time-continuous delivery. Delivery frequency (attr: Frequency) is used to state how many times delivery occurs in the time-extent of a business case.
(4) Time-Accumulative Delivery (TAD). There is a particular type of situation in which, during each value delivery, the value is too small to be explicitly observed and perceived; however, after some period in which the value has been delivered multiple times, the total accumulated value becomes apparent.
Note that a TPD must contain a set of TDDs or TCDs, and a TAD is a specialized TPD. Fig. 4 shows the meta-model of value delivery means.

3.5. Service Value Indicator

A set of value indicators is attached to each value to measure the value's implementation degree. In economics, value is measured by the ratio of benefit to cost. For broad-sense values, benefit-oriented and cost-oriented indicators are adopted similarly, called Value Benefit Indicators (BenefitIndicator) and Value Cost Indicators (CostIndicator). The former measure the benefit the value brings to its receivers, i.e., (1) to which degree the AttachedObject transferred to the receivers is good enough, or (2) to which degree the state of the AttachedObject is improved. The latter measure the cost paid to obtain the value, including monetary cost, time and energy cost, resource usage cost, etc. An indicator may have a continuous or discrete data type (attr: IndicatorType). Its concrete value (attr: Value) results from calculation or reasoning over the QoS of the functional elements to which the value is connected.
Fig. 5. Meta-model of value indicator
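One hedged way to evaluate an indicator set is sketched below: it keeps the benefit/cost reading of this section, but the aggregation of several indicators into a single implementation degree is an assumption, not a prescribed formula.

import java.util.List;

// Illustrative indicator evaluation (Sect. 3.5).
class ValueIndicator {
    boolean benefit;   // true: BenefitIndicator, false: CostIndicator
    double value;      // attr: Value, computed from the QoS of mapped elements
}

class IndicatorSet {
    // Implementation degree approximated as total benefit over total cost.
    static double implementationDegree(List<ValueIndicator> indicators) {
        double benefit = 0.0, cost = 0.0;
        for (ValueIndicator i : indicators) {
            if (i.benefit) benefit += i.value; else cost += i.value;
        }
        return cost == 0.0 ? benefit : benefit / cost;
    }
}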
3.6. Service Value Relationship

A value does not exist independently; there are close relationships between different values. Such relations lead to dependencies between the implementation degrees of different values. When a service system is designed, it is necessary to look for a tradeoff between interdependent values to ensure that both are implemented to an acceptable level. There are 5 primary types of value relations:
1) Composition. To realize a coarse-grained value, multiple diverse fine-grained values are implemented separately and then composed together to form large and significant values, forming a "whole-part" relationship. Concerning the relative necessity of the part to the whole, there are four possibilities: Mandatory, Optional, SingleSelection, and MultipleSelection.
2) Aggregation. Similar to Composition, but the coarse-grained value is a combination of multiple homogeneous fine-grained values. The attribute Scale describes what and how many fine-grained values are to be merged together.
3) TemporalAssociation. The delivery of values should follow strict sequences, i.e., some values are prerequisites of other values.
4) ProducerAssociation. Two values must be provided by the same producers (ProducerEqual) or must have different producers (ProducerInequal).
5) Dependency. The implementation degrees of multiple values are not completely independent of one another, i.e., improvement of one value will possibly lead to improvement or deterioration of another value. Denote DD(v_i, v_j) as the dependency degree (DependencyDegree) with which v_j depends on v_i. Value dependency is classified into positive dependency (PositiveDependency), where improvement on v_i leads to improvement on v_j, and negative dependency (NegativeDependency), where improvement on v_i leads to deterioration on v_j.
Fig. 6. Meta-model of value relations
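As an illustration of the Dependency relation, the following sketch propagates an improvement of v_i to v_j; since DD(v_i, v_j) is defined abstractly, the linear update rule used here is purely an assumption.

// Illustrative propagation of a Dependency relation (Sect. 3.6).
class Dependency {
    double dependencyDegree;   // DD(vi, vj): how strongly vj depends on vi
    boolean positive;          // PositiveDependency vs. NegativeDependency

    // Estimated change of vj's implementation degree when vi improves by delta.
    double effectOnTarget(double delta) {
        double effect = dependencyDegree * delta;
        return positive ? effect : -effect;
    }
}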
3.7. Mapping between Value and Functional Elements

The last but most crucial aspect to be described is the mapping between values and functional elements. For each value, there are one or multiple behaviors, resources, pieces of information, etc., that facilitate the implementation of the value. Each indicator of the value may be calculated or reasoned from the QoS of the corresponding functional elements. Therefore, we describe the mappings from two aspects: 1) the mapping between values and functional elements (ValueMapping); 2) the mapping between value indicators and the QoS of the functional elements (ValueIndicatorMapping).
Concerning ValueMapping, designers identify mappings based on the object attached to the value, i.e., they check whether each functional element contributes to the transfer or state improvement of the object. If it does, there is a mapping between them. The next step is to identify the element's contribution to the implementation of the value, i.e., to which phase(s) of the value's lifecycle it contributes. There are 3 types of ValueMapping: 1) 1:1, i.e., one value is related to only one functional element; 2) 1:n, i.e., one value is related to multiple functional elements; 3) global mapping, i.e., a value cannot be directly mapped to any concrete element but is related to the whole service. The process of establishing ValueMapping and ValueIndicatorMapping is called "value annotation".
Fig. 7. Meta-model of Mapping between Value and Functional Elements
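A hedged sketch of value annotation follows: an indicator value is derived from the QoS of the mapped functional elements. The 1:1 / 1:n / global distinction is the one introduced above; reducing QoS to one scalar and averaging it is an illustrative simplification.

import java.util.ArrayList;
import java.util.List;

// Illustrative value annotation (Sect. 3.7).
class FunctionalElement {
    double qos;                // single scalar QoS: a simplification
}

class ValueMapping {
    enum MappingType { ONE_TO_ONE, ONE_TO_N, GLOBAL }

    MappingType type;
    List<FunctionalElement> elements = new ArrayList<>();

    // Derive an indicator value from the mapped elements' QoS (average here).
    double deriveIndicatorValue() {
        if (elements.isEmpty()) return 0.0;  // global mapping handled elsewhere
        double sum = 0.0;
        for (FunctionalElement e : elements) sum += e.qos;
        return sum / elements.size();
    }
}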
4. Service Value Modeling and Analysis

Based on the value meta-model, we have designed a multi-level service value model composed of four layers: Service Proposition Model (SPM), Participant-Oriented Value Network (POVN), Value Dependency Model (VDM), and Value Annotation Model (VAM). Based on this value model, a value-aware service engineering and methodology (VASEM) is put forward. In VASEM, service values are considered the ultimate optimization goals and need to be taken into account at every step of service system design and development. First, value models are established. Then the identified values are connected with the functional elements of the service models ("value annotation"). Based on the annotation, the service models are qualitatively and quantitatively analyzed to validate whether and to what degree all expected values can be realized. If there are deficiencies, the service models are optimized by adjusting the selection of functional elements and their relationships. If these value deficiencies cannot be overcome by model optimization, the value constraints are too strict (i.e., value expectations are too high to be satisfied), and they are fed back to the value models for further adjustment. If no value deficiencies are found during the analyses, the service system is designed based on the service models.
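Read as pseudocode, this iteration can be summarized as follows; every method is a hypothetical placeholder standing for a whole engineering activity.

// Schematic rendering of the VASEM iteration described above.
class Vasem {
    void run() {
        buildValueModels();
        annotateServiceModels();                  // "value annotation"
        while (analysisFindsDeficiencies()) {
            if (canOptimizeServiceModels()) {
                optimizeServiceModels();          // adjust elements/relations
            } else {
                relaxValueConstraints();          // feed back to value models
            }
            annotateServiceModels();
        }
        designServiceSystem();                    // no deficiencies remain
    }
    void buildValueModels() {}
    void annotateServiceModels() {}
    boolean analysisFindsDeficiencies() { return false; }
    boolean canOptimizeServiceModels() { return true; }
    void optimizeServiceModels() {}
    void relaxValueConstraints() {}
    void designServiceSystem() {}
}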
Due to limited space, details of the value models and of VASEM may be found in [10] and [11].
5. Conclusions

Traditional viewpoints of service value usually focus on financial or economic meanings. Existing work, such as the value network, is commonly adopted to analyze services at the business level. Compared with existing works, the meta-model presented in this paper pays greater attention to describing service values from a service engineering point of view. Its main purpose is to support better design and development of service systems, to ensure they will deliver expected values to customers and providers. Describing the values is only one of the initial steps of value-aware service engineering and methodology. Future work includes: (1) looking for a uniform set of value indicators for each type of service value and the corresponding measurements; (2) developing an automatic value annotation method; and (3) exploiting the value-oriented service model analysis method.
6. Acknowledgement

The research in this paper is supported by the National Natural Science Foundation (NSF) of China (60803091, 70971029) and the IBM-HIT Joint Lab 2009 Joint Research Project (No. JLAS200907001-1).
7. References

[1] Tien J, Berg D, (2003) Towards Service Systems Engineering. IEEE International Conference on Systems, Man and Cybernetics, 5(5): 4890–4895.
[2] Spohrer J, Maglio P, Bailey J, Gruhl D, (2007) Steps towards a Science of Service Systems. IEEE Computer, 40(1): 71–77.
[3] OMG, (2007) Business Process Modeling Notation (BPMN). http://www.bpmn.org
[4] Henkel M, Perjons E, Zdravkovic J, et al, (2006) A Value-based Foundation for Service Modelling. The 4th European Conference on Web Services, Washington DC, USA: IEEE Computer Society Press, 129–137.
[5] Allee V, (2000) Reconfiguring the Value Network. Journal of Business Strategy, 21(4): 36–39.
[6] Gordijn J, Yu E, van der Raadt B, (2006) e-Service Design Using i* and e3value Modeling. IEEE Software, 23(3): 26–33.
[7] IBM Corporation, (2007) SSME: Systems. http://www304.ibm.com/jct09002c/university/scholars/skills/ssme/resources.html
[8] Wang ZJ, Xu XF, (2009) SVLC: Service Value Life Cycle Model. IEEE International Conference on Cloud Computing, IEEE Computer Society, 159–166.
[9] Wieringa R, Pijpers V, Bodenstaff L, Gordijn J, (2008) Value-driven coordination process design using physical delivery models. 27th International Conference on Conceptual Modeling, Lecture Notes in Computer Science 5231, Springer, 216–231.
[10] Wang ZJ, Xu XF, (2009a) Multi-Level Graphical Service Value Modeling Method. Computer Integrated Manufacturing Systems, 15(12): 2319–2327.
[11] Xu XF, Wang ZJ, (2008) Value-Aware Service Model Driven Architecture and Methodology. Invited paper on Service Science Cross Session of the 20th World Computer Congress, Springer Boston, 280: 277–286.
Part IV
Architectures and Frameworks for Interoperability
An Interoperable Enterprise Architecture to Support Decentralized Collaborative Planning Processes in Supply Chain Networks

Jorge E. Hernández1, Raul Poler1 and Josefa Mula1

1 CIGIP (Research Centre on Production Management and Engineering). Universidad Politécnica de Valencia. Escuela Politécnica Superior de Alcoy. Edificio Ferrándiz y Carbonell, 2, 03801 Alcoy (Alicante), Spain. {jeh, rpoler, fmula}@cigip.upv.es
Abstract. Supply chain management has, for many years, been related to coordinating the efforts among the supply chain nodes in order to mitigate the unpredictable behaviours arising from environmental uncertainty. This allows the involved nodes to achieve common goals effectively and to meet customer demand. By sharing accurate and actionable information on a timely basis, collaboration among the nodes emerges to improve the decision-making processes related, mainly, to the planning processes in the supply chain. From a decentralized perspective, each supply chain node relies on its own enterprise systems to manage and exchange the right information with the others. Aspects such as the interoperability of these systems are therefore a critical issue, both in the modelling process and in the implementation stage. This paper, supported by the Zachman framework and by the REA standard ontology, proposes a novel interoperable enterprise architecture to support decentralized collaborative planning and decision-making processes in supply chains. In addition, the proposed architecture considers a multi-agent system approach as well as its application to a real automobile supply chain network. Keywords: Enterprise architecture, Interoperability, Multi-Agent, Collaborative planning
1. Introduction

The supply chain (SC) management process encompasses all the activities necessary to satisfy final customer demand, considering, in most cases, the distribution of components and raw materials among SC nodes and how the nodes interact to coordinate their activities and decision-making processes. In fact, [3] emphasize that the main information elements to be considered in the SC management process must cover node actions (orders, order filling, shipping, receiving, production, etc.), node policies (input and output inventory policies), and the costs and rates involved. A collaborative process will therefore emerge from sharing the proper information among the SC nodes. In addition, collaboration in SCs is important in terms of innovation, as every node realizes the benefits related to higher quality, lower costs, more timely deliveries, efficient operations and the effective coordination of activities [21]. Moreover, from a decentralized point of view, collaborative planning (among SC partners) can be achieved through a simple form of coordination of upstream planning, by providing the collaboration partners an opportunity to modify suggested order/supply patterns iteratively [7]. Collaborative relationships thus change the way business is done in SCs; in this context, nodes tend to evolve from cooperation to collaboration [18]. In order to support this, and considering that enterprise engineering has been leveraged as a key topic in enterprise management [17], the concept of enterprise architecture emerges as a system to support the integration of the main concepts, information and processes related to the SC. As defined by [25], an enterprise architecture gives an interpretation of the enterprise relationships, the information that the enterprises use, and a view of the physical and technological layers considered in the enterprise. Nevertheless, [12] establish that enterprise architectures should not be considered a "magical" solution to the main enterprise problems (communication, information technologies, etc.), but should rather support the development effort in the integration of specified units and processes. Moreover, [23] consider that enterprise architectures can be used as blueprints to achieve business objectives by considering information technologies. Hence, to support the development process of enterprise architectures, the use of a meta-architecture is highly recommended to facilitate communication and provide standardized terminologies [5]. In this context, [6] review the most important approaches, such as the Zachman Framework, ARIS, TOGAF, ATHENA, DoDAF and many more, which can be considered in any enterprise architecture. Other important aspects related to SC collaboration and enterprise architectures concern the information-sharing process among the SC nodes and the decision-making processes which request the information related to the SC planning process. Those nodes commonly use different enterprise resource planning (ERP) systems, where the concept of interoperability comes up in order to support the right understanding in the collaborative process across the ERP systems of the SC nodes. In this context, [4] establish that software interoperability represents a problem for SC enterprises, mainly due to the unavailability of standards and the existence of heterogeneous hardware and software platforms. [4] also consider that there are three main research themes, or domains, that address interoperability: enterprise modelling, architecture-platform and ontologies (details of these concepts can be found in [24]). As established by [11], if the enterprise architecture is defined based on ontologies, the communication problems among different ERP systems will be addressed more precisely. Hence, the stakeholders' decision-making process is enriched by a common understanding among the SC parties [11].
Therefore, by considering the modelling methodology established in [9], and by extending the architecture already defined in [10] in order to highlight its links with interoperability concepts, this paper presents an interoperable architecture to support collaborative planning in SC networks. It considers elements such as enterprise architecture, a standard enterprise ontology supported by the Protégé platform, the OWL-S language (Semantic Markup for Web Services) and a multi-agent system (MAS) approach supported by JADE 3.6 and ECLIPSE. The paper is set out as follows: Section 2 briefly reviews the relevant literature on SC collaboration, interoperability and ontologies. Section 3, extending [10], presents the interoperable enterprise architecture based on the standard REA ontology to support decentralized collaborative planning in SCs and, in addition, considering [8], presents a case study applied to the automobile SC sector, supported by MAS, in order to observe the real application of the proposed architecture. Finally, Section 4 provides the main conclusions of the paper and a brief description of our future work.
2. Background in Supply Chain Collaboration

Two main aspects are commonly considered in the study of collaborative relationships in the SC: the first deals with the intensity of the relationships between partners, covering simple information sharing as well as the risks and profits related to this shared information; the second studies the extent of the collaboration across the SC [13]. Thus, companies consider SC cooperation at levels which imply planning, forecasting and replenishment in a collaborative context (collaborative planning, forecasting and replenishment, or CPFR). [22] defines CPFR systems as a step that goes beyond the efficient answers the consumer requires. [1] define collaboration as the sharing of information, functions and functionality, knowledge and business processes, with the objective of creating a multi-win situation for all the participants of the community of businesses in the value chain, including employees, customers, suppliers and partners. The coordination process of autonomous, yet inter-connected, tactical-operational planning activities is referred to as collaborative planning [9]. Collaborative planning therefore constitutes a decision-making process involving interaction components, which presents a wide range of dynamic behaviours [14]. For this reason, visions that address the collaboration process as a distributed decision-making process have, in recent years, become more important than the centralised perspective. It is thus possible to highlight that collaboration in SCs helps to achieve efficient information management. This information primarily supports the decision-making processes of the company which, in most cases, concern how and when orders must be placed and sent to the suppliers, as well as anticipating demand changes from the customer. In this sense, according to [10] and [19], in order to foster collaboration in SCs, the collaborative and non-collaborative nodes must necessarily be identified. Moreover, regarding the information flow oriented to support the decision-making process, it is a critical issue that the SC nodes be able to share information at any decision level (strategic, tactical and operative), in order to provide timely feedback at every decision-making level.
3. The Interoperable Enterprise Architecture for Collaborative Planning in Supply Chain Networks

The collaborative planning process, supported by the proposed enterprise architecture (Figure 1), is oriented to promote the exchange of information in an interoperable environment, showing the informational flows which support, on the one hand, the collaboration among the SC nodes and, on the other hand, the decision-making processes of the SC nodes. In this sense, collaboration involves the exchange of information such as plans, decisions and actions, and a tactical perspective is considered in the decision-making process in order to support collaborative decision-making. This section therefore presents an extension of [10], where the main models related to the collaborative process have been shown. The interoperable enterprise architecture to support the decentralized collaborative planning process relies on the Zachman enterprise framework [26] to structure the architecture. Moreover, following the SCAMM-CPA modelling methodology presented in [9], MAS are used to model the main actions and information flows related to the collaborative SC processes.

3.1. The Conceptual Enterprise Architecture in Supply Chain Networks for Collaborative Processes

From the background section, it can be said that most authors suggest considering a framework to carry out the enterprise architecture development process (definitions, modelling and experiments). Moreover, the right selection of a framework depends on the modeller's experience and on how robust it is in the environment to which it is to be applied. In this context, the Zachman framework [26] has been chosen to represent the main elements of the MAS-based architecture which will support the collaborative processes in the SC. Interoperability involves an upper level (customer), the first-tier supplier and the suppliers at the lower level (second or N-tier supplier). The selected cells of the Zachman framework grid [20] have been chosen to collect and transmit the related information from the SC nodes which are intended to participate in a collaborative process as collaborative or non-collaborative nodes. The collaborative decision-making supported by the SC information flow takes place in the following highlighted Zachman grid blocks: (1) at the business concepts level ("where" and "who", defined as the Physical Layer); (2) at the system logic level ("what", defined as the Data Layer); (3) at the business concepts level again ("how", defined as the Information Layer); (4) at the technology physics level ("what", "how" and "where", named the Ontology REA-based Layer); (5) at the technology physics and component assembly level ("who" and "when", defined as the Agent Communication Layer); and finally (6) at the operation classes level, which gives a real perspective of the system and the processes ("where" and "who", defined as the Behaviour Layer). The model representation related to each selected cell (as can be seen in Figure 1) considers, on the one hand, the representation proposal of [16] and, on the other hand, an extension of [10] to support SC interoperability in collaborative processes (first from a basic point of view and then, in Figure 2, with each selected cell shown in detail).
Fig. 1. Main blocks from the Zachman framework to support interoperability in collaborative SCs (adapted from [26])
By considering the review in the previous section, it is possible to state that the collaborative process presented in Figure 1 aims to promote the exchange of information. In this context, the main purpose of the proposed architecture is to show the informational flows that support interoperability among the SC nodes and the decision-making process related to every node. Collaboration involves many types of processes, and this paper is dedicated to proposing an interoperable architecture to support collaborative planning in SCs. In a collaborative context, these processes relate to the exchange of demand plans, which eventually implies the consideration of a tactical perspective within the decision-making process. Additionally, in an SC management context, there are other relevant processes that foster collaboration, such as forward and reverse logistics, request management, inventory control, key performance indicators, and so on. Thus, in order to conceptually and graphically explain the collaborative process among the SC nodes, and also to efficiently reach a right solution through a decentralized collaborative decision-making process, this architecture focuses on the planning process, where a negotiation on demand plans is considered by linking the main constraints of all SC nodes to the information and decisional flows. The layers are the following:
• Physical Layer (1): through this layer, the SC configuration is analyzed, as well as the resources and items related to it. It provides aspects like enterprise flows and topologies, and represents the real system in which the decision-making process takes place.
• Data Layer (2): this layer can be considered as a repository system which provides simplified access to data stored in an entity-relational database. From a decentralized point of view, every node considers its own database.
• Information Layer (3): this layer collects, manages and structures all the information necessary for the information exchange process from a generic view, in order to support the upper layers in the collaboration process.
• Ontology REA-based Layer (4): the REA [15] enterprise ontology has been chosen for its standard approach and its simplicity in the modelling process. At this layer, the main description logic is established by considering the main economic resources, events and agents. A semantic language is then chosen to support communication between the physical and agent layers. This layer can be supported by an ontology design tool such as Protégé.
• Agent Communication Layer (5): this layer supports the MAS infrastructure in order to provide the requested information. The information flow considers aspects such as the transfer and processing of the information linked to the corresponding database. The most common library supporting this decentralized infrastructure is JADE.
• Behaviour Layer (6): three types of behaviour can be defined: the first generates call-for-proposal (CFP) ACL messages and receives proposals or informs in ACL; the second receives CFPs and proposals and generates CFP messages as well; and the last receives CFP requests and answers by accepting, refusing or counter-proposing (a JADE sketch of such a behaviour is given after this list).
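As announced in the Behaviour Layer item, the following sketch shows a JADE responder behaviour for one kind of interaction (receiving a CFP carrying a demand plan and answering with a proposal or a refusal). The agent name, the message content and the feasibility check are hypothetical; only the JADE classes and FIPA performatives are actual library elements.

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;

// Hypothetical supplier agent reacting to CFP messages on demand plans.
public class FirstTierSupplierAgent extends Agent {

    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                MessageTemplate cfpOnly =
                        MessageTemplate.MatchPerformative(ACLMessage.CFP);
                ACLMessage cfp = myAgent.receive(cfpOnly);
                if (cfp == null) { block(); return; }

                ACLMessage reply = cfp.createReply();
                if (isFeasible(cfp.getContent())) {
                    reply.setPerformative(ACLMessage.PROPOSE);
                    reply.setContent(cfp.getContent()); // accept plan as proposed
                } else {
                    reply.setPerformative(ACLMessage.REFUSE);
                    reply.setContent("capacity-exceeded"); // triggers negotiation
                }
                myAgent.send(reply);
            }
        });
    }

    // Placeholder feasibility check against local production capacity.
    private boolean isFeasible(String demandPlanContent) {
        return demandPlanContent != null && demandPlanContent.length() < 100;
    }
}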
In this context, by supporting the enterprise architecture with the REA enterprise ontology (as happens in almost all enterprise resource planning, ERP, systems), the business processes are viewed as components of a single value chain. An exchange process (like the sale of a product) is modelled twice, once in the enterprise system of each SC node. The collaboration layer (presented in Figure 3, where the OWL-S semantic ontology has been considered to support collaboration among the SC nodes) is then established for every node from an independent perspective. Hence, the information exchange process is modelled once, in independent terms that can then be mapped onto internal enterprise system components; REA thus covers independent SC models for each trading partner. In general, the proposed enterprise architecture is supported by the information layer, which contains the common information oriented to support collaborative planning in SCs.
Fig. 2. Interoperable enterprise architecture supported by the REA enterprise ontology
Thus, the relationship among the collaborative nodes will be supported by the demand plan exchange process, which will promote the negotiation of unfeasible values and hence collaborative decision-making.

3.2. Application of the Proposed Interoperable Enterprise Architecture to a Real Automobile Supply Chain Network

The proposed architecture is being applied to a real automobile SC network. A full description of the company and its main processes can be found in [8]. In particular, the plant under study supplies seats for automobiles, and the main results of applying the interoperable enterprise architecture can be seen in Figure 3.
Hence, the interoperability supported by the proposed architecture establishes that collaboration among the different supply chain tiers is sustained by the information exchanged among them. In the automobile supply chain sector, this information is mainly related to production planning, which is linked to the decision-making process. By considering a longer planning horizon, the capacity to react to unexpected demand plan requirements is improved, and collaborative decision-making can emerge. Thus, by considering the advancement of orders, or by changing their respective safety stocks, suppliers will be able to react to demand uncertainty by avoiding an excess of orders or by maintaining a sufficient stock of materials to cope effectively and efficiently with changes in orders. In terms of the SC, an order may be accepted, negotiated or rejected. The negotiation process takes place when the SC configuration is such that suppliers of suppliers exist, and the information exchange (inherent in the decision-making process) involves several SC nodes, which in turn implies that the nodes exchange the proper information (timely) needed to cover possible backlogs in the production planning process from upper and lower SC tiers in a collaborative and decentralized manner. Figure 3 shows the results of applying the interoperable architecture to the real automobile SC:
• Automobile Supply Chain (A): this layer considers the decision-makers related to the production planning process in order to generate a demand plan and disseminate it through the SC tiers. It feeds the rest of the layers (B, C, D and E) in order to validate the system.
• Ontological Data Base (B): this layer stores the ontologies with which the SC tiers support their communication. This information is related to the production planning process and is used by the MAS to support the decentralized negotiation process based on the standard REA ontology. It is connected with A, D and E.
• Protégé Platform (C): the Protégé platform is used to generate the appropriate language to support the communication technology with the defined ontologies, building the classes that every agent will use and the behaviours in the context of the FIPA-ACL ContractNet protocol. This layer is connected with all the other layers.
• OWL-S (D): this layer supports the semantic communication among the SC nodes in order to make the service functionalities possible. By considering the ontologies defined in B and C, three main issues are covered: service profiles, the modelling process and interoperability through messages, which support agent communication in the decentralized collaborative process (a programmatic sketch of such an ontology is given after Figure 3).
• Jade Application (E): the behaviours related to the collaborative process, such as the decision-making process in the context of the production planning process involved in the ERP systems, are supported by the implementation of the negotiation, which relies on the FIPA CFP protocol (shown in the dotted square) and the JADE 3.6 libraries. At this application layer, interoperability occurs among the threads corresponding to the instantiated classes that represent the collaborative process among the decision-makers.
Fig. 3. Application of the proposed interoperable architecture to an automobile SC
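As a hedged illustration of layers (B)-(D), the snippet below builds a tiny REA-style ontology programmatically with Apache Jena. The tooling used in this work is Protégé and OWL-S, so the use of Jena, the namespace and the individuals are assumptions made only to show how the three REA concepts [15] and their relations can be materialized and serialized for exchange.

import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class ReaOntologySketch {
    public static void main(String[] args) {
        OntModel m = ModelFactory.createOntologyModel();
        String ns = "http://example.org/rea#";   // hypothetical namespace

        // The three core REA concepts [15].
        OntClass resource = m.createClass(ns + "EconomicResource");
        OntClass event    = m.createClass(ns + "EconomicEvent");
        OntClass agent    = m.createClass(ns + "EconomicAgent");

        // An economic event consumes/produces resources...
        ObjectProperty stockflow = m.createObjectProperty(ns + "stockflow");
        stockflow.setDomain(event);
        stockflow.setRange(resource);

        // ...and involves the trading partners.
        ObjectProperty participation = m.createObjectProperty(ns + "participation");
        participation.setDomain(event);
        participation.setRange(agent);

        // Illustrative individuals for the seat-supply case.
        Individual seats    = resource.createIndividual(ns + "SeatBatch");
        Individual delivery = event.createIndividual(ns + "SeatDelivery");
        delivery.addProperty(stockflow, seats);

        m.write(System.out, "RDF/XML");          // share the model with partners
    }
}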
Therefore, in the automobile SC domain context, the model considers the automobile manufacturer (or assembler), the first-tier supplier, the second-tier supplier and transport as the main nodes that exchange information and take decisions among themselves, with the main human resources involved in the process. Supported by [2], Figure 3 shows that the integration involves conceptual, technical and applicative aspects: the conceptual SC model; the ontological platform supported by Protégé, OWL-S and a relational database; and the application, which highlights the communication process from a dynamic point of view. From this case study, it is possible to observe that the interoperability system is specific to the information systems, so it inherits all the characteristics of information systems (the information layer in Figure 2). Moreover, interoperability can be defined by using the REA enterprise ontology and the OWL-S semantics supported by the Protégé system. Hence, some properties of the interoperable enterprise architecture in the automobile SC can be highlighted: (1) interoperability is defined by considering a common semantics which is interpreted by the MAS; (2) the different layers are defined to support interoperability between different enterprise systems; (3) agents are instances of the related ontological classes defined in Protégé, which supports interoperability between this MAS and other MAS related to the same SC network configuration.
4. Conclusions
This paper has presented a novel architecture proposal based on the REA enterprise ontology to support decentralized collaboration processes, such as the planning process, in SC networks. The proposal also considers the Zachman enterprise framework in order to give a well-defined structure to the architecture. Furthermore, it is possible to conclude that MAS are an appropriate tool to model collaborative processes from a decentralized perspective, in which the information coming from the collaborative and non-collaborative SC nodes must be identified. As further research, it is expected to (1) apply this architecture to study collaboration in SCs with other modelling approaches, such as mathematical models and discrete event-based simulation; (2) apply other semantics and ontologies to this architecture; and (3) consider other standard frameworks, such as ATHENA and ARIS, in order to compare their applicability in real supply chain networks under the proposed interoperable architecture.
5. Acknowledgments This research has been supported partly by the EVOLUTION project (Ref. DPI2007-65501) which is funded by the Spanish Ministry of Science and Education and partly by the Universidad Politécnica de Valencia (Ref. PAID-0508) and the Generalitat Valenciana, www.cigip.upv.es/evolution.
6. References
[1] Ashayeri J, Kampstra P, (2003) Collaborative Replenishment: A Step-by-Step Approach. Ref: KLICT Project: OP-054, Dynamic Green Logistics, Tilburg University.
[2] Berre AJ, Elvesæter B, Figay N, Guglielmina C, Johnsen SG, Karlsen D, Knothe T, Lippe S (2007) The ATHENA Interoperability Framework. In: Enterprise Interoperability II, Gonçalves RJ, Müller JP, Mertins K, Zelm M (Eds.) Springer, London: 569-580.
[3] Chatfield DC, Harrison TP, Hayya JC, (2009) SCML: An information framework to support supply chain modelling. European Journal of Operational Research 196: 651-660.
[4] Chen D, Doumeingts G, (2003) European initiatives to develop interoperability of enterprise applications—basic concepts, framework and roadmap. Annual Reviews in Control 27: 153-162.
[5] Chen Z, Pooley R, (2007) Rediscovering Zachman Framework using Ontology from a Requirement Engineering Perspective. 33rd Annual IEEE International Computer Software and Applications Conference: 3-8.
[6] Chen D, Doumeingts G, Vernadat F, (2008) Architectures for enterprise integration and interoperability: Past, present and future. Computers in Industry 59: 647-659.
[7] Dudek G, Stadtler H, (2005) Negotiation-based collaborative planning between supply chains partners. European Journal of Operational Research 163(3): 668-687.
[8] Hernández JE, Mula J, Ferriols FJ, Poler R, (2008) A conceptual model for the production and transport planning process: An application to the automobile sector. Computers in Industry 59(8): 842-852.
[9] Hernández JE, Alemany MME, Lario FC, Poler R, (2009a) SCAMM-CPA: A Supply Chain Agent-Based Modelling Methodology That Supports a Collaborative Planning Process. Innovar 19(34): 99-120.
[10] Hernández JE, Poler R, Mula J, (2009b) A supply chain architecture based on multi-agent systems to support decentralized collaborative processes. In: Leveraging Knowledge for Innovation in Collaborative Networks, Camarinha-Matos LM, Paraskakis I, Afsarmanesh H (Eds.) Springer, Boston: 128-135.
[11] Kang D, Lee J, Choi S, Kim K, (2010) An ontology-based Enterprise Architecture. Expert Systems with Applications 37: 1456-1464.
[12] Kilpeläinen T, Nurminen M, (2007) Applying Genre-Based Ontologies to Enterprise Architecture. 18th Australasian Conference on Information Systems, Toowoomba: 468-477.
[13] La Forme FG, Genoulaz VB, Campagne J, (2007) A framework to analyse collaborative performance. Computers in Industry 58(7): 687-697.
[14] Lambert MD, Cooper MC, (2000) Issues in supply chain management. Industrial Marketing Management 29: 65-83.
[15] McCarthy WE, (1982) The REA Accounting Model: A Generalized Framework for Accounting Systems in a Shared Data Environment. The Accounting Review: 554-578.
[16] Nahm YE, Ishikawa H, (2005) A hybrid multi-agent system architecture for enterprise integration using computer networks. Robotics and Computer-Integrated Manufacturing 21: 217-234.
[17] Ortiz A, Anaya V, Franco D (2005) Deriving Enterprise Engineering and Integration Frameworks from Supply Chain Management Practices. In: Knowledge Sharing in the Integrated Enterprise - Interoperability Strategies for the Enterprise Architect, Bernus P, Fox M (Eds.) Springer, Boston: 279-288.
[18] Poler R, Ortiz A, Lario FC, Alba M, (2007) An Interoperable Platform to Implement Collaborative Forecasting in OEM Supply Chains. In: Enterprise Interoperability New Challenges and Approaches, Doumeingts G, Müller J, Morel G, Vallespir B (Eds.) Springer, London: 179-188.
[19] Poler R, Hernández JE, Mula J, Lario FC (2008) Collaborative forecasting in networked manufacturing enterprises. Journal of Manufacturing Technology Management 19(4): 514-528.
[20] Shen W, Hao Q, Yoon HJ, Norrie D, (2006) Applications of agent-based systems in intelligent manufacturing: An updated review. Advanced Engineering Informatics 20: 415-431.
[21] Soosay CA, Hyland PW, Ferrer M, (2008) Supply chain collaboration: capabilities for continuous innovation. Supply Chain Management: An International Journal 13(2): 160-169.
[22] Tosh M, (1998) Focus on forecasting. Progressive Grocer 77(10): 113-114.
[23] van den Hoven J, (2003) Data Architecture: Blueprints for Data. Information Systems Management 20(1): 90-92.
[24] Ushold M, King M, Moralee S, Zorgios Y (1998) The Enterprise Ontology. The Knowledge Engineering Review 13(1): 31-89.
[25] Vernadat FB, (2007) Interoperable enterprise systems: Principles, concepts, and methods. Annual Reviews in Control 31: 137-145.
[26] Zachman JA, (1997) Enterprise Architecture: The Issue of the Century. Database Programming and Design: 44-53.
Business Cooperation-oriented Heterogeneous System Integration Framework and its Implementation

Yihua Ni1, Yan Lu2, Haibo Wang1, Xinjian Gu1, Zhengxiao Wang1
1 Department of Mechanical Engineering, Zhejiang University, Hangzhou, China
2 CNRS Faculté des Sciences et Techniques, Nancy-Université, France
Abstract. In the process of enterprise informatization, masses of legacy systems (such as databases and applications) need integration, different application systems (PDM, ERP, SCM, CRM) need interaction, and business between enterprises demands close cooperation. Taking enterprise business cooperation as its object and aiming at heterogeneous system integration, this paper proposes a framework for business cooperation-oriented heterogeneous system integration and analyses the key technologies related to the framework, such as ontology modelling, ontology mapping and the semantic interoperation mechanism. Finally, the realization of the business cooperation framework is discussed. As a proof of concept, a prototype platform based on this framework was developed, and the operating principle of the platform is analysed. The proposed scheme provides a new architecture and approach for heterogeneous system integration in business cooperation. Keywords: ontology, business cooperation, heterogeneous system, ontology integration
1. Overview
Heterogeneous data source integration is a classical problem in the database field, and many data integration methods and related tools have been put into practice. However, because of the complexity of enterprise application systems, the variety of heterogeneous data sources and, especially, the new application demands of companies, the enterprise data integration process has become extremely complicated. The relationship between the various heterogeneous system integration technologies is described in Fig. 1. Enterprise data source heterogeneity mainly manifests as system heterogeneity, mode heterogeneity and semantic heterogeneity. With the development of technology, the use of CORBA, DCOM and other middleware products, and especially the appearance of XML and Web Service technology, provides enough technical support to solve the problems caused by system heterogeneity
and mode heterogeneity [1]. However, XML does not provide the standard data structures and terminology needed to describe enterprise information exchange, and thus cannot effectively resolve semantic conflicts. More fundamental theories or constructs are required for better business cooperation [2]. Ontology, which can describe the conceptual model of an information system at the semantic level, has been proposed by researchers as a solution to semantic heterogeneity.

[Figure 1 arranges the integration technologies along a space dimension (from single software integration up to resource integration) and a time dimension (from basic preparation through system development and deployment to system execution).]
Fig. 1. Relationship between heterogeneous system integration technologies
A key feature of business cooperation is the loosely coupled integration of data and applications, that is, a unified data model independent of the detailed data sources. The key part of the unified data model is a meta-model, built on ontology, that connects several dispersed systems. The ontology-based integration technology proposed in this paper is wide-ranging and fundamental, and helps to solve the problem of heterogeneous system integration at its root.
2. Related Work
The integration of PDM, CAx, ERP and similar systems is mainly based on the specialized interfaces of those systems. This method suffers from shortcomings such as poor expansibility and expensive establishment and maintenance [3]. To achieve mutual information sharing and data exchange between enterprises, a common exchange standard abided by everyone is needed. EDI is such a standard for electronic data exchange, and it helps to solve system and mode heterogeneity between different software systems. However, because it ignores semantics, EDI cannot provide a higher level of interoperability. In order to realize the semantic integration of CAD/CAM, Lalit Patil proposed a product data description language, PSRL, for data fusion. In view of the integration of product design and manufacture, a comparatively complete ontology-based solution was proposed to realize the integration of product terminology [4].
K.Y. Kim used a first-order logic method to establish a product ontology for the CPC environment to realize semantic mapping, the goal being a shared ontology [5]. C. Dartigues established a unified middle ontology using the KIF system, realizing mappings between ontologies in different fields and information exchange between various application software through manually defined rules [6]. The AMIS program realized the automatic integration of software systems using current technologies and solved the problem of semantic conflict in the integration process; its main contribution is to capture and use semantics to describe software system knowledge and the auto-reasoning of tasks [7]. [8] presented an ontology-driven integration approach called the a priori approach. Its originality is that each data source participating in the integration process contains an ontology that defines the meaning of its own data. This approach ensures the automation of the integration process when all sources reference a shared ontology.
3. Heterogeneous System Integration Framework
To realize business cooperation-oriented heterogeneous system integration, we first build the domain ontologies related to business cooperation; secondly, we realize mappings among the different domain ontologies; finally, we develop the relevant software tools to achieve heterogeneous system integration. An architecture for ontology-based heterogeneous system integration oriented to business cooperation is put forward in this paper; it is divided into five layers: ontology model layer, ontology maintenance layer, integration platform layer, interface layer and information system layer. The structure of the proposed framework is shown in Fig. 2.

[Figure 2 shows the five layers: (1) the ontology model layer with the multi-layer ontology model (component, product, activity and process ontologies); (2) the ontology maintenance layer with the corresponding ontology warehouses and the ontology building, maintenance and management tools; (3) the integration platform layer (ontology mapping, business process inference, software component invocation); (4) the interface layer for information system interfaces and encapsulation (XML, CORBA, Web Service); and (5) the information system layer with the information systems, software components, business activities and products, accessed by users through an enterprise portal.]
Fig. 2. Architecture of ontology-based heterogeneous system integration oriented to business cooperation
Ontology model layer: In view of the domains that business cooperation involves, a multi-layer ontology model consisting of business process ontology, business activity ontology, product ontology and software component ontology is built up. This layer primarily reflects the achievements of our theoretical study, and it is the core of ontology library building and integration platform operation.
Ontology maintenance layer: Based on the multi-layer ontology model, tools for ontology warehouse building, maintenance and management are used to build the business process, business activity, product and software component ontology warehouses, which lay the foundation for realizing heterogeneous system integration oriented to business cooperation. Building an ontology warehouse is a time-consuming and onerous task, but it is also fundamental work for heterogeneous system integration.
Integration platform layer: This layer bridges the gap between business and software in heterogeneous system integration, and includes a large set of software tools as well as mappings within the same domain and among different ontologies. Its main functions are business process coordination, software component invocation, ontology warehouse self-organizing management and so on. Information systems that participate in the integration platform exchange XML-format business data with the platform through a unified interface, and the platform transforms each XML document into an enterprise local ontology for business collaboration. The platform then maps each enterprise local ontology to the domain ontology, thus achieving semantic consistency among the enterprise local ontologies.
Interface layer: This layer uses currently available information system integration standards and research achievements, such as XML, Web Service and CORBA, to realize the operation of the software systems.
Information system layer: This layer is the ultimate manifestation of heterogeneous system integration oriented to business cooperation.
4. Key Technologies Related
4.1. Domain Ontology Building
Generally, the establishment of a domain ontology costs domain experts plenty of time, and it still requires considerable effort for regular maintenance in subsequent usage. In fact, there is as yet no domain ontology that has been recognized by industry, and many small and medium enterprises are not qualified for this job. To solve this problem, we propose a self-organization mechanism to build ontologies: the task of ontology building is transferred to the system users (primarily the enterprise designers), who are allowed to build and maintain the ontology during usage. The only thing the administrator needs to do is track the feedback from users to continuously improve the accuracy and availability of the ontology. Eventually, a domain ontology constructed and shared by all users is built up.
4.2. Local Ontology Building
Enterprises that participate in the heterogeneous system integration framework need a local ontology to resolve semantic heterogeneity with their partners. However, it takes plenty of time for enterprises to collect the various knowledge and information used to build a business-cooperation local ontology, so it is necessary to find an automated way to create local ontologies. Recently XML, due to its structural flexibility and ease of scalability, has become the standard for heterogeneous Web data conversion and transmission. Consequently, a method for creating a "lightweight" local ontology is put forward, namely to extract ontologies from the XML documents that describe business-cooperation information. Enterprise XML documents are generated by the enterprises' information systems and are used to exchange business data with the target enterprise and achieve business collaboration. This project presents a rule-based approach to realize automatic mapping from an XML document to an ontology; the framework for the mapping is shown in Fig. 3.
Fig. 3. The process of transforming XML document to ontology
Implementation steps are as follows (a code sketch follows the list):
• extract structural information from the XML document to generate an XML Schema file, which is used to create the ontology model in the next step;
• transform the generated XML Schema into an ontology model. There is a natural correlation between the elements of XML Schema and OWL, and the XML Schema is transformed into an ontology model by specified rules;
• transform the original XML document into ontology instances. In this step, the instance data carried by the source XML document is mapped to ontology instances;
• merge the ontology model and the instances to generate the final ontology file;
• assess, verify and amend the generated ontology.
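The following is a minimal sketch of these steps, assuming Apache Jena as the ontology API; for brevity it maps XML elements to OWL classes and individuals directly rather than through an explicit XML Schema file, and the namespace URI and the hasPart nesting rule are illustrative assumptions rather than the project's actual mapping rules.

import javax.xml.parsers.DocumentBuilderFactory;
import org.apache.jena.ontology.*;
import org.apache.jena.rdf.model.ModelFactory;
import org.w3c.dom.*;

public class XmlToOntology {
    static final String NS = "http://example.org/businesscoop#"; // hypothetical namespace

    public static OntModel transform(String xmlFile) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(xmlFile);
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        mapElement(doc.getDocumentElement(), model, null);
        return model;
    }

    // Rule: every element type becomes an OWL class and every element
    // occurrence an individual; nesting becomes a hasPart object property.
    static Individual mapElement(Element e, OntModel m, Individual parent) {
        OntClass cls = m.createClass(NS + e.getTagName());
        Individual ind = cls.createIndividual();
        // Rule: attributes become datatype property assertions on the individual.
        NamedNodeMap attrs = e.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            Attr a = (Attr) attrs.item(i);
            DatatypeProperty p = m.createDatatypeProperty(NS + a.getName());
            ind.addProperty(p, a.getValue());
        }
        if (parent != null) {
            ObjectProperty hasPart = m.createObjectProperty(NS + "hasPart");
            parent.addProperty(hasPart, ind);
        }
        NodeList children = e.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            if (children.item(i) instanceof Element) {
                mapElement((Element) children.item(i), m, ind);
            }
        }
        return ind;
    }
}

The generated model can then be serialized as the final ontology file and assessed or amended in an ontology editor.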
4.3. Ontology Mapping
Ontology mapping is a process that takes two or more different ontologies as input and establishes the semantic relations between the elements of the input ontologies in accordance with their semantic correlation. In the ontology-based integration of heterogeneous systems, ontology mapping is essential to solve the problem of semantic heterogeneity. The core technology in ontology mapping is similarity measurement, which measures the semantic similarity between entities of distinct ontologies. On the basis of existing similarity algorithms, this paper proposes a comprehensive algorithm to measure the similarity between entities of different ontologies. The proposed algorithm avoids the large deviations of any single method and improves the accuracy of the similarity calculation. For classes C1, C2 and ontologies O1, O2, with C1 ∈ O1 and C2 ∈ O2, the similarity between C1 and C2 can be divided into four parts: similarity based on class names, SC(C1, C2); similarity based on properties, SP(C1, C2); similarity based on class structure, SS(C1, C2); and similarity based on class instances, SI(C1, C2). The comprehensive similarity between classes C1 and C2 is:

Sim(C1, C2) = (w1·SC(C1, C2) + w2·SP(C1, C2) + w3·SS(C1, C2) + w4·SI(C1, C2)) / (w1 + w2 + w3 + w4)    (1.1)

where w1, ..., w4 are the weights assigned to each component of the similarity calculation; their values are determined by domain experience and, in practice, can be obtained from a group of training samples. The mapping relations between entities of the local ontology and the domain ontology are determined on the basis of this similarity measurement. In the similarity calculation step, any entity (e.g., a class named C) in ontology O1 is compared with all the classes in ontology O2. The class with the maximum similarity to C is chosen as the matching class, and the pair is saved as a mapping result if its similarity value is larger than a predetermined threshold. An interactive interface is provided for users to add semantics to local ontologies if they are not satisfied with the results of the similarity calculation. In our framework, users are permitted to describe ontology classes by adding properties, since properties distinguish one class from another. After semantic enrichment, the similarity between the local and domain ontologies is recalculated to generate the final mapping document.
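As an illustration, the sketch below implements Eq. (1.1) together with the threshold-based matching loop described above; the class names, the functional interface supplying the four component similarities, and any weight values are hypothetical choices.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleBiFunction;

public final class OntologyMatcher {
    private final double w1, w2, w3, w4; // component weights, e.g. learned from training samples

    public OntologyMatcher(double w1, double w2, double w3, double w4) {
        this.w1 = w1; this.w2 = w2; this.w3 = w3; this.w4 = w4;
    }

    /** Comprehensive similarity of Eq. (1.1): weighted mean of the four components. */
    public double sim(double sc, double sp, double ss, double si) {
        return (w1 * sc + w2 * sp + w3 * ss + w4 * si) / (w1 + w2 + w3 + w4);
    }

    /**
     * For each class of ontology o1, keeps the most similar class of o2 as its
     * match, provided the similarity exceeds the given threshold.
     */
    public static Map<String, String> match(List<String> o1, List<String> o2,
            ToDoubleBiFunction<String, String> similarity, double threshold) {
        Map<String, String> mapping = new HashMap<>();
        for (String c1 : o1) {
            String best = null;
            double bestScore = threshold;
            for (String c2 : o2) {
                double s = similarity.applyAsDouble(c1, c2);
                if (s > bestScore) {
                    bestScore = s;
                    best = c2;
                }
            }
            if (best != null) {
                mapping.put(c1, best);
            }
        }
        return mapping;
    }
}

The resulting map would then be serialized as the ontology mapping file used by the integration platform.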
5. System Implementation
5.1. Integration Process
Based on the proposed framework, this paper presents an ontology-based business cooperation-oriented heterogeneous system integration process, as shown in Fig. 4.
Fig. 4. Information systems data integration process
The ontology-based business cooperation-oriented heterogeneous system integration process includes the following steps (a pipeline sketch follows the list):
1) Build the business cooperation-oriented domain ontology.
2) Build the business cooperation-oriented local ontologies for the enterprises.
3) The source information system generates the request data (in XML format) for business cooperation, using the information system interface.
4) Transform the business cooperation request data into the enterprise's business cooperation local ontology.
5) Transform the enterprise's business cooperation local request ontology into the business cooperation domain request ontology.
6) Transform the business cooperation domain request ontology into the business cooperation domain response ontology.
7) Transform the business cooperation domain response ontology into the business cooperation local response ontology.
8) Transform the business cooperation local response ontology into the business cooperation response data (XML format).
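A minimal sketch of steps 4) to 8) as a service pipeline is given below; the interface and all type and method names are hypothetical stand-ins for the platform components.

public interface CooperationPipeline {

    // Stand-in for a concrete ontology model type (e.g. a Jena OntModel).
    interface Ontology { }

    Ontology toLocalRequestOntology(String requestXml);        // step 4
    Ontology toDomainRequestOntology(Ontology localRequest);   // step 5 (uses the mapping file)
    Ontology processDomainRequest(Ontology domainRequest);     // step 6 (business inference)
    Ontology toLocalResponseOntology(Ontology domainResponse); // step 7
    String toResponseXml(Ontology localResponse);              // step 8

    /** Chains steps 4 to 8 for one incoming request document. */
    default String respond(String requestXml) {
        return toResponseXml(
                toLocalResponseOntology(
                        processDomainRequest(
                                toDomainRequestOntology(
                                        toLocalRequestOntology(requestXml)))));
    }
}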
5.2. Prototype Platform
According to the proposed ontology-based heterogeneous system integration process, we have implemented a prototype system: an ontology-based business cooperation integration platform. The platform runs as a web service, providing operations such as ontology mapping, business process inference and semantic transformation. The structure of the integration platform is shown in Fig. 5.

[Figure 5 shows the platform structure: an ontology building/editing module with similarity measurement, an ontology extraction tool and a self-organizing ontology warehouse building tool feed the business cooperation ontology warehouse and the ontology mapping file; the business cooperation integration module performs ontology querying, ontology reasoning/mapping and semantic transformation/business process inference between the users' information systems, saving the local and domain ontologies.]
Fig. 5. The structure of the business cooperation integration platform
The business cooperation flow for enterprises participating in the integration platform is as follows (a client-side sketch of the invocation follows the list):
• Users log in to the collaboration platform, select the desired type of service and set the specific parameters. They then upload the corresponding XML documents for business cooperation.
• The platform accepts the request and encapsulates the XML document in accordance with the service type the user has selected, so the system can determine the type of the uploaded file (e.g., a purchase order) and decide how to handle it in the next step.
• The platform processes the collaborative information in the background, based on the ontology warehouse and the ontology mapping file, and forms the outcome document.
• The user receives the outcome document and performs the corresponding operations; the cooperation is complete.
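As an illustration of the first two steps from the client side, the following sketch posts a business cooperation XML document to the platform; it assumes a Java 11+ runtime, and the endpoint URL and the service-type parameter are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class CooperationClient {
    public static String submit(Path xmlDocument, String serviceType) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://platform.example.org/cooperate?service=" + serviceType))
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofFile(xmlDocument))
                .build();
        // The platform's outcome document is returned in the response body.
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}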
6. Conclusion
The biggest obstacle that business cooperation encounters in practice is data source heterogeneity, which can be divided into three levels: system heterogeneity, mode heterogeneity and semantic heterogeneity. This paper has reviewed the technologies and schemes used to realize heterogeneous data source integration. A multi-layer ontology model oriented to business cooperation was then proposed and, based on this model, a multi-level ontology warehouse was constructed, which can be used to resolve semantic conflicts between enterprises. A framework for business cooperation was proposed, and the related key technologies were discussed. In our framework, the similarity algorithm is used to produce ontology mapping files, which are then used for ontology mapping, business process inference and semantic transformation. The mapping files are crucial in ensuring semantic consistency among the different local ontologies. Based on the mapping files, the platform provides a feasible solution for inter-enterprise business integration.
7. Acknowledgments The authors are grateful for the financial support from National High-Tech. R&D Program of China (2006AA04Z157) and the Zhejiang Natural Science Foundation (Y107360).
8. References
[1] Hu JQ, Guo CG, Wang HM, Zou P, (2005) Web Services Peer-to-Peer Discovery Service for Automated Web Service Composition. Lecture Notes in Computer Science 3619: 509-518
[2] Grønmo R, Jaeger MC, Hoff H, (2005) Transformations Between UML and OWL-S. Lecture Notes in Computer Science 3748: 269-283
[3] Qi GN, Schottner J, Gu XJ (eds.) (2005) Illustrating product data management. China Machine Press, Beijing (in Chinese)
[4] Patil L, Dutta D, Sriram R, (2005) Ontology-based exchange of product data semantics. IEEE Transactions on Automation Science and Engineering 2: 213-225
[5] Kim KY, Chae SH, Suh HW, (2004) An Approach to Semantic Mapping using Product Ontology for CPC Environment. Journal of Korea CAD/CAM 9: 192-202
[6] Dartigues C (2003) Product data exchange in a cooperative environment. Ph.D. dissertation, Univ. of Lyon 1, Lyon, France
[7] Barkmeyer EJ, Feeney AB, Denno P, Flater DW, Libes DE, Steves MP, Wallace EK (2003) Concepts For Automating Systems Integration (Tech. Rep. NISTIR 6928). National Inst. of Standards and Technol., Boulder, CO. Available: http://www.nist.gov/msidlibrary/doc/AMIS-Concepts.pdf
[8] Bellatreche L, Dung NX, Pierra G, Hondjack D (2006) Contribution of ontology-based data modeling to automatic integration of electronic catalogues within engineering databases. Computers in Industry 57: 711-724
Collaboration Knowledge Management and Sharing Services to Support a Virtual Organisation

Muqi Wulan1, Xiaojun Dai2, Keith Popplewell1
1 Faculty of Engineering and Computing, Coventry University, Coventry, CV1 5FB, UK
2 Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
Abstract. Analysing the requirements of knowledge management for collaboration in a Virtual Organisation (VO), this paper investigates technical solutions for Partner Knowledge Management Services (PKMS): web services which provide knowledge sharing and protection within such collaborations. PKMS is designed in compliance with Service-Oriented Architecture (SOA). Initially, several SOA design principles are examined and adopted to build up the structure of PKMS. As a consequence, PKMS is composed of High-Level Services and Low-Level Services in a hierarchy, which ensures that services can be reused and contribute to adaptive and flexible business applications. The functions and relationships of the High-Level and Low-Level Services are presented herein. The architecture of PKMS can be implemented through four structural layers: presentation, business logic, middleware and persistence. Keywords: knowledge management, knowledge sharing, collaboration, VO, web services
1. Introduction
Knowledge management has become one of the most indispensable strategies for an organisation or an enterprise to maintain its competitive advantages and core competences over its rivals [1]. Virtual Organisations (VOs) emerge as a response to fast-changing markets. As a new organisational form, they have similar requirements and face even more complicated issues in knowledge management. A VO is regarded as a temporary alliance of independent enterprises based on network technology [2]. These enterprises collaborate to complement each other's core competences and resources in order to generate new solutions to a common business objective which may not be achievable by a single enterprise. Being virtual provides a vision appropriate not only for large businesses but also for small and medium enterprises (SMEs). Therefore, compared to knowledge management in a single
enterprise, knowledge management within a VO involves more perspectives and more levels, since organisations of several different forms and sizes act and interact over the course of a VO's life cycle. At the same time, being virtual brings new challenges in how to effectively manage virtual resources along with physical and human capital [3]. Furthermore, knowledge management in a VO is typified by both sharing and proper protection of knowledge. Sharing of knowledge between enterprise partners is critical to the success of any network organisation, whilst the commercially valuable expertise and IPR possessed by an individual enterprise should be protected in a secure way. From the individual enterprise's point of view, it may share knowledge either within the enterprise or with external collaborating enterprises [4], and there are notable differences in approach between internal and external knowledge sharing. Addressing the above requirements for knowledge management in a VO, this paper focuses on the major issues and describes implementation methods provided through web services. It is essential to identify the categories of organisations that occur in VO collaboration and how these organisations behave in the process of forming, operating and terminating the VO. A related paper [5] discusses these organisation categories in detail, identifying, as well as the virtual enterprise itself, the individual enterprise and the Collaboration Pool, whose members are individual enterprises that cooperate loosely, and perhaps very informally, to identify and address potential business opportunities; a (possibly quite small) subset of the Collaboration Pool can come together to pool their capabilities to address an opportunity, thus giving rise to a new VO. Efficient approaches to collaboration knowledge management in the enterprise, the Collaboration Pool and the VO should provide reliable and trusted sharing and protection of knowledge. This is fully reflected in the design prototype of the collaboration knowledge services, which are to be implemented as web services in accordance with service-oriented architecture (SOA).
2. Collaboration Knowledge Services
Collaboration knowledge services implement operations on the knowledge repositories and provide appropriate mechanisms to manage and use the related knowledge therein in a beneficial and secure way. We call these services Partner Knowledge Management Services (PKMS). PKMS is defined to include all services associated with the discovery, creation, management and security of knowledge held, owned or shared by individual enterprises, a Collaboration Pool and a Virtual Organisation. It enables enterprises (service users), whether already collaborating or seeking collaboration, to make use of different knowledge repositories, and it supports decision making in the execution of business activities. On the other hand, PKMS allows other services to access the knowledge repositories under secure control to retrieve required information or facts.
2.1. Architectural Design
One of the key principles of service-orientation is "designing services within a business process context and aligning services with individual process steps" [6]. In the real world, the specific goals of collaboration differ, corresponding to divergent business requirements. Not all enterprises participating in a collaboration experience the entire VO life cycle; some may, for example, join only after the early stages of business opportunity selection and VO formation are complete. PKMS should therefore be capable of dynamic configuration to meet such requirements. Applying a service-oriented architecture (SOA) is quite different from the traditional approach of coding functionality as a whole into a large system. SOA breaks the overall functionality down into smaller components and assigns a set of building blocks performing specific tasks or units of functionality. These blocks are then used to build up high-valued, targeted functionality. We call such a building block a service: a software resource which can be used, reused or orchestrated in more complex business contexts. Design principles are concerned with how to construct such building blocks and ensure the overall system follows SOA from an architectural point of view [7]. Normally these principles are not orthogonal and are even strongly correlated with each other. Consequently, we discuss several service design principles considered in structuring PKMS.
2.2. Design Principles
Service reusability. Service reusability is a fundamental criterion in designing and evaluating a service: services should be designed to be as reusable as possible. Notably, the essence of service reusability is not purely multiple uses of a service, but minimising the impact on service consumers' applications when they respond to business change. Reusability is therefore not a purpose but a means to achieve the goal. Furthermore, the reusability of a service may vary with the level of logic and the execution context.
Service levels of abstraction. It is necessary to design services at a correct level of abstraction and with well-defined interfaces so that they can be used in a consistent way. As a starting point, analysing use cases of a VO's life cycle helped to identify the functional service components of PKMS. A two-level service hierarchy is proposed for the architecture of PKMS, comprising low-level services and high-level services. We define the low-level data or knowledge manipulation and application operations as the Low-Level Services of PKMS. They are primarily technical and provide basic data operations for other services and applications. After extracting the low-level services, the high-level services can be determined; high-level services implement support for a business process. Table 1 compares the high-level and low-level services in terms of logic type and execution context. Due to their different execution contexts, low-level services are likely to be more widely reused, although this is not an
absolute rule: there are significant benefits in reuse of high-level services whenever this is possible.

Table 1. Characteristics of different level services in PKMS

PKMS service level     Logic type    Execution contexts
High-level services    Business      Business
Low-level services     Technical     Business or Technical
Service granularity. If services are designed in a hierarchical structure, they will certainly have differing granularities. Service granularity is a relative measure of the overall amount of functionality encapsulated by a service, and it is determined by the service's functional context. For instance, compared to high-level services, low-level services address smaller units of functionality and exchange smaller amounts of data. In general, large numbers of low-level services may be orchestrated to contribute to and automate complex business processes. Technically, the granularity of a service's interfaces influences the granularity of the service. Coarse-grained interfaces are recommended for external integration such as service choreography, since they are less sensitive to changes (especially technical changes); this allows users such as business experts to use services productively in business applications without in-depth technical skills. In contrast, services with fine-grained interfaces are suitable for internal usage, providing the potential for reuse. Fig. 1 briefly shows the associations between interface granularity and service levels. There is a balance to be struck between coarse-grained and fine-grained interfaces for well-designed services.
Fig. 1. Degrees of service granularity
3. Conceptual Architecture of PKMS
Here the architecture represents the conceptual design that defines the structural and behavioural aspects of PKMS. PKMS is composed of a collection of interacting services with published interfaces and follows the features of a service-oriented architecture (SOA). PKMS synchronises high-valued business applications with low-level IT implementations through a layered architecture (as shown in Fig. 2), constructed in four architectural layers: Presentation, Business logic, Middleware and Persistence.
Fig. 2. Technical architecture of PKMS
Presentation layer: The presentation layer is the appearance which the services present to their consumers/users. At this layer, services should be easily understood by the consumers so that they can be integrated into a business process. Technically, the services provide interfaces to consumers for the business application.
Business logic layer: In the general 3-tier architecture, the term "business logic" describes "the functional algorithm that handles information exchange between a database and a user interface" [8]. In this context there are two middle layers, the business logic layer and the middleware layer, responsible for bridging between the interface and the knowledge base. The High-Level Services of PKMS are positioned in this layer. They perform the expected functions and can be externally orchestrated with other services.
Middleware layer:
The Middleware Layer provides computer-based solutions that integrate services. At its core is an Enterprise Service Bus (ESB), which allows a service to be located transparently across the networks so as to make interaction or connection with other services possible.
Persistence layer: This layer persistently stores knowledge or data generated by services and makes it available for future retrieval by services. All the collaboration knowledge is structured and resides herein under three categories: the Collaboration Pool Knowledge Base (CPKB), the VO Knowledge Base (VOKB) and the SYNERGY Enterprise Knowledge Base (EKB). It also permits an enterprise to have its own enterprise knowledge base existing independently of SYNERGY, although this requires appropriate mediation to access private knowledge bases. For security and sharing reasons, there must be effective access controls on the collaboration knowledge bases. As a business process is executed, the CPKB, VOKB and EKB will normally be populated and updated.
3.1. Low-Level Services of PKMS
In PKMS the Low-Level Services provide the generic operations on collaboration knowledge and related data within a technical execution context. They are at the physical level, supporting the high-level business-valued applications, and may not be visible to the users engaged in such activities. Low-Level Services are reusable by High-Level Services. Table 2 is a service portfolio listing the identified Low-Level Services of PKMS (an interface sketch follows the table).

Table 2. Low-Level Service portfolio

KB Creation: Creates an unpopulated instance of a KB (EKB, CPKB or VOKB) for use by a newly formed Collaboration Pool or VO, or by an Enterprise. These are structured according to the ontology.
Knowledge Storage: Stores knowledge in a KB, subject to access control.
Knowledge Retrieval: Retrieves knowledge from a KB, subject to access control.
Knowledge Modification: Modifies knowledge in a KB, subject to access control.
Knowledge Removal: Removes knowledge from a KB, subject to access control.
Knowledge Access Control: Specifies the restrictions on access to the content of KBs. The specified access control may be enforced within the storage and retrieval services, or by these services requesting access control service support; this will be determined during research.
KB Housekeeping: Provides KB housekeeping services (backup, recovery, trace, etc.)
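As an illustration only, the Low-Level Service portfolio of Table 2 could be rendered as a service interface along the following lines; every type and method name here is a hypothetical sketch rather than the SYNERGY implementation.

public interface LowLevelPkmsService {

    enum KbKind { EKB, CPKB, VOKB }

    // Stand-ins for the concrete types the platform would define.
    interface KnowledgeBase { }
    interface KnowledgeItem { }
    interface AccessContext { }

    /** KB Creation: an unpopulated KB instance, structured according to the ontology. */
    KnowledgeBase createKb(KbKind kind, String ownerId);

    /** Knowledge Storage / Retrieval / Modification / Removal, all subject to access control. */
    void store(KnowledgeBase kb, KnowledgeItem item, AccessContext ctx);
    KnowledgeItem retrieve(KnowledgeBase kb, String itemId, AccessContext ctx);
    void modify(KnowledgeBase kb, String itemId, KnowledgeItem newValue, AccessContext ctx);
    void remove(KnowledgeBase kb, String itemId, AccessContext ctx);

    /** Knowledge Access Control: declares the restrictions on access to KB content. */
    void setAccessRestrictions(KnowledgeBase kb, String itemId, AccessContext allowed);

    /** KB Housekeeping: backup, recovery, trace, etc. */
    void housekeeping(KnowledgeBase kb, String operation);
}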
3.2. High-Level Services of PKMS
Compared with the Low-Level Services, the High-Level Services of PKMS are business-meaningful. They can compose or assemble Low-Level Services, and they can also be orchestrated with other services to accomplish targeted business processes. High-Level Services require user interaction and invocation. They provide knowledge management services directly for users who seek collaboration or are already participating in a collaboration. Table 3 lists the identified High-Level Services of PKMS (a composition sketch follows the table).

Table 3. High-Level Service portfolio

CP Inauguration: Creates the infrastructure to support a newly formed Collaboration Pool. It instantiates a new CPKB and populates it with shared CP knowledge, through interaction with the user.
VO Inauguration: Creates the infrastructure to support a newly formed VO. It instantiates a new VOKB and populates it with shared VO knowledge, through interaction with the user.
Collaboration Registration: Enables users to register in a collaboration. Registration into both a CP and a VO is supported. Where a registering partner has no pre-existing SYNERGY EKB, one is instantiated and populated in part during registration. Where an EKB already exists, the CPKB is in part populated from it during CP registration. Similarly, in VO registration, EKB and CPKB knowledge is used to partially populate the VOKB.
Collaboration Knowledge Browser: Provides search, retrieval and display of collaboration knowledge from the VOKB, CPKB and EKB, as appropriate to the context from which it is invoked. Low-Level PKMS services apply access control.
Collaboration Knowledge Population: Provides facilities for insertion and amendment of collaboration knowledge in the VOKB, CPKB and EKB, as appropriate to the context from which it is invoked. Low-Level PKMS services apply access control.
Collaboration Knowledge Archive: Offers shared VO collaboration knowledge to the VO partners, for storage in their own EKB or CPKB at the termination of a VO. Partners are under no obligation to accept any item of knowledge, but such knowledge is otherwise lost when the VOKB is closed down.
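To illustrate how a High-Level Service might compose the Low-Level operations, the sketch below outlines a Collaboration Registration flow; it reuses the hypothetical LowLevelPkmsService interface from the previous sketch, and the registration logic is only indicative of the behaviour described in Table 3.

public final class CollaborationRegistration {

    private final LowLevelPkmsService lowLevel;

    public CollaborationRegistration(LowLevelPkmsService lowLevel) {
        this.lowLevel = lowLevel;
    }

    /** Registers a partner into a Collaboration Pool, instantiating an EKB if absent. */
    public void registerIntoPool(String partnerId,
                                 LowLevelPkmsService.KnowledgeBase cpkb,
                                 LowLevelPkmsService.KnowledgeBase ekb,
                                 LowLevelPkmsService.AccessContext ctx) {
        if (ekb == null) {
            // No pre-existing SYNERGY EKB: instantiate one during registration.
            ekb = lowLevel.createKb(LowLevelPkmsService.KbKind.EKB, partnerId);
        }
        // Partially populate the CPKB from the partner's shared enterprise knowledge.
        LowLevelPkmsService.KnowledgeItem shared =
                lowLevel.retrieve(ekb, "sharedProfile", ctx); // hypothetical item id
        if (shared != null) {
            lowLevel.store(cpkb, shared, ctx);
        }
    }
}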
3.3. Views in PKMS
The views in PKMS are considered in terms of the users' concerns and express how PKMS is presented to users. A view allows different types of users to take advantage of PKMS efficiently, and it provides insight for decision-making on the collaboration knowledge provided by PKMS while users pursue their business objectives. PKMS offers four views of its High-Level Services, related to the business activities of Collaboration Pool Management, VO Collaboration Management (i.e. management of the collaboration itself), VO Operations Management (i.e. management of the operational activities of the VO) and Risk Management. Each view provides access to services supporting the related business processes, and the views characterise a user's concerns in carrying out a collaboration-related task. With the views and hierarchical services of PKMS identified, the reusability relationships among them are presented in Fig. 3. A number of services are clearly used in the context of more than one view, reflecting the SOA principle of reusability. However, we also recognise a tension between this and the need to achieve appropriate service granularity. For example, the Collaboration Registration service can be applied in registering an enterprise as a member of either a Collaboration Pool or a VO, and there is much commonality between the requirements for each, though one interacts primarily with the Collaboration Pool Knowledge Base and the other with the VO Knowledge Base, the latter requiring more extensive and detailed registration content. This commonality of function leads us to propose a single service, even though the processes followed by the service depend in detail on the context of its use (CP or VO). The alternative, applying different services for CP Registration and VO Registration, achieves a smaller granularity at the expense of reduced reusability.
Fig. 3. Reusability between views and services
The Collaboration Pool View is built on four High-Level Services of PKMS: Collaboration Registration, CP Inauguration, Collaboration Knowledge Browser and Collaboration Knowledge Population. The VO Management View is composed of five High-Level Services of PKMS (Collaboration Registration, Collaboration Knowledge Browser, Collaboration Knowledge Population, VO Inauguration and Collaboration Knowledge Archive) along with the Collaboration Risk Evaluator. Three High-Level Services are therefore reused and shared between the CP Management View and the VO Management View. The third view, VO Operations Management, is constructed from the Collaboration Knowledge Browser, Collaboration Knowledge Population and Collaboration Risk Evaluator, which are also used by the VO Management View. Of all the high-level services shown, the Collaboration Knowledge Browser and Collaboration Knowledge Population are the most reused components among the three views of PKMS. In addition, the operations provided by the Low-Level Services are reused by all High-Level Services of PKMS.
4. Conclusions
This paper has discussed the issues of knowledge management for VOs from the perspective of the design of Partner Knowledge Management Services (PKMS): web services to support knowledge management for the evolving categories of organisations involved in VO collaboration. In [5], three such categories of organisations are identified as working within the context of a VO: individual enterprises, collaboration pools and virtual organisations. The proposed service architecture recognises that two levels of PKMS service are needed for each of these categories: (1) High-Level Services, which provide business-oriented interfaces for storage, manipulation, retrieval and application of knowledge, and (2) Low-Level Services, providing a common infrastructure for access to the knowledge repository. This repository is founded on a knowledge ontology described in [5].
5. Acknowledgements This work is fully supported by the European Union funded 7th framework research project – Supporting Highly Adaptive Network Enterprise Collaboration Through Semantically-Enabled Knowledge Services (SYNERGY).
6. References
[1] Geisler E, Wickramasinghe N, (2009) Principles of knowledge management: theory, practices and cases. M.E. Sharpe
[2] Luczak H, Hauser A, (2005) Knowledge management in virtual organizations. Proceedings of the International Conference on Services Systems and Services Management, vol. 2: 898-902
[3] Klobas J, Jackson P, (2008) Being virtual: knowledge management and transformation of the distributed organisation. Physica-Verlag
[4] Kess P, Phusavat K, Torkko M, Takala J, (2008) External knowledge: sharing and transfer for outsourcing relationships. International Journal of Business and Systems Research, vol. 2, no. 2: 196-213
[5] Wulan M, Dai X, Popplewell K, (2010) Collaboration knowledge ontologies to support knowledge management and sharing in Virtual Organisations. Accepted by the International Conference on Interoperability for Enterprise Software and Applications, Coventry, UK
[6] Salunga A, (2008) What process experts need to know about SOA. Forrester Research, available at http://www.forrester.com/Research/Document/0,7211,45534,00.html
[7] Erl T, (2007) SOA Principles of Service Design. Prentice-Hall/Pearson PTR
[8] Wikipedia, (2009) available at http://en.wikipedia.org/wiki/Business_logic
Developing a Science Base for Enterprise Interoperability

Yannis Charalabidis1, Ricardo Jardim Gonçalves2, Keith Popplewell3
1 University of the Aegean, Greece
2 New University of Lisbon, Portugal
3 Coventry University, UK
Abstract. Interoperability is an important characteristic of information systems, organisations, their processes and data. Achieving automated collaboration of processes and systems may lead to a dramatic increase in productivity for enterprises of any size. As a result of this projected benefit, interoperability has been prescribed by numerous standardization frameworks, enterprise-level guidelines, data schemas and techniques to tackle the problem of non-communicating systems or organisations. In parallel, most international software, hardware and service vendors have created their own strategies for achieving the goal of open, collaborative, loosely coupled systems and components. This paper goes beyond presenting the main milestones in this fascinating quest for collaboration between people, systems and information: it attempts to describe how this new interdisciplinary research area can transform into a vibrant scientific domain by applying the necessary methods and tools. To achieve that, the chapter presents the ingredients of this new domain, proposes the formal and systematic tools it needs, explores its relation to neighbouring scientific domains and, finally, prescribes the next steps towards the thrilling goal of laying the foundations of a new science. Keywords: Enterprise Interoperability, Scientific Foundation, Formal Methods
1. Introduction
Interoperability is a key characteristic of organisations, processes and systems for increasing overall productivity and effectiveness, in both the public and the private sector. Since its inception as "the ability of systems, units, or forces to provide services to and accept services from other systems, units, or forces and to use the services so exchanged to enable them to operate effectively together" [1], and through the years, interoperability has tended to acquire a broader, all-inclusive scope as a recurring feature of well-organized, collaborative organizations.
Interoperability can dramatically decrease the costs, risks and complexity of information systems, and it is now a most important characteristic for organisations and their ICT infrastructures, representing a challenge to competition policies in Europe and America [2]. Yankee Group further advises IT departments to focus on interoperability technologies and skills as a core competency imperative, envisaging savings of more than one-third of the total cost of ownership if they succeed in achieving business and technical interoperability [3]. Gartner Group identifies the importance of interoperability in public sector systems and processes, considering it a key element in digital public service provision [4]. However, projects involving integration, interoperation and interoperability have been conducted from different vantage points; they are typically multi-faceted and complex, and they run a high risk of failure [5]. In this direction, escalating economic and societal demands, together with the continued mainstreaming of ICT and the need to push the technology limits further, have set a growing agenda for interoperability research. According to the European Commission Enterprise Interoperability Roadmap [6], four Grand Challenges that collectively constitute a long-term strategic direction for research in enterprise interoperability are recognized:
• The Interoperability Service Utility (ISU), which provides interoperability as a technical, commoditised functionality, delivered as services.
• Future internet enterprise systems, seeking to apply the concepts, technologies and solutions flowing from developments in Web technology to address the problems of enterprise interoperability and result in next-generation, service-oriented business models.
• Knowledge-oriented collaboration, which builds on state-of-the-art research on enterprise interoperability.
• A science base for enterprise interoperability, combining and extending the findings of other established and emerging sciences.
Thus, Enterprise Interoperability (EI) is a well-established applied research area, studying the problems related to the lack of interoperability in organizations and proposing novel methods and frameworks that contribute innovative solutions to EI problems. However, in spite of the research developed so far, the scientific foundations of EI have not yet been established. This is a deficit recognized by the EI research community, preventing the generalization and complete reuse of the methods and tools that have been developed. Among the scientific domains recognized by the international community, the authors identify the following as being among those that might contribute to the EI scientific foundations: system complexity, network science, and information and web science. Up to now, the principal tools for targeting the above challenges have been the various standards that try to govern information systems development and operation. Such standards are usually linked with specific market sectors, application areas or technology trends, thus having a limited time span, a static nature and quite often different interpretations by technology vendors [7,34,35]. So, interoperability has to be studied and developed as a rigorous mathematical and
scientifically-lawful phenomenon, following scientific practices similar to neighbouring domains.
2. Policy and Research Attempts for Interoperability
As the key importance of interoperability in the public and private sector was identified within the 20th century, a series of activities was implemented in order to provide administrations and enterprises with the necessary policy instruments and standards. In parallel, important research initiatives were attempting to develop the first tools for assisting the systematic pursuit of this "new and promising" ingredient of ICT infrastructures. Key milestones in the interoperability-related activities of the European Commission and the European Parliament include the following:
• The creation of a Pan-European set of standards and guidelines, issued by the European Commission unit "Interoperable Delivery of European eGovernment Services to public Administrations, Businesses and Citizens", in the form of the European Interoperability Framework v.1 [8].
• The development of the Enterprise Interoperability Research Roadmap by the European Commission, DG Information Society and Media, as an instrument to set the research challenges of the enterprise interoperability domain, both for user companies and for the ICT research community [9].
• The publication of the European Commission MODINIS Programme "Study on Interoperability at Local and Regional Level", identifying the main inhibitors of interoperable services at local administration level [10].
• The publication of several communications and directives from the European Commission and the European Parliament, underlining the importance of interoperability in areas like public sector information systems, Geographical Information Systems, with the INSPIRE Directive [11], and the free flow of services around the European Union, with the Services Directive [12].
Following European Union interoperability-related activities, and especially the EIF guidelines, most EU member states invested in the development of National Interoperability Frameworks (NIF’s), soon transforming into infrastructures for designing, testing and maintaining interoperable processes and systems [13,32]. A collection of some pioneering research attempts supported by the European Union Framework Programme (FP), would yield the following: •
•
The ATHENA integrated project on “Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications”, gathering many European software houses in an effort to achieve basic interoperability features for common applications [14]. The GENESIS research project, on interoperability among financial applications of Small and Medium Enterprises through XML-based interconnection [15].
• The INTEROP Network of Excellence, transforming into the INTEROP VLab research network, now operating in more than 10 European Union member states [16].
• The COIN integrated project, started in 2008, researching advanced interoperability infrastructures and utility-like provision of electronic services [17].
• The SYNERGY research project, which since 2008 has focused on issues of knowledge-oriented enterprise collaboration [18].
The above interoperability-related initiatives, complemented by several industry-led campaigns and also United Nations (UN) initiatives on interoperability standardisation for non-European countries [19], formed the basis of research and practice during the last 10 years, proving the importance of interoperability and defining the first guidelines for its pursuit.
3. The Elements of the New Scientific Domain

There have been many approaches in recent years to analysing the internal characteristics of interoperability, its ingredients and their nature. As stated in the ATHENA Enterprise Interoperability research project [14] and also in the INTEROP project results [20], interoperability can be studied along the aspects of technical, semantic and organisational issues. Going further, the latest revision of the European Interoperability Framework suggests that policy-related issues should also be included in the definition of interoperability, covering legal and statutory topics as well [33]. Trying to integrate these interoperability levels, which denote the different aspects of organisations, systems, processes or data where interoperability appears as a key issue, the following facets are proposed (a small illustration of the semantic facet follows the list):
• Technical Interoperability, investigating problems and proposing solutions for the technical-level interconnection of ICT systems and the basic protocols, digital formats or even security and accessibility mechanisms.
• Semantic Interoperability, including methods and tools, usually in the form of ontologies or standardized data schemas, to tackle issues of automated information sharing during the various process execution steps.
• Organisational Interoperability, relating to the problems and solutions relevant to business processes, functional organisation or cross-enterprise collaboration activities – usually involving various different ICT systems and data sources.
• Enterprise Interoperability, referring to the alignment of higher enterprise functions or government policies, usually expressed in the form of legal elements, business rules, strategic goals or collaborative supply chain layouts.
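As an aside on the semantic facet, the following minimal sketch shows how a shared (canonical) vocabulary resolves naming differences between two partners' data schemas; all partner names, field names and mapping tables are hypothetical, chosen purely for illustration.

# Minimal sketch of semantic mediation between two partners' vocabularies.
# All partner names, field names and mapping tables are hypothetical.

PARTNER_A_TO_CANONICAL = {
    "order_no": "order_id",
    "cust_ref": "customer_id",
    "qty": "quantity",
}
PARTNER_B_TO_CANONICAL = {
    "purchase_order": "order_id",
    "client_code": "customer_id",
    "amount_ordered": "quantity",
}

def to_canonical(record: dict, mapping: dict) -> dict:
    """Rename a partner's fields into the shared (canonical) vocabulary."""
    return {mapping.get(field, field): value for field, value in record.items()}

a_msg = {"order_no": 42, "cust_ref": "C-17", "qty": 5}
b_msg = {"purchase_order": 42, "client_code": "C-17", "amount_ordered": 5}

# Once lifted to the canonical schema, the two messages become comparable.
assert to_canonical(a_msg, PARTNER_A_TO_CANONICAL) == \
       to_canonical(b_msg, PARTNER_B_TO_CANONICAL)

In practice such mappings are maintained in ontologies or standardized schemas rather than hard-coded tables, but the mediation principle is the same.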
Following on the various initiatives around the formulation of interoperability under the European Union Research Framework Programme, the DG Information Society and Media initiated a Task Force on Enterprise Interoperability Science Base (EISB), in order to further study the steps needed towards the establishment
of a "scientific base" for this new domain [21]. The Task Force announced an initial agenda within 2009, stating the steps and methods needed to advance interoperability towards its scientific foundation [22]. In an approach similar to those of other neighboring domains, such as software engineering [2], the following elements are envisaged for the enterprise interoperability scientific foundation, as shown in Figure 1:

a) At the foundational semantics level
• Formalisation of the problem space, containing formal methods for the description and analysis of various occurrences of interoperability problems at technical, semantic, organisational or enterprise level.
• Formalisation of the solution space, in an effort to provide a systematic mapping of solution-oriented approaches, models or even algorithms to support the systematic resolution of interoperability shortcomings in various problem statements.

b) At the models level
• Assessment models, for effectively measuring the status of various systems or organisations, accompanied by proper metrics that allow the relative analysis of individual systems against the state-of-the-art or the state-of-practice in each application domain (a minimal sketch of such a metric is given after Figure 1).
• Specific solving algorithms, by means of well-defined procedural or other methods, to tackle effectively specific problem formulations.

c) At the tools level
• Simulation tools that will allow the laboratory-based reproduction and analysis of real problems, simulating enterprise gains and losses from non-interoperable processes, data or ICT systems.
• Testing tools that will support the machine-supported testing of simple or complex configurations, providing automation to the previously mentioned assessment models.
• Other interoperability infrastructures, supporting the achievement of interoperable behavior of systems and organisations, such as interoperability service registries, federated mechanisms for the maintenance of common semantic taxonomies, or even semantically enriched interoperability standardisation repositories.

d) At the orchestration level
• Interoperability frameworks, for enterprises, public sector organisations and specific application sub-domains.
• Business models, providing scenarios for value generation through collaboration, in an attempt to give proven guidelines for maximisation of productivity in larger cycles – in comparison with the software life-cycle approaches in software engineering.
Fig. 1. The Scientific Elements of Interoperability
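To make the idea of assessment models concrete, here is a minimal sketch of a weighted scoring metric over the four facets introduced in the previous section; the 0 to 4 level scale and the weights are assumptions made for this illustration, not something prescribed by the EISB Task Force agenda.

# Illustrative assessment model for the four interoperability facets:
# a weighted score. The 0..4 level scale and the weights are assumptions
# made for this sketch, not part of the EISB Task Force agenda.

FACETS = ("technical", "semantic", "organisational", "enterprise")

def assess(levels: dict, weights: dict) -> float:
    """Weighted interoperability score in [0, 1]; facet levels rated 0..4."""
    total_weight = sum(weights[f] for f in FACETS)
    return sum(weights[f] * levels[f] / 4 for f in FACETS) / total_weight

system = {"technical": 4, "semantic": 2, "organisational": 3, "enterprise": 2}
weights = {"technical": 1, "semantic": 2, "organisational": 2, "enterprise": 1}

print(f"interoperability score: {assess(system, weights):.2f}")  # prints 0.67

A metric of this kind would allow the relative analysis of individual systems against the state-of-practice of a domain, as the assessment models bullet above envisages.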
4. Neighboring Scientific Domains and Approaches

Interoperability is a multi-disciplinary domain by nature, as it applies as a capability to technical, semantic or human-centric systems. As a result, the establishment of a scientific base will also have to be strongly related to neighboring sciences and scientific domains. A basic classification of science [23] identifies as related scientific domains those of the social, applied and formal sciences, while the natural sciences (e.g. astronomy, biology, chemistry, earth sciences and physics) have to be considered more distant to interoperability. Analysing the tree of social, applied and formal sciences against the four identified levels of scientific elements of interoperability (semantics, models, tools, orchestration), as presented in the previous section, the following scientific domains are proposed:
• At the level of semantics, the mathematical domains of logic, set theory, graph theory and information theory seem to have practical applications for describing interoperability problems in a formal way (see the sketch after this list). A mention of pattern theory also has to be made in this area, both in the form of design patterns [24] and in the more mathematical form of general pattern theory [25].
• At the level of models and tools, one should look for existing knowledge in the neighboring domains of systems theory, systems engineering, computer algorithms or operational research. Service science [26] should also not be overlooked in the needed definitions of models and tools for interoperability at this level, nor should systemic simulation approaches, such as the System Dynamics approach [27].
• At the orchestration level, where more generic formulations are needed, the social sciences provide a sound scientific corpus, in the form of economics, legal science or even public administration and management.
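As an illustration of the graph-theoretic direction mentioned in the first item above, the following sketch models an enterprise network as a graph whose edges mark pairs that already interoperate; graph reachability then gives a formal reading of "can interoperate via intermediaries". The network and the enterprise names are hypothetical.

# Sketch: an enterprise network as an undirected graph, where an edge marks
# a pair of enterprises that already interoperate. Reachability then answers
# whether two enterprises can interoperate via intermediaries (for example,
# through shared mediators or adapters). Names are hypothetical.
from collections import deque

network = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B"},
    "D": set(),          # D interoperates with nobody
}

def can_interoperate(graph: dict, src: str, dst: str) -> bool:
    """Breadth-first search for an interoperability path."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for peer in graph[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return False

print(can_interoperate(network, "A", "C"))  # True, via B
print(can_interoperate(network, "A", "D"))  # False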
In addition to the above directions, special attention should also be given to more focused approaches and propositions for a formal framework to describe interoperability, such as the application of category theory to semantic interoperability [28], combined category theory and calculus approaches [29], or the application of the knowledge discovery metamodel to the interoperability of legacy systems [30]. For the higher levels of interoperability, that is the organisational and enterprise interoperability facets, the scientific domains of systems complexity, network science and information science seem to have a high degree of relevance and applicability [31].
5. The Steps Towards a Science Base for Interoperability

In order to proceed towards the formulation of a scientific base for interoperability, research activities should target a new set of concepts, theories and principles derived from established and emerging sciences, as well as associated methods, techniques and practices for solving interoperability problems. A similar approach has been presented for software engineering science [2]. Taking this approach further, it is proposed that the research work be developed along six axes, as follows:
• Foundational principles: investigation of basic ideas and concepts, initial formal methods to describe problems and solutions, patterns identification, and critical research questions.
• Concept formulation: circulation of solution ideas, development of a research community, convergence on a compatible set of ideas, solutions to specific sub-problems, and refinement of the structure of fundamental problems.
• Development and extension: exploration of preliminary applications of the technological and scientific principles, population of formal descriptions and generalization of the various approaches.
• Internal enhancement and exploration: extension of the approaches to vertical domains, application of the technology to real problems, stabilization of technological means, initial assessment of impact, and development of training curricula and material.
• External enhancement and exploration: communication towards a broader community, substantiation of value and applicability, progress in detail towards complete system solutions, and embodiment within training programmes.
• Popularization: standardisation and methodologies for production quality, and systematic assistance in the commercialisation and marketing of scientific offerings.
6. Conclusions

In order to tackle interoperability problems effectively, in a systematic, repeatable and optimising manner, a scientific view of interoperability is needed. In this way, the scientific community will gradually focus its efforts on the most important and forward-looking parts of the problems, capitalising on problem situations that have been formally described, analysed and answered. An analysis of the present situation shows that a vibrant research community is in place, working in various research and development initiatives – though not avoiding the fragmentation effect due to the lack of formal methods and means. Today, several approaches exist for the fundamental means of describing the interoperability problem and solution space, and these need to be further organised and clarified: somewhere among information systems modelling methods, design pattern recognition and analysis, and more generic mathematical and logic approaches lie the core elements for formalising this complex, though repetitive, domain. This formalisation of the problem and solution space will then give way to the most demanding and promising levels: those of the models and tools needed to assess systematically the interoperable performance of systems and organisations, test various infrastructures and data configurations, simulate the impact of different solutions and finally achieve the status of systematically proposing sound solution paths to formally recognised situations. Medium to long-term steps in this scientific base development will have to deal with complete solution paradigms, possibly differentiated in various sectors of the economy, able to convince other neighboring sciences, or even non-scientists, of the feasibility and the overall return on investment of this approach. Then the scientific community of enterprise interoperability will be able to grow further, both internally and externally, tying in with other key research initiatives such as cloud computing, the internet of services and the internet of things. When the status of a first, solid scientific basis is reached, communication towards a broader community must follow, including the incorporation of interoperability science within academic and vocational training programmes. The main risks in this way of establishing the scientific foundation of interoperability relate both to the inherent complexity of the domain and to the apparent fragmentation of approaches at global level. These may result in a prolonged contradiction of approaches that are "non-interoperable" among themselves – thus resulting in further fragmentation of the scientific efforts. Another typical, unfortunate example is the slow pace of scientific development in some domains, which is then overtaken by faster technological evolution, resulting in an endless loop of scientific attempts that lag behind reality.
7. References

[1] DODD (1977) DODD 2010.6 Standardization and Interoperability of Weapon Systems and Equipment Within the North Atlantic Treaty Organization (NATO).
[2] Shaw M., Clements P. (2006) The Golden Age of Software Architecture, IEEE Software, March/April 2006.
[3] Yankee Group Report (2003) Interoperability Emerges as New Core Competency for Enterprise Architects. Available at: http://www.intersystems.com/ensemble/analysts/yankee.pdf
[4] Gartner Group (2007) Preparation for Update European Interoperability Framework 2.0 - Final Report. Available at: http://ec.europa.eu/idabc/servlets/Doc?id=29101
[5] Pardo T. A., Tayi G. K. (2007) Interorganizational information integration: A key enabler for digital government. Government Information Quarterly, 24, 691-715.
[6] Charalabidis Y., Gionis G., Hermann K-M., Martinez C. (2008) "Enterprise Interoperability Research Roadmap v.5.0", DG Information Society and Media, European Commission, March 2008.
[7] Figay N., Steiger-Garção A., Jardim-Gonçalves R. (2006) Enabling interoperability of STEP Application Protocols at meta-data and knowledge level. International Journal of Technology Management, Vol. 36 (4), pp. 402-421. ISSN 0267-5730.
[8] IDABC (2004) European Interoperability Framework for pan-European eGovernment Services, Version 1.0. Available at: http://ec.europa.eu/idabc/servlets/Doc?id=19529
[9] European Commission (2006) "Enterprise Interoperability Research Roadmap", DG Information Society and Media. Available at: ftp://ftp.cordis.europa.eu/pub/ist/docs/directorate_d/ebusiness/ei-roadmapfinal_en.pdf
[10] MODINIS (2007) Study on Interoperability at Local and Regional Level, Version 2.0. Available at: http://www.epractice.eu/files/media/media1309.pdf
[11] European Parliament (2007) Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 establishing an Infrastructure for Spatial Information in the European Community (INSPIRE).
[12] European Commission (2007) Handbook on implementation of the Services Directive. Retrieved April 15, 2009, from http://ec.europa.eu/internal_market/services/services-dir/index_en.htm
[13] Guijarro L. (2007) Interoperability frameworks and enterprise architectures in e-government initiatives in Europe and the United States. Government Information Quarterly, 24, 89-101.
[14] Ruggaber R. (2006) ATHENA - Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications, in Interoperability of Enterprise Software and Applications, Springer Publications.
[15] Charalabidis Y., Gionis G., Askounis D., Mayer P., Kalaboukas K., Stevens R., Kuhn H. (2007) Creating a Platform for End-to-End Business to Business and Government to Business Electronic Transactions in the new European Union: The GENESIS Project, eChallenges 2007 Conference, October 25-28, 2007, The Hague.
[16] Interop VLab (2009) The Virtual European Laboratory for Enterprise Interoperability. http://www.interop-vlab.eu/
[17] COIN (2008) COIN-IP Home Page, COIN project, 2008. http://www.coin-ip.eu/
[18] Popplewell K., Stojanovic N., Abecker A., Apostolou D., Mentzas G., Harding J. (2008) "Supporting Adaptive Enterprise Collaboration through Semantic Knowledge Services", in Enterprise Interoperability III: New Challenges and Industrial Approaches, eds. Mertins K., Ruggaber R., Popplewell K., Xu X., Springer 2008, ISBN 978-1-84800-220-3, pp. 381-393.
[19] United Nations Development Programme (UNDP) (2007) e-Government Interoperability: A Review of Government Interoperability Frameworks in Selected Countries. Available at: http://www.apdip.net/projects/gif/serieslaunch
[20] Doumeingts G., Muller J., Morel G., Vallespir B. (2007) Enterprise Interoperability: New Challenges and Approaches. Springer Publications.
[21] European Commission (2008) Enterprise Interoperability Science Base Task Force. http://cordis.europa.eu/fp7/ict/enet/fines-eisb_en.html
[22] Charalabidis Y., Goncalves R., Liapis A., Popplewell K. (2009) Towards a Scientific Foundation for Interoperability, European Commission EISB Task Force, June 2009. Available at: ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/enet/20090603-presentacion-charalabidis-goncalves-liapis-popplewell_en.pdf
[23] Wikipedia (2009) Science, Basic Classifications of Science. Available at: http://en.wikipedia.org/wiki/Science
[24] Gamma E., Helm R., Johnson R., Vlissides J. (1995) Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. ISBN 0-201-63361-2.
[25] Grenander U. (1996) Elements of Pattern Theory. Johns Hopkins University Press. ISBN 978-0801851889.
[26] Spohrer J., Maglio P., Bailey J., Gruhl D. (2007) Steps Toward a Science of Service Systems, IEEE Computer, Volume 40, Issue 1, pp. 71-77, January 2007.
[27] Karnopp D., Rosenberg R. (1975) System Dynamics: A Unified Approach, Wiley-Interscience Publications. ISBN 0471459402.
[28] Cafezeiro I., Haeusler E. H. (2007) Semantic interoperability via category theory, ACM International Conference Proceeding Series, Volume 83, Tutorials, posters, panels and industrial contributions at the 26th International Conference on Conceptual Modeling, Auckland, New Zealand.
[29] Rossiter N., Heather M., Nelson D. (2007) A Natural Basis for Interoperability, in Enterprise Interoperability: New Challenges and Approaches, Springer London. ISBN 978-1-84628-713-8.
[30] Dehlen V., Madiot F., Bruneliere H. (2008) Representing Legacy System Interoperability by Extending KDM, Proceedings of the MMSS'08 Model-driven Modernization of Software Systems workshop, ECMDA 2008, Berlin, Germany.
[31] Goncalves J. R., Steiger G. A. (2009) Towards EI as a science: Considerations and points of view, European Commission EISB Task Force, June 2009. Available at: ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/enet/20090603-paper-ei-science-goncalves-steiger_en.pdf
[32] Charalabidis Y., Tschichholz M., Hopkirk A. (2007a) Advancing the eGovernment Interoperability Framework in European Countries: Architectures, Challenges and Perspectives from the new Greek eGIF, eChallenges 2007 Conference Proceedings, October 25-28, 2007, The Hague.
[33] IDABC (2008) European Interoperability Framework draft version 2.0. Available at: http://ec.europa.eu/idabc/servlets/Doc?id=31508
[34] Leyton M. (2008) A Generative Theory of Shape, Lecture Notes in Computer Science, Springer. ISBN-10: 3540427171.
[35] Ralyte J., Jeusfeld M., Backlund P., Kuhn H., Arni-Bloch N. (2008) A knowledge-based approach to manage information systems interoperability. Information Systems, 33, 754-784.
From Pipes-and-Filters to Workflows
Thorsten Scheibler1, Dieter Roller1, Frank Leymann1
1 Universität Stuttgart, Institute of Architecture of Application Systems (IAAS), Universitätsstr. 38, D-70569 Stuttgart, {scheibler, dieter.h.roller, leymann}@iaas.uni-stuttgart.de
Abstract. The Pipes-and-Filters (PaF) architecture has been prominently exploited in the context of Enterprise Application Integration (EAI). The individual tasks have typically been implemented using specialized EAI-vendor technology, message flows, and quite often customer-specific implementations. This implementation approach is in conflict with flow technology, a cornerstone of the Service-Oriented Architecture (SOA). We show in this paper how this conflict can be resolved. We first show how the PaF architecture can be implemented using flow technology by transforming the appropriate PaF patterns, in particular those used in EAI, into appropriate WS-BPEL constructs. We then present the results of tests showing that the performance of the corresponding workflows is superior to the mapping of PaF patterns to message flows. We finish by outlining the additional tangible and non-tangible benefits that a Workflow Management System (WfMS) provides, such as monitoring and process instance management. In a nutshell, we illustrate that the PaF architecture does not require its own implementation: it is sufficient to have a PaF modeling tool and then convert the appropriate models to workflows for execution by an appropriate WfMS.
Keywords: Enterprise Application Integration, BPEL, Pipes-and-Filters, Workflows
1. Introduction

Customers usually try to use standard applications that they buy or lease from software vendors rather than developing applications in-house, as leasing or buying software tends to be considerably cheaper. So, unless there are very sophisticated or unique requirements that standard software does not cover, there is no reason to develop one's own applications. As a result, more and more application systems were put into production, each solving the problems of some aspect of the company, such as bookkeeping or customer relationship management. Typically each of the application systems managed its own data, normally in a form that was only understandable by the application
system. Furthermore, the applications usually provided application programming interfaces that were proprietary, not only in the data formats that were exchanged between the requester and the application software, but also in the invocation mechanism and the patterns that were used for controlling the communication. It should be noted that customer applications in general were also developed without any consideration for communication with vendor applications. As a result of these silos of data, the same information needed to be maintained in several places. For example, the personnel number of a newly hired employee needed to be entered into the mail system, the personnel system, and the payroll system. The simplest approach of replicating the data from one data store to another proved to be more complicated than expected: the structure of the data in the store was not known, the data access method was proprietary, the access control mechanisms were different, and so on. These types of problems are normally addressed via Enterprise Application Integration (EAI). Various approaches and platforms have been developed to solve this problem of integrating enterprise applications; prominent examples are workflows [12], message flows [2], and pipes-and-filters [16][13]. In recent years enterprise integration patterns [7] have gained importance for modeling integration solutions. These patterns are based on the PaF architecture. However, workflows seem to be more suitable for running integration solutions on a production system with certain quality-of-service requirements [12]. PaF and workflows are inherently different concepts; however, as shown in [14] and [15], PaF can be mapped onto workflows. In this paper, we show that an implementation of the PaF architecture can advantageously be delivered by a workflow management system. Sections 2 and 3 provide the base for understanding the underlying architectures of PaF and workflows; in particular, the advantages and disadvantages of the different architectures are highlighted. Section 4 shows a scenario that was developed as the base for carrying out the measurements, and the implementations that have been used for the two different architectures; we also present the setup in which the implementations were executed. Section 5 describes the test cases that were run, analyzes the results, shows that the performance of the workflow implementation is superior, and discusses other benefits that users gain by exploiting workflow technology.
2. Pipes-and-Filters Architecture

The pipes-and-filters architecture is quite simple: components (filters) that process data, and connections (pipes) that move the data emitted by one component to the next one for consumption [13]. This concept has been part of UNIX and other operating systems for quite some time, using files as sources and targets [1]. Each filter in the set of filters that make up an application carries out, independently of every other filter, a particular task that is needed in the application. It does so by reading a stream of data that it receives at its input interface, performing some operation, and then emitting a stream of data at its output interface. Quite often, processing of the input stream starts as soon as the first pieces of stream data arrive, and the filter emits as soon as possible the first pieces of the
output stream. It should be noted that the term filter is used for historical reasons and does not imply that data is only filtered. In fact, in EAI scenarios, the typical purpose of a filter is to access data in some database. Pipes are typically provided as separate components, such as data repositories or queues, whose sole purpose is to provide interfaces that allow data to be put in and taken out. A pipe is only responsible for transmitting data between filters; it does not carry out any processing of data. The main characteristic of the PaF architecture is the total isolation of each component; each component, be it pipe or filter, works independently of any other component. If, for example, the output of different filters needs to be merged, the only option is to organize them into a sequence in such a way that the output of one filter is further refined in the next filter. Since filters are isolated, they obviously also do not know their predecessors or successors. The described characteristics give the PaF architecture a set of attractive non-functional properties. First, a designer understands the overall input/output behavior of an application as a simple composition of the behaviors of the individual filters. Second, filters are highly reusable because of the simplicity of their interfaces: any two filters can be connected, provided they agree on the data that are exchanged between them. And thirdly, throughput and deadlock analysis can be applied fairly easily [13]. Implementations of the PaF architecture also benefit from the simplicity of the architecture. First, they are easy to understand, to maintain, and to enhance: new filters can be easily added and old filters can be replaced by new or improved ones. Second, testing is quite simple: each filter can be tested separately using appropriate scaffolding code. And finally, PaF systems can easily implement the concurrent processing of applications, by implementing each filter as a separate task or even as a set of separate tasks. [10] presents various examples of how scalability can be achieved through appropriate setups. Nevertheless, PaF systems have a few drawbacks. As filters are completely independent artifacts, they are often designed to process only complete input and output data; incremental processing of data is often not supported. This leads to a batched organization of the processing, in which a filter only starts to work after the previous filter has finished its work. Thus, the potential performance advantages of the parallel execution of filters cannot be realized. Another disadvantage occurs if a common data format has to be introduced because of the diversity of formats offered by the implementations of the filters. This requires additional work to be carried out by each filter, such as parsing and constructing data; the consequence is reduced performance and increased complexity in writing filters. Another disadvantage is the resource demand of a pipe: for each pipe connecting two filters a separate resource has to be allocated, significantly driving up the resource consumption in complex scenarios with a large number of pipes. Furthermore, PaF is not well suited for handling interactive applications [5]; filters are not designed for interacting with the environment, being intended for small, self-contained functionality. Finally, the efficiency of PaF systems may be impacted by having to maintain correlations between two separate but related streams.
As PaF systems are inherently loosely coupled, sharing state or having a global status of the overall system is not supported; that means PaF
systems do not support monitoring functions for tracking the execution of an EAI solution.
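The following minimal sketch, which is not from the paper and whose filter names are hypothetical, illustrates the PaF style using Python generators; chaining generators also shows the incremental, stream-wise processing discussed above, in contrast to the batched organization that many filter implementations fall into.

# Minimal pipes-and-filters sketch in Python: each filter is an isolated step
# that consumes an input stream and emits an output stream. Generators give
# the incremental (non-batched) processing discussed above. The concrete
# filters are hypothetical EAI-style steps.

def parse(lines):                  # filter 1: parse raw CSV-like records
    for line in lines:
        rec_id, amount = line.split(",")
        yield {"id": rec_id, "amount": amount}

def enrich(records):               # filter 2: add derived data (content enricher)
    for rec in records:
        rec["amount_cents"] = int(round(float(rec["amount"]) * 100))
        yield rec

def select(records, threshold):    # filter 3: pass on only large records (router)
    for rec in records:
        if rec["amount_cents"] >= threshold:
            yield rec

raw = ["1,10.50", "2,0.99", "3,7.25"]
# The "pipes" are plain generator chaining; no filter knows its neighbours.
for record in select(enrich(parse(raw)), threshold=500):
    print(record)                  # records 1 and 3 come through

Note that no filter holds global state, which mirrors the monitoring limitation just described: there is no single place where the progress of the whole solution can be observed.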
3. Workflow-based Systems

A process model describes the structure of a business process in the real world [12]. It defines all possible paths through the business process, including the rules that define which paths should be taken and all actions that need to be performed. This model is a template from which each process is instantiated; that means an instance of the process model is created. Often the term process instance is used to avoid the possible ambiguity of the term process. An individual process (process instance) is carried out according to a set of values that determines the actual path through the process. The term workflow model is used for those parts of a process model that are carried out by a computer. WS-BPEL [4] has emerged as the standard for modelling business processes. Business processes are described as a set of tasks and the sequence in which these tasks are carried out; the associated data is specified as a pool of variables from which data is obtained and to which data is written [11]. This control flow approach is in contrast to the PaF architecture, which is data flow oriented. Workflows are carried out by a workflow management system (WfMS) that creates process instances, navigates through each process instance using the associated process model, invokes the defined Web Services, and processes requests coming from previously invoked Web Services. The notion of an instance is the main difference compared to PaF. Within such an instance all information a process has is stored (e.g. values of variables, status of the instance, errors, and messages sent and received). Therefore, monitoring and auditing can be realized when this information is continuously and persistently stored. Many workflow management systems, such as IBM Process Server [9], are implemented as stateless servers on top of an application server and use transactions to provide the necessary robustness. A transaction includes all resources that the WfMS uses, such as the access to the database that contains the process instance data and the messages that are used for chaining the transactions. The number of transactions that are carried out is either decided unilaterally by the WfMS or controlled by some WfMS customization property. In any case, a transaction must be completed when a receive, pick, or wait activity is encountered when navigating through the process. If none of these activity types are present in a process, the complete process can be processed in a single transaction. IBM Process Server calls such a process a micro flow, in contrast to macro flows that consist of a set of transactions.
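For contrast with the PaF sketch above, here is a minimal sketch of the workflow view, where a single process instance owns a pool of variables and a central control flow drives the service invocations, loosely mirroring WS-BPEL's invoke, assign, if and forEach constructs; the service names, the routing condition and the selection logic are hypothetical stand-ins, not the paper's actual implementation.

# Contrasting sketch: the workflow (control-flow) view of an integration.
# A single process instance owns the state, a pool of variables as in
# WS-BPEL, and the engine drives the control flow and service invocations.
# Service names, the routing condition and the selection logic are all
# hypothetical stand-ins.

def invoke(service: str, payload: dict) -> dict:
    """Stand-in for a Web Service invocation."""
    return {"service": service, "echo": payload}

def loan_process(request: dict) -> dict:
    variables = {"request": request}        # the BPEL-style variable pool
    # <invoke> the rating agency, <assign> the result into the pool
    variables["rating"] = invoke("RatingAgency", request)
    # <if>-style content-based routing on process state
    if request["amount"] > 10_000:
        variables["support"] = invoke("SupportPrograms", request)
    # <forEach>-style fan-out over bank services, then aggregate
    offers = [invoke(bank, request) for bank in ("Bank1", "Bank2", "Bank3")]
    variables["best_offer"] = offers[0]     # best-offer selection elided
    return variables                        # the full, auditable instance state

state = loan_process({"customer": "C-17", "amount": 25_000})
print(sorted(state))  # every step left a trace in the instance state

The point of the contrast is the instance state: every step leaves a trace in one place, which is exactly what enables the monitoring and auditing discussed above.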
4. Test Cases for Evaluating the Architectural Styles

This section presents the scenario that has been used, followed by the implementations that have been derived from the scenario, and finally the environment in which the implementations were run and tested.

4.1. Scenario

The scenario has been adapted from the loan scenario given in [7]. It has been enriched with the following additional steps to provide a broader and more elaborate exploitation of the capabilities offered by the implementations: (1) a check whether public support programs can be used, and (2) an internal validation of the amortization ability of the customer. The first step includes two different checks, one for a subsidy and one for a surety. The second step comprises the invocation of various services to collect information about the customer in order to estimate the ability of the customer to pay back the loan. Figure 1 shows the enhanced scenario modeled using integration patterns [7].
Fig. 1. Extended loan broker scenario
As shown, the scenario consists of four major steps. The processing is initiated via a request from a customer. The request contains basic information, such as the amount of the loan and some identification of the customer, such as a customer number. The first step, Rating Agent, contacts a rating agency for the credit rating of the customer and adds the received information to the supplied one. The next step, Support programs, is only carried out when certain criteria are matched; the step itself determines the support programs for which the customer is eligible. The Banks step contacts a set of banks, obtains their conditions for the loan, and finally selects the bank with the most attractive offering. The Amortization step then calculates various alternatives that would apply and selects the most appropriate one, resulting in an offer, which is then sent back to the customer. Each step in the scenario is represented by a single integration pattern. The Content Enricher pattern is used for describing the Rating Agent step. The Content-based Router pattern represents the decision whether the customer is
eligible for any support program. The Support programs step is modeled as a Complex Message Processor pattern with a Splitter at the beginning, an Aggregator at the end, and intermediate tasks. The Banks step is represented as a Scatter-Gather pattern: a Recipient List determines the appropriate banks, and an Aggregator collects the responses and calculates the best result. Finally, the Routing Slip pattern is used for the Amortization step.

4.2. Implementation

The scenario was then transformed into a PaF implementation and a workflow implementation using the approach of [15]. This approach takes parameterized integration patterns [14] as the foundation for modelling integration solutions, configures an individual solution via so-called parameters, and transforms the solution via different algorithms into code for different runtime environments (cf. Figure 2). One runtime is, for example, BPEL and Web services; another runtime could be mediation flows.
Fig. 2. Transformation to runtime environments
For the PaF implementation we used mediation flows as offered by IBM WebSphere Enterprise Service Bus (ESB) [8]; for the workflow implementation, WS-BPEL workflows as implemented by IBM WebSphere Process Server [9]. We have chosen these two products for two simple reasons: (1) the underlying technology is the same, namely IBM WebSphere, which should eliminate any performance differences that may come from the underlying platform; (2) the skill level of the development teams that implemented the two products can be assumed to be equal. Incidentally, IBM ships the ESB as part of the Process Server.

4.2.1 Mediation Flows

Mediation flows implement the Pipes-and-Filters architecture very closely: messages sent between the different execution steps provide the required notion of data flow; each execution step (mediation primitive) processes an incoming message and sends it to the subsequent step; each execution step works completely independently; mediation flows have no global knowledge; and finally they do not
implement any failure behavior. Figure 3 shows the bank step in mediation flow notation.
Fig. 3. Abstract of mediation flow representing the loan broker scenario
Every activity of the scenario is implemented by one or more mediation primitives. External services (like bank calls) are represented by an Invoke activity. A routing decision (i.e. the content-based router) is implemented by a Request Router. Transformation between message formats is done by a Business Object Mapper. The recipient list is realized by an object mapper constructing the recipient information, followed by a Splitter which copies messages according to the recipient list. The subsequent router sends the message to the corresponding bank invocation activity. The Join activity together with the mapper activity represents the aggregator. The last pattern is the routing slip. This pattern is realized by a mapper constructing the routing slip, a splitter and a router which send the messages one after another to the invoke activities, a mapper after each invoke to incorporate the results, and a join at the end to combine the different paths.

4.2.2 WS-BPEL Workflow

For this implementation, the scenario was translated into a WS-BPEL process. The Rating agent step is implemented as an Invoke activity. The content-based router checking whether the support programs should be considered is realized by a Choice activity. The Support programs step is designed as two Sequences including Choices. The recipient list of the Banks step is represented by an Assign activity which initializes a variable needed for the following loop construct. The Banks step itself is modeled as a ForEach activity that includes the preparation of the request and the actual call. Following the ForEach activity, a Choice element checks the responses of the banks and an Assign activity calculates the best offer. The final part of the scenario, the routing slip, is implemented using a While construct: first, the next routing slip element is determined; afterwards, the call to the appropriate service is made. The last steps of the process after this construct are responsible for copying the results into the resulting variable. The final step of the process is a Reply activity which sends back the requested information to the customer.

4.3. Setup

The test cases have been carried out using the following hardware and software setup.
Hardware. We used two machines, one for running the actual test cases and one for running the invoked Web Services. The test server was an Intel Core 2 Quad CPU with 2.4 GHz and 4 GB RAM; the Web Server machine was an Intel Pentium M processor with 1.6 GHz and 512 MB RAM. Both systems were running Windows XP Professional with Service Pack 2. The two machines were connected via a 100 MBit Ethernet network.

Software. The test server was running IBM WebSphere Process Server 6.1.2 (with the included IBM WebSphere Enterprise Service Bus (ESB)); the Web Server machine had Apache Tomcat 6.0.14 installed.
5. Measurements

In this section, we present the various tests we have carried out, and the results we have measured.

5.1. Test Case Scenarios

We have created a set of test cases that vary the major variants in the scenario: scalability and message size. Scalability is tested by running many loan scenarios in parallel, which tests the impact of different loan requests on each other, and by invoking many Web Services (in our case the bank requests) in parallel, which shows how well the implementation handles concurrency within a single loan request. The impact of the message size is analyzed by running the different scalability test cases using different message sizes. This results in the five different tests summarized in Table 1.

Table 1. Test Case Scenarios
Parameters       Scenarios
                 1       2       3       4       5
Threads          1       30      1       30      30
Message size     10kb    10kb    500kb   500kb   500kb
Service calls    3       3       3       3       30
All test cases were run using soapUI [3], which provides the capabilities for generating tests, checking the responses, and producing statistics. Both mediation flows and micro flows are carried out in a single transaction. We have also measured macro flows, just to get a feeling for the impact of persisting the information several times during the execution of a process.
5.3. Results

Table 2 (on the left side) compares the different test cases with respect to inter process scalability. Test cases 1 and 2 compare scalability with a small message size, whereas test cases 3 and 4 compare scalability at a large message size. Following the tradition of transaction and database benchmarking [6], the results are presented in instances/second. The table reveals two important observations. First, the workflow engine scales significantly better than the mediation flow engine (the service bus). Second, the mediation flow engine does not scale at all at larger message sizes.

Table 2. Comparison of inter process scalability and impact of message size
                 Inter process scalability     Impact of message size
                 1      2      3      4        1      3      2      4
Mediation Flow   12.7   32.6   4.3    4.5      12.7   4.3    32.6   4.5
Micro Flow       12.6   55.4   6.2    16.9     12.6   6.2    55.4   16.9
Macro Flow       1.4    2.4    0.9    2.5      1.4    0.9    2.4    2.5
Table 2 (on the right side) evaluates how the different implementations are impacted by the message size. The comparison reconfirms the observation from the left side of the table that the mediation flow engine is significantly more sensitive to message size than the workflow engine.

Table 3. Intra process scalability
                 4      5
Mediation Flow   4.5    1.9
Micro Flow       16.9   11.0
Macro Flow       2.5    0.9
Table 3 shows the intra process scalability, that is, the handling of multiple parallel Web Service requests. It is obvious that, with the additional processing that is required, the request rate that the implementation can handle goes down. However, the comparison shows that the mediation flow engine handles parallelism significantly less efficiently than the workflow engine.
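A quick arithmetic check on the measured throughputs underlines the point; the snippet below simply recomputes the micro-flow to mediation-flow ratios from the numbers reported in Tables 2 and 3.

# Ratio of micro flow (workflow) to mediation flow throughput per test case,
# using the measured instances/second from Tables 2 and 3.
mediation = {1: 12.7, 2: 32.6, 3: 4.3, 4: 4.5, 5: 1.9}
micro     = {1: 12.6, 2: 55.4, 3: 6.2, 4: 16.9, 5: 11.0}

for case in sorted(mediation):
    ratio = micro[case] / mediation[case]
    print(f"test case {case}: micro/mediation = {ratio:.1f}x")
# Case 1 (single thread, small messages) is on par; under load with large
# messages (cases 4 and 5) the workflow engine is 3.8x and 5.8x faster.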
6. Conclusions

We have shown that the Pipes-and-Filters architecture can be implemented within a workflow management system, providing the same functionality as the implementation via a mediation flow engine. Moreover, a workflow system offers extended properties regarding auditing and monitoring (i.e. the notion of an
instance), which is very important when running a production system. In addition, we have shown that this extended capability can be realized at substantially better performance. Further research is required to see whether the performance benefits of the workflow engine still hold when both scenarios are carried out as macro flows; in this situation, the workflow solution would provide additional benefits such as process instance tracking. In fact, we have shown that the workflow engine carries out the same scenario significantly better than a service integration bus that uses message flows. Further research is required to see whether this finding is also true for the typical scenarios of a service integration bus.
7. References

[1] M. J. Bach. The Design of the UNIX Operating System. Prentice Hall, June 1986.
[2] B. Blakeley, H. Harris, and R. Lewis. Messaging and Queuing using the MQI. McGraw-Hill, Inc., New York, NY, USA, 1995.
[3] eviware. soapUI. http://www.eviware.com, 2009.
[4] Organization for the Advancement of Structured Information Standards (OASIS). Web Services Business Process Execution Language (WS-BPEL) Version 2.0, OASIS Standard. http://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.htm, 2007.
[5] D. Garlan and M. Shaw. An introduction to software architecture. In Advances in Software Engineering and Knowledge Engineering, pages 1-39. World Scientific Publishing Company, 1993.
[6] J. Gray and A. Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann Publishers, Inc., 1993.
[7] G. Hohpe and B. Woolf. Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley Professional, 2003.
[8] IBM Corporation. IBM WebSphere Enterprise Service Bus. http://www-01.ibm.com/software/integration/wsesb/, 2009.
[9] IBM Corporation. IBM WebSphere Process Server. http://www-01.ibm.com/software/integration/wps/library/, 2009.
[10] C. Isaacson. Software pipelines: A new approach to high-performance business applications, 2007.
[11] F. Leymann and D. Roller. Modeling Business Processes with BPEL4WS. Information Systems and e-Business Management (ISeB), 2005.
[12] F. Leymann and D. Roller. Production Workflow: Concepts and Techniques. Prentice-Hall, Upper Saddle River, New Jersey, 2000.
[13] R. Meunier. The pipes and filters architecture, pages 427-440. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 1995.
[14] T. Scheibler and F. Leymann. A Framework for Executable Enterprise Application Integration Patterns. In 4th International Conference Interoperability for Enterprise Software and Applications (I-ESA 2008), March 2008.
[15] T. Scheibler, R. Mietzner, and F. Leymann. EMod: Platform Independent Modelling, Description and Enactment of Parameterisable EAI Patterns. Enterprise Information Systems, 3(3): 299-317, 2009.
[16] D. E. Perry and A. L. Wolf. Foundations for the study of software architecture. ACM SIGSOFT Software Engineering Notes, 17: 40-52, 1992.
Risk Sources Identification in Virtual Organisation
Mohammad Alawamleh1, Keith Popplewell1
1 Coventry University, Priory Street, Coventry CV1 5FB, United Kingdom
Abstract. Enhanced sharing of knowledge, with dynamic access control and security, accelerates and improves network decision making, shortens time to market and reduces network operating costs, whilst improved capture and especially re-use of enterprise and network knowledge reduces the cost of repeating work of earlier projects, and of repeating past errors. Improved, risk-aware decision making reduces the costs of wrong decisions and failed collaborations. Along with their numerous advantages, virtual organisations (VOs) – the virtual integration of supply chains – also pose several challenges, including risks. Through a review of the literature we have identified thirteen sources of the network-related risks of the VO, which are our main focus: lack of trust, top management commitment, information sharing, inadequate collaboration agreements, ontology differences, heterogeneity, structure and design, loss of communication, culture differences, geographic distribution, knowledge about risks, bidding for several virtual organisations at the same time, and wrong partner selection. This paper is a comprehensive study identifying these threats: to gain a better understanding we go through them one by one using the literature and previous studies, then evaluate and rank these sources based on a qualitative study.
Keywords: virtual organisation, VO, risk, risk identification, SME
1. Introduction

To meet their business objectives, enterprises need to collaborate with other enterprises. For some enterprises, doing business globally has become critical to their survival, while others discover new opportunities by focusing their business in a local setting. Enterprises, both big and small, need to establish cooperation agreements with other enterprises. Small and medium sized enterprises (SMEs), which need to specialise in core activities in order to raise their own added value, particularly have to combine forces to compete jointly in the market. Today, an enterprise's competitiveness is to a large extent determined by its ability to seamlessly inter-operate with others.
Enterprises need to collaborate in order to compete. These collaborations are given many names, and a number of other terms are used with similar, if not actually equivalent, meaning to Virtual Organisation, referring to the grouping of enterprises/organisations or to a single organisation within the group (e.g. virtual enterprise, networked organisation, networked enterprise, etc.). The term "virtual organisation" has been selected from these to be used in this paper, where the Enterprise Interoperability research roadmap definition for this term is "generally, a grouping of legally distinct or related enterprises coming together to exploit a particular product or service opportunity, collaborating closely whilst still remaining independent and potentially competing in other markets or even with other products/services in the same market" [34]. As in the supply chain, VO risks come from different sources; [28] suggests organising risk sources relevant to supply chains into three categories: external to the supply chain, internal to the supply chain, and network related. External risk sources are, for example, natural risks, political risks, social risks and industry market risk. Internal risk sources range from labour (such as strikes) or production (as with machine failure) to IT system problems, while the network-related risks occur from interactions between organisations within the supply chain, which is also mentioned by [3] as a different kind of risk related directly to collaboration. Models of VOs are similar to the way supply chains are managed, partners being integrated and running alongside one another. In [42] it is discussed how a cooperative enterprise working as a VO automatically gains the same features as a supply chain structure. Clearly there are no differences between the internal and external risk sources in a supply chain and a VO, whilst the risk sources differ in the network-related risks, due to the different relations between supply chain partners and VO partners. Networked risks related to collaboration are not purely dependent on the enterprise goals and objectives, although in many relationships there is a dominant party who hopes to take responsibility for managing the entire supplier network. The increasing sharing of responsibilities and the dynamic nature of relationships require the assessment to be undertaken in a dyadic fashion, to be able to manage these scenarios proactively; in network-related risks, identifying the risks becomes more complex because of dependencies with other enterprises [22]. In this paper we will first examine different definitions of risk which could apply to the VO, then examine the literature related to the issues in VOs, which shows that these risks can be categorised in terms of sources and potential impact on the VO and its partners. Finally we present the results of a qualitative study based on experts' opinions in order to rank these sources.
2. Risk

There are many definitions of risk in general, with the most scientific one being provided by the Royal Society [56]: "The probability that a particular adverse event occurs during a stated period of time, or results from a particular challenge. As a probability in the sense of statistical theory, risk obeys all the formal laws of combining probabilities". Another scientific perspective on risk is that defined in
[37]: risk is the probability of loss and the significance of that loss to the organisation or individual, where it is expressed as a formula:

Risk_n = P(loss_n) × L(loss_n)    (1.1)
where n is the event, P is the probability and L is the significance. As there are many similarities between risk in a VO and risk in a supply chain, we shall use these similarities to gain a better understanding of VO risk, since there is a significant gap in the study of risk in the VO. Risk in the supply chain context has been defined as the potential occurrence of an incident associated with inbound supply in which the result is the inability of the purchasing organisation to meet customer demand [70]. Risk propagation can impact whole areas of the supply chain (SC), where the overall business losses could be huge. Managing risk can improve throughput, where better coordination of material and capacity prevents loss of utilisation while waiting for parts, and brings cycle time reduction, inventory cost reductions, optimised transportation, increased order fill rate and increased customer responsiveness [8]. The propagation of risk impacts on partners and the whole of the VO and can affect four main areas: time, cost, quality and total failure of the VO [61, 20]. In the next section we discuss the risk sources at the network level of the VO, where identifying risk sources is the first and most important step in risk management.
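A small worked example of equation (1.1) may help; the events, probabilities and loss values below are purely hypothetical, but they show why ranking risk sources by P × L can differ from ranking by probability alone.

# Worked example of equation (1.1): risk as probability times significance.
# The events, probabilities and loss values are hypothetical, purely to
# illustrate how the formula ranks risk sources.
events = {
    "partner drops out": {"P": 0.10, "L": 80_000},
    "IPR leak":          {"P": 0.02, "L": 500_000},
    "schedule slip":     {"P": 0.30, "L": 15_000},
}

ranked = sorted(events.items(), key=lambda kv: kv[1]["P"] * kv[1]["L"],
                reverse=True)
for name, e in ranked:
    print(f"{name}: risk = {e['P'] * e['L']:,.0f}")
# The IPR leak ranks highest (10,000) despite its low probability,
# which is why both factors in (1.1) matter.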
3. Identification of Risk Sources in the Virtual Organisation

Risk sources in the VO have been identified from various authors who have researched and written on this issue. In this paper, after a careful review of the literature, we found thirteen risk sources under the risks and barriers categories.

3.1. Lack of Trust

Lack of trust took up most of the discussion of VO risks in the literature. The degree to which one partner trusts another is the measure of belief in the honesty, benevolence and competence of the other partner. Problems occur if VO partners do not trust each other: they will not share sensitive information and Intellectual Property Rights (IPR), they will not agree about splitting the money, and they will not work as they should to support the VO collaboration. Collaborative relationships require trust and commitment for long term cooperation, along with a willingness to share risks [51]. The degree of trust among SC partners enhances commitment [36]; on the other hand, lack of trust is one of the major factors that contribute to SC risks [55]. Trust can be developed through effective communication and can create resources that lead to a competitive advantage [32]. Trust is an expectation that partners will not act in an opportunistic manner even if there are short term incentives to do so [12], and can contribute significantly to the long term stability of an organisation and its SC [57]. Collaboration involves
mutual engagement of participants to solve a problem together, which requires strong mutual trust and thus takes time, effort and dedication [7]. Trust is also essential for minimising risk that stems from exposure to opportunistic behaviour by partners, together with uncertainty, ambiguity and incomplete information, which characterise inter-organisational agreements. Nevertheless, trust is a risky investment; it exposes entities to the potential for opportunistic behaviour by others [40]. In inter-organisational arrangements, trust is positively related to conflict resolution [59] and further facilitates harnessing the various benefits of conflict, the spark that ignites valuable innovation [41]. A conceptual framework that introduces key concepts capturing initial trust establishment in on-demand VOs is presented by [49]; according to this, establishing trust between VO participants is the basis for successful collaboration and joint performance. Without trust no social, political or economic exchange is possible, as it requires one party to make some form of approach towards another with the tacit assumption that the other party can be relied upon to respond with suitable good grace [33, 66].

3.2. Loss of Communication

VO variation and changing structure lead to loss of communication; there is an inverse relationship between communication and uncertainty, where less communication means more uncertainty. Communication is a fundamental factor in any form of organising, but is pre-eminent in VOs [18]. Communication in the virtual form is expected to be rapid and customised in response to customer demand [15]. Loss of communication is an obstacle in the VO, and insufficient communication involves the risk of failure [7, 62, 1, 5, 21]. It is possible to build a more trusting relationship between partners in order to ensure open lines of communication, and the key issues here go far beyond technology; this will allow a free flow of information among members of the SC, the most important issue being that partners can feel secure about the exchange of sensitive information. It is of crucial importance that there should be a clear understanding that the SC is more about the level of commitment than about the technology concerned [57].

3.3. Inadequate Collaboration Agreement

The risk of an inadequate collaboration agreement occurs when the agreement between partners entering into the collaboration is not clear enough or its text is not sufficient. Some authors [7, 62, 5] mention the weak foundation, where there is no clear and commonly accepted definition of objectives, strategies and basic conditions, insufficient management of expectations, and poor or in some way unfair contracts, which will lead to the risk of failure. Another [64] addressed the moral risk before the VO is set up, where the various members can constitute a contract to restrict the scope and the degree of knowledge sharing, while [46] defines the negotiation of contracts between the partners, where each partner may be required to sign a legally binding contract agreeing to the process and protocols and respecting the constitution of the VO.
A key component of Supply Chain Management (SCM) is sharing both risks and rewards among the members of the SC [35]. An SC works well if the incentives of its members are aligned, which requires that the risks, costs, and rewards of doing business are distributed fairly across the network [38]. Revenue sharing is a kind of SC contract that makes it possible to share the risks among SC partners [58]. Risk and benefit sharing varies according to the type of collaboration. In a joint venture or strategic alliance, risks and benefits are often shared through joint ownership [68], with formal agreements such as obligation contracting, profit sharing schemes, property rights sharing, or ownership control providing incentive systems for the parties to collaborate. In other, less formal types of collaboration, there may not be such transparency of risk and benefit sharing. In these circumstances it is important to form an agreement to ensure long-term commitment to allow sharing of sensitive information, knowledge and competences. Where enterprises are trusted there is perceived to be less risk in sharing sensitive information, less need for elaborate contracts to protect interests, and a willingness to rely on each other to make decisions, since each person involved believes that his own perspective will be taken into account [12]. 3.4. Information Sharing Information sharing is vital for collaboration, and decreasing information visibility in the VO increases the risks, including the non-availability of catalogues with normalised and updated profiles of organisations. On the other hand, increasing information sharing can lead to loss of IPR. If a VO has not formed values that regard sharing and collaboration as core content, the members of the VO will lack acceptance of knowledge sharing. They may think that their advantages in the enterprise come from certain knowledge they have and others do not, so they will refuse to share their own knowledge with others in order to protect individual interests [64]. Information sharing is essential for the SC, as lack of information can lead to panic, chaotic behaviour and unnecessary costs [11]. Current models for SCM agree that the sharing of business information is an important aspect which binds the SC together at each stage [69, 65]. SC risks include the sharing of sensitive information such as inventory levels and production schedules with other members [47]. The degree of information sharing relates to choosing the partners with whom information should be shared, the type of information shared, and the quality of the shared information. Information sharing is basic to effective coordination in an SC. Many studies have found that information sharing has a great impact on SC performance, especially in reducing the bullwhip effect. Furthermore, information sharing enables enterprises to make better decisions in their operations, leading to better resource utilisation and lower SC costs [30, 65]. 3.5. Top Management Commitment Weak participation by the highest-level executives at a specific critical decision point in VO formation or operation maximises the risks [7, 62, 1, 5]. Low
commitment is a risk factor in an alliance failing to meet its objectives [29]. Top management is responsible for each and every activity at all levels of the organisations. It is instrumental in the development of organisational structure, technological infrastructure and various decision-making processes which are essential for the effective creation, sharing and use of knowledge [4]. 3.6. Wrong Partner Selection Incompatible objectives, strategies and culture, and core competencies and capabilities that are not complementary, are further risk factors [7, 62, 1, 5]. The lack of information about partners is one of the obstacles in partner selection for the VO [52], while others [18, 10] have pointed out that diversity of interests increases the VO risk. Conflict is an expressed struggle between at least two inter-dependent parties who perceive incompatible goals and rewards, as well as interference from other parties in relation to achieving their goals [63]. There are three common forms of organisational conflict: relationship (or affective) conflict, task (or cognitive) conflict, and process conflict. 3.7. Ontology Differences The semantics of real-world objects are described via a (domain) ontology. An ontology is “a formal, explicit specification of a shared conceptualisation” [19]. The problem of ontology differences occurs when two different words mean the same thing or, worse, when the same word means two different things when used by different partners. Ontologies have proven to be an unambiguous and compact way of representing knowledge for mutual understanding, as they provide the basis for sharing information [43]. In order to share the same terminology, partners need to agree on the terms that they intend to use for collaboration, and there is a problem in the lack of common ontologies among the cooperating organisations [6]. An ontology provides a representation of shared knowledge, which can be used or reused in order to facilitate the comprehension of concepts and relationships in a given domain and the communication between different domain actors. The most basic type of ontology is a set of terms representing a controlled vocabulary, such as a glossary: the terms that people agree to use when dealing with a common domain. Any misunderstanding caused by using different ontologies will cause problems regarding the agreement when forming the collaboration and during the process of the collaboration, which will increase the risks in the VO. 3.8. Structure and Design The dynamic structure of the VO lacks the ability to synchronise parallel activities through failure to distribute responsibilities, tasks and rules, to decide who reports to whom (who is in charge), and to define the span of control (the ability of the phase leader to manage the rest of the partners, the right to make decisions, and its effect on the whole of the network). Sources of structure and design risk may include centralisation, decentralisation, specialisation (division of labour), span of control and
formalisation [67]. Lack of a common collaboration infrastructure is one of the VO's obstacles [7]. Weak or ineffective control is a further source: these risks are external to the enterprise and can occur due to uncertainty arising from inter-organisational networking, similar to what [17, 2] reported, where weak control over suppliers and customers in strategic alliances gives rise to risk. 3.9. Culture A VO can bring together several cultures, and these different cultures lead to problems of incompatibility between processes and to miscommunication, and have an effect on information sharing [18, 10, 7]. The organisational culture defines the core beliefs, value norms and social customs that rule the way individuals act and behave in an organisation. It is the sum of shared philosophies, assumptions, values, expectations, attitudes and norms that connect the organisations together [54, 31]. However, lack of organisational culture is a key barrier to successful implementation of knowledge management in an organisation [9]. Looking more closely at the VO members, enterprise culture refers to the staff's shared values, faith and behaviour in the enterprise, where the core of enterprise culture is the enterprise value embodied by the enterprise system that regularises the staff's behaviour; VO members may come from different enterprises, each member enterprise having its unique enterprise culture, so they will lack shared knowledge [64]. Therefore, participating members within a VO may be culturally diverse with no previous working or partnering history, and so they do not share a common past. With the increasingly transactional nature of enterprise and the technological advances that facilitate virtual alliance, the impact of cultural diversity, with different lexicons, values and interpretations, on dispersed team working is a particularly relevant issue [14]. 3.10. Heterogeneity of Partners Heterogeneity means incompatibility between partners' systems (hardware and operating systems), syntax (different languages and data representations), structure (data models) and semantics (meaning of terms). Others [7, 52] mention the heterogeneity of potential partners in information technology infrastructures, methods of work and business practice as one of the obstacles in VO operation. Lack of technological infrastructure is one of the barriers to implementation of knowledge management [54]. Also, [25] noted that heterogeneity between partner taxonomies is a serious problem for an efficient cooperation process; it means that semantic information extracted from the taxonomies may be heterogeneous with that of others. Such heterogeneity is caused by differences not only in terminology, such as synonyms and antonyms, but also, more importantly, in knowledge structures such as databases [24] and ontologies [26].
3.11. Geographic Location The geographic locations of different partners may increase the risk in the VO, where there is a direct correlation between risk and the distance between partners, and some geographic locations may present more problems (politically and legally) than others [48, 16], together with the extent of the geographic areas covered by the SC and the political areas and borders crossed, as factors affecting SC exposure [45]. Geographic distance adds to SC complexity [44]. Moreover, [18, 10] note that geographic differences between members add further risk for the VO. 3.12. Knowledge About Risks The absence of knowledge about the risks which may occur in the collaboration increases the chance for these risks to appear and maximises their impact. Improved understanding of risks in an SC helps to make better decisions and decreases the risks for both a single enterprise and the whole network [22]. There are many different forms of SC risks, which can be classified according to how their realisation impacts on a business and its environment [23]. By understanding the variety and interconnectedness of SC risks, enterprises can shape balanced, effective risk reduction strategies [13]. Organisations need to develop a common understanding of the risks surrounding their network [27]. Risk analysis is a practice with methods and tools for identifying risks in a process [55]; it provides a disciplined environment for proactive decision making, to assess continuously what could go wrong, determine which risks are important to deal with, and apply strategies to deal with those risks [53]. 3.13. Bidding for Several Virtual Organisations at the Same Time Partners may apply to collaborate in several VOs at the same time when partner capacity is insufficient to support all of these collaborations. This risk arises if one partner wins more than one VO bid at the same time and this partner's capability (resources or staff) is not enough to carry out the tasks in all of these VOs. Up until now, coverage of this point in the literature is still vague and it has not received attention from researchers. It appears only implicitly, for instance in [6], which mentions that contract bidding and negotiation, competencies and resources management, task allocation, and monitoring and coordination of task execution according to contracts are obstacles to VOs. Resource management and bidding strategies are constraints on the VO [39].
4. Survey Study Based on the literature review and previous studies, the risk sources above are identified, which have a potential impact on failure to meet delivery time, cost and quality targets, or on total failure of the collaboration. These sources have been evaluated based on expert judgment through a qualitative questionnaire sent to experts in the field. We received 9 responses from experts from both academic and industrial backgrounds. This approach, rather than a wider survey of VO participants in general, was adopted because of the nature of the study, which requires a deep understanding of the subject as well as experience of a range of VO instances. The experts were selected from researchers in the field of Enterprise Interoperability and VOs throughout Europe (as identified by participation in the Enterprise Interoperability Cluster convened by the European Commission, and the INTEROP-VLab international network of researchers). Table 1 below summarises the questionnaire results as raw data.

Table 1. Risk sources rank

     Risk Source                                 Importance   Confidence about answer
 1.  Lack of trust                               91%          93%
 2.  Loss of communication                       89%          93%
 3.  Inadequate collaboration agreement          87%          89%
 4.  Information sharing                         82%          85%
 5.  Top management commitment                   82%          81%
 6.  Wrong partner selection                     78%          85%
 7.  Ontology differences                        73%          85%
 8.  Structure and design                        73%          78%
 9.  Culture                                     71%          78%
 10. Heterogeneity of partners                   69%          89%
 11. Geographic location                         67%          81%
 12. Knowledge about risks                       67%          67%
 13. Bidding for several VOs at the same time    64%          78%
5. Conclusion and Future Work In this paper we have identified, through a qualitative study based on the results of our questionnaire, a number of risk sources in the VO which may cause negative effects on time, cost and quality, or even total failure of the collaboration. The experts rated lack of trust as the most important risk source, which is also the area that received the most interest in previous studies; loss of communication follows closely behind lack of trust, with the same confidence percentage attached to the experts' answers. Inadequate collaboration agreement, information sharing and top management commitment ranked at the second level of importance. At the third level of the table, wrong partner selection, ontology differences, structure and design, and culture differences are more important sources than the last four sources (heterogeneity of partners,
geographic location, knowledge about risks and bidding for several VOs at the same time), which ranked at the bottom of the table. These four sources remain sparsely covered in the literature: they received little attention in previous studies, appearing far less often than the others, and were ranked at the fourth level as the least important sources. The thirteen risk sources identified in this paper have significant overlaps and relationships that are sometimes difficult to see. This research therefore provides numerous directions for further work, in which a more complete understanding of these risk sources and their relationships may be reached through a logical structure such as Interpretive Structural Modeling (ISM). ISM is a well-established methodology for identifying relationships among specific items which define a problem or an issue. The opinions of the group of experts mentioned above are used to develop the relationship matrix, which is later used in the development of the ISM model [60]. This will help partners to make better decisions as to whether or not to join a VO. The relationship structure can also be applied with other approaches, such as the analytical network process [50], which requires a decision structure to help determine the strength of relationships and to support decision making. Simulation and systems dynamics modeling may also be used to help identify how risk sources in the VO will influence it and its performance results.
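To make the proposed ISM step concrete, the following sketch (in Python; the adjacency matrix and source names are hypothetical placeholders, not data from this study) shows how a binary relationship matrix over risk sources can be closed transitively into the reachability matrix that ISM then partitions into levels.

# Illustrative ISM step: derive the reachability matrix from a binary
# relationship (adjacency) matrix using Warshall's algorithm.
# The matrix below is a placeholder, NOT the data from this study.

def reachability(adj):
    """Return the transitive closure of adj, with reflexive 1s on the diagonal."""
    n = len(adj)
    r = [[1 if (adj[i][j] or i == j) else 0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Hypothetical 4-source example of "influences" relationships.
sources = ["lack of trust", "loss of communication",
           "information sharing", "wrong partner selection"]
adj = [[0, 1, 1, 0],   # lack of trust influences communication and info sharing
       [0, 0, 1, 0],   # loss of communication influences info sharing
       [0, 0, 0, 0],
       [1, 0, 0, 0]]   # wrong partner selection influences trust
for name, row in zip(sources, reachability(adj)):
    print(f"{name:25s} reaches {sum(row)} source(s)")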
6. References [1]
Bamford, J., D. Ernest, and D.G. Fubini, Launching a World-Class Joint Venture. Harvard Business Review, 2004. 82(2): p. 90-100. [2] Bandyopadhyay, K., P.P. Mykytyn, and K. Mykytyn, A framework for integrated risk management in information technology. Management Decision, 1999. 37(5): p. 437-445. [3] Blackhurst, J., T. Wu, and P. O'Grady, Network-based approach to modelling uncertainty in a supply chain. International Journal of Production Research, 2004. 42(8): p. 1639-1658. [4] Brand, A., Knowledge management and innovation. Journal of Knowledge Management, 1998. 2(1): p. 17-22. [5] Bullinger, H.-J., Collaborative development – potentials of success in development networks. Supply Chain Management, 2003. 2: p. 31. [6] Camarinha-Matos, L.M., Virtual Organizations in Manufacturing: Trends and challenges. in Proceedings of Flexible Automation and Intelligent Manufacturing. 2002. Munich. [7] Camarinha-Matos, L.M. and H. Afsarmanesh, A framework for virtual organization creation in a breeding environment. Annual Reviews in Control, 2007. 31(1): p. 119-135. [8] Chang, Y. and H. Makatsoris, Supply chain Modeling Using Simulation. International Journal of Simulation, 2001. 2(1): p. 24-30. [9] Chase, R.L., The knowledge-based organization: an international survey. Journal of Knowledge Management, 1997. 1(1): p. 38-49. [10] Chen, J. and W.D. Feng, Formation and Management of Virtual Enterprises. Tsinghua University Press, 2002.
[11] Childerhouse, P., et al., Information flow in automotive supply chains – identifying and learning to overcome barriers to change. Industrial Management & Data Systems, 2003. 103(7): p. 491-502. [12] Chiles, T.H. and J.F. McMackin, Integrating variable risk preferences, trust, and transaction cost economics. Academy of Management Review, 1996. 21: p. 73-99. [13] Chopra, S. and M.S. Sodhi, Managing risk to avoid supply chain breakdown. Sloan Management Review, 2004. Fall: p. 53-61. [14] Crossman, A. and L. Lee-Kelley, Trust, commitment and team-working: the paradox of the virtual organization. Global Networks, 2004. 4(4): p. 375-390. [15] David, W.H. and M.S. Malone, The Virtual Corporation. 1992, New York, NY: HarperCollins Publishers. [16] DeWitt, T., L.C. Giunipero, and H.L. Melton, Clusters and supply chain management: the Amish experience. International Journal of Physical Distribution & Logistics Management, 2006. 36(4): p. 289-308. [17] Finch, P., Risk Management Consultant with AEA Technology, Warrington, UK. Supply Chain Management: An International Journal, 2004. 9(2): p. 183-196. [18] Grabowski, M. and K.H. Roberts, Risk Mitigation in Virtual Organizations. Organization Science, 1998. 10. [19] Gruber, T., Toward principles for the design of ontologies used for knowledge sharing. International Journal of Human Computer Studies, 1995. 43(5-6): p. 907-928. [20] Gunasekaran, A., Agile manufacturing: A framework for research and development. International Journal of Production Economics, 1999. 62: p. 87-105. [21] Gunasekaran, A., K.-h. Lai, and T.C.E. Cheng, Responsive supply chain: A competitive strategy in a networked economy. Omega, 2008. 36(4): p. 549-564. [22] Hallikas, J., et al., Risk management processes in supplier networks. International Journal of Production Economics, 2004. 90(1): p. 47-58. [23] Harland, C., R. Brenchley, and H. Walker, Risk in supply networks. Journal of Purchasing & Supply Management, 2003. 9(2): p. 51-62. [24] Hull, R., Managing semantic heterogeneity in databases: a theoretical prospective. in Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems. 1997. Arizona, United States: ACM. [25] Jung, J., Taxonomy alignment for interoperability between heterogeneous virtual organizations. Expert Systems with Applications, 2008. 34(4): p. 2721-2731. [26] Jung, J., et al., Efficient web browsing with semantic annotation: a case study of product images in e-commerce sites. IEICE Transactions on Information and Systems, 2005. E88-D(5): p. 843-850. [27] Jüttner, U., Understanding the business requirements from a practitioner perspective. The International Journal of Logistics Management, 2005. 16(1): p. 120-141. [28] Jüttner, U., H. Peck, and M. Christopher, Supply chain risk management: outlining an agenda for future research. in Proceedings of the Logistics Research Network 7th Annual Conference. 2002. [29] Kanter, R.M., Restoring people to the heart of the organisation of the future, in The Organisation of the Future. 1997, Jossey-Bass: San Francisco. [30] Lee, H., V. Padmanabhan, and S. Wang, Information distortion in a supply chain: the bullwhip effect. Management Science, 1997. 43: p. 546-558. [31] Lemken, B., H. Kahler, and M. Rittenbruch, Sustained knowledge management by organizational culture. in Proceedings of the 33rd Annual Hawaii International Conference on System Sciences. 2000. Germany: Institution of Computer Science, Bonn University.
[32] Lengnick-Hall, C.A., Customer contributions to quality: a different view of the customer-oriented firm. Academy of Management Review, 1998. 21(3): p. 791-824.
[33] Lewicki, R., D. McAllister, and R. Bies, Trust and Distrust: New Relationships and Realities. Academy of Management Review, 1998. 23(3): p. 438-458. [34] Li, M.-S., et al., Enterprise Interoperability Research Roadmap. 2006. p. 1-45. [35] Mentzer, J.T., Defining Supply Chain Management. Journal of Business Logistics, 2001. 22(2): p. 1-25. [36] Mistry, J.J., Origins of profitability through JIT processes in the supply chain. Industrial Management & Data Systems, 2005. 105(6): p. 752-768. [37] Mitchell, V.W., Organisational risk perception and reduction: a literature review. British Journal of Management, 1995. 6: p. 115-133. [38] Narayanan, V. and A. Raman, Aligning incentives in supply chains. Harvard Business Review, 2004. 82(11): p. 94-102. [39] Nguyen, D., et al., Delivering services by building and running virtual organisations. BT Technology Journal, 2006. 24(1): p. 141-152. [40] Panteli, N. and S. Sockalingam, Trust and conflict within virtual inter-organizational alliances: a framework for facilitating knowledge sharing. Decision Support Systems, 2005. 39(4): p. 599-617. [41] Pascale, R.T., Intentional breakdowns and conflict by design. Planning Review, 1994. 22(3): p. 12-16. [42] Pires, S., Bremer, C., Eulalia, L.D.S. and Goulart, C., Supply Chain and Virtual Enterprises: Comparisons, Migration and a Case Study. International Journal of Logistics: Research and Applications, 2001. 4(3): p. 297-311. [43] Plisson, J., et al., An Ontology for Virtual Organization Breeding Environments. IEEE Transactions on Systems, Man, and Cybernetics, 2007. Part C 37(6): p. 1327-1341. [44] Porter, M.E., Clusters and the new economics of competition. Harvard Business Review, 1998. November-December: p. 77-90. [45] Prater, E., M. Biehl, and M.A. Smith, International supply chain agility. International Journal of Operations & Production Management, 2001. 21(5-6): p. 823-839. [46] Quirchmayr, G., et al., Establishment of Virtual Enterprise Contracts. in Proceedings of the 13th International Conference on Database and Expert Systems Applications. 2002. Berlin: Springer. [47] Rahman, Z., Use of internet in supply chain management: a study of Indian companies. Industrial Management & Data Systems, 2004. 104(1): p. 31-41. [48] Ritchie, B. and C. Brindley, Reassessing the management of the global supply chain. Integrated Manufacturing Systems, 2002. 13(2): p. 110-116. [49] Ryutor, T., et al., Initial trust formation in Virtual Organisations. International Journal of Internet Technology and Secured Transactions, 2007. 1(1-2). [50] Saaty, T., Fundamentals of The Analytic Network Process-Dependence and Feedback in Decision-Making With a Single Network. Journal of System Science and System Engineering, 2004. 13(2): p. 129-157. [51] Sahay, B.S. and A. Maini, Supply chain: a shift from transactional to collaborative partnership. Decision, 2002. 29: p. 67-88. [52] Sari, B., S. Amaitik, and S.E. Kilic, A neural network model for the assessment of partners' performance in virtual enterprises. The International Journal of Advanced Manufacturing Technology, 2007. 34(7-8): p. 816-825. [53] Shtub, A., J.F. Bard, and S. Globerson, Project Management: Engineering, Technology and Implementation. 1994, Englewood Cliffs, NJ: Prentice-Hall. [54] Singh, M.D. and R. Kant, Knowledge management barriers: An interpretive structural modeling approach. International Journal of Management Science and Engineering Management, 2008. 3(2): p. 141-150. [55] Sinha, P.R., L.E. Whitman, and D.
Malzahn, Methodology to mitigate supplier risk in an aerospace supply chain. Supply Chain Management: An International Journal, 2004. 9(2): p. 154-168.
[56] Royal Society, Risk: Analysis, Perception and Management. 1992, London: Royal Society. [57] Speckman, R.E. and E.W. Davis, Risky business: expanding the discussion on risk and the extended enterprise. International Journal of Physical Distribution & Logistics Management, 2004. 34(5): p. 414. [58] Tsay, A.A., The Quantity Flexibility Contract and Supplier-Customer Incentives. Management Science, 1999. 45(10): p. 1339-1358. [59] Twomey, D., Inter-organizational conflict resolution: the effects of power and trust. in Academy of Management Proceedings. 1995. [60] Warfield, J.N., Developing interconnection matrices in structural modeling. IEEE Transactions on Systems, Man and Cybernetics, 1974. 4(1): p. 81-88. [61] Weisenfeld, U., et al., Managing technology as a virtual enterprise. R & D Management, 2001. 31(3): p. 323-334. [62] Westphal, I., K.-D. Thoben, and M. Seifert, Measuring Collaboration Performance in Virtual Organizations, in Establishing the Foundation of Collaborative Networks. 2007, Springer Boston: Guimarães. [63] Wilmot, W.W. and J.L. Hocker, Interpersonal conflict. 2001, New York: McGraw-Hill. [64] You, T.-h., Z. Zhu, and Z.-c. Yu, Analysis and Assessment of Knowledge Sharing Risk in the Virtual Enterprise. in JCIS-2006 Proceedings. 2006. [65] Yu, Z., H. Yan, and T.C.E. Cheng, Benefits of information sharing with supply chain partnerships. Industrial Management & Data Systems, 2001. 101(3): p. 114-121. [66] Zaheer, A., B. McEvily, and V. Perrone, Does trust matter? Exploring the effects of interorganizational and interpersonal trust on performance. Organization Science, 1998. 9: p. 141-159. [67] Zhang, Y. and D. Dilts, System dynamics of supply chain network organization structure. Information Systems and E-Business Management, 2004. 2(2-3): p. 187-206. [68] Zheng, J., et al., Initial conceptual framework for creation and operation of supply networks. in Proceedings of the 14th IMP Annual Conference. 1998. Turku. [69] Zhenxin, Y., H. Yan, and T.C. Cheng, Benefits of information sharing with supply chain partnerships. Industrial Management & Data Systems, 2001. 101(3-4): p. 114-120. [70] Zsidisin, G.A., et al., An analysis of supply risk assessment techniques. International Journal of Physical Distribution and Logistics Management, 2004. 34(5): p. 397-413.
Service-based Model for Enterprise Interoperability: Context-driven Approach Alexander Smirnov1, Tatiana Levashova1, and Nikolay Shilov1 1
SPIIRAS, 39, 14 line, St.Petersburg, 199178, Russia
Abstract. Modern information technologies make it possible to develop new approaches to enterprise modelling. The paper presents an approach based on integration of efficient management of information services in the open information environment with enterprise modelling taking into account the dynamic nature of the business environment. The approach is based on the technologies of ontology and context management. The standards of Web-services are used to provide for interoperability between information services. Application of constraints for knowledge representation makes it possible to integrate with existing services as well as to propose a scenario for self-organization of services for problem solving. Keywords: enterprise modelling, ontology, context, information service, Web-service.
1. Introduction The problem of enterprise modeling is tightly related to the necessity of operational collection, processing and analysis of large volumes of heterogeneous information. Today, the development of ICT significantly extends the borders of information usage. One of the new possibilities is remote usage of information services [1]. This explains the relevance of information service management in the open information environment. This problem includes tasks of information access, acquisition, transfer, processing and usage. The paper presents an approach to enterprise modelling integrating efficient management of information services in the open information environment with enterprise modelling, taking into account the dynamic nature of the business environment. For this purpose the models proposed are kept up to date in accordance with the current situation. The term enterprise modelling here stands for abstract representation, description and definition of the structure, processes, information and resources of an enterprise. An ontological model is used in the approach to solve the problem of service heterogeneity. This model enables
interoperability between heterogeneous information services by providing their common semantics [2]. Application of the context model makes it possible to reduce the amount of information to be processed. This model enables management of information relevant for the current situation [3]. The access to the services, and information acquisition, transfer and processing (including integration), are performed via the technology of Web-services. This is work in progress, and the paper proposes conceptual ideas for further development. Fig. 1 represents the generic scheme of the approach. The main idea of the approach is to represent the enterprise model components by the sets of services they provide. This makes it possible to replace the configuration of the enterprise model with that of distributed services. For the purpose of interoperability the services are represented by Web-services using the common notation described by the application ontology. Depending on the considered problem, the relevant part of the application ontology is selected, forming the abstract context that, in turn, is filled with values from the sources, resulting in the operational context. The operational context represents the constraint satisfaction problem that is used during self-configuration of services for problem solving.
Fig. 1. Generic scheme of the approach
The remainder of the paper is organised as follows. Section 2 presents the problem description. The major components of the approach are described in sections 3 (the formalism of object-oriented constraint networks), 4 (abstract and operational
contexts), and 5 (Web-service model). The scenario of self-organisation is presented in Section 6. Section 7 summarises the main research features of the approach.
2. Problem Description The paper considers the following formal problem definition. There is an open information environment with a set of services R = {r_n | n ∈ ℕ}. For each service a set of functions is defined: FR_i = {f_m | m = 1, …, M}, where M is the number of functions of service r_i. Values, which are results of the service function execution, are used for modeling the current situation. The current situation is described via a functional network FN = {f_n | n = 1, …, N}, where N is the number of functions to be executed during modeling, FN ⊆ ⋃(i = 1..NR) FR_i, NR = |R|. The task is to build a model of the current situation using information from the services of the open information environment, with the number of used services being minimal (|R_u| → min). At the same time the set of services has to provide all functions describing the model of the current situation: FN ⊆ ⋃(j = 1..NR_u) FR_j, NR_u = |R_u|, and these services have to be available during the time interval [t_0, T], where t_0 is the time of information acquisition from the services and T is the time of finishing the modeling.
3. Ontological Knowledge Representation In the approach the ontological model is described using the formalism of object-oriented constraint networks. Application of constraint networks allows simplifying the formulation and interpretation of real-world problems, which in the areas of management, engineering, manufacturing, etc. are usually presented as constraint satisfaction problems [4]. This formalism supports declarative representation and efficiency of dynamic constraint solving, as well as the problem modelling capability, maintainability, reusability, and extensibility of the object-oriented technology. In the presented methodology the enterprise model is supposed to be interpreted as a dynamic constraint satisfaction problem (CSP). In order to provide compatibility of the ontology model for knowledge representation and internal solver representations, a formalism of object-oriented constraint networks (OOCN) is used. As a result, the ontology-based problem model is described by a set of constraints and can be directly mapped into the constraint solver. A result of CSP solving is one or more satisfactory solutions for the problem modelled.
Compatibility of the CSP, ontology, and OOCN models is achieved through identification of correspondences between primitives of these models. A CSP model consists of three parts: (i) a set of variables; (ii) a set of possible values for each variable (its domain); and (iii) a set of constraints restricting the values that the variables can simultaneously take. Typical ontology modelling primitives are classes, relations, functions, and axioms. The formalism of OOCN describes knowledge by sets of classes, class attributes, attribute domains, and constraints. The concept “class” in OOCN notation is introduced instead of the concept “object” in the way object-oriented languages suggest. The OOCN paradigm (the detailed description can be found in [5]) defines the common ontology notation used in the system. According to this representation an ontology (A) is defined as: A = (O, Q, D, C) where: O – a set of object classes (“classes”); each of the entities in a class is considered as an instance of the class. Q – a set of class attributes (“attributes”). D – a set of attribute domains (“domains”). C – a set of constraints. For the chosen notation the following six types of constraints have been defined, C = CI ∪ CII ∪ CIII ∪ CIV ∪ CV ∪ CVI:
– CI = {cI}, cI = (o, q), o∈O, q∈Q – accessory of attributes to classes;
– CII = {cII}, cII = (o, q, d), o∈O, q∈Q, d∈D – accessory of domains to attributes;
– CIII = {cIII}, cIII = ({o}, True ∨ False), |{o}| ≥ 2, o∈O – classes compatibility (compatibility structural constraints);
– CIV = {cIV}, cIV = 〈o', o'', type〉, o'∈O, o''∈O, o' ≠ o'' – hierarchical relationships (hierarchical structural constraints): “is a” defining class taxonomy (type=0), and “has part”/“part of” defining class hierarchy (type=1);
– CV = {cV}, cV = ({o}), |{o}| ≥ 2, o∈O – associative relationships (“one-level” structural constraints);
– CVI = {cVI}, cVI = f({o}, {o, q}) = True ∨ False, |{o}| ≥ 0, |{q}| ≥ 0, o∈O, q∈Q – functional constraints referring to the names of classes and attributes.
Correspondences between the primitives of the ontology model, OOCN, and CSP are shown in Table 1.

Table 1. Correspondence between ontology model, OOCN, and CSP model

 Ontology Model             OOCN         CSP
 Class                      Object       Set of variables
 Attribute                  Variable     Variable
 Attribute domain (range)   Domain       Domain
 Axiom / relation           Constraint   Constraint
Below, some example constraints are given: • an attribute costs (q1) belongs to a class cost centre (o1): cI1 = (o1, q1); • the attribute costs (q1) belonging to the class cost centre (o1) may take positive values: cII1 = (o1, q1, R+); • a class drilling (o2) is compatible with a class drilling machine (o3): cIII1 = ({o2, o3}, True); • an instance of the class cost centre (o1) can be a part of an instance of a class facility (o4): cIV1 = 〈o1, o4, 1〉;
• the drilling machine (o3) is a resource (o5): cIV2 = 〈o3, o5, 0〉; • an instance of the class drilling (o2) can be connected to an instance of the class drilling machine (o3): cV1 = (o2, o3); • the value of the attribute cost (q1) of an instance of the class facility (o4) depends on the values of the attribute cost (q1) of instances of the class cost centre (o1) connected to that instance of the class facility and on the number of such instances: cVI1 = f({o1}, {(o4, q1), (o1, q1)}). To summarize, the general three-level scheme of the approach is shown in Fig. 2. The technological base is provided by Web-services. The problem is described via object-oriented constraint networks, and the semantics is provided by using ontologies. The semantic level provides for knowledge sharing and exchange in an enterprise model.
Fig. 2. Three-level scheme of the proposed approach
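To illustrate how such constraints become directly solvable, the following Python sketch encodes a fragment of the example above (the functional constraint cVI1: a facility's cost equals the sum of the costs of its cost centres) as a simple check; the numeric values are invented for illustration.

# Minimal sketch: the functional constraint cVI1 (facility cost = sum of the
# costs of its cost centres) checked over a candidate instantiation.
# All values are invented; this only mirrors the OOCN example above.

cost_centres = {"cc1": 120.0, "cc2": 80.0}        # attribute costs (q1), in R+
facility = {"parts": ["cc1", "cc2"], "cost": None}

def c_VI1(facility, cost_centres):
    """Functional constraint: the facility cost equals the sum over its parts."""
    total = sum(cost_centres[p] for p in facility["parts"])
    return facility["cost"] == total

# "Solving" this one-variable CSP: assign the only consistent value.
facility["cost"] = sum(cost_centres[p] for p in facility["parts"])
assert c_VI1(facility, cost_centres)
print(facility["cost"])  # 200.0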
4. Contexts For modeling the current situation two types of contexts are used: abstract and operational. The abstract context is an ontological model of the current situation, built on the basis of a selection of knowledge relevant to the current situation. The operational context is a specification of the abstract context for the particular real-world situation. In accordance with the chosen formalism, the modeling of the current situation can be defined as the following task. Based on the formalized knowledge about the problem domain A = (O, Q, D, C), build a two-level model of the current situation S, presented via the abstract context Context_a(T_a) = (S; O_a, Q_a, D_a, C_a, R_a, T_a) (O_a ⊆ O, Q_a ⊆ Q, D_a ⊆ D, C_a ⊆ C, R_a ⊆ R, where T_a is the time of the model adequacy) and the operational context Context_p(t) = (S; O_p, Q_p, D_p, C_p, R_p, t) (O_p ⊆ O_a, Q_p ⊆ Q_a, D_p ⊆ D_a, C_p ⊆ C_a, R_p ⊆ R_a, where t is the current time), and find the set of object instances I(t) = {v_qi^o | v ∈ D_p, o ∈ O_p, q ∈ Q_p, i = 1, …, N_q} such that all constraints C_p hold, where N_q is the number of attributes of class o, with the parameter values being acquired from the services R of the open information environment at time t. The ontological model of the abstract context is defined as:
Context_a(T_a) = (S; O_a, Q_a, D_a, C_a, WS_a, T_a), where S is the situation modelled; O_a is a set of classes generally required to model the situation S; Q_a is a set of attributes of the classes O_a; D_a is a set of domains of the attributes Q_a; C_a is a set of constraints included into the abstract context; WS_a is a set of Web-services representing services of the open information environment assigning the values to the attributes Q_a, WS_a ⊆ WS, where WS is a set of registered Web-services; T_a is an estimated time of the model adequacy.
When the information of the open information environment becomes available from the services, references to which are stored in the abstract context, the appropriate values are assigned to the attributes of the classes of the abstract context. Thus, the operational context is built. The operational context Context_p is the model of the current situation in the notation of object-oriented constraint networks, with values assigned to the variables. This model is interpreted as a constraint satisfaction task. The model of the operational context is represented as: Context_p(t) = (S; O_p, Q_p, D_p, C_p, WS_p, T_a, ΔT), where t is the current time; O_p is a set of classes used for the modelling of the situation S in particular conditions; Q_p, D_p, C_p, WS_p are the used sets of attributes, domains, constraints and Web-services respectively; ΔT = t − t_0 is the current time of the operational context life span, where t_0 is the creation time of the abstract context. The constraint satisfaction task is a triple of sets CSP = (V_CSP, D_CSP, C_CSP), where V_CSP is a set of variables, D_CSP is a set of corresponding domains of the values of the variables, and C_CSP is a set of constraints. The solution of the constraint satisfaction task is a set of values of the variables v ∈ V_CSP such that all the constraints hold. This set contains the sets of object instances I(t) = {v_qi^o | v ∈ D_p, o ∈ O_p, q ∈ Q_p, i = 1, …, N_q}.
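A minimal sketch of the instantiation step: each attribute of the abstract context carries a reference to a service (here a plain Python callable standing in for a Web-service call), and building the operational context means querying each service at time t. All names and values are assumptions for illustration.

# Sketch: turning an abstract context into an operational one by pulling
# attribute values from (mocked) services; real calls would be Web-services.

abstract_context = {
    ("truck", "location"): lambda t: ("59.93N", "30.31E"),  # mock service call
    ("truck", "capacity"): lambda t: 20,
}

def build_operational_context(abstract, t):
    """Assign each (class, attribute) pair the value its service returns at t."""
    return {key: service(t) for key, service in abstract.items()}

op = build_operational_context(abstract_context, t=0)
print(op[("truck", "capacity")])  # 20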
To provide for the information exchange between heterogeneous services, as well as its processing, a service model has been developed that is compatible with the ontological model of knowledge representation. The compatibility is achieved via usage of Web-service interfaces for the services of the open information environment.
5. Web-service Model The approach relies upon the Web-service model represented by the structure 〈WS_URI, WS_RI, 〈WS_Function〉, WS_Calls, WS_Fails, WS_Price, WS_Access_Time, WS_Time_In, WS_Owner, WS_Weight〉, where
• WS_URI – the URI of the Web-service;
• WS_RI – the identifier of a service;
• WS_Function – a set of functions the Web-service implements; for WS_F ∈ WS_Function there exists a mapping f_m: WS_Function → C_m_AO so that f_m(WS_F) = c_m_AO, c_m_AO ∈ C_m_AO, where C_m_AO is a set of classes representing methods in the application ontology (AO);
• WS_Calls – the number of Web-service calls;
• WS_Fails – the number of failed calls;
• WS_Price – the costs of information acquisition (the costs of the functions the Web-service implements);
• WS_Access_Time – the access time of the Web-service;
• WS_Owner – the owner of the Web-service;
• WS_Weight – the weight of the Web-service; the weight is calculated based on the access time of the Web-service, the execution times and costs of the functions the Web-service implements, and the Web-service reliability.
The element 〈WS_Function〉 is a structure 〈FK, F_IArg, F_D_IArg, F_OArg, doArg〉, where
• FK: WS_Function – the reference to the function in the Web-service model;
• F_IArg – a set of input arguments of the function; for iArg ∈ F_IArg there exists a mapping f_inp: F_IArg → P_m_AO so that f_inp(iArg) = p_m_AO, p_m_AO ∈ P_m_AO, where P_m_AO is a set of properties representing method arguments in the AO;
• F_D_IArg – a set of types of the input arguments; for F_D_IArg there exists a set of functions Φ_i: F_D_IArg → D_P_m_AO, where D_P_m_AO is a set of ranges for the properties P_m_AO, so that φ_i(diArg) = d_p_j, d_j ∈ D_P_m_AO, p ∈ P_m_AO, j ∈ ℕ, φ_i ∈ Φ_i;
• F_OArg – the output argument of the function; there exists a mapping f_outp: F_OArg → P_m_AO so that f_outp(F_OArg) = p_m_AO, p_m_AO ∈ P_m_AO;
• doArg – the type of the output argument; for doArg there exists a function φ_o: doArg → D_P_m_AO so that φ_o(doArg) = d_p, d ∈ D_P_m_AO, p ∈ P_m_AO.
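The structure above transliterates naturally into a data type. In the Python sketch below the field names follow the model; the weight formula is our assumption, since the text states only which factors the weight depends on (access time, execution times and costs, and reliability).

# Sketch of the Web-service model as a data structure. The weight formula is
# an assumption: the paper states only which factors the weight depends on.
from dataclasses import dataclass, field

@dataclass
class WSFunction:
    name: str            # FK: reference into the AO method classes
    input_args: dict     # F_IArg -> F_D_IArg (argument name -> type)
    output_arg: str      # F_OArg
    output_type: str     # doArg

@dataclass
class WebService:
    uri: str                                          # WS_URI
    service_id: str                                   # WS_RI
    functions: list = field(default_factory=list)     # <WS_Function>
    calls: int = 0                                    # WS_Calls
    fails: int = 0                                    # WS_Fails
    price: float = 0.0                                # WS_Price
    access_time: float = 0.0                          # WS_Access_Time
    owner: str = ""                                   # WS_Owner

    @property
    def reliability(self) -> float:
        return 1.0 if self.calls == 0 else 1.0 - self.fails / self.calls

    def weight(self, w_time=1.0, w_cost=1.0) -> float:
        # Assumed aggregation: cheaper, faster, more reliable => higher weight.
        return self.reliability / (1.0 + w_time * self.access_time
                                   + w_cost * self.price)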
6. Self-organization The basis for self-organization of Web-services into a network is the abstract context. To describe the processes of self-organization of a service network, a sequence of scenarios is proposed (Fig. 3). The set of input and output arguments comprises 1) properties requiring data values to instantiate the classes which these properties characterise (these properties are instantiated by output arguments of Web-services); 2) input arguments of the problems and methods specified in the abstract context; and 3) output arguments of the problems and methods specified in the abstract context.
Fig. 3. Scenario of self-organizing service network
The first scenario describes Web-service interactions for the purpose of exchanging information about the services (functions) provided by a given service and the service requirements. The result of this scenario is a set of Web-services (WS_f) which can participate in self-organization of the Web-service network based on the service functionalities. The second scenario describes interactions of the Web-services comprising the set WS_f, for the purpose of determining whether the services modelled by these Web-services are projected to be available during the predicted time of adequacy of the model of the accident situation. The result of this scenario is a set of Web-services (WS_a) which model those services that are partly or fully available during the predicted time.
The third scenario describes interactions of the Web-services comprising the set WS_a for the purpose of self-configuration of an efficient Web-service network. The efficient network is configured based on an analysis of:
• the services that provide several functions: using one service that provides several functions is considered more expedient than involving several services that provide the same functions separately;
• the availabilities of the services: using a service that is available over the whole predicted time is considered more efficient than deploying several services that are available at different times;
• the alternative services: the efficiency of usage of alternative services is determined by the weights of the Web-services that model such services.
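Read as code, the three scenarios act as successive filters over the registered Web-services. The following Python sketch is illustrative only; the fields provides, available_until and weight are assumptions standing in for the functionality, availability and weight notions of the model.

# Sketch of the three self-organization scenarios as successive filters.
# `provides`, `available_until` and `weight` are illustrative assumptions.

def scenario_1(registered, required):
    """WSf: services whose functionality contributes to the abstract context."""
    return [ws for ws in registered if ws["provides"] & required]

def scenario_2(ws_f, t_adequacy):
    """WSa: services available during the predicted adequacy period."""
    return [ws for ws in ws_f if ws["available_until"] >= t_adequacy]

def scenario_3(ws_a, required):
    """Efficient network: greedily prefer multi-function, high-weight services."""
    network, uncovered = [], set(required)
    for ws in sorted(ws_a, key=lambda w: (len(w["provides"]), w["weight"]),
                     reverse=True):
        if ws["provides"] & uncovered:
            network.append(ws["id"])
            uncovered -= ws["provides"]
    return network

registered = [
    {"id": "ws1", "provides": {"route", "load"}, "available_until": 10, "weight": 0.9},
    {"id": "ws2", "provides": {"load"},          "available_until": 3,  "weight": 0.8},
    {"id": "ws3", "provides": {"track"},         "available_until": 10, "weight": 0.7},
]
required = {"route", "load", "track"}
print(scenario_3(scenario_2(scenario_1(registered, required), 5), required))
# ['ws1', 'ws3']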
7. Conclusion The paper presents an approach to enterprise modelling based on integration of efficient management of information services with enterprise modelling, taking into account the dynamic nature of the business environment. The major idea of the approach is the representation of an enterprise via the set of services it provides. The approach assumes a federated type of interoperability, in which members do not share their models or methods. Unlike other types of interoperability [6, 7], the advantage of this type is that the internal mechanisms of the enterprise elements can be treated as black boxes and do not require any modifications. The formalism of object-oriented constraint networks used in the approach makes it possible to represent the problem domain via an ontology with links to the services that provide the required information. For each particular situation a fragment of the ontology relevant to the situation is built (the abstract context) and complemented with particular values from the services of the open information environment (the operational context). The operational context, in turn, can be used for problem solving as a constraint satisfaction problem. Acknowledgements. Some of the results are due to research carried out as a part of the project funded by grants # 09-07-00436-a, 08-07-00264-a, and 09-07-00066-a of the Russian Foundation for Basic Research, and project # 213 of the research program “Intelligent information technologies, mathematical modelling, system analysis and automation” of the Russian Academy of Sciences.
8. References
[1] Johanesson, P., Andersson, B., Bergoltz, M., Weigand, H.: Enterprise Modelling for Value Based Service Analysis. In: The Practice of Enterprise Modelling (eds. J. Stirna and A. Persson), First IFIP WG 8.1 Working Conference, PoEM 2008, pp. 153-167. Springer, LNBIP 15 (2008)
[2] Uschold, M., Grüninger, M.: Ontologies: Principles, methods and applications. Knowledge Engineering Review, 11(2), 93-155 (1996)
[3] Dey, A. K.: Understanding and Using Context. Personal and Ubiquitous Computing J., 5(1), 4-7 (2001)
[4] Baumgaertel, H.: Distributed constraint processing for production logistics. IEEE Intelligent Systems, 15(1), 40-48 (2000)
[5] Smirnov, A., Levashova, T., Shilov, N.: Semantic-oriented support of interoperability between production information systems. International Journal of Product Development, Inderscience Enterprises Ltd., 4(3/4), 225-240 (2007)
[6] Doumeingts, G., Muller, J., Morel, G., Vallespir, B. (eds.): Enterprise Interoperability: New Challenges and Approaches. Springer, 588 p. (2007)
[7] Chen, D., Doumeingts, G., Vernadat, F.: Architectures for Enterprise Integration and Interoperability: Past, Present and Future. Computers in Industry, 59(7), 647-659 (2008)
Transformation of UML Activity Diagram to YAWL Zhaogang Han1, Li Zhang1, Jimin Ling1 1
Department of Computer Science and Engineering, University of BUAA, Beijing, China
Abstract. Model transformations are frequently applied in business process modeling to bridge between languages on a different level of abstraction and formality. In this paper, we define a transformation from UML 2.0 Activity diagram (UML-AD for short) to YAWL, a formal workflow language that is able to capture all of the 20 workflow patterns reported in [1]. We illustrate the transformation challenges and present a suitable transformation algorithm. The benefit of the transformation is threefold. First, it clarifies the semantics of UML-AD via a mapping to YAWL. Second, the deployment of UML-AD business process models as workflows is simplified. Third, UML-AD models can be analyzed with YAWL verification tools. 1 Keywords: Business Process Modeling, Workflow, UML, Activity Diagram, YAWL
1. Introduction The Unified Modelling Language (UML) is frequently referred to as a de facto standard for software modelling. Being a multi-purpose language, UML offers a spectrum of notations for capturing different aspects of software structure and computational/business behaviour. One of these notations, namely Activity Diagrams (AD), is intended for modelling computational and business processes. YAWL [2] is a workflow language especially designed to support, in an intuitive manner, the 20 workflow patterns [1] proposed by van der Aalst et al., and it can be used as a lingua franca for expressing the behavior of Web services (for example, described using BPEL or OWL-S). Despite its graphical nature, YAWL has a well-defined formal semantics. In order to benefit from the expressive power of YAWL, a large number of business process models have been mapped to YAWL,
This work was supported by the National Basic Research Program of China 973 project No. 2007CB310803 and the National Natural Science Foundation of China under Grant No. 90818017
such as the Event-driven Process Chain (EPC) to YAWL [3], the Business Process Modelling Notation (BPMN) to YAWL [4, 5], and BPEL to YAWL [6]. Our goal is to provide a methodology for transforming a model from UML-AD to YAWL. Such a mapping is challenging in the following ways:
– Although UML-AD and YAWL share most of their concepts, there is a fundamental difference in the way joins and splits are treated in each language. While UML-AD defines them as first-class objects independent of actions, YAWL includes joins and splits in task objects. Accordingly, there is no direct equivalent in YAWL for UML-AD connector chains, i.e. multiple consecutive connectors.
– YAWL requires processes to have exactly one start and one end condition, but in UML-AD multiple initial and final nodes are allowed.
– Besides the initial node, there are three other ways to start a process in UML-AD, while in YAWL a process can be started in only one way (by an input condition). How to transform these ways of starting a process from UML-AD to YAWL is another problem.
The rest of the paper is organized as follows. Section 2 provides terminology, concepts, notations and formal definitions that are required in subsequent sections of the paper. Section 3 illustrates the challenges of the transformation from UML-AD to YAWL and introduces a transformation algorithm. After presenting related research (Section 4), we draw a conclusion and present future research issues.
2. Preliminaries UML-AD notation has a vast number of features, most of which make no sense for business process definition because they have been introduced for completely different purposes (defining executable processes for embedded systems). Therefore we select a list of recommended UML-AD features for business process definition, retaining the original semantics of the UML-AD elements. Fig. 1 gives the core constructs of UML-AD related to business process definition. Fig. 2 gives an overview of YAWL and its main notation. A YAWL process model includes one input and one output condition to denote the start and end of a process. Activities of a process are represented via tasks. Tasks can contain join and split rules of type AND, OR, and XOR. The OR-split and OR-join have no direct corresponding notations in UML-AD; the other rules have the same semantics as the respective UML-AD connectors. Tasks are separated by conditions, which are the YAWL analogue to places in Petri nets. If two tasks are connected by an arc, the arc represents an implicit condition. Furthermore, a task can be decomposed into a sub-process.
Fig. 1. Main UML-AD notations: a) Actions (Action, AcceptEvent, CallBehaviorAction, SendSignal); b) Control Nodes (InitialNode, ActivityFinal, Fork, Decision, Merge, Join)

Fig. 2. Main YAWL notations: a) Actions (Atomic Task, Composite Task, Multiple Instance Atomic Task, Multiple Instance Composite Task); b) Control Nodes (Condition, Input Condition, Output Condition, AND-/OR-/XOR-Split Task, AND-/OR-/XOR-Join Task)
A UML-AD process using the core subset of UML-AD elements shown in Fig. 1 is referred to as a core UML-AD process. We assume that all Activities in a UMLAD diagram are available as named graph structures, so we can difine the syntax of a core UML-AD as follows: Definiton 1.(core UML-AD process)[8]: A core UML-AD is a tuples Name, Nodes, Edges where Name is the name of the Activity, and Nodes are its ActivityNodes, partitioned as Nodes= EN , iN , fN , BN , CN with
EN
the set of ExecutableNodes (i.e. elementary Actions),
292
Z.G. Han, L. Zhang and J.M. Ling
iN , fN
the InitialNodes and FinalNodes (of which there may be only one),
Edges
the set of branch nodes, including both MergeNodes and DecisionNodes, and
CN
the set of concurrency nodes, including ForkNodes and JoinNodes.
Edges is the set of ActivityEdges between all of these ActivityNodes. Note that this covers all ActivityNodes and ActivityEdges except those dealing with data flow. In YAWL, A workflow specification is composed of one or more extended workflow nets (EWF-nets). Below is the definition of a YAWL-Net: Definition 2.(YAWL-Net)[2]: An EWF-net N is defined as a tuple (C , i, o, T , F , split , join, rem, nofi ) such that: – C is a set of conditions, – i ∈ C is the input condition, – o ∈ C is the output condition, –
T is a set of tasks,
–
F ⊆ (C \ {o}× T ) ∪ (T × C \ {} i ) ∪ (T × T ) is the flow relation,
– every node in the graph
(C ∪ T , F ) is on a directed path from i to o ,
{ } – split : T → AND, XOR, OR specifies the split behavior of each task, –
join : T → {AND, XOR, OR} specifies the join behavior of each task,
Transformation of UML Activity Diagram to YAWL
(
293
)
– rem : T → P T ∪ C \ {i, o} specifies the token to be removed from the tasks and conditions given in the mapping2, –
nofi : T → ℕ × ℕ inf × ℕ inf × {dynamic, static}
specifies the multiplicity of
each task(minimum, maximum threshold for continuation, and dynamic/static creation of instances).
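As a reading aid, the two definitions can be transliterated into plain data types. The Python sketch below follows the field names of Definitions 1 and 2; the concrete representation (sets of node and task names, pairs for edges) is our own choice, not part of the definitions.

# Data-structure sketch of Definitions 1 and 2. Field names follow the
# definitions; the concrete representation is our own choice.
from dataclasses import dataclass, field

@dataclass
class CoreUMLAD:                 # Definition 1
    name: str
    EN: set = field(default_factory=set)     # ExecutableNodes
    iN: set = field(default_factory=set)     # InitialNodes
    fN: set = field(default_factory=set)     # FinalNodes
    BN: set = field(default_factory=set)     # branch nodes (Merge/Decision)
    CN: set = field(default_factory=set)     # concurrency nodes (Fork/Join)
    edges: set = field(default_factory=set)  # ActivityEdges as (src, dst) pairs

@dataclass
class YAWLNet:                   # Definition 2 (EWF-net), simplified
    conditions: set = field(default_factory=set)
    i: str = "input"             # input condition
    o: str = "output"            # output condition
    tasks: set = field(default_factory=set)
    flow: set = field(default_factory=set)     # F, as (src, dst) pairs
    split: dict = field(default_factory=dict)  # task -> "AND" | "XOR" | "OR"
    join: dict = field(default_factory=dict)   # task -> "AND" | "XOR" | "OR"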
3. From UML-AD to YAWL This section starts with a discussion of how the different transformation problems can be solved. Then we combine the solutions of the sub-problems in a transformation algorithm. The transformation techniques focus on control-flow related constructs. 3.1. Transforming Basic Structures Fig. 3 gives the transformation rules from a set of UML-AD actions and control nodes to YAWL. A send signal node, an accept event node or an action is mapped to an atomic task with one input and one output. The call behavior action is mapped onto a corresponding composite task with one predecessor and one successor. Initial and final nodes are easy to transform if there is only one initial node and one final node. In this case, the initial node maps to the YAWL input condition and the final node is mapped to a YAWL output condition. However, a UML-AD model can include multiple initial nodes and final nodes; these have to be bundled with an empty task with an OR-join or OR-split rule. We will discuss the corresponding details later in Section 3.3.
2
We use P to denote the power set of a set
YAWL Object
294
Z.G. Han, L. Zhang and J.M. Ling
fork and join
Fig. 3. Transformation rules for basic structures
3.2. Transforming Connector Chains

Joins and splits are first-class elements of UML-AD, while in YAWL they are part of tasks. As a consequence, empty tasks may need to be introduced only to map a connector. Fig. 4(a) illustrates how empty tasks are used to represent a single connector; this is in particular the case with connector chains. Fig. 4(b) illustrates how a connector chain is transformed.
Fig. 4. Transformation of (a) a single connector and (b) a connector chain
3.3. Transforming Multiple Initial and Final Nodes

UML-AD initial and final nodes are easy to transform if there is only one initial node and one final node. In this case the UML-AD initial node maps to a YAWL input condition and the final node to a YAWL output condition. If there are multiple initial nodes, they have to be bundled: the one YAWL input condition is followed by an empty task with an OR-split rule, and each UML-AD initial node is then mapped to a YAWL condition that is linked as a successor of the OR-split. Analogously, each of multiple UML-AD final nodes is mapped to a YAWL condition; these conditions are all connected to the OR-join of an empty task that leads to the one YAWL output condition (see Fig. 5).
Fig. 5. Transformation of multiple initial and final nodes
3.4. Transforming Other Ways to Start

There is only one way to start a business process in YAWL, while in UML-AD there are three other ways to start an activity (i.e., a process) besides the initial node. Fig. 6 gives an example: an accept event action without an input, an accept time event action without an input, and activity parameter node 1 each act as a start point of an activity (activity parameter node 2 acts as an end point of the activity). They should be transformed to initial and final nodes respectively, after which the further transformation described in Section 3.3 should be undertaken.

Fig. 6. Transformation of other ways to start and end an activity
3.5. Transformation Algorithm

In this section we give the main algorithm transforming UML-AD to YAWL. Before this, we first define predecessor and successor nodes.

Notation 1. (predecessor and successor nodes): Let N be a set of nodes and A ⊆ N × N a binary relation over N defining the arcs. For each node n ∈ N, we define the set of predecessor nodes •n = {x ∈ N | (x, n) ∈ A} and the set of successor nodes n• = {x ∈ N | (n, x) ∈ A}.

The transformation algorithm is based on a traversal of the core UML-AD process graph described in Section 2, and we assume that the starting and ending ways of a process described in Section 3.4 have already been transformed to initial and final nodes. The symbol node references the current node. Depending on the type of node, the respective elements of the YAWL target model are generated. The algorithm is arranged in two loops. The outer loop continues until there are no more UML-AD elements in the to-be-processed set Proc. As only one path is followed at a time, those successors of a split that are not considered immediately are added to Proc. Similarly, those initial nodes not directly considered are also added to this set. The inner loop proceeds as long as node is not an element of the processed-elements set J. A YAWL element is generated and node is added to J. At the end of this loop, node is set to one successor element next and the other successor elements of node are added to Proc. The algorithm is complete in the sense that every node is processed exactly once: the termination condition of the inner loop prevents joins and their successor nodes from being processed more than once, and completeness is guaranteed because a UML-AD is connected, so every node will ultimately be reached when navigating from the initial nodes. After every node has been processed exactly once, the algorithm terminates.

Algorithm: UMLtoYAWL (UMLmodel M)
Input: a UML model M = (EN, iN, fN, BN, CN, F)
Output: a YAWL model Y = (C, i, o, T, Flow, Split, Join)
1. Initiation: Proc := {i | i ∈ iN}, J := ∅;
2. Add the YAWL input and output conditions: i ∈ C and o ∈ C;
3. Multiple start: if |{i | i ∈ iN}| > 1 then add an OR-split task tin;
4. Multiple end: if |{f | f ∈ fN}| > 1 then add an OR-join task tout;
5. While |Proc| ≠ 0 do
   (a) Initiation: node := n1 with n1 ∈ Proc, Proc := Proc \ {n1};
   (b) while node ∉ J do
       i. if node ∈ iN then add c ∈ C with •c = tin; end
       ii. if node ∈ fN then add c ∈ C with c• = tout; end
       iii. if node ∈ BN then add an empty task with XOR-split or XOR-join type; end
       iv. if node ∈ CN then add an empty task with AND-split or AND-join type; end
       v. if node ∈ EN then
            if node is an elementary Action then add a task to YAWL with the corresponding attributes; end
            if node is a CallBehaviorAction then add a composite task to YAWL; UMLtoYAWL(node); end
          end
       vi. if node ∈ F then add a flow edge to YAWL; end
       vii. add node to J; add {n | n ∈ node• ∧ n ≠ next} to Proc; node := next with next ∈ node•;
       end
   (c) if tasks tx and ty are directly connected in YAWL (tx• = ty ∧ •ty = tx) and either both are empty tasks or one is a task (or composite task) and the other is an empty task, then tx and ty can be folded into a new task (or composite task) t with •t = •tx ∧ t• = ty•; end
   end
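As a hedged illustration of the algorithm above, the following Python sketch (our reading of the pseudocode, built on the data structures sketched after Definition 2; it handles only the simple case of a single initial and final node, and omits the folding step (c) and the recursive treatment of composite tasks) implements the two-loop traversal:

```python
def pred(n, edges):
    """The predecessor set •n of Notation 1."""
    return {x for (x, y) in edges if y == n}

def succ(n, edges):
    """The successor set n• of Notation 1."""
    return {y for (x, y) in edges if x == n}

def map_name(node, m):
    """Single-start/single-end case: initial/final nodes become i and o."""
    if node in m.initial_nodes:
        return "i"
    if node in m.final_nodes:
        return "o"
    return node

def uml_to_yawl(m):
    y = EWFNet(conditions={"i", "o"})
    proc = set(m.initial_nodes)          # to-be-processed set Proc
    done = set()                         # processed-elements set J
    while proc:                          # outer loop
        node = proc.pop()
        while node is not None and node not in done:   # inner loop
            if node in m.branch_nodes:                 # Merge/Decision
                y.tasks.add(node)
                y.split[node] = y.join[node] = "XOR"   # empty XOR task
            elif node in m.concurrency_nodes:          # Fork/Join
                y.tasks.add(node)
                y.split[node] = y.join[node] = "AND"   # empty AND task
            elif node in m.executable_nodes:           # elementary Action
                y.tasks.add(node)                      # atomic task
            for s in succ(node, m.edges):              # ActivityEdge -> flow edge
                y.flow.add((map_name(node, m), map_name(s, m)))
            done.add(node)
            rest = succ(node, m.edges) - done
            node = rest.pop() if rest else None        # follow one successor...
            proc |= rest                               # ...defer the others
    return y
```

The set difference against done plays the role of the inner loop's termination condition: a join reached a second time is already in done and is not processed again.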
4. Related Work

As the use of graphical notation to assist the development of complex process-oriented systems has become increasingly important, it is necessary to define a formal semantics for UML-AD to ensure precise specification and to assist developers in moving towards correct implementation of business processes. A significant amount of work has been done towards a formal semantics for a subset of UML-AD [7-10]. Such a semantic model makes it possible to formally analyse and compare UML-AD diagrams. Previous attempts at conversion used Petri nets [11, 12]. Other approaches have mainly focused on mapping UML-AD to BPEL [13, 14]. Although UML-AD shares many common features with BPEL, this conversion is fundamentally limited (e.g. it excludes diagrams with arbitrary constructs), because UML-AD is graph-oriented while BPEL is mainly block-structured and has weaknesses in dealing with data structures and complex control flows; they belong to two fundamentally different classes of languages. YAWL is founded on the workflow patterns, ensuring that it recognizes current practice in the process technology field and supports the capture and enactment of a wide range of workflow constructs in a deterministic way. It appears that YAWL shares more common features with UML-AD and is a more suitable conversion target than BPEL.
5. Conclusion and Future Work

In this paper we presented a novel transformation algorithm from UML-AD to YAWL. The contribution of this transformation is threefold. First, it clarifies the semantics of UML-AD via a mapping to YAWL. Second, the deployment of UML-AD business process models as workflows is simplified. Third, UML-AD models can be analyzed with YAWL verification tools. As a proof of concept we have implemented the algorithm. To the best of our knowledge, no previous work has addressed this transformation. In future research, we aim to use this transformation to convert the large number of existing UML-AD process models to YAWL and analyze them with verification tools for YAWL such as Woflan [15], WofYAWL, and ProM [16]. This will provide insight into the correctness of the large number of models described in UML-AD.
6. References

[1] van der Aalst, W.M.P., et al., Workflow Patterns. Distributed and Parallel Databases, 2003. 14(1): p. 5-51.
[2] van der Aalst, W.M.P. and ter Hofstede, A.H.M., YAWL: yet another workflow language. Information Systems, 2005. 30(4): p. 245-275.
[3] Mendling, J., Moser, M. and Neumann, G., Transformation of yEPC business process models to YAWL. 2006. New York, NY, USA: ACM.
[4] JianHong, Y., et al., Transformation of BPMN to YAWL. In: Computer Science and Software Engineering, 2008 International Conference on. 2008.
[5] Decker, G., et al., Transforming BPMN Diagrams into YAWL Nets. Business Process Management, 2008: p. 386-389.
[6] Brogi, A. and Popescu, R., From BPEL Processes to YAWL Workflows. Web Services and Formal Methods, 2006: p. 107-122.
[7] Börger, E., Cavarra, A. and Riccobene, E., An ASM Semantics for UML Activity Diagrams. Algebraic Methodology and Software Technology, 2009: p. 293-308.
[8] Störrle, H., Semantics of Control-Flow in UML 2.0 Activities. 2004.
[9] Eshuis, R., Semantics and Verification of UML Activity Diagrams for Workflow Modelling. 2002, Univ. of Twente.
[10] Eshuis, R. and Wieringa, R., A Real-Time Execution Semantics for UML Activity Diagrams. Fundamental Approaches to Software Engineering, 2001: p. 76-90.
[11] Lopez-Grao, J.P., Merseguer, J.E. and Campos, J., From UML activity diagrams to Stochastic Petri nets: application to software performance engineering. 2004. New York, NY, USA: ACM.
[12] Baresi, L. and Pezzè, M., On Formalizing UML with High-Level Petri Nets. Concurrent Object-Oriented Programming and Petri Nets, 2001: p. 276-304.
[13] Gardner, T., UML modelling of automated business processes with a mapping to BPEL4WS. 2003.
[14] Ouyang, C., et al., Translating Standard Process Models to BPEL. Advanced Information Systems Engineering, 2006: p. 417-432.
[15] Verbeek, E. and van der Aalst, W., Woflan 2.0: A Petri-Net-Based Workflow Diagnosis Tool. Application and Theory of Petri Nets 2000, 2000: p. 475-484.
[16] van der Aalst, W., et al., ProM 4.0: Comprehensive Support for Real Process Analysis. Petri Nets and Other Models of Concurrency - ICATPN 2007, 2007: p. 484-494.
Part V
Platforms for Enterprise Interoperability
Gap Analysis of Ontology Mapping Tools and Techniques Najam Anjum1, Jenny Harding1, Bob Young1 and Keith Case1 1
Wolfson School of Mechanical & Manufacturing Engineering, Loughborough University, Loughborough, Leicestershire, UK
Abstract. Mapping between ontologies provides a way to overcome dissimilarities in the terminologies used in two ontologies. Some tools and techniques to map ontologies are available, with some semi-automatic mapping capabilities. These tools are employed to join similar concepts in two ontologies and overcome possible mismatches. Several types of mismatches have been identified by researchers, and certain overlaps can easily be seen in their descriptions. Analysis of the mapping tools and techniques through a mismatches framework reveals that most of the tools and techniques target only the explication side of the concepts in ontologies, and very few of them address conceptualization mismatches. Research therefore needs to be done in the area of detecting and overcoming conceptualization mismatches that may occur during the process of mapping. The automation and reliability of these tools are important because they directly affect the interoperability between different knowledge sources. Keywords: Ontology Mapping, Ontology Mismatches, Ontology Mapping Tools and Techniques
1. Introduction

Ontologies have proven to be very helpful in explicitly defining concepts, along with their relations and attributes, in a formalized way. The characteristic of ontologies being sharable requires the formulation of techniques to allow seamless knowledge transfer between them and thus make discrete systems more interoperable. This problem of interoperability can be resolved by mapping ontologies. The tools and techniques available for mapping ontologies, however, are not fully automatic, and most parts of the mapping process require human involvement. In order to make these tools more reliable and automatic, the mismatches that exist in ontologies need to be studied carefully, and the tools available for their detection and resolution require analysis from the mismatches perspective. In this paper an effort has been made to review the ontology mismatches identified by researchers. A framework is then developed from this review and used to analyze some of the available mapping tools and techniques. The results of this analysis are then discussed.
2. Mapping of Ontologies

Mapping is the process in which, for each concept in the source ontology, a corresponding concept with similar semantics in the target ontology is found [1]. Typically a mapping process consists of three main stages: 1) mapping discovery, 2) mapping representation and 3) mapping execution [2]. Since there needs to be a similarity between the ontologies to be mapped, the mapping discovery stage corresponds to a search for this similarity. Once the similarities are detected, a mapping plan is generated in the mapping representation stage and finally the mapping is executed. Due to the heterogeneous nature of ontologies, the mapping process is subject to mismatches in their components and building blocks. Being the specification of a conceptualization, an ontology is considered to consist of five components and sets of their definitions: a set of Class definitions, a set of Function definitions, a set of Relation definitions, a set of Instance definitions and a set of Axiom definitions [3]. Differences in the way these five components are defined give the ways in which ontologies can mismatch, and these are now discussed.

2.1. Ontology Mismatches

Different types of mismatches have been defined by different authors. The most quoted classification is the one given by Visser et al. [3], who divided semantic mismatches into two main types, namely Conceptualization mismatches and Explication mismatches. Some other mismatches have also been identified. A brief overview of these mismatches is given below.

2.1.1 Conceptualisation Mismatches

These mismatches occur either due to a difference in the way conceptualisations are distinguished in two ontologies or in the way they are related to each other in an ontology. Hence the two types are Class mismatches and Relation mismatches.

Class Mismatch: This mismatch occurs due to the way classes in two ontologies are differentiated from each other. It can further be divided into two types, namely a categorisation mismatch and an aggregation level mismatch. A categorisation mismatch takes place when two similar classes in two ontologies contain different subclasses. An aggregation level mismatch arises when two ontologies define a similar class at different levels of abstraction.

Relation Mismatch: This mismatch happens due to differences in the relations and attributes of classes. Three further subdivisions of this type of mismatch are structure mismatch, attribute assignment mismatch and attribute type mismatch. A structure mismatch occurs when a conceptualisation is specified in two ontologies using a similar set of classes or subclasses but the structuring and relation setting is different. An attribute assignment mismatch occurs when two ontologies assign attributes to two similar classes differently. An attribute type mismatch comes into play when two ontologies contain similar instances in their classes but these instances differ in the way they are defined.

2.1.2 Explication Mismatches

Explication mismatches are due to differences in the way conceptualisations are defined in an ontology. The definitions of classes, relations and instances are considered to be a 3-tuple of term, definiens and concept, i.e. Def = ⟨Term, Definiens, Concept⟩ [3]. An explication mismatch can arise when any of these three components of the 3-tuple in two ontologies differ in some way. The relation between terms, definiens and concepts is that definiens use terms to define a concept. For example, the definition of a Pen can be 'a writing device' or it can be 'a hollow cylinder filled with ink'. Both of these definitions attempt to describe the concept of a pen, but they use different definiens and different terms. In the first one the definiens target the application of the pen, while in the second one the structure of a pen is made the basis of its description. With these differences in terms, definiens and concepts, there can be six combinations of explication mismatches in ontologies. These are: concept mismatches (C-Mismatches), definiens mismatches (D-Mismatches), term mismatches (T-Mismatches), concept and definiens mismatches (CD-Mismatches), concept and term mismatches (CT-Mismatches) and finally term and definiens mismatches (TD-Mismatches). These mismatches are discussed below.

2.1.3 Concept Description Mismatches

Named the Modelling Convention mismatch by Chalupsky [4], this type of mismatch comes under the Class Mismatch category of Visser et al. [3]. This specific type, however, is not identified by them and therefore becomes an additional type of Class Mismatch. A concept description mismatch occurs when a concept is defined using different sub- or super-classes. For example, Chalupsky [4] states that to distinguish between tracked and wheeled vehicles, one choice is to make two subclasses of Vehicle, Tracked-Vehicle and Wheeled-Vehicle; alternatively, an attribute of Wheeled can be defined with a relation of Traction-type.

2.1.4 Model Coverage and Granularity Mismatch

This is another type of the class mismatch of Visser et al. [3], defined by Klien [5] and Chalupsky [4] as the Model Coverage and Granularity mismatch. As the name suggests, this mismatch occurs when two ontologies define the same concept with different levels of granularity. For example, a list of names can come under a class Persons or, to make it more detailed, the class Person can further be divided into Male and Female. This mismatch appears to be similar to the aggregation level mismatch of Visser et al. [3], but this similarity is not recognized by Klien [5] and Chalupsky [4].
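Since the six explication combinations follow mechanically from which components of the 3-tuple differ, they can be enumerated directly. The short Python sketch below (ours, purely illustrative) classifies the explication mismatch between two definitions:

```python
def explication_mismatch(d1, d2):
    """d1, d2: (term, definiens, concept) 3-tuples as in Visser et al. [3]."""
    diff = {label for label, a, b in zip("TDC", d1, d2) if a != b}
    # the six combinations named in the text (all-equal and all-different omitted)
    for name in ("C", "D", "T", "CD", "CT", "TD"):
        if diff == set(name):
            return name + "-Mismatch"
    return None

# 'Pen' example from the text: same term and concept, different definiens
print(explication_mismatch(("pen", "a writing device", "PEN"),
                           ("pen", "a hollow cylinder filled with ink", "PEN")))
# -> D-Mismatch
```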
2.1.5 Single vs. Multi-Valued Property

This is the first of three mismatches that Qadir et al. [6] claim to be different from the mismatches previously identified by other authors. This mismatch occurs when a data-type or object property is represented in the same class but takes a different number of values in two ontologies. The authors give an example of a class named Bank_Account. In the ontology of one bank, this class might take just one value because that bank doesn't allow its clients to have more than one account, while in another, the class with the same name might allow multiple values (i.e. to represent several different accounts) according to its policy.

2.1.6 Unique vs. Non-Unique Valued Property

This mismatch occurs when in one ontology a property can hold only one value that uniquely determines the subject, while in another ontology there can be multiple values but they cannot identify the subject uniquely [6]. The authors quote the example of one university ontology in which a student is identified by a unique rank number recognized by all departments, while in another ontology the university requires multiple ranks corresponding to different departments and none of them individually determines the student uniquely.

2.1.7 Alignment Conflict among Disjoint Relations

This is a mismatch occurring when a disjoint relation in one ontology is not valid in the other. For example, a class Student can be declared as disjoint with the class Employee in one ontology, while in another a student is also allowed to be an employee of an institution [6].

2.2. The Mismatches Framework

Table 1 shows the framework formed by accumulating the possible ontological mismatches described in the previous section. These mismatches are divided into two categories. The main list of semantic mismatches, in relation to which all the other mismatches are analyzed, is from Visser et al. [3]; their work is the most quoted in the mismatches literature. Mismatches explained by other authors mostly overlap with those described by Visser and colleagues. For example, the categorization and aggregation level mismatches of Visser et al. are similar to the scope differences of Wiederhold [7] and the scope mismatch of Klien [5] and Qadir et al. [6]. On the explication mismatch side, the concept and definiens mismatch of Visser et al. [3] has equivalents in Wiederhold [7] and Klien [5] under the names Attribute Scopes and Homonym Terms mismatch respectively. Similarly, the term mismatch and definiens mismatch of Visser et al. [3] are referred to as Naming Differences and Encoding Differences respectively by Wiederhold [7].
Table 1. Comparison of Identified Ontology Mismatches (the contraction Mm is used to denote Mismatch)

Conceptualization Mm / Class Mm:
- Categorization Mm (Visser et al. 1997) ~ Scope Differences (Wiederhold 1994) ~ Scope Mm (Klien 2001) ~ Scope Mm (Qadir et al. 2007)
- Aggregation-level Mm (Visser et al. 1997) ~ Scope Differences (Wiederhold 1994) ~ Model Coverage and Granularity Mm (Klien 2001; Chalupsky 2000)
- Concept Description Mm ~ Modelling Conventions (Chalupsky 2000)

Conceptualization Mm / Relation Mm:
- Structure Mm (Visser et al. 1997)
- Attribute-assignment Mm (Visser et al. 1997)
- Attribute-type Mm (Visser et al. 1997)
- Single vs Multi-valued property (Qadir et al. 2007)
- Unique vs Non-unique valued property (Qadir et al. 2007)
- Alignment conflict among disjoint relations (Qadir et al. 2007)

Explication Mm:
- Concept Mm (Visser et al. 1997)
- Definiens Mm (Visser et al. 1997) ~ Encoding Differences (Wiederhold 1994)
- Term Mm (Visser et al. 1997) ~ Naming Differences (Wiederhold 1994)
- Concept & Definiens Mm (Homonyms) (Visser et al. 1997) ~ Attribute Scopes (Wiederhold 1994) ~ Homonym Terms Mm (Klien 2001)
- Concept & Term Mm (Visser et al. 1997)
- Term & Definiens Mm (Synonyms) (Visser et al. 1997) ~ Synonym Terms Mm (Klien 2001)

The cumulative mismatches column of the table collects the union of all of the above, together with the Coverage Mm.
3. Mapping Tools and Techniques

Table 2 lists some of the main techniques used to map ontologies. These techniques include frameworks like MAFRA, OIS and IFF, and mapping methods and tools like GLUE, FCA-Merge and ONION. These techniques are analyzed for the similarity measures they use to align ontologies and the way they verify the connections made between the mapped ontologies. For brevity, a description of these techniques is not included here. A summary of the similarity and verification parameters that these techniques use can be seen in Table 2.

Table 2. Ontology Mapping Techniques

S.No. | Authors | Technique | Similarity Parameters | Verification Parameters
2 | Maedche et al. [8] | MAFRA (MApping FRAmework) | Lexical similarity; property similarity (attributes or relations) | Object identity establishment; statistical analysis of transformations
3 | Calvanese & Lenzerini [9] | OIS (Ontology Integration System) | Replies to queries (views) | Completeness; soundness; exactness
5 | Doan et al. [10] | GLUE | Concept instances | Similarity metrics (probability of similarity of instances)
6 | Noy & Musen [11] | PROMPT | Class names | Any term-matching algorithm can be plugged in
7 | Noy & Musen [11] | AnchorPROMPT | Anchor points | -
8 | Mitra & Wiederhold [12] | ONION | Concept names | Context extracted from corpus-based word relator
9 | Stumme & Maedche [13] | FCA-Merge | Concept names | Context extracted from corpus of domain-specific documents
10 | McGuinness et al. [14] | Chimaera | Term names, presentation names, term definitions, possible acronym and expanded forms, names that appear as suffixes of other names | Name resolution list and taxonomy resolution list
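Most of the similarity parameters in Table 2 reduce, at bottom, to string or structure comparisons. As a hedged illustration of the mapping discovery stage only (this is not the algorithm of any tool listed above), the following Python sketch proposes candidate concept pairs by lexical similarity using the standard library:

```python
from difflib import SequenceMatcher

def lexical_similarity(a: str, b: str) -> float:
    """Crude case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def discover_mappings(source_concepts, target_concepts, threshold=0.8):
    """Stage 1 (mapping discovery): propose concept pairs above a
    lexical-similarity threshold; stages 2 and 3 (representation,
    execution) would consume these candidate pairs."""
    candidates = []
    for s in source_concepts:
        for t in target_concepts:
            score = lexical_similarity(s, t)
            if score >= threshold:
                candidates.append((s, t, round(score, 2)))
    return sorted(candidates, key=lambda x: -x[2])

print(discover_mappings({"Person", "BankAccount", "Employee"},
                        {"person", "Bank_Account", "Student"}))
# e.g. [('Person', 'person', 1.0), ('BankAccount', 'Bank_Account', 0.96)]
```

A purely lexical matcher of this kind can, at best, surface explication-side similarities; it says nothing about how the matched concepts are structured, which is exactly the gap discussed below.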
Table 3. Analysis of Mapping Tools and Techniques from the Mismatches Point of View (A - Automatic, U - Suggests solution to the user, M - Provides Mechanism, Mm - Mismatches). The rows of the table list the conceptualization and explication mismatches of Table 1; the columns record, for each of MAFRA, PROMPT, AnchorPROMPT, GLUE, QOM, ONION, FCA-Merge and Chimaera, its detection and resolution capability for each mismatch.
4. Analysis

Table 3 uses the mismatches framework developed from the review of typical ontological mismatches. The matrix formed here helps in analyzing the available tools and techniques from the mismatches point of view. Three symbols are used to denote the capability of a particular method to detect and resolve a mismatch, as done by Klien [5]: 'A' stands for automatic and represents a capability of automatically detecting or resolving a mismatch; 'U' stands for user and symbolizes the suggestions a tool offers to the user to solve a particular mismatch; and 'M' denotes the mechanism provided to the user, by a tool or technique, to detect or resolve a mismatch. Before any results are drawn from this analysis, it is necessary to clarify that the tools and techniques are designed to find similarities, while the mismatches literature stresses the dissimilarities that are present among ontologies. Hence, the filled fields in Table 3 indicate that a certain tool or technique overcomes a particular mismatch in one ontology by connecting a differently placed or named concept to a corresponding concept in another ontology. A quick glimpse of this table reveals some empty fields, representing a lack of available features in tools and techniques to detect and resolve conceptualization mismatches. Most of the tools and techniques provide a mechanism to the user to detect and resolve mismatches. It can be seen from Table 3 that QOM (Quick Ontology Mapping) and Chimaera have a mechanism for the user to detect conceptualization mismatches. This is because in QOM the breadth of scope of the similarity measure allows this technique to cover all of the mismatches to be detected; in Chimaera, it is its detailed and user-friendly interface that helps the user to manually detect any kind of mismatch. On one hand this shows that the available tools and techniques need to be made more automatic, and on the other it indicates that these tools should be modified to target conceptualization mismatches. It is also clear from Table 2 that the available tools and techniques mainly focus on finding the similarities, rather than the dissimilarities, between the concepts in two ontologies and then establishing correspondences. So the main steps involved in every technique are:
1. scanning ontologies for similar concepts,
2. authenticating the similarity through different algorithms and tools,
3. establishing correspondences.
The second step is the one which deals with the verification of knowledge in shared ontologies, and it is here that research so far has mainly been directed towards the explication side of terminologies and concepts; the conceptualization side of the interpretation of terms and concepts is virtually void of any significant work. Table 3 shows that only AnchorPROMPT provides automatic detection of one of the conceptualization types of similarity and also suggests the possible correspondence that can be established between specific concepts in the ontologies to be mapped. The other two tools, QOM and Chimaera, just provide information about the structure of ontologies so that it becomes easier for the user to detect some conceptualization similarity.
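The gap reading of Table 3 can be made mechanical. The sketch below (ours, purely illustrative; only the three capability entries stated in this section are filled in, and a full version would carry one record per cell of Table 3) lists the tools with no recorded detection capability for conceptualization mismatches:

```python
# Capability codes as in Table 3: "A" automatic, "U" suggests a solution
# to the user, "M" provides a mechanism.
detection = {
    ("QOM", "conceptualization"): "M",
    ("Chimaera", "conceptualization"): "M",
    ("AnchorPROMPT", "conceptualization"): "A",
}

tools = ["MAFRA", "PROMPT", "AnchorPROMPT", "GLUE",
         "QOM", "ONION", "FCA-Merge", "Chimaera"]

def detection_gap(kind):
    """Tools with no recorded detection capability for a mismatch kind."""
    return [t for t in tools if (t, kind) not in detection]

print(detection_gap("conceptualization"))
# -> ['MAFRA', 'PROMPT', 'GLUE', 'ONION', 'FCA-Merge']
```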
The gap identified here suggests that research is required to find ways in which different conceptualization mismatches can be detected and resolved, in order to give accuracy to the process of mapping and thus verify the knowledge being shared. An improvement in the mapping process will also aid the effort of making these tools and techniques increasingly automatic. This accuracy and automation is directly proportional to the interoperability between knowledge systems. Employment of a more accurate and automated approach is, therefore, vital from the interoperability perspective and needs to be considered seriously.
5. Conclusion

It has been shown here that although mapping similar concepts in two ontologies or knowledge systems helps to improve interoperability, and several tools and techniques have been developed to overcome certain mismatches, these techniques are directed only towards the explication side of similarity detection. This approach is certainly effective in its own right, but an exploration of the conceptual side of similarity detection and mapping may also help to improve accuracy and reduce manual work, thereby making the tools and techniques more automatic. This side of the mapping research area therefore needs further attention.
6. References

[1] Ehrig, M. and Staab, S., 2004, "QOM - Quick Ontology Mapping", pp. 683-697.
[2] de Bruijn, J., Ehrig, M., Feier, C., Martin-Recuerda, F., Scharffe, F. and Weiten, M., 2006, "Ontology Mediation, Merging and Aligning", in Davies, J., Studer, R. and Warren, P. (eds), Semantic Web Technologies: Trends and Research in Ontology-based Systems, Wiley, UK, 2006.
[3] Visser, P.R.S., Jones, D.M., Bench-Capon, T.J.M. and Shave, M.J.R., 1997, "An Analysis of Ontology Mismatches; Heterogeneity versus Interoperability", in AAAI 1997 Spring Symposium on Ontological Engineering, Stanford, USA.
[4] Chalupsky, H., 2000, "OntoMorph: A translation system for symbolic knowledge", Proc. 17th International Conference on Principles of Knowledge Representation and Reasoning (KR'2000).
[5] Klien, M., 2001, "Combining and relating ontologies: an analysis of problems and solutions", Workshop on Ontologies and Information Sharing, IJCAI'01.
[6] Qadir, M.A., Fahad, M. and Noshairwan, M.W., 2007, "On conceptualization mismatches between ontologies", IEEE International Conference on Granular Computing (2007).
[7] Wiederhold, G., 1994, "An algebra for ontology composition", in Proceedings of the 1994 Monterey Workshop on Formal Methods, pp. 56-61.
[8] Maedche, A., Motik, B., Silva, N. and Volz, R., 2002, "MAFRA - A MApping FRAmework for Distributed Ontologies", EKAW '02: Proceedings of the 13th International Conference on Knowledge Engineering and Knowledge Management. Ontologies and the Semantic Web, Springer-Verlag, pp. 235-250.
[9] Calvanese, D. and Lenzerini, M., 2001, "Ontology of integration and integration of ontologies", in Proceedings of the 2001 Description Logic Workshop (DL 2001), pp. 10-19.
[10] Doan, A., Madhavan, J., Domingos, P. and Halevy, A., 2002, "Learning to map between ontologies on the semantic web", WWW '02: Proceedings of the 11th International Conference on World Wide Web, ACM, pp. 662-673.
[11] Noy, N.F. and Musen, M.A., 2003, "The PROMPT suite: interactive tools for ontology merging and mapping", International Journal of Human-Computer Studies, Vol. 59, No. 6, pp. 983-1024.
[12] Mitra, P. and Wiederhold, G., 2002, "Resolving terminological heterogeneity in ontologies", Workshop on Ontologies and Semantic Interoperability at the 15th European Conference on Artificial Intelligence (ECAI-2002).
[13] Stumme, G. and Maedche, A., 2001, "Ontology merging for federated ontologies on the semantic web", in Proceedings of the International Workshop for Foundations of Models for Information Integration (FMII-2001), pp. 413-418.
[14] McGuinness, D.L., Fikes, R., Rice, J. and Wilder, S., 2000, "An environment for merging and testing large ontologies", Proc. 17th Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR'2000).
Networked Enterprise Transformation and Resource Management in Future Internet Enabled Innovation Clouds Brian Elvesæter1, Arne-Jørgen Berre1, Henk de Man2 and Man-Sze Li3 1 2 3
SINTEF ICT, P. O. Box 124 Blindern, N-0314 Oslo, Norway Cordys, P. O. Box 118, 3880 AC PUTTEN, The Netherlands IC Focus, 42 Clifton Road, London N8 8JA, United Kingdom
Abstract. This paper presents the NEFFICS platform, a software platform that serves as the basis for a new innovation-driven ecology for networked enterprises, built on top of an established cloud-based, software-as-a-service business operation platform combined with an advanced innovation management software platform. The business context and value of the new platform will be demonstrated and validated in enterprise communities in two major European industrial sectors, by optimising their operational performance and innovation capacity. Open business model innovation and process/service/product innovation will be managed and measured to demonstrate value generation at the business level. Keywords: cloud, software-as-a-service, business operation platform, open innovation
1. Introduction and Motivation

The Internet is already changing the way in which enterprises operate, cooperate and interoperate with other enterprises. This paper presents the NEFFICS (Networked Enterprise transformation and resource management in future internet enabled Innovation Clouds) platform approach, which is based on the hypothesis that Internet and Web based technologies can be leveraged not only to increase efficiency, but also to strengthen the value proposition of enterprises by adding incremental value (incremental innovation) as well as creating game-changing value (radical innovation). In this respect, we consider the Internet as a universal business system which enables value generation in enterprises collaborating in open knowledge innovation zones (i.e., networked enterprises). Value creation through innovation lies at the heart of the approach. Already, there is a movement towards a broader concept of value creation beyond economic
value, and defining new measurements for success beyond economic performance. For example, enterprises of the future will need to embrace the different perspectives of sustainability, because of customer demands and image, the cost of raw materials associated with natural resources, and likely regulatory developments that target specific environmental and social practices undertaken by enterprises. Such practices will increasingly have economic consequences and impact on the balance sheet of businesses. By the same token, enterprises will increasingly expect and demand ICT to meet a broader set of business objectives beyond cost reduction and efficiency improvement. ICT will need to support a new form of innovation so that growth is not purely profit-driven in the short term, but a sustainable value creation engine. The entire economic paradigm for business may change in the long run, leading to a fundamental transformation of how businesses are capitalised, assets are determined and stakeholders are involved; in short, a transformation of the notion of enterprises and therefore also of the understanding of and approach to business models. Management of innovation is becoming a core concern, and business application innovation has to keep pace with and directly support business innovation. Traditional IT, and in particular ERP, does not adequately support transformation, management of knowledge work and participation in or management of networked enterprises. A new class of IT platforms, so-called Business Operation Platforms (BOP) [1], is emerging and is better suited for these tasks. A BOP provides real-time connectivity in a distributed environment, based on Service-oriented Architecture (SOA) in an Internet of Services context of the Future Internet [2]. It supports effective analysis through models and rapid implementation of change, partly by enabling application modularization and reuse through web services, and partly by adopting the paradigm of "what you model is what you execute". A BOP can be considered as the operating system to manage the business, from the cloud. The main contribution of this paper is to present the architecture for a platform that can support innovation in networked enterprises, through the combination, integration and interoperability of two different existing platforms: a Business Operations Platform interoperating with an Innovation Community Platform. The business motivation for such platforms is in line with the analysis and recommendations of the Position Paper of the European Commission's FInES Cluster [3].
2. NEFFICS Architecture

In our vision, future enterprise networks will be value networks, engaging suppliers, business partners and customers in (1) new kinds of value transactions; (2) new modes of exchange; (3) new organisation structures and models; and (4) new business practices. Compared to existing supply chains, these value networks will be more open, flexible, adaptive, participatory and peer-to-peer. They will require a new generation of service platforms, which NEFFICS targets.
2.1. Vision

The NEFFICS platform vision is to enable Enterprise Networks to dynamically operate in cloud-based environments, independent of geographic location, with collaborative networked business operations support for knowledge workers and business services, integrated with real-time resource management for the Internet of Services, Internet of Things/RFID and Internet of People from different partners, and linked directly to Innovation Community services. Based on the above vision, the specific problem that NEFFICS tries to solve is threefold:
• What is the foundation of such enterprise systems?
• How can their rapid adoption by enterprises be enabled?
• What are their benefits for enterprises?
For the first, the NEFFICS system will be built using a cloud-based paradigm and related innovation concepts for networked enterprises, based on a software platform running on top of the public Internet, where basic, utility-like functionality for business operation can be assumed. For the second, NEFFICS will demonstrate how networked enterprises can make management and innovation of their business more effective and efficient, based on a Business Operations Platform combined with an Innovation Community Platform for open innovation [4], with associated methods and models. For the third, the business relevance and benefits of these results will be demonstrated and evaluated in two networked enterprise communities, also involving additional stakeholders and communities, in respect of value generation, new business opportunities and the corresponding business models.

2.2. Approach and Architecture

The NEFFICS platform will be based on the European Cordys Business Operations and MashApp platform [5], combined with the European Induct Innovation Community platform [6]. The NEFFICS platform will be extended from classical business processes to the support of knowledge worker processes and services, including support for both human and IT based processes and services. The platform will also be extended with innovation tools. The innovation tools will support both open innovation and community/group based innovation with various security and access levels. Both the Business Operations platform and the Innovation Community tools will use an underlying Internet of Services for accessing business services. The NEFFICS platform will also integrate use of the Google Apps cloud platform and the Collaboration and Interoperability services platform from the European COIN project; access to RFID/sensors will be through the European ASPIRE project. NEFFICS will investigate several interlinked models and methods to support network-based business innovation. As a starting point, a value delivery model and methodology for networked business value delivery analysis will be used for a value analysis of the planned and executing process and service models supported by the business operations platform. The value analysis will include a
network based value model and support for a business ecology model with evaluation factors such as financial, business and social sustainability, performance, and quality (the OMG Business Ecology Initiative, amongst others, will be reviewed in this context). Bottleneck and improvement areas in the processes and services of the networked businesses will be identified as potential areas for innovation, and will be supported by complementary innovation management services.

Fig. 1. Overview of the NEFFICS approach (layers, from top to bottom: NEFFICS community of highly innovative networked enterprises; applications for highly innovative networked enterprises, i.e. the Virtual Factory Network and the Connected Retail Network; networked process and service models (BPMN 2.0, SoaML, OSM, CMPM) and networked business value analysis models (VDM), with networked enterprises' MashApp applications and process support; networked innovation models, leadership and management processes (CEN/389); networked innovation community services (Innovation Community Platform); enterprise SaaS cloud business operations and orchestration platform (Business Operations Platform); Google Apps/Waves, cloud computing & Web 2.0 platform; networked enterprises on the Internet of Services and Things)
The NEFFICS platform, methods and models will be applied and evaluated in two different networked business applications:
• Virtual Factory Network: a virtual extended factory for a business network with integrated semantic sensor/RFID information, through Vlastuin.
• Connected Retail Network: a connected retail network, through Telecom Italia.
The business benefits of the NEFFICS platform will be identified and evaluated in respect of each of the business application domains. Assessment of new business opportunities and the corresponding business models will be provided. In addition, the NEFFICS platform, methods and models will be widely disseminated and will be validated by additional communities. NEFFICS will actively contribute to standardisation of the results.
3. NEFFICS Platform

The NEFFICS platform will be based on the European Cordys Business Operations Platform (BOP) and the European Induct Innovation Community Platform (ICP). These two platforms will be combined and extended from the support of single enterprises to the support of networked enterprises.

3.1. Business Operations Platform (BOP)

The Cordys Business Operations Platform (BOP) is a single platform that enables organizations to design, execute, monitor, change and continuously optimize critical business processes, services, applications and operations wherever they are deployed. The Cordys BOP is web-based and fully SaaS enabled, with no client implementation requirements other than a web browser. The platform is grid enabled and contains comprehensive master data management (MDM) capabilities, which ensure that one version of the truth is used right across the enterprise, thereby reducing operational costs and delivering a complete, high-performance business process management enabled solution. The BOP supports modelling and management of business processes, through models, based on various business process modelling paradigms. Fig. 2 provides an overview of the capabilities of the product.
Fig. 2. Technical architecture
The SaaS Deployment Framework (SDF) platform component enables dynamic provisioning of applications and rapid on-boarding of tenants; manages the relationships between applications, tenants and users; and measures (meters) utilization of billable entities.
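As a purely hypothetical sketch of the bookkeeping such a component implies (this is not the Cordys API; all names are invented for illustration), a minimal tenant registry with metering could look as follows in Python:

```python
from collections import defaultdict

class TenantRegistry:
    """Hypothetical sketch of SDF-style bookkeeping: which tenant runs
    which application, and how much billable usage each has accrued."""
    def __init__(self):
        self.subscriptions = defaultdict(set)   # tenant -> {applications}
        self.usage = defaultdict(float)         # (tenant, app) -> metered units

    def provision(self, tenant: str, app: str):
        """Rapid on-boarding: record that a tenant runs an application."""
        self.subscriptions[tenant].add(app)

    def meter(self, tenant: str, app: str, units: float):
        """Measure (meter) utilization of a billable entity."""
        if app not in self.subscriptions[tenant]:
            raise ValueError(f"{tenant} is not provisioned for {app}")
        self.usage[(tenant, app)] += units

registry = TenantRegistry()
registry.provision("vlastuin", "virtual-factory")
registry.meter("vlastuin", "virtual-factory", 12.5)
```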
The Cordys Process Factory (CPF) platform component offers a simple, effective mechanism for anyone in the world to create and deploy situational applications and MashApps (mashups of web-based, process-centric applications). The BOP provides a "unified" SOA-based server that serves as the repository and management facility for all artefacts generated throughout the process lifecycle. The system repository hosts and associates process models, development assets (such as integration code), configuration files and data transformation definitions.

3.2. Innovation Community Platform (ICP)

The Induct Innovation Community Platform (ICP) is a single platform that enables the creation of virtual, social software-based innovation communities where ideas and challenges meet experience and knowledge. Employees with different skills and knowledge collaborate and develop ideas in an efficient way. By organizing resources this way, companies can facilitate an environment where employees can utilize their competence, experience and personal skills in the areas where they are most talented. The result is increased productivity, employee satisfaction, and new levels of energy across the organization.
Fig. 3. Innovation community
ICP enables the creation, management and measurement of entire innovation processes, including customizable task management, ensuring that steady progress toward goals is maintained. The implementation process can be divided into any number of different stage-gate phases based on various types of innovation. ICP supports a level of customization that is not available from other innovation management solutions: the entire system, including look and feel, help text, innovation types and subtypes, ranking algorithms and the innovation process, can be customized with no programming required. The NEFFICS platform will integrate functionality of the Cordys BOP platform and the Induct ICP platform using a service-oriented interoperability approach over
a cloud-based infrastructure to further develop collaborative means in the context of innovation management. As illustrated in Fig. 1, the development of new networked innovation community services will take advantage of platform functionality from both the BOP and ICP platforms.
4. NEFFICS Use Case Scenarios

The NEFFICS platform, methods and models will be driven by the needs of two user communities: the Virtual Factory Network through Vlastuin, and the Connected Retail Network through Telecom Italia.

4.1. Future Manufacturing Scenario

The purpose of this case study is to develop a virtual extended factory (VEF) in the cloud supporting the (networked) value chain of a company in the manufacturing industry. The virtual extended factory makes use of cloud techniques to integrate the business processes of the companies that participate in the VEF. Collaboration within and between companies will be supported by extended use of Web 2.0 technology. Chain-wide business process innovation will be an integrated part of the VEF solution. This is a new and innovative concept in the manufacturing industry, where most enterprises rely on old-fashioned ERP systems. The concept is innovative, not in the sense of developing new technology, but in the sense of using a combination of existing new technologies in a way that has not been done before.
Fig. 4. VEF environment – business perspective
Vlastuin, an SME manufacturing company in the metal subcontracting industry, has a lot of experience in aligning business processes and IT. Whereas the production departments adopted the Lean concept, the IT departments tried to implement this concept in the manufacturing software. During the development of this manufacturing software, the idea arose that in theory it would be possible to integrate the purchase and production control processes. The main idea is to set up a virtual extended factory in the cloud that can be linked in an innovative way to the business applications of other companies in the networked value chain. The business applications can be back-end systems that have to be at least partly opened to the cloud. These back-end systems can be linked directly to the VEF, but another option is to build a cloud-based business process service layer between the back-end systems and the VEF.

4.2. Future Retail Scenario

Traditionally, manufacturers were the dominant forces in the supply chain in the consumer goods industry. With the trend toward retail consolidation and the emergence of large retailers, power in the supply chain has been shifting toward the retail level, making supply chain management a central node of the retail business [7]. Responding to this pull in a coordinated and efficient way, in order to avoid what in the literature is known as "the bullwhip effect" [8], requires approaches known as Efficient Consumer Response (ECR) [9], based on constant access to fresh information. Information availability is extremely important at all stages of the supply chain. For example, suppliers need information from the retailer on sales, inventory turnover, and feedback on competitors or on the level of customer returns. Information is also needed from consumers on attitudes toward the products, brand loyalty, willingness to pay, etc. Retailers need, for example, sales forecasts, information on product specifications, advance notice of new models, training materials for complex products, and information from consumers on their shopping needs, where else they shop and their satisfaction level with the retailer and the merchandise [10]. Retailers play a crucial role in collecting information on consumers, because they have direct contact with customers at the point of sale and can collect information which goes beyond sales or scanning data and is important for marketing and logistics. They can thus traditionally act as gatekeepers in the supply chain who are able to control information flows.
Fig. 6. Interactions in a CRN
The purpose of this case study in the context of NEFFICS is to develop a connected retail network (CRN) in the cloud supporting the value chain of a company in the fashion industry. The connected retail network makes use of cloud techniques to establish the connection between customer and supplier, pushing the concept of ECR to its very edge and complementing the retailers' flow of information. This connection is complemented with the customers' experiences inside premises ranging from directly operated flagship stores to franchising stores. Retailers and suppliers will also use cloud services to complement traditional enterprise resource planning (ERP) systems or merchandise information systems (MIS) already in place. Collaboration within the CRN will be supported by extended use of Web 2.0 technology. Business process innovation will be an integrated part of the CRN solution.
5. Conclusions and Future Work

The NEFFICS platform starts from two of the most advanced cloud and SaaS based platforms already available for business operations and innovation communities, from the European providers Cordys and Induct Software respectively. These two platforms will be combined and extended from the support of single enterprises to the support of networked enterprises in innovation clouds. The ideas presented in this paper are expected to be implemented in forthcoming activities involving the organisations mentioned in this paper. Future work will focus on a closer integration of the platforms and a practical evaluation and validation of the approach in the two application areas of future manufacturing and future retail. Strategically, the work could lead to the definition of value delivery models and innovation-centric business models. Future standardisation activities relate particularly to contributions to the OMG standardisation on a value chain metamodel (VDM), case management (CM) and organisational structure modelling (OSM), as well as the CEN TC389 standard on innovation management. This paper presented the NEFFICS platform, which is based on cloud-enabled innovation. It is our belief (to be validated in forthcoming work) that it will particularly benefit SMEs. The cloud is emerging as the new "business arena" where participants will join in innovating and operating their networked enterprise business. Cloud-based infrastructures will further contribute to improving sustainability in many different respects, of which "green IT" is one example. In summary, such infrastructures open up new value propositions and business opportunities for enterprise networks, which the NEFFICS platform will address.
6. References

[1] J. Pyke, "The Business Operations Platform Imperative", bpm.com. http://www.bpm.com/the-business-operations-platform-imperative.html (last visited 2009).
[2] Eurescom, "European Future Internet Portal", European Institute for Research and Strategic Studies in Telecommunications (Eurescom). http://www.futureinternet.eu/home.html (last visited 2009).
[3] M.-S. Li, M. Kürümlüoğlu, M. Mazura, and R. v. d. Berg, "Future Internet Enterprise Systems (FInES) Cluster Position Paper", European Commission, Information Society and Media, 1 September 2009. ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/enet/20090901-fines-positionpaper_en.pdf
[4] H. Chesbrough, "Open Business Models: How to Thrive in the New Innovation Landscape", Harvard Business School Press, USA, 2006.
[5] Cordys, "Business Operations Platform (BOP)", Cordys. http://www.cordys.com/cordyscms_com/business_operations_platform.php (last visited 2009).
[6] Induct Software, "Innovation Communities", Induct Software. http://www.inductsoftware.com/InductWeb/index.aspx (last visited 2009).
[7] J. Zentes, D. Morschett, and H. Schramm-Klein, "Strategic Retail Management: Text and International Cases", Gabler, Betriebswirt.-Vlg, 2007, pp. 297-316.
[8] H. L. Lee, V. Padmanabhan, and S. Whang, "Information Distortion in a Supply Chain: The Bullwhip Effect", Management Science, vol. 43, no. 4, pp. 546-558, 1997.
[9] A. H. Kracklauer, D. Q. Mills, and D. Seifert, "Collaborative Customer Relationship Management: Taking CRM to the Next Level", Berlin, Springer, 2004, pp. 59.
[10] B. Berman and J. R. Evans, "Retail Management: A Strategic Approach", 10th ed., Upper Saddle River/NJ, Prentice Hall, 2007, pp. 226-227.
Opportunity Discovery Through Network Analysis Alessandro Cucchiarelli1, Fulvio D’Antonio1 1
Polytechnic University of Marche – Italy
Abstract. Discovering opportunities, defined as situations that can be exploited to obtain valuable outcomes, is the main task of entrepreneurs. In this paper we present a formalization of the process of discovering opportunities in a network of research organizations; we introduce opportunity networks, constituted by a network modeled as a directed attributed multigraph, an opportunity exploiter, a set of opportunity patterns and opportunity ranking functions. We show how the discovery of joint-research collaboration opportunities in a research-oriented network can be formulated in terms of the proposed formalism. Keywords: Social network analysis, data mining, graph transformations, recommendation systems
1. Introduction

The entrepreneur's main task is to discover and exploit opportunities, defined as situations that can be exploited to obtain valuable outcomes. Far from being a trivial task, this kind of work can involve creativity, experience, knowledge of the domain, social capital and an uncountable number of other things. There is currently no agreement among entrepreneurship researchers on the major concepts used to define and operationalize the processes in question. Moreover, it is a highly subjective task; different people will discover different opportunities based mostly on their previous experiences and beliefs [6]. Not all opportunities are obvious to everyone all of the time, and different entrepreneurs can find different ways to use available resources more proficiently. Timing is a crucial point: an entrepreneur can identify a business opportunity and then discover that another "swift" entrepreneur has already exploited it. "It's not the big that beats the small - it's the fast that beats the slow," says Niklas Zennström, one of the Skype founders. In order to assist entrepreneurs in the process of discovering opportunities we propose a formal framework whose aim is to suggest a unifying view of the
process. This framework will allow the fast prototyping of opportunity recommendation systems and their customization for specific domains. We believe that one of the most valuable resources in discovering opportunities is knowledge of the complex of relations holding among the "players" in a particular domain: who are the actors of the domain the entrepreneur is exploring? What types of connections hold among them? What is their strength? Domains in which the connections among actors are relevant can be conveniently described by network structures. Networks [3][9] (called "graphs" in the mathematical domain) include at least a set of nodes and a set of edges (directed or not), also called links. More refined types of networks include attributes on both edges and nodes. In this paper we present a formalization of the notion of opportunity discovery in a network; opportunity networks are constituted by a directed attributed multigraph, an opportunity exploiter, a set of opportunity patterns and opportunity ranking functions. The paper is organized as follows: in Section 2 we describe related work; in Section 3 we discuss the formalization of opportunity networks; and in Section 4 we present the application of opportunity networks to the discovery and ranking of opportunities in research networks in a real world case study, the INTEROP-NoE. In Section 5 we draw our conclusions.
2. Related Work
The notion of opportunity appears frequently in the literature on social network analysis, i.e. where the network structure reflects some kind of social relation among actors, such as friendship, affiliation, joint business or enterprise districts. In [2] the network activities of entrepreneurs through the phases of establishing a firm are studied. In [1] the major factors that influence the core process of opportunity discovery and development are identified. They include:
• entrepreneurial alertness;
• information asymmetry and prior knowledge;
• social networks;
• personality traits, including optimism, self-efficacy and creativity;
• the type of opportunity itself.
In [6] the author focuses on the prior knowledge and experience necessary for successful recognition of opportunities: entrepreneurs will discover only opportunities connected to the knowledge they already have. In [7] patterns of tie formation in emerging organizations are studied and classified; for example, the transformation of the network over time can be classified as evolution, renewal or revolution according to the number of nodes/edges added. We believe that most of the work in this area is focused on the psychological and social traits of entrepreneurs, on the investigation of the entrepreneurial process once the opportunities have been discovered, or on tracing the evolution of the network following the exploited opportunities.
By contrast, our work is oriented to the creation of a framework with "predictive" potential rather than ex-post analysis power: we are interested in specifying opportunity patterns that can be used by an entrepreneur as a recommendation instrument for the detection of novel opportunities in a given domain.
3. Opportunity Networks
An opportunity network is a tuple (G, E, O, R) where G is a network modeling the relationships among actors in the domain of interest, represented as a directed attributed multigraph; E is an exploiter, i.e. an actor interested in discovering opportunities; O = {Opp1, …, Oppk} is a set of opportunity patterns expressed as graph transformations; and R = {Ranking1, …, Rankingh} is a set of ranking functions, where every Rankingi : Matches(Oppi, G) → ℝ+ is a mapping from the set of possible matches of the pattern Oppi in G to the non-negative real numbers. Informally speaking, a graph transformation can be modeled as a pair of graphs (L, R), called the left-hand side L and the right-hand side R. Applying the transformation p = (L, R) means finding a match of the graph L in the graph G and substituting L by R, obtaining the transformed graph G'. Technically the matching process is called embedding and it is realized by constructing a graph morphism from L to G. A complete description of graph transformation concepts is out of the scope of this paper; details can be found in [4][5]. The concept of opportunity is here modeled to reflect a potential transformation of the graph by the exploiter: if a match of the pattern is found in the network, the exploiter can try to act in the real world to realize the transformation. For example, if the opportunity discovered is a business partnership between two nodes of a graph G that models the business relations among firms, the entrepreneur can try to establish a collaboration between the corresponding firms in the real world. If this action succeeds, the change in reality will induce a change in the model (i.e. G → G'). Transformations induced in the network can include (in order of increasing complexity) modifying node attributes, adding new edges, creating cliques, deleting subgroups, etc. The semantics associated with such transformations depends on the application domain: in business modeling networks a new edge between nodes can be interpreted as a new business collaboration, while in research networks a new edge can represent, for example, a joint-paper collaboration. The exploiter is the actor that searches for opportunities in the network in order to obtain valuable outcomes from them. No special constraint is placed on the nature of the exploiter: it can be an internal exploiter, a node of the graph or a sub-graph (a single enterprise or a consortium), or an external exploiter interested but not directly involved in the phenomena occurring in the network (e.g. public administrations "observing" and stimulating the growth of industrial networks).
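To make the formalism concrete, the following sketch (our illustration, not part of the original formalization; all names are hypothetical) encodes an opportunity network in Python: patterns are functions returning matches in G, rankings map matches to non-negative reals, and realizing an opportunity appends the new edge, i.e. performs the step G → G'.

    # Illustrative sketch of an opportunity network (G, E, O, R).
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Set, Tuple

    Edge = Tuple[str, str, str]   # (source, target, kind), e.g. kind = "coauth"

    @dataclass
    class Graph:                  # G: directed attributed multigraph
        nodes: Set[str] = field(default_factory=set)
        edges: List[Edge] = field(default_factory=list)

    @dataclass
    class OpportunityNetwork:
        graph: Graph                                     # G
        exploiter: str                                   # E: node id or external actor
        patterns: Dict[str, Callable[[Graph], list]]     # O: name -> matcher
        rankings: Dict[str, Callable[[object], float]]   # R: match -> non-negative real

    def realize(g: Graph, new_edge: Edge) -> Graph:
        """Apply the transformation p = (L, R): if the exploiter succeeds
        in the real world, the model changes from G to G'."""
        g.edges.append(new_edge)
        return g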
4. Research Opportunity Networks
In [8] we proposed a method for analyzing research networks. Here we "stretch" the concept of "entrepreneur" a little to include organizations seeking joint-research opportunities in a network of research organizations. We re-formulate the experiments carried out in [8], essentially the analysis of the INTEROP-NoE network1, in terms of the proposed opportunity network formalism, i.e. the tuple (G, E, O, R). We now describe in detail the instantiation of each component of this tuple in the concrete case study of INTEROP-NoE.
4.1. The Network: The Similarity/Co-authorship Graph
In order to discover joint-research opportunities we create a graph with a single node type representing research units and two types of attributed edges: similarity links and co-authorship links. The creation of the similarity edges involves a number of complex steps; an extensive description can be found in [8], and we summarize the process in the following. Given a set RU of research units and a corpus D of documents produced by these units, we annotate the documents with terminological (weighted) vectors of terms taken from an ontology O describing the research domain in which the units of RU operate (e.g. the computer science, medical or cultural heritage domain). The vectors are intended to describe the content of a document in compact form (indeed each vector represents a sort of summary). The terms in O (on the order of 10³) are then clustered to obtain a dimensionality reduction of the feature space (to the order of 10²), and the vectors are remapped onto this reduced space. We thus have for each unit r in RU a set of vectors describing its documents; we call this set of documents r(D). We compute for each r ∈ RU the centroid (i.e. the mean vector) of r(D). Eventually we create a network G = (RU, E) where the nodes are the research units and the edges can be divided into two classes: E = Esim ∪ Ecoauth. Esim contains edges connecting each pair a, b of RU with weight wsim(a, b) equal to the cosine similarity between the centroids of a and b; if the cosine similarity of a pair is 0 there is no corresponding edge in the graph. Such edges and their associated weights represent an estimation of the similarity of the research units based on the similarity of the centroids of the documents they produced. It must be taken into account that the ontology O has an impact on this estimation: different ontologies can lead to different results, just as different viewpoints can highlight different aspects of things.
1 EC-funded INTEROP Network of Excellence (NoE), now continuing its mission within a permanent institution named "Virtual Laboratory on Enterprise Interoperability". http://www.interop-vlab.eu
Ecoauth edges are weighted with the number of jointly produced documents, i.e. |a(D) ∩ b(D)| for each pair a, b of units in RU; again, edges with weight 0 are not included in the graph. Figure 1 shows the similarity/co-authorship network of INTEROP-NoE (filtered according to user-defined threshold values for the edge weights). Curved lines are used for co-authorship edges and bent lines for similarity edges. Line thickness is proportional to the value of the corresponding relation, and node size to the number of publications of the associated group. Opportunities for research units can be visually identified by spotting similarity edges not coupled with co-authorship edges.
Figure 1. The INTEROP similarity/co-authorship network (filtered)
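The similarity-edge construction can be paraphrased in a few lines of code. The sketch below is ours (toy data, hypothetical names); it assumes the ontology-based annotation and clustering steps of [8] have already mapped each document to the reduced term space, and computes the centroid of each unit's document vectors and the cosine weight wsim.

    # Reconstruction of the similarity-edge computation (our sketch).
    import numpy as np

    def centroid(doc_vectors: np.ndarray) -> np.ndarray:
        # one row per document produced by a research unit r, i.e. r(D)
        return doc_vectors.mean(axis=0)

    def w_sim(a_docs: np.ndarray, b_docs: np.ndarray) -> float:
        # cosine similarity between the two centroids; 0 means "no edge"
        u, v = centroid(a_docs), centroid(b_docs)
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        return float(u @ v / (nu * nv)) if nu and nv else 0.0

    # toy data: two research units, three reduced term dimensions
    ru_a = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0]])
    ru_b = np.array([[0.7, 0.3, 0.1]])
    weight = w_sim(ru_a, ru_b)   # weight of the similarity edge between a and b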
4.2. The Exploiters
We assume two kinds of exploiters:
• A set of internal exploiters, hereafter referred to as the research units, that are interested in establishing new research co-operations. They are constituted by the RU set of nodes of the graph.
• An external exploiter, hereafter referred to as the research watchman, that is interested in the growth of the network in terms of co-operations. This exploiter is not part of the network but is interested in the network evolution from an external viewpoint.
This was exactly the case of INTEROP-NoE: from the local perspective, the research units were interested in starting new collaborations, while from the global perspective the project manager was interested in showing the European Commission the success of the project in terms of de-fragmentation of the European research network around interoperability themes.
4.3. The Opportunity Patterns: How to Detect whether there are Joint-Research Opportunities in a Research Network?
In this paper we restrict ourselves to possibilities of collaboration between pairs (dyads) of members of the network; a more general approach could also consider larger (≥3) groups. The opportunity of a collaboration is expressed by the absence of a co-authorship edge in the network, i.e. opportunities are expressed by a set of transformations aiming at creating new edges in Ecoauth. The exploiter takes advantage of the opportunity if it creates such a new edge. This simple pattern can be expressed in terms of a graph transformation using a NAC (Negative Application Condition) [4][5]. A graph transformation with a NAC is a triple (N, L, R); the transformation is applicable if a match for L can be found in the network and a match for N cannot. An example will clarify this concept:
[OpportunityPattern Opp1, given as the triple (N, L, R): N contains nodes A and B linked by a coauth edge; L contains nodes A and B with no edge; R contains nodes A and B linked by a new coauth edge.]
Figure 2. The basic opportunity pattern Opp1 in research networks
The transformation depicted in figure 2 is to be interpreted as follows: if two nodes A and B can be found in the network (the left-hand side L) and an edge of co-authorship linking A and B does not exist (the negative application condition N), then create an edge of co-authorship between A and B (the right-hand side R). This may seem a very complicated way of expressing the simple fact that joint-research opportunities must be sought wherever a co-authorship link does not exist, but the graph transformation formalism is powerful enough to express a wide range of opportunity patterns across different domains, possibly including complex graph conditions in the N, L, R parts. All the matches of the opportunity pattern Opp1 in the network G are expressed by a set of pairs:
Matches(Opp1, G) = { {x, y} : x, y ∈ RU, {x, y} ∉ Ecoauth }
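Operationally, enumerating Matches(Opp1, G) amounts to listing the node pairs that violate the NAC, i.e. pairs with no co-authorship edge. A minimal sketch (ours, with hypothetical unit names):

    # Matches of Opp1 (our sketch): unordered pairs of research units
    # with no co-authorship edge between them.
    from itertools import combinations

    def matches_opp1(research_units, coauth_edges):
        coauth = {frozenset(e) for e in coauth_edges}   # undirected pairs
        return [{x, y} for x, y in combinations(research_units, 2)
                if frozenset((x, y)) not in coauth]

    units = ["RU1", "RU2", "RU3"]
    print(matches_opp1(units, [("RU1", "RU2")]))
    # -> [{'RU1', 'RU3'}, {'RU2', 'RU3'}]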
4.4. The Rankings: Ranking Opportunities for Research Units as Exploiters
We assume in this sub-section that the feasibility of a research collaboration between research units depends somehow on the similarity of their research topics. We present here the argument on which the ranking function is based:
• New research collaborations are likely to be established between two research units A and B if the similarity of A and B is high, because it
means that A and B share a lot of interests, probably speak the same "language", etc.
Obviously, this natural-language formulation of a ranking function must be quantified and formalized in order to be used. We formalize the argument as follows, for each {x, y} ∈ Matches(Opp1, G):
ranking1({x, y}) = wsim(x, y)
This expresses the fact that a collaboration opportunity between two research units has a rank directly proportional to the weight of the similarity edge associated with them (if any).
4.5. The Rankings: Ranking Opportunities for the Research Watchman as the Exploiter
In monitoring the evolution of a research network over time, an external actor could be interested in phenomena such as the variation in the number and size of connected components, the detection of de facto communities around common research topics, or the birth of bridges and cutpoints [9] joining otherwise isolated communities. Let us consider a research watchman whose duty is to guide the evolution of the network towards a more stable structure. He can accomplish this task through the reduction of the number of connected components of the graph, the creation of resilient bridging structures among different parts of the network, and a more or less uniform growth of the number of co-operations and joint research interests. This figure, far from being an abstract one, includes for example the project managers of NoEs (Networks of Excellence, a type of EC-funded project): the main outcome of a NoE is building a sustainable research network among partners, mainly by integrating different competences and increasing the number of cross-collaborations. We assume therefore, as a basic example, that our watchman is interested in edge-creation opportunities that reduce the number of connected components, and hence network fragmentation, by creating bridges (bridges are edges connecting two otherwise disconnected components [9]). The ranking function for the Opp1 pattern will be:
ranking'1({x, y}) = 1.0 if {x, y} is a bridge for G
ranking'1({x, y}) = 0.5 otherwise
As this ranking function shows, the research watchman is interested in the growth of collaborations, but especially in those collaborations that will de-fragment the network. In the real world he will probably encourage such "bridge" collaborations to be established before other ones.
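A hedged sketch of this ranking (ours; the paper does not prescribe an implementation, and we use the networkx library for the connectivity test) follows. A candidate edge {x, y} is a bridge in the paper's sense exactly when x and y lie in different connected components:

    # Watchman ranking (our sketch): 1.0 for candidate edges that would
    # join two disconnected components, else 0.5.
    import networkx as nx

    def ranking_watchman(g: nx.Graph, x, y) -> float:
        return 1.0 if not nx.has_path(g, x, y) else 0.5

    g = nx.Graph()
    g.add_edges_from([("RU1", "RU2"), ("RU3", "RU4")])  # two components
    print(ranking_watchman(g, "RU2", "RU3"))  # 1.0: de-fragments the network
    print(ranking_watchman(g, "RU1", "RU2"))  # 0.5: already connected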
5. Conclusions
In this paper we have introduced and formalized the concept of opportunity networks and we have shown how it can be applied to the task of discovering and ranking joint-research opportunities in a research organization
network. In future work we will proceed along this line by exploring more complex opportunity patterns and different application domains, such as industrial or business networks.
6. References
[1] A. Ardichvili, R. Cardozo, and S. Ray, "A theory of entrepreneurial opportunity identification and development", Journal of Business Venturing, 18(1): 105-123, 2003.
[2] A. Greve and J. W. Salaff, "Social networks and entrepreneurship", Entrepreneurship Theory & Practice, 28(1): 1-22, 2003.
[3] M. E. J. Newman, "The structure and function of complex networks", SIAM Review, 45(2): 167-256, 2003.
[4] R. Heckel, "Graph transformation in a nutshell", in Proceedings of the School on Foundations of Visual Modelling Techniques (FoVMT 2004) of the SegraVis Research Training Network, Electronic Notes in TCS, vol. 148, pp. 187-198, Elsevier, 2006.
[5] G. Rozenberg (ed.), Handbook of Graph Grammars and Computing by Graph Transformation, Vol. 1: Foundations, World Scientific, 1997.
[6] S. Shane, "Prior knowledge and the discovery of entrepreneurial opportunities", Organization Science, 11(4): 448-469, 2000.
[7] T. Elfring and W. Hulsink, "Networking by entrepreneurs: Patterns of tie formation in emerging organizations", Organization Studies, 28(12): 1849-1872, 2007.
[8] P. Velardi, R. Navigli, A. Cucchiarelli, and F. D'Antonio, "A new content-based model for social network analysis", Proc. of the IEEE Int. Conf. on Semantic Computing, Santa Clara, CA, USA, 2008.
[9] S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications, Cambridge University Press, 1994.
Reflections on Aspects of Enterprise Interoperability
Jane Hall1, Klaus-Peter Eckert1
1 Fraunhofer FOKUS, Kaiserin-Augusta-Allee 31, 10589 Berlin, Germany
Abstract. Enterprise interoperability has been the subject of much work in recent years. This paper reports on VISP, a European research project which investigated various aspects of enterprise interoperability between small ISPs doing business in a virtual cluster. It provides an overview of the work undertaken, discusses the points considered successful, and identifies what would still be required to ensure seamless and reliable interoperability between small ISPs undertaking business together in a cluster. A perspective relating the work in VISP to cloud computing concludes the paper. Keywords: enterprise interoperability, workflow, ISPs, services, cloud computing
1. Introduction
Enterprise interoperability has been the subject of much work in recent years and its significance in times of economic uncertainty is increasing. The ability of enterprises to cooperate and interoperate in order to conduct business has many aspects, not just technological but also socio-economic [1], and projects in the area of enterprise interoperability have touched upon these aspects with different focuses according to the work undertaken. This paper reflects on aspects of enterprise interoperability that were pursued by the European project VISP (IST-FP6-027178) when investigating the development of the software support required to enable small ISPs to do business together in a virtual cluster. The advantages for SMEs of collaborating in today's marketplace are numerous. The trigger for the VISP project was the rapidly changing and competitive environment, which was creating survival problems for small firms in a difficult economic situation. This paper discusses results from the VISP project, which was set up to investigate and develop a lightweight and affordable solution for distributed service provisioning and technical process automation, particularly suitable for SMEs collaborating in a cluster environment in order to produce tailored ISP services adapted to their local business needs.
The structure of the paper is as follows. Section two introduces the position of small ISPs and their situation in the marketplace together with the requirements for collaboration. The software solution that was developed in the VISP project is presented in section three. How the software solution can be applied to the service life cycle is the subject of section four. Section five discusses the achievements of the project and what is still to be done for commercial application. Section six provides a perspective for future development with the cloud computing paradigm.
2. Problem: Collaborating Businesses
The ISPs referred to in this paper are small firms providing Internet Protocol (IP) enabled internet and communication services, primarily to business customers who are themselves often SMEs. These small ISPs offer tailored niche services rather than volume services and develop long-term relationships with their customers by anticipating and meeting their needs. The types of service that an ISP offers can be divided into a number of categories, such as access services, bandwidth services, application services, hosting services, security services and telephony services. A variety of services can be offered and composed into a bundle to meet a growing demand from ISP customers for increasingly specialised, complex and individualised services. A small ISP can soon exhaust the availability of appropriate customers and is limited in what it can offer. However, a small ISP in a cluster will be able to provide a wider range of tailored services by composing services for individual customers from services offered by partners in the cluster as well as from its own services. The small ISP will be selling aggregated services based on the expertise of the cluster as a whole, providing the services in an integrated manner for delivery. In the VISP cluster each partner is an independent organisation with its own resources, which it uses to provide services. Two operating modes are foreseen. In the Community operating mode each partner owns its own customers and the cluster is not visible externally but is used dynamically as a pool of services for subcontracting; the cluster has no assets, but a legal agreement defines the cooperation between the partners. In the Virtual Enterprise operating mode the cluster is a registered trade organisation and all partners share the revenues of the enterprise based on the terms defined in the agreement. The cluster owns the customer relationship, the customer data and the customer transactions. A service request from a customer is dispatched to a partner, which then acts as the cluster mediator and interfaces with the customer on behalf of the cluster. The vision of VISP, to enable a cluster of small ISPs to create a virtual organisation and undertake business together, requires not only ideas and theory but also the capability to carry out the vision by implementing software that can meet the challenges of such cooperation day in, day out. Processes needed to be automated so that the ISPs could react more rapidly and flexibly in order to remain competitive. The tools and technology were emerging to enable the VISP vision of enterprise interoperability to be developed and become reality. The aim was to provide an automated environment for cluster partners that could provide services to customers efficiently and thus ensure the competitiveness
and value of the cluster in the marketplace. Various aspects, such as process modelling and workflows, service decomposition and modelling, and the use of ontology-based concepts were investigated and developed. The technologies had to be open and affordable for small ISPs to use efficiently and reliably. Open standards were adopted to avoid being locked into proprietary systems and, where possible, open source products were preferred. The objective of the VISP project was therefore to provide a software solution supporting enterprise interoperability that would enable the most appropriate individualised services to be provided to each customer. Partners have to be able to communicate and interwork, and the software supporting this has to be carefully designed using well-defined interfaces. The complexity increases with the number of processes, the number of partners and the diversity of platforms involved in the communication process. Enterprise interoperability thus represents a major cost and needed to be optimised for the cluster to work effectively.
3. Solution: Open Application à la VISP
The solution for collaborating ISPs was the development of an open application that could support enterprise interoperability between the ISPs in the cluster when pursuing their business. Such an application has to be reliable, highly available and scalable, and able to execute all the operations required by cluster partners over the entire life cycle of ISP services. The application developed is a lightweight solution for distributed service provisioning and technical process automation in a clustering environment of multiple partners (see Fig. 1). It helps ISPs at multiple levels of their technical service life cycle, from the specification of services and related technical processes up to the trading of these services with partners, including the automation of the corresponding technical processes. The main features of the application are introduced below.
Software platform: The VISP project designed and developed an automated software platform which allows the modelling, deployment and execution of workflows that support the provision of services to customers. It allows cluster partners and their customers to manage and monitor workflows in a secure way and consists of two major parts. The Workflow Modelling and Specification Platform enables the modelling and specification of workflows using different technologies via an integrated tool chain. The Workflow Execution Platform then executes and controls these workflows. The platform interfaces either directly or through mediation devices with partners' ERPs and with network and system components. Workflows running on partners' workflow engines are interoperable and enable interaction between the partners.
Workflows: Effective inter-enterprise process automation was required. VISP analysed and compared workflow technologies and software in order to ascertain their suitability for use in such a cluster [2]. The application is workflow-based, and workflow technologies provide the software infrastructure used by the cluster partners when offering and delivering services to customers and collaborating within the cluster. The modelling and specification of workflows formed a major part of the project work. VISP developed ISP service implementation and
provisioning workflows, allowing the operations in a cluster of partners to be automated. A top-down approach was adopted, starting from the business workflows: first an informal textual specification, then a more formal one using a template; this was then modelled in BPMN and refined until it could be mapped to BPEL. The approach raised many challenges and issues that had to be resolved consistently, so guidelines were established for use within the project. The approach provided a high level of abstraction, integration, efficiency, re-usability and effective achievability. Business workflows were specified both for multi-lateral as well as for binary relationships. Where available, these used standard processes, such as those from the OAGIS specifications, together with the associated WSDL items [3].
Fig. 1. The VISP Architecture
Technical processes were specified and workflows developed for interacting directly with network elements. The ability to perform technical operations is at the heart of an ISP business and directly impacts scalability and growth. The aim was to provide formalised workflow specifications of technical processes that were previously executed manually, in order to be able to process them automatically in a standardised way. The technical knowledge involved is hard to acquire, maintain, exchange and especially share between the many internal and external actors of a company. Unlike business processes, technical processes had not been standardised, so innovative work was undertaken here based on the knowledge provided by network engineers and technicians, resulting in a repository of technical processes for use by all cluster partners [4]. The VISP application manages families of technical procedures per service and allows all the operations of the life cycle of a service to be performed. A state machine and model is defined for this life cycle, with at least 16 different administrative operations defined as part of the full service life cycle. Each operation matches a given technical process and can thus be automated. Technical
processes can combine fully automated activities with human activities; such activities can be synchronous or asynchronous, short-lived or long-lived. Some technical processes can even consist mainly of workflows involving humans. These technical procedures interface with network, OSS and BSS (ERP) elements in a non-intrusive way through configurable mediation servers. Web service interfaces were defined using WSDL. The application is a thin, lightweight and controllable software layer that can run stand-alone outside existing ISP systems. It manages the process automation and communicates through mediation servers with the ISP's network elements and the front- and back-office servers, including the ERP.
Repositories: VISP designed several repositories for storing required information on partners, customers, service profiles, technical processes, service descriptions, service sets and service instances. An ISP Service Knowledge Base was built by decomposing ISP services into atomic service elements, specifying these service elements in a textual language and formalising the semantic links of these service elements and of their parameters with formal ontologies. 77 services were fully specified, representing a unique source of technical information. The ISPs and even their customers can explore this knowledge base and share a common language and understanding of what ISP services are technically, which is essential for collaboration [5]. The common information repositories are a valuable benefit to the partners in the cluster, and the more partners contribute to this knowledge base, the stronger and more attractive the cluster becomes.
Service composition: Each service can be configured and dimensioned individually, and multiple services can be combined in a service set, which can be validated against technical constraints using service ontologies [6]. Using this feature, sales and customer support representatives can ensure that services are correctly configured and combined. The result can be associated with a customer and communicated to an ERP (e.g. SAP) to serve as a technical annex to offers or contracts. Service sets can be managed as part of a Service Set Catalogue; they can be exchanged and duplicated to support other customers or to create modified versions.
Technologies: The emergence of Web services technology and XML in conjunction with expanding Internet use has been a significant development in promoting enterprise interoperability. The widespread availability of XML and Web services enabled the VISP project to automate processes across organisation boundaries and support inter-enterprise process interoperability. Web services are loosely coupled, which enables applications on different platforms to interoperate. By relying mainly on XML-based technologies, the VISP project simplified and harmonised the implementation of its solution as much as possible. This is a key aspect for SMEs that cannot afford to use multiple BSS and OSS technologies.
SOA: The VISP software is based on an open, scalable and distributed Service Oriented Architecture (SOA) where all services of all components are exposed as internal Web services described by WSDL interfaces. Many components are specified in XML and are consequently easy to manipulate, for instance the WSDL interfaces, the OWL ontologies, and all the executable technical processes (BPEL). Enterprise services and all technical processes are exposed as Web services which can easily be found.
These components can in turn be called by applications other than the VISP application. The use of SOA thus played an essential part in promoting enterprise interoperability in VISP.
4. Focus: Service Life Cycle
The services offered by the members of a VISP cluster follow a precise life cycle defining the order of the relevant administrative and technical operations and states (see Fig. 2). In order to meet a customer request, a sales representative (SR) checks the VISP cluster Service Knowledge Base (SKB) for possible solutions and combinations of services. The SKB contains technical information about the services in the cluster. The Market Directory (MD) can also be searched to ascertain which partners are offering which services and the availability of these services. Once the component services required to support a designated service set have been selected, the SR groups the services provided by cluster partners into a candidate service set. This service set is the result of the composition of the services into one group and enables the selection and combination of services, characteristics and values to be validated. Each service set contains a dependency tree defining the execution sequence of the service-related administrative operations as a specific service composition instruction. The SR stores the definition of the service set in a personal Service Set Catalogue and validates it.
Fig. 2. VISP Service Life Cycle
In the next step the SR instantiates the service set. Instantiation is a means of reserving resources so that if the customer accepts the offer, the service can be provided to the customer. Instantiation is therefore based on the confirmation of resource reservations from partners contributing services to the service set. If a cluster partner is not able to provide a service of the service set, an appropriate cluster member is selected by means of trading mechanisms performed according to a particular economic model. Trading is thus part of the service instantiation process and has a business (quote, contract) as well as a technical (instantiation) result. Details of the instantiated service set are stored in the Service Instance Base (SIB), which stores all trading and deployment information concerning the service
instances comprising the service set. The technical part of the offer can be transferred to an ERP for full offer preparation, if applicable; this can include all contractual terms and conditions, billing and payment details, and SLA information. The offer is then made to the customer and further negotiations may ensue. If the offer is accepted, the sales representative starts the commissioning of the service set. Commissioning has to be performed according to the requirements of the offer, and in particular its timing. The resources previously reserved are allocated to the service set, and once commissioning has been carried out, the service set can be activated on the date agreed with the customer. Commissioning and activation of the service set are carried out automatically with the technical workflows developed in VISP. A request for service termination typically implies three steps (deactivation, decommissioning and deinstantiation of the service instance), so that its status becomes 'historic'. Again, the technical workflows in the infrastructure carry this out. This concept has been completely implemented and validated for a "Voice over IP" service and several SMS services, covering the whole spectrum from business-related administrative operations to technical operations wrapping and managing ISP-specific hardware and software.
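The life cycle just described is essentially a state machine. The following sketch is ours: the state and operation names are inferred from the text, and it covers only a subset of the at least 16 administrative operations the full VISP model defines.

    # Minimal sketch of the service-set life cycle (our reconstruction,
    # not the VISP specification).
    TRANSITIONS = {
        ("validated", "instantiate"): "instantiated",     # resources reserved
        ("instantiated", "commission"): "commissioned",   # resources allocated
        ("commissioned", "activate"): "active",
        ("active", "deactivate"): "deactivated",
        ("deactivated", "decommission"): "decommissioned",
        ("decommissioned", "deinstantiate"): "historic",
    }

    class ServiceSet:
        def __init__(self, name):
            self.name, self.state = name, "validated"

        def perform(self, operation):
            key = (self.state, operation)
            if key not in TRANSITIONS:
                raise ValueError(f"{operation!r} not allowed in state {self.state!r}")
            # each administrative operation maps to a technical process/workflow
            self.state = TRANSITIONS[key]

    s = ServiceSet("VoIP bundle")
    for op in ("instantiate", "commission", "activate"):
        s.perform(op)
    print(s.state)   # active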
5. Lessons Learned
The development of a workflow-based application for service management, service provisioning and technical process automation provided much insight into how such interoperability can best be supported. Many challenges arose when developing the application, requiring further investigation into the features being adopted and leading to additional work to resolve the issues. The project was able to show that an enterprise interoperability infrastructure based on workflow technology was feasible and that it could provide an effective automated environment enabling small ISPs to cooperate in offering and delivering services. The VISP solution speeds up service provisioning time, partners can efficiently build new products by trading services with cluster partners, and the application is easy to integrate into OSS and BSS software. The same solution can also be used by a single ISP between different internal constituents, or even stand-alone without any clustering. The application is fully based on a SOA, which makes it interoperable with other software. Standards such as BPMN, BPEL, WSDL and XML were used widely in the work and this was a significant factor in achieving interoperability between partners. Because of the specific properties of VISP services, dedicated WS-accessible repositories were introduced to define the VISP white and yellow pages instead of using generic solutions like UDDI or LDAP. Almost all the tools were open source, which reduced the cost of ownership. Over 90% of the code was generated using modelling tools, and guidelines on how to use the tool chain were developed. The software was implemented incrementally in three releases, with the final release supporting several partners according to the Virtual Enterprise operating mode and validated with an external partner. Other topics, which were outside the scope of the project, would need to be considered for full cooperation in a cluster, including those mentioned below.
Cluster membership: Partner management and partner life cycle phases were investigated and concepts produced. The status of a partner is included in the partner description maintained in the Partner Directory, which keeps track of all current and past partners of the cluster; the status can be 'candidate', 'trial', 'full' or 'historic' and defines the kind of access that the named partner has to the VISP repositories and application. Processes to support these aspects would be required but were out of scope of the project. In addition, trust must exist between partners collaborating closely in business; work undertaken on a trust and contract management framework in projects such as Trustcom [7] could be beneficial here.
Shared knowledge: As well as the information stored in the repositories, it would be very useful for partners to be able to update each other and exchange information on trends concerning their business. Knowledge management providing central, role-based access to information for the VISP cluster, allowing VISP users to exchange documents and information across technical system boundaries, was investigated in the VISP project but was out of scope for implementation. Projects such as Synergy, which is looking at knowledge management and sharing to stimulate collaboration within virtual organisations, could provide useful input in this domain [8].
Trading: VISP investigated the various trading models that could be used between partners, and eight trading models were established based on [9]. These are different ways of selecting partners when more than one can offer a service in a proposed service set. Apart from the conceptual work undertaken in the VISP project, further development was limited to a restricted set of models. Some agreed and consistent means of selecting partners would be needed to provide a satisfactory basis for cooperation in cluster business, together with appropriate billing and accounting mechanisms between the cluster partners.
6. Perspective
The VISP platform has been specified and implemented using state-of-the-art software for Business Process Management and Web services, such as eClarus BPMN, ActiveBPEL, Intalio BPM, Apache Axis and Orbeon. In parallel with the development of VISP, the software vendors improved their products towards comprehensive BPM and SOA suites. A reimplementation of VISP would therefore use these newer tools and in particular follow the trend from SOA suites to Clouds, as for example Intalio is promoting in its Enterprise Cloud Computing strategy [10]. Cloud computing is a way of managing large numbers of highly virtualised resources such that, from a management perspective, they resemble a single large resource [11][12]. This approach is very close to the VISP idea of providing a service set as a "highly virtualised resource" provided by the members of a virtual enterprise. As mentioned above, the design and implementation of VISP, as well as of any cloud implementation, follow the service-oriented paradigm. Thus several analogies between VISP and Clouds can be identified, especially concerning the life cycle of Cloud services. These interdependencies can be used either to provide an implementation of VISP in a Cloud environment or to improve Cloud environments using VISP technical concepts and operating modes.
Fig. 3. Cloud Service Life Cycle
In a private Cloud both the consumer of Cloud software services and the provider of these services exist within the same enterprise. A public Cloud is one in which the consumer and the provider of Cloud services exist in separate enterprises. A hybrid model combines multiple elements of public and private Cloud, including any combination of providers and consumers, and may also contain multiple service layers. In the case of a virtual enterprise like VISP, VISP services or service sets are offered to the customer by the virtual enterprise as a kind of "Software as a Service" (SaaS). From the perspective of the customer it does not matter which cluster member is going to provide the service. The VISP internal trading mechanism can be mapped smoothly to service trading and selection concepts in SOA and Cloud environments. A Cloud service provider offers highly standardised software services which cover the requirements of many consumers, and interaction with consumers has to be standardised too. Life cycle management for Cloud services is thus the foundation for service adaptations and modifications. The provisioning and usage of IT resources in a Cloud environment is managed in a strict life cycle, from both the provider and the consumer perspective (see Fig. 3). The life cycle of a SaaS can be divided into the following phases. In the fulfilment phase the SaaS provider defines the service and makes it available. The SaaS consumer retrieves and selects the service and agrees on a contract by signing a service level agreement. Afterwards the provider reserves and provides the
necessary resources according to the selected service and the SLA. After the provider grants access to the service, the consumer is able to access and use it. In the assurance phase the provider monitors the service, adjusts the required resources if necessary and performs usage metering, accounting and billing. In the modification phase the service and/or the contract can be changed and adapted to the requirements of consumer and provider. In the discontinuation phase the usage contract can be terminated by either the provider or the consumer; the resources used by the service are then released. Depending on the policies of the provider, the SaaS can either be offered to other consumers or be declared unavailable. Comparing the life cycle models for VISP services and for Cloud services, the implementation of a VISP platform as a hybrid cloud, using a shared infrastructure (a virtual private cloud) between the VISP cluster members and providing SaaS towards external consumers (as a public cloud), seems a promising approach. Nevertheless, research and prototyping with available Cloud infrastructures will be necessary to prove the feasibility of this idea.
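As a compact illustration of the analogy drawn here, the VISP service-set operations can be grouped under the SaaS life cycle phases. The grouping below is our assumption for illustration, not part of the VISP or Cloud specifications, and the operation names are simplified:

    # Our illustrative mapping of VISP-style operations onto the SaaS
    # life cycle phases described in the text (an assumption, not a spec).
    CLOUD_PHASES = {
        "fulfilment": ["instantiate", "commission", "activate"],
        "assurance": ["monitor", "meter", "account", "bill"],
        "modification": ["modify_service", "modify_contract"],
        "discontinuation": ["deactivate", "decommission", "deinstantiate"],
    }

    def phase_of(operation: str) -> str:
        return next((phase for phase, ops in CLOUD_PHASES.items()
                     if operation in ops), "unknown")

    print(phase_of("commission"))   # fulfilment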
7. References
[1] Missikoff M, Dini P, Drissi S, Grilo A, Neves-Silva R, editors. Future Internet Enterprise Systems (FInES) Cluster, Research Roadmap. Version 1.3; 2009
[2] Eckert K-P, Glickman Y, Hall J, Knapik R, Renk R, Fortis F, Cicortas A. Workflow Technologies for a Virtual ISP. In: Cunningham P, Cunningham M, editors. Exploiting the Knowledge Economy: Issues, Applications, Case Studies. Amsterdam: IOS Press; 2006. p. 1631-1638
[3] Open Application Group Integration Specification, 2008. Business Object Documents. www.oagis.org
[4] Hall J, Eckert K-P. Business and Technical Workflows for E-Business in a Virtual Cluster of ISPs. In: Filipe J, Marca DA, Shishkov B, van Sinderen M, editors. ICE-B 2008, International Conference on E-Business. Proceedings, Porto, Portugal, July 26-29, 2008. Setubal: INSTICC Press; 2008. p. 307-314
[5] Hall J, Mannie-Corbisier E, Pollet H-J. Information and Service Modelling for Collaborating ISPs. In: eChallenges e-2009 Conference Proceedings. Dublin: IIMC International Information Management Corporation Ltd; 2009
[6] Hall J, Koukoulas S. Semantic Interoperability for E-Business in the ISP Service Domain. In: Filipe J, Marca DA, Shishkov B, van Sinderen M, editors. ICE-B 2008, International Conference on E-Business. Proceedings, Porto, Portugal, July 26-29, 2008. Setubal: INSTICC Press; 2008. p. 390-396
[7] Trustcom project homepage. www.eu-trustcom.com
[8] Synergy project homepage. www.synergy-ist.eu
[9] Buyya R, Abramson D, Giddy J, Stockinger H. Economic Models for Resource Management and Scheduling in Grid Computing. Concurrency and Computation: Practice and Experience. 2002; 14: 1507-1542
[10] Intalio|Cloud. Enterprise Cloud Computing. http://www.intalio.com/products/cloud/
[11] Ebbers M. Cloud Computing: Save Time, Money, and Resources with a Private Test Cloud. IBM Redguides; 2009. www.redbooks.ibm.com/redpapers/pdfs/redp4553.pdf
[12] Cloud Computing Use Case Discussion Group. Cloud Computing Use Cases White Paper. Version 2.0; 2009. http://opencloudmanifesto.org/Cloud_Computing_Use_Cases_Whitepaper-2_0.pdf
Specification of SETU with WSMO
Wout Hofman1, Menno Holtkamp1, Michael van Bekkum1
1 TNO, Brassersplein 2, 2600 GB Delft, The Netherlands
Abstract. The Organisation for Electronic Transactions within the Temporary Staffing Industry (SETU, [1]) operates a major interoperability project concerning the temporary staffing of personnel for different application areas, applying the HR-XML (www.hr-xml.org) standard to the Dutch situation. Several UML (Unified Modeling Language) class diagrams representing the different information streams have been drafted as the basis for these Dutch standards. This paper shows the relevance of applying a more formal specification approach like the Web Service Modeling Ontology (WSMO, [2]) to SETU. By applying WSMO to a case, we also identify potential weaknesses and relevant questions regarding the applicability of WSMO. Keywords: case study, formal specification, web services, HR-XML
1. Introduction
Over the past decades, services have become the most important part of economies [3]; the service economy basically refers to the service sector. It leads to more sophisticated forms of cooperation, or what is called value co-creation. A Service Oriented Architecture (SOA, [7]) is currently the common paradigm for organizational interoperability. Organizations can be integrated based on the description of their externally observable behavior, without the need for knowledge of their internal functioning. SOA stems from the need for service integration, and to that end standards for technically representing services have been specified: the Web Services Description Language (WSDL, [7]), SOAP (Simple Object Access Protocol) and the underlying XML Schema (XML: eXtensible Markup Language) are examples of the standards in the web service stack. Services can be discerned at two distinct levels of abstraction, namely a business level and an IT level. It has been widely accepted that the previously mentioned standards are not sufficient to specify the behaviour of organizations at a business level: semantics needs to be added for the purpose of service discovery, and mechanisms for behaviour definition need to be supported [1]. There are various initiatives supporting the addition of semantics to
services, for instance OWL-S (Semantic Markup for Web Services, [8] and [9]), SAWSDL (Semantic Annotations for WSDL and XML Schema, [10] and [11]) and WSMO (Web Service Modelling Ontology, [2]). In this paper, we report our research on the practical applicability of WSMO to SETU. We will first present the SETU case, then discuss WSMO, and finally present the results of our research.
2. The SETU Case
SETU is the Dutch acronym for 'Stichting Elektronische Transacties Uitzendbranche', which translates to "Organisation for Electronic Transactions in the Staffing Industry". SETU was founded in 2007 by the ABU (the Dutch association of temporary work agencies) and is a non-profit organisation creating and maintaining standards and making recommendations for the actual exchange of electronic information in the Dutch staffing industry. SETU solely develops open standards, for all staffing organizations and staffing customers in the Netherlands to use within their own environment. Typical transactions in the staffing industry are:
1. Employers and staffing suppliers most often have framework contracts under which an employer submits an order for work with its particular requirements to a staffing supplier.
2. The staffing supplier checks its available workforce for a suitable (number of) employee(s).
3. A selection is made and a (number of) employee(s) is (are) hired by the employer; this results in an actual assignment under certain conditions for a specific time.
4. An employee starts working on the agreed assignment.
5. An employee keeps track of the time worked on a certain assignment.
6. The staffing supplier invoices the employer for the work fulfilled by the employee.
7. The staffing supplier pays the employee for the fulfilled work.
SETU currently provides the following application standards to support the aforementioned transactions:
• Ordering & Selection of temporary personnel
• Assignment
• Reporting Time & Expenses
• Invoicing
These transactions allow IT support of the entire process, from ordering to invoicing. Each individual transaction is specified by a coherent set of models, including class diagrams representing the information that can be exchanged. These class diagrams are manually transformed into syntactic descriptions (XML Schema) to enable the technical exchange of information. The XML Schemas that are used
as a basis are developed by the HR-XML consortium and extended with local requirements of the Dutch staffing industry.
Fig. 1. Ordering and selection (a) and assignment (b) class diagram
Figure 1(a) depicts a (high-level) class diagram of Ordering & Selection, whereas Figure 1(b) depicts that of Assignment. Apart from class diagrams, a number of other diagrams, including context diagrams, use case diagrams, state transition diagrams and sequence diagrams, are specified. Each SETU application standard has its specific model, which reuses some classes or attributes of other models. The relation between any two class diagrams is visualized by including classes of one diagram in another; the figures above show the classes 'Supplier', 'Consumer' and 'Assignment' as a commonality in both models. At the same time, both models are separate models without any specified relationship, which makes maintenance and extension of these separate models a labor-intensive and error-prone process. Additionally, and maybe more importantly, the standard is currently based on the 'message' paradigm: one way or another, the use of the standards eventually results in a 'controlled' sequence of electronic messages. Less attention is paid to the services that the actors involved should provide, the choreography of those services, and the conditions under which these services can be executed. Business redesign based on the 'message' paradigm is becoming difficult and would require the specification of completely different standards with new associated diagrams. In section 4, we will investigate the application of WSMO to enable business redesign.
3. Web Service Modeling Ontology
The Web Service Modelling Ontology comprises concepts, development tools (WSMO Studio), an execution environment (WSMX), and a language (WSML), developed in a number of EU-funded projects, e.g. SUPER (http://www.ip-super.org/). WSMO is based on the theory of Abstract State Machines (ASM, [14]). The following design principles are the basis of WSMO (see also [1] and [12]):
• Unique identification of resources and namespaces.
• Descriptions of resources and data are ontology-based.
• Each resource is described separately to reflect the distributed nature of the web (decoupling).
• Mediation addresses the issues of interoperability of heterogeneous resources.
• User requests can be formulated independently of service provision.
• A web service is a computational entity, whereas a service provides the actual added value to its environment; a web service is the abstract specification of a service.
Four concepts are defined that support these design principles:
• An ontology provides the terminology used across all other descriptions. An ontology can be part of a WSMO document or can be imported. An ontology consists of concepts, associations, rules (called functions in WSMO), and instances (see also [12]).
• Web services are computational entities that provide functionality. The capability describes the actual value delivered across an interface with a choreography.
• Goals describe the relevant aspects related to end-user requirements.
• Mediators resolve interoperability mismatches between different elements, e.g. between two services; they solve heterogeneity problems.
The concepts 'capability' and 'interface' are the core concepts of a web service. A capability specifies the ontology used for a web service, the mediator that is applicable, and pre- and post-conditions. A pre-condition specifies the required input and a post-condition the output of a web service. Dependency between capabilities is specified by an assumption, reflecting the assumed state of the world at initiation of a web service, and an effect, representing the state after execution of the service. Most examples of capabilities are fairly simple, e.g. the booking of a trip that requires booking information (pre-condition) and results in a reservation (post-condition). The assumption for this capability is that a valid payment method is used; the effect is that the trip has been paid for by charging the account linked to the payment method for the required amount. The following example shows that a valid credit card must be offered upon payment of a trip. In this particular example, the assumption and effect refer to another web service that needs to be executed. Possibly that web service is supported by a capability offered by another actor; it is not specified how the capability of the example becomes aware of the effect (the actual payment of the trip). Thus, assumption and effect can be used to model the interleaving of two or more web services.

capability BookTicketCapability
  sharedVariables {?creditCard, ?initialBalance, ?trip, ?reservationHolder, ?ticket}
  precondition
    definedBy
      ?reservationRequest[
          reservationItem hasValue ?trip,
          reservationHolder hasValue ?reservationHolder
      ] memberOf tr#reservationRequest
      and ?trip memberOf tr#tripFromAustria
      and ?creditCard[balance hasValue ?initialBalance] memberOf po#creditCard.
  assumption
    definedBy
      po#validCreditCard(?creditCard)
      and (?creditCard[type hasValue "PlasticBuy"]
           or ?creditCard[type hasValue "GoldenCard"]).
  postcondition
    definedBy
      ?reservation memberOf tr#reservation[
          reservationItem hasValue ?ticket,
          reservationHolder hasValue ?reservationHolder]
      and ?ticket[trip hasValue ?trip] memberOf tr#ticket.
  effect
    definedBy
      ticketPrice(?ticket, "euro", ?ticketPrice)
      and ?finalBalance = (?initialBalance - ?ticketPrice)
      and ?creditCard[po#balance hasValue ?finalBalance].
Whereas a dependency between capabilities expressed by assumptions and effects represents one part of the state of the world, other parts of the state of the world are expressed in the 'choreography'. A choreography is part of the interface of a particular web service and is based on the concept of abstract state machines [14]. It specifies a state, the roles of concepts and relations, and transition rules [15]. The state is formally expressed in an ontology, i.e. it contains those concepts, associations and rules whose instances can change; one could, for instance, envisage customer data as part of the state. The roles of concepts and relations express the actions that can be performed on the instances that are part of the state, e.g. if a customer already exists, its customer data cannot be changed and is said to be 'static'. Transition rules express state changes, such as updating and deleting instances. For instance, when a new trip is booked, a transition rule can be used to relate that trip to a customer and to add a new customer to the customer data. It is not our objective to discuss the formal aspects of this specification mechanism; in the past, a lot of effort and research has been spent on developing formal specification methods [2]. In the following part of this paper we present the specification of SETU using WSMO.
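As a toy analogy of such ASM-style transition rules (ours, in Python rather than WSML, with hypothetical instance data), consider a state holding customer data and reservation requests, and a rule that relates a newly booked trip to its customer:

    # Our illustrative analogy of an ASM transition rule, not WSML syntax:
    # the state is a set of instances; the rule fires while its condition
    # (a pending request) matches, and updates the state.
    state = {
        "customers": {"c1": {"trips": []}},
        "requests": [{"customer": "c1", "trip": "Vienna-Delft"}],
    }

    def book_trip_rule(state):
        """Relate each requested trip to its customer, adding the customer
        to the customer data if it does not yet exist."""
        while state["requests"]:
            req = state["requests"].pop()
            cust = state["customers"].setdefault(req["customer"], {"trips": []})
            cust["trips"].append(req["trip"])   # state change: new relation

    book_trip_rule(state)
    print(state["customers"]["c1"]["trips"])   # ['Vienna-Delft']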
4. Application of WSMO to SETU
This section presents the application of WSMO to SETU. The application of semantic service specification is likely to provide advantages, since all knowledge and constraints that are currently contained in various documents and models can be formally bundled in a single WSMO model. This will increase the maintainability and extensibility of the SETU standards. Based on the concepts involved, constraints, capabilities, goals, and interfaces, we can potentially realize a technical
W.J. Hofman, M. Holtkamp and M. van Bekkum
grounding to accepted standards such as XML Schemas, (SA-)WSDL documents and choreographies based on, for instance, Petri nets [19]. As this grounding will be induced from formal specifications, interoperability between supporting organizations can be ‘guaranteed’ or ‘formally proven’ to a certain extent (e.g. deadlock detection) by means of software tools supporting the formalism.

We take the ordering and selection process for applying WSMO. A SETU ontology is specified, and the services that are required by the involved actors during this process are defined. A number of these services are specified in WSMO by means of goals and capabilities, using an ontology specifically created for SETU. We do not include aspects in the specification that are not relevant for exploring the possibilities of WSMO for SETU, e.g. non-functional properties of a concept, an attribute, a web service, etc. These non-functional properties refer to Dublin Core elements like creator, publisher, etc. [16]. From a modelling perspective, non-functional properties are of importance with respect to the maintenance of (parts of) the ontology.

4.1. The SETU Ontology

Figure 2 shows the SETU ontology with concepts (blue) and relations (green).
Fig. 2. SETU ontology
As part of this ontology, the concept ‘Offer’ and its typed attributes are described in more detail. The proposed ontology is incomplete, but contains the core concepts for temporary staffing. The code listing of the concept ‘Offer’ and its typed attributes is given by:

concept Offer
  offerId ofType (1 1) _string
  status ofType (1 1) _string
  order ofType (0 1) _string
  purchaseOrder ofType (0 1) _string
  customerId ofType (1 1) _string
  customerSubId ofType (0 1) _string
  supplier ofType (1 1) _string
  contactSupplier ofType (0 1) _string
  availabilityDate ofType (1 1) _date
  hoursPerWeek ofType (0 1) _decimal
  daysPerWeek ofType (0 1) _integer
  rate ofType (0 1) _string
  salaryRange ofType (0 1) _string
  reasonOfChange ofType (0 1) _string
  comments ofType (0 *) _string
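To illustrate how this concept would be populated at runtime, the following is a hypothetical WSML instance of ‘Offer’; all identifiers and values are invented for illustration and do not stem from the SETU specifications:

instance ExampleOffer memberOf Offer
  //mandatory (1 1) attributes
  offerId hasValue "OFF-2010-001"
  status hasValue "new"
  customerId hasValue "C-042"
  supplier hasValue "ExampleStaffingSupplier"
  availabilityDate hasValue _date(2010,4,1)
  //a selection of optional attributes
  hoursPerWeek hasValue 36.0
  daysPerWeek hasValue 5
  comments hasValue "hypothetical example for illustration only"

Note that the (1 1) attributes must always be present, whereas the (0 1) and (0 *) attributes may be omitted or, in the case of comments, repeated.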
4.2. The SETU Ordering and Selection Process

The Ordering and Selection process is the first step of cooperation between an employer and a staffing provider. An employer has a position vacant in his organization for which he needs a (temporary) employee. By describing the position, the qualifications that a candidate needs to have, the period for which the position needs to be filled, and the (financial) conditions of the agreement, an employer assembles a ‘request order’ for this position. Typically, an employer has a framework contract with one or more staffing suppliers to whom he will send the order. As soon as a staffing provider receives an order request for a position, the challenge is to provide a human resource that ‘fits’ both the position of the employer and the posed conditions. For instance, when a senior carpenter is requested, the candidate employee should have skills that match the profile appropriately. The requirements concerning an employee are typically formalized using educational degrees, certificates, or recognized competences. Additionally, an employee should be available for the requested period and fit within the range of available financial compensation. This matching selection by a staffing provider results in an ‘order response’, which is sent to the requesting employer. The whole process of order requests and responses may iterate a few times before an actual match between an employer and a provider is made. As soon as a match is agreed upon, a formal assignment is used to confirm the match and mark the initiation of this agreement. For this assignment, the SETU Standard for Assignment is used.
4.3. The SETU WSMO Services

In the context of SETU, WSMO services can be distinguished by identifying the goals and needs of the involved parties. As described in the previous section, the process starts with an employer’s goal to (temporarily) fill a specific position within its organization. A staffing supplier, in turn, provides the service which satisfies the goal of employers, their customers. To do so, the staffing provider requires a current and coherent view on both the availability and the quality/competences of its workforce. Currently, a staffing supplier requires all involved employees to provide the necessary information. In practice an employee provides this information only once, after which it is available in the Human Resource Management (HRM) systems of a staffing supplier. The place where goals and capabilities meet is the interface between two entities. This interface can be specified as a WSMO web service.
Fig. 3. Overview of service interfaces in the SETU case
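The employer’s side of such an interface can be expressed as a WSMO goal. The following is a minimal illustrative sketch against the SETU ontology of Figure 2; the goal name and its conditions are our own assumptions rather than part of the SETU specifications:

goal EmployerPositionGoal
  importsOntology SETU
  capability EmployerPositionGoalCapability
    sharedVariables {?position, ?offer, ?positionId}
    //the employer starts from an open position in its organisation
    precondition definedBy
      ?position memberOf Position.
    //the desired state: an offer that refers to the issued position
    postcondition definedBy
      (?offer[order hasValue ?positionId] memberOf Offer) and
      (?position[order hasValue ?positionId]).

A discovery engine could then match this goal against the StaffingSupplierOfferCapability shown below.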
For each of these services, certain pre- and post-conditions apply. The pre-condition is a formal description of a state of relevant entities that needs to apply. When this state applies, the service can be executed. After service execution the initial state is transformed into a state that meets the post-condition of the service. More generally, we can say that the difference between the pre- and post-condition is the added value that a service provides to its environment, or, in other words, its functionality. The following code listing shows an example of a web service offered by a staffing supplier as intended in SETU:
wsmlVariant _"http://www.wsmo.org/wsml/wsml-syntax/wsml-full"
namespace { _"http://www.setu.nl/wsmo#",
    dc _"http://purl.org/dc/elements/1.1#"}

webService StaffingSupplierOfferService
  importsOntology SETU
  nonFunctionalProperties
    dc#description hasValue "A staffing supplier offers a humanresource
      for a certain available position that matches an order that was
      previously issued by a StaffingConsumer using the
      StaffingConsumerPositionService"
  endNonFunctionalProperties

  capability StaffingSupplierOfferCapability
    sharedVariables {?position, ?humanresource, ?offer, ?positionId, ?competence}
    assumption definedBy
      ?humanresource memberOf Employee.
    //As a precondition: there should be a position available
    precondition definedBy
      ?position memberOf Position.
    //The result is an offer that relates to the initial order
    //and a humanresource that matches the required competence
    postcondition definedBy
      //An offer exists with an order attribute which resembles the positionId
      (?offer[order hasValue ?positionId] memberOf Offer) and
      (exists ?positionId (?position[order hasValue ?positionId]))
      //A humanresource with the requested competences is available
      and (?humanresource[competences hasValue ?competence] memberOf Employee)
      and (exists ?competence (?position[competences hasValue ?competence])).
    //As an effect, relations between offer and position and
    //between offer and humanresource exist
    effect definedBy
      exists {?offer, ?position} (OfferMatchingPosition(?position, ?offer)) and
      exists {?offer, ?humanresource} (EmployeeOffering(?offer, ?humanresource)).
The StaffingSupplierOfferCapability has the assumption that there is an employed human resource and a position available. The post-condition shows that the required competences for the position match the competences of an employee. The effect of this service is that an offer is available for the original position. This effect uses some relations that are defined as follows:
//An offer matches a position
relation offerMatchesPosition (ofType Position, ofType Offer)
axiom offerMatchesPositionDef definedBy
  ?offer[offerId hasValue ?offerId] memberOf Offer and
  ?position[positionId hasValue ?positionId] memberOf Position
  implies offerMatchesPosition(?position, ?offer).

//An offer involves a certain employee
relation employeeOffering (ofType Offer, ofType Employee)
axiom employeeOfferingDef definedBy
  ?offer[offerId hasValue ?offerId] memberOf Offer and
  ?employee[employeeId hasValue ?employeeId] memberOf Employee
  implies employeeOffering(?offer, ?employee).
5. Findings

Applying a formal framework like WSMO has its advantages and its drawbacks. Below we will discuss some of the issues we have identified:

• Global versus localized application standards. HR-XML offers a set of global standards for temporary staffing. In fact, the SETU standards have been localized to the Dutch situation. One could imagine creating an upper-layer ontology representing all concepts that are common to global staffing, whereas a local domain ontology adds localization to these standards. Capabilities and goals can still be identical; only the ontology is specific to a country or even a company.

• Richness and depth of specification. In its current incarnation, a domain standard like SETU is built on a set of formal models, detailing both static concepts and their relations and, to some extent, the behaviour of the actors in the domain. The application of WSMO, however, provides a substantial increase in detail, richness and quality: concept specifications based on formal logics allow for formal reasoning, the addition of axioms introduces formally specified rules, behaviour specifications are verifiable, etc.

• The human factor. The aforementioned increase in quality and depth comes at a price, however: the standardization process requires far more effort, precision and dedication from its participants in order to achieve a meaningful and correct set of specifications. It also requires an ability or willingness to perceive a domain from a purely abstract point of view, which may prove a daunting task for contributors less well-versed in the application of more logical and abstract reasoning. In addition, the interested target audience is left with a more abstract and potentially complex standardization product that in turn requires more effort to comprehend. If it is feasible to generate more understandable specifications from the abstract ones, the abstract specifications will be more acceptable.
• Technology solution paradigm. The SETU standard is built around the message paradigm: its models and technology specifications are currently still based on the notion of electronic document exchange patterns firmly rooted in paper-based solutions. A formal specification based on WSMO enables a technology-independent specification, with state and state transitions as the basic paradigm, based on Abstract State Machines [14]. The latter paradigm implies that all possible actions expressed as capabilities or goals are expressed by state transition rules. An electronic document is a means for state synchronization between two actors. Thus, a shift from the means (electronic document or message) to its purpose is required when specifying application standards like those of SETU in WSMO. Such a paradigm shift is quite difficult in specifying application standards if one is used to the message paradigm, as many people in practice are.

• Solution mapping. Formal specifications such as the ones WSMO provides allow for a formally specified transformation between a model of concepts and a technological solution, e.g. XML schemas resulting from the ontology. The mapping between model and solution becomes transparent. In SETU, domain models and available technology have partly shaped a standardization process that runs both bottom-up from technology reuse and top-down from a derived set of models. Well-specified transformations may aid in separating these levels of abstraction and increase the quality of the standard.

• Staffing providers can profit when human resource qualifications, certificates and competences are communicated in a comparable manner, for instance using interrelated ontology concepts. This approach enables an automated matching process. When no exact match is possible, mediation can be applied to select the ‘most appropriate candidate’ out of a number of offers. This mediation activity can possibly be fed with historic selection decisions of a staffing consumer. In this way the staffing provider gains insight into the ‘profile’ that a staffing consumer expects for certain positions. Since a swift and well-matching offer from a staffing provider is of utmost importance for the level of service as experienced by the staffing consumer, this way of matchmaking can boost the staffing provider’s success ratio.
With respect to solution mapping, we have some additional remarks. A WSMO web service has a grounding that relates semantic processors to syntactic descriptions. WSMO supports different types of grounding [20], e.g. from formal specifications to syntactic descriptions, and from syntactic descriptions to formal specifications based on SAWSDL (Semantic Annotations for WSDL [10]). Grounding to syntactic descriptions like WSDL is done by transforming interfaces with their choreography into WSDL operations and the ontology into an XML Schema. If our understanding of WSMO is correct, we would expect that ‘interface’ with ‘choreography’ is transformed into WSDL operations, since the pre- and post-conditions of the choreography relate to the input and output of a web service. An XML Schema can be annotated according to SAWSDL. According to WSMO grounding, a complete ontology is represented in XML Schema.
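As an illustration of what such an annotation could look like, the following fragment uses the sawsdl:modelReference attribute defined in the SAWSDL recommendation [10] to link an XML Schema element to the SETU ‘Offer’ concept; the schema layout itself is our own assumption and is not taken from the SETU schemas:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:sawsdl="http://www.w3.org/ns/sawsdl">
  <!-- the annotation points the message element at the ontology concept -->
  <xsd:element name="Offer" type="xsd:anyType"
               sawsdl:modelReference="http://www.setu.nl/wsmo#Offer"/>
</xsd:schema>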
Of course, the ontology can represent an individual message, like our approach in SETU of developing class diagrams of individual messages. Whilst practical applications are most often based on the message paradigm, WSMO is based on a state machine paradigm [14]. Those practical applications require an XML Schema for each input and output message in a WSDL. From an abstract perspective, state machines consider one ontology and the state synchronisation of instances of that ontology based on one generic XML Schema. Axioms that define transition rules and pre- and post-conditions cannot be represented in a WSDL document and have to be implemented in, for instance, a BPEL document. The grounding to BPEL is not (yet) specified in WSMO. Further research is required to support the complete grounding of WSMO to syntactic descriptions that also represent user requirements in practical applications. There are still other issues of WSMO that are not completely clear, e.g.:

• Capability versus interface. A capability is used for service discovery, whereas the actual service is supported by an interface. A capability expresses a state change by its pre- and post-conditions, whereas an interface with choreography also expresses state changes. There are no consistency rules defined to safeguard that the result of an interface is identical to the result expressed by its capability.

• Choreography. A web service with one capability has a choreography expressing state transitions. As the state is represented by an ontology, the state space can become very large [18]. Does the choreography of a web service consist of a particular (set of) paths with a begin and an end? It seems that a choreography can also contain transitions that do not reflect interaction with a customer. Do these represent an orchestration? How is the orchestration of different web services given? Formal specification methods like Petri nets [18] might offer solutions to express relations between different web services. This issue requires further research.

• Choreography and orchestration. Orchestration may offer solutions for specifying relations between web services (see the previous issue), but orchestration can consist of internal processes that are not visible externally. Whereas web services only specify external behaviour, orchestration will also contain internal state changes. Formal specification methods like LOTOS [21] used internal events to model this requirement as part of external behaviour. Whilst in WSMO a web service has a capability and its choreography, it is not feasible to specify choreography across web services other than by orchestration.
Finally, we would like to discuss redesign issues. Redesign requires value modelling, as expressed by Gordijn and Akkermans [23]. These methods can express changes that we briefly discuss here. Basically, a staffing supplier is a mediator between the capabilities of employees and the goals of consumers, i.e. employers. A supplier can match between those employers and employees in two ways. The first is the more traditional way, in which an employee offers capabilities to various staffing suppliers that are mediated with the goals of employers. The other approach is to reverse the chain: an employee has a goal, seeking work,
and in this particular case a consumer offers capabilities. These capabilities can be matched by a staffing supplier serving as a mediator between employee and employer. The same ontology is used in both ways, but capabilities and goals are defined differently. In case a staffing supplier serves as mediator, it requires part of the earnings of an employee. New methods have to be investigated when considering a staffing supplier as mediator. Currently, an employee has to register the worked hours with both a staffing supplier and an employer. At the employer, the hours have to be booked correctly in the financial administration, and the staffing supplier currently takes a certain part of the payment based on the hours worked. The invoice of a staffing supplier contains the correct financial account information to assure payment. One could imagine that a payment is made directly between employer and employee without invoicing, and that an employer submits an overview of hours worked directly to a staffing supplier. A weakness in this approach is that the employee has to remit the correct share of his or her earnings to a staffing supplier. Another approach is payment without invoicing by an employer to a staffing supplier, in which case a staffing supplier acts on behalf of one or more employees.
6. Conclusions

We have seen that WSMO is useful for specifying a practical case like SETU and can be a basis for redesign. However, we have also identified a number of issues with respect to WSMO, which has its foundation in Abstract State Machines (ASM). ASM seems to be a good way to express capabilities for service discovery, but additional methods like Petri nets could be applied for choreography and orchestration. Furthermore, the grounding of a formal specification directly into WSDL documents needs to be aligned with user requirements for practical situations. Other conditions of practical application relate to ease of use, which is most often best supported by a graphical user interface. Further research with respect to these conditions is required.
7. Acknowledgments

The authors would like to thank Jasper Roes for providing detailed information about the ordering and selection processes of SETU.
8. References

[1] SETU – Stichting Elektronische Transacties Uitzendbranche, http://www.setu.nl/
[2] D. Fensel, M. Kerrigan, M. Zaremba (eds.), Implementing Semantic Web Services – the SESA framework, Springer-Verlag, 2008.
[3] J. Heineke, M. Davis, The emergence of service operations management as an academic discipline, Journal of Operations Management 25 (2007) 364–374.
[4] J. Spohrer, S.K. Kwam, Service Science, Management, Engineering and Design (SSMED) – An emerging Discipline – Outline and references, International Journal on Information Systems in the Service Sector, May 2009.
[5] ISO 15000-5:2006, Core Component Technical Specification, draft version 2.2.
[6] W.J. Hofman, EDI, Web Service and ebXML, communication in organisation networks, UTN, 2003 (in Dutch).
[7] T. Erl, Service-Oriented Architecture – concepts, technology, and design, Prentice Hall, 2005.
[8] OWL-S, Semantic Markup for Web Services, W3C member submission, 2004.
[9] J. Scicluna, C. Abela, M. Montebello, Visual Modelling of OWL-S Services, Proceedings of the IADIS International Conference WWW/Internet, Madrid, Spain, October 2004.
[10] SAWSDL, Semantic Annotations for WSDL and XML Schema, W3C Recommendation, 2007.
[11] K. Sivashanmugam, K. Verma, A. Sheth, J.A. Miller, Adding Semantics to Web Service Standards, ICWS 2003.
[12] C. Feier, D. Roman, A. Polleres, J. Domingue, M. Stollberg, D. Fensel, Towards Intelligent Web Services: The Web Service Modeling Ontology (WSMO), Proceedings of the International Conference on Intelligent Computing (ICIC’05), 2005.
[13] J. Davies, R. Studer, P. Warren, Semantic Web technologies – trends and research in ontology-based systems, John Wiley & Sons, 2006.
[14] E. Börger, High Level System Design and Analysis Using Abstract State Machines, Proceedings of the International Workshop on Current Trends in Applied Formal Methods, pp. 1–43, October 7–9, 1998.
[15] J. Scicluna, A. Polleres, D. Roman (eds.), D14v0.2 Ontology-based choreography and orchestration of WSMO services, WSMO final draft, February 3rd, 2006.
[16] Dublin Core Metadata Element Set, Version 1.1, 2008-01-14, www.dublincore.org.
[17] X. Wang, T. Vitvar, V. Peristeras, A. Mocan, S. Goudos, K. Tarabanis, WSMO-PA: Formal specification of Public Administration Service model on Semantic Web Service Ontology, Proceedings of the 40th Annual Hawaii International Conference on System Sciences, 2007.
[18] H.M.W. Verbeek, A.J. Pretorius, W.M.P. van der Aalst et al., Visualizing state spaces with Petri nets, Eindhoven: Technische Universiteit Eindhoven, 2007.
[19] W.M.P. van der Aalst, M. Beisiegel, K.M. van Hee, D. König, C. Stahl, An SOA-based architecture framework, International Journal of Business Process Integration and Management, 2(2), 91–101, 2007.
[20] D24.2v0.1 WSMO Grounding, WSMO Working Draft, 27 April 2007.
[21] E. Brinksma, On the design of extended LOTOS – a specification language for Open Distributed Systems, 1988.
[22] http://formalmethods.wikia.com/wiki/Formal_methods.
[23] J. Gordijn, H. Akkermans, Value based requirements engineering: Exploring innovative e-commerce ideas, Requirements Engineering Journal, 8(2):114–134, 2003.
Part VI
Interoperability Scenarios and Case Studies
A Success Story: Manufacturing Execution System Implementation

Albin Bajric1, Kay Mertins2, Markus Rabe2, Frank-Walter Jaekel2

1 Siemens AG Energy Sector, Fossil Power Generation Division, Products, E F PR GT BLN MF3, Huttenstr. 12, 10553 Berlin, Germany
2 Fraunhofer Institute Production Systems and Design Technology, Pascalstr. 8-9, D-10587 Berlin, Germany
Abstract. The paper describes the project procedures of a successful implementation of a manufacturing execution system in a real case. It illustrates interoperability barriers that arose during the project and how they were handled. The focus is on issues of organisational interoperability between the IT vendor and its customer (the user). An enterprise model was used as the major tool for the description of the processes, but also in the communication between user and IT vendor. This enterprise model has also guided the implementation of the IT system. The project was finished successfully at the end of 2009.
Keywords: MES, Interoperability, system implementation, manufacturing management, enterprise model
1. Introduction

The technical evolution, together with the international division of work, demands management and control systems that can be plugged into Enterprise Resource Planning (ERP) implementations. Nevertheless, it is required that these management and control systems still act independently. Today, the implementation of a process management and control system does not seem a spectacular task. The principal technologies, IT systems and related standards such as ISA-95 [1] are well known. Supporting organisations like the “Manufacturing Enterprise Solutions Association” (MESA) [2] and guiding principles such as VDI/VDA 5600 [3] also exist. However, the specific implementation of a manufacturing execution system incorporating business processes and manufacturing processes is still challenging and expensive. It is affected by
organisational as well as technical interoperability barriers [4,5]. The paper will therefore focus on the organisational interoperability aspects. Experiences from industrial cases illustrate challenges, especially between IT providers and their customers, resulting in a loss of trust in the provided system. The system is sold with attributes such as adaptable, programmable, modular, service-oriented, ISA-95-conform, etc. But in specific implementation projects, limitations of adaptability can appear easily, and conformity to standards is a question of the stakeholder perspective because of missing details for specific implementations.

From the viewpoint of the IT system provider, the implementation should not require programming effort on the vendor side. Therefore, it still seems the best solution if the user adopts the process of the IT system instead of adapting the system to the business and operational processes of the user. In contrast, the user needs the best support of the processes that are adequate to his business. This results in communication barriers between customer and provider, with the risk of inefficient information technology support.

The task becomes even more challenging if the IT solutions require technical and organisational interoperability. Technically, interoperability is necessary between formerly independent modules of the system such as production data acquisition (PDA), manufacturing control, order management and ERP interfacing. An integrated IT architecture is required, but in a federated sense, because each module should keep its independence. Organisational interoperability [6] has to be taken into account in relation to independent business units such as purchasing, manufacturing, subcontracting and sales.

The foundation of the paper is a real industrial case of a successful implementation of a manufacturing execution system (MES) achieved by managing and overcoming interoperability barriers. The next chapter gives an overview of the industrial case and its targets. The following chapter collects essential success factors which have been identified during the project, to motivate the subsequent chapters on project procedures and the use of enterprise models. It is followed by a brief sketch of the project procedures which have been applied. Enterprise modelling has been used across all phases of the project procedures for system requirement analysis, to solve communication challenges and to support the involvement of the stakeholders. This is described in the subsequent chapter, followed by a conclusion covering lessons learnt.
2. Description of the Industrial Case

Siemens AG manufactures gas turbines for power stations for the international market in its Berlin plant. The project focuses on the manufacturing of rotor blades for the gas turbines. The rotor blade is a high-tech product. It has to sustain high mechanical and thermal stress. Therefore, the manufacturing process is challenging, and the process of each part of the product has to be fully traceable. The market requests different types and variants of blades, and the production of the rotor blades needs raw material of very high quality. Regarding the market requests, mass production of standard blades as well as single production of
specific types has to be provided. The factory is embedded in an international value chain and has to manage the interfaces with different organizations. The factory organization has to answer these demands. Therefore, effective and clear control of the manufacturing process is essential for a successful business, to ensure short reaction times to changing market demands. In order to achieve this, the analysis of the manufacturing management and control process was authorized. The following targets were defined for the manufacturing execution system implementation:

• Ensure a seamless information flow,
• Regard the involvement of all planning levels,
• Support the scheduler, controller, materials requirements planner, sales and machine operators concerning the manufacturing process,
• Include all stakeholders, especially the manufacturing staff, in the process improvement.
The production planning and control concept covers a hierarchical structure of various levels for long-term and medium-term material management (MRP). The order management functions of the ERP system are connected with the short-term control functions of the MES (Figure 1). The ERP system is responsible for the basic order data, realistic deadlines and the provision of the necessary information on the raw material. In the context of medium-term production planning, and in the specific context of the implementation project, it is the responsibility of the MES to determine the production program for the following year as well as to set realistic short-term production schedules. The determination of the production program for a year is usually not an MES functionality, but in this specific case it is required as an additional functionality of the MES. The MES has to provide key data for production orders and to monitor contracts, in order to supply the production with raw materials on time and to ensure high data quality. This has to be done across all manufacturing sectors. The short-term tasks of the MES include:

• Short-term order management,
• Throughput and capacity scheduling,
• Sequencing of the work schedule,
• Availability checks of resources,
• Data collection to support reporting and manufacturing process monitoring,
• Manpower and capacity scheduling.
The autonomy of the different organisation units requires a differentiation between the ERP level, with planning across all units, and the decentralised management of the manufacturing units. This affects the data management at the different levels. Data interchanges between the functional levels and across the distributed units require a well-defined organisation of the information exchange. Data from the higher level (ERP) are only reference data for the manufacturing control process.
The bridge between the ERP system and the MES was one of the challenges of the project, because a change in the architecture and functionality of the ERP system was prohibited. The selected IT provider already has an Enterprise Application Integration (EAI) component which supports especially the required interface between the ERP system and the MES modules. Therefore, this bidirectional interface could be realised by exploiting existing functionalities. However, the difference in abstraction between the data in the ERP system and in the MES requires a lot of effort in defining acceptable mappings.
Figure 1 Production planning and control concept
3. Success Factors

Based on experiences with similar projects, an effort should be made to build mutual understanding between partners, reconcile their potentially different economic targets, match their process knowledge, make the implemented processes coherent and their implementation complete. The applied enterprise modelling approach and the resulting enterprise model were the foundation to achieve these targets. The following factors of success have been identified:

• Motivation of all stakeholders,
• Modular system architecture,
• System granularity,
• Communication and willingness of the IT provider to adapt the IT system,
• Clear responsibilities,
• Convinced management.
Representatives of all stakeholders were involved in the analysis, the selection of the IT provider and the system specification process, to ensure the right process support by the MES and also to increase the motivation of the process owners.
The modularity of the system and its adaptability were a necessary demand in the selection process of the IT provider. The granularity of the MES was given by the processes which have to be supported; it has been further specified during the project. The willingness of the IT provider to adapt the IT system was essential for the success of the project. So it was possible to adapt the standard functionality of the system to the process requirements. It was not just a configuration task: in fact, programming of new functionalities was required, especially to fulfil the process coherence. Besides the technical aspects of the MES, clear responsibilities and MES-related processes had to be established to ensure a sustainable implementation. Also, the management has to be convinced and later to support the project. The related project procedures are described in the following chapter.
4. Project Procedures

The project has to handle interoperability aspects and sometimes to overcome appearing barriers. The whole approach can be briefly described in the following steps:

1. Process analysis and required system architecture
2. Functional requirement book (contract document)
3. Assessment of performance indicators (e.g. capacity, throughput time, output)
4. Return on investment (ROI) estimation based on the enterprise model and the IT system architecture (involvement of senior decision makers)
5. Market survey regarding adequate IT-system providers
6. Selection and contracting of an IT-system provider
7. Functional specification and requirement specification book with the detailed description of interfaces, system functionalities, user interfaces, performance characteristics, reports and performance indicators to be delivered by the selected system
8. Adaptation of the existing IT system
9. Definition of functional roles and IT-system-related roles – establishment of an adequate new organisation
10. Definition of organisational roles accompanying the specified processes
11. System implementation together with all stakeholders
12. Tests and release of the implemented system

The project procedures (Figure 2) cover the project steps, the results of the different project phases and required parallel actions such as the master data reorganisation and maintenance.
Figure 2 Project procedures
Along the project phases, the project deals with a number of interoperability aspects:

• The path from the business level to business process automation: enterprise modelling of business and manufacturing processes is used to define and evaluate a system architecture concept which leads to an MES implementation,
• Different internal and external organisations (purchasing, manufacturing, subcontractors and sales),
• Interfaces between systems (ERP / MES) and organisations (user company / IT provider),
• Combination of different system components and system granularity,
• The user required a movement from the provider's database system to the database system of the user.
5. The Enterprise Modelling Approach

The modelling approach addressed before starts with a business process analysis on the basis of a related enterprise model. The model covers processes, process interfaces, organisational units, IT systems in use (quite often Excel applications), documents and other required resources. The modelling involved all process owners who might be affected by the discussed manufacturing execution system, from the management area to the machine operators. The resulting as-is model has been modified by substituting the existing IT system fragments with functionalities of an expected integrated manufacturing execution system (MES). In parallel, a functional analysis supports the identification of the required functions related to the operational processes of planning, control and production. This leads to an initial model reflecting the process support of the MES system in the above-described project steps 1-3. The model illustrates the required system components (Figure 3), such as the following:

• Market portfolio management
• Order management
• Production control
• Production data acquisition
• Document management
• Utility management
• Reporting
• ERP interfaces
The results of the process analysis and the functional analysis created the content of a functional requirement book and the foundation for an evaluation of performance indicators to approximate the return on investment (ROI). The ROI approximation was, in project step 4, one element to convince the senior decision makers to continue with the project. In project steps 5 and 6 the functional requirement book and a market survey are used to contact and select possible vendors. Once a vendor has been selected, the functional requirement book, including the enterprise model, is part of the contract.

Together with the system vendor, the functional specification book has been created in project step 7. During this work the model ensures the consistency and coherence of the target manufacturing execution system regarding the operational processes. In the phase of the functional requirement specification, the viewpoints of IT provider and user differ. Therefore, a lot of changes are requested which are covered in the contract but not finally fulfilled in the standard system of the IT provider. Here an intermediation was required to ensure further progress of the project. During this process the system of the IT provider has been designed in the direction of the requirements of the user processes and specified in the enterprise model. The results have been added to the functional requirement book. It now covers the specialised system functionalities and a specific enterprise model reflecting the functions of the selected system in the operational processes.

In project steps 9 and 10, a further result of the modelling approach is the definition of process-related functional roles and system-related processes and roles. These role definitions are required to ensure the sustainability of the system as well as to support the further implementation process. Finally, the responsibilities for MES functionalities and the communication between IT vendor and user (customer) are clearly defined. This created a direct communication between the company of the user and the company of the IT provider. It also supported distributed work on the different modules of the MES. The coherence of the different (local) activities was ensured by the enterprise model expressing the interrelations between the different processes. For example, missing transitions and interfaces between processes, as well as the impact of decisions within the implementation of the MES, are expressed by the model.

Especially in the testing phase (project step 12), the model was used to prove the operational processes related to the implemented MES functionalities. It also supports the integration test of the system, especially in terms of the completeness of the tests. This has been done in close cooperation with the
system user (Siemens AG), the IT provider and a consultant (Fraunhofer IPK) acting as process analyst, mediator and MES expert.
Figure 3 Enterprise model level 0
The modelling approach is supported by the enterprise modelling tool MO²GO [7,8], applying the integrated enterprise modelling method (IEM) [9,10,11]. The method and tool support a fast and consistent enterprise modelling approach and a good communication base. Extensions of the tool support consistency checks and the role definition in project steps 9 and 10, as well as the test phase in project step 12.
6. Conclusion and Lessons Learned

The approach illustrates a lack of modularity and granularity of the components and functions of IT systems. This hinders a fast derivation of a system implementation from enterprise models, as well as customer satisfaction and the cost-efficient realisation of the running systems in an enterprise. Further actions are necessary to realise truly service-oriented and reliable systems, available on the commercial market, which can be flexibly adapted to business and operational process demands. Even though such concepts exist in the research area, the commercial
availability is low or at least too expensive. The challenges identified during the system implementation are the following:

• Difference in the expectations between IT provider (vendor) and user (customer),
• System adaptability and programming effort are high,
• The granularity of the system needs to be clearer, and the use of services should be more independent from other services,
• Integration of system components is required on the data level as well as on the functional level, even if this could be in contradiction with the previous point,
• Process interfaces between systems (what is the leading system).
Regarding future visions, service orientation is the focus in terms of granularity, adaptability and interoperability of enterprise applications. Across enterprise applications, Service Oriented Architecture (SOA) interoperability dramatically lowers cost between the enterprise, supply chain, plant operations systems and other manufacturing/production applications. For the SOA vision to be successful, the Industrial Interoperability Compliance Institute (IICI) will certify solutions against Message Oriented Middleware (MOM) standards to provide lean IT systems through common definitions of data and processes. Adaptable global manufacturing can only be accomplished through the adaptive interoperability of business and manufacturing processes. When industrial systems are truly configurable to produce configurable business processes, interfaces, reports and metrics, a key requirement for adaptive manufacturing and production processes for 21st century global markets will be accomplished.
7. References

[1] ISA-95, http://www.isa-95.com/, 06.12.2009.
[2] MESA, http://www.mesa.org/, 06.12.2009.
[3] VDI/VDA 5600, http://www.vdi.de/uploads/tx_vdirili/pdf/1381197.pdf, 06.12.2009.
[4] INTEROP, Enterprise Interoperability – Framework and knowledge corpus – Final report, INTEROP NoE, FP6 – Contract no. 508011, Deliverable DI.3 (2007).
[5] Y. Naudet, W. Guédria, D. Chen, Systems Science for Enterprise Interoperability. In: I-ESA'09 workshops, 5th International Conference Interoperability for Enterprise Software and Applications, Beijing, China (2009).
[6] K. Mertins, T. Knothe, F.-W. Jäkel, R. Jochem, Interoperabilität von Unternehmen – Interoperability of enterprises. Key to success in global markets. PPS Management 13 (2008), No. 3, pp. 49–52.
[7] K. Mertins, F.-W. Jaekel, MO²GO: User Oriented Enterprise Models for Organizational and IT Solutions. In: P. Bernus, K. Mertins, G. Schmidt (eds.), Handbook on Architectures of Information Systems, Second Edition, Springer-Verlag, Berlin Heidelberg New York 2006, pp. 649–663.
[8] MO²GO, www.moogo.de, 06.12.2009.
[9] G. Spur, K. Mertins, R. Jochem, Integrated Enterprise Modelling, Berlin Wien Zürich, Beuth 1996.
[10] K. Mertins, R. Jochem, Quality-Oriented Design of Business Processes, Kluwer Academic Publishers, Boston, 1999.
[11] R. Jochem, Integrierte Unternehmensplanung auf der Basis von Unternehmensmodellen, PhD Thesis, TU Berlin, Berlin 2001.
Enabling Proactive Behaviour of Future Project Managers

Georgios Kapogiannis1, Terrence Fernando1, Mike Kagioglou2, Gilles Gautier1 and Collin Piddington1

1 Future Workspaces Research Center - Think LAB, School of the Built Environment, University of Salford, UK
2 Salford Centre for Research & Innovation (SCRI), School of the Built Environment, University of Salford, UK
Abstract. This positioning paper aims at exploring the requirements and challenges to be addressed when implementing proactive project management in industry. It describes the current challenges of project management and explains how proactive behaviours could increase the success rates of projects in the future.

Keywords: project management, organisation, proactive behaviour, collaboration
1. Introduction: Challenges

Previous research has demonstrated a strong interdependence between proactive project management and collaborative technologies [1]. Indeed, it was shown that collaborative technologies acted as enablers of proactive project management by providing real-time access to project knowledge and information. On the other hand, organisations would need the skills and working culture of proactive project managers to fully exploit the power of collaborative technologies. Only the combination of both could shorten decision-making processes while improving decisions before and during project life cycles.

The business world is rapidly moving towards a new paradigm in the manner in which it conducts its work. The Internet has enabled new ways of procurement and increased the availability of intellectual services; it has also shortened the lead time to access information. Business globalisation is becoming the norm, with design objectives, manufacturing, resources and support systems being distributed. This means that ensuring the validity of information is increasingly important when managing issues during projects. Besides, issues must be addressed in similar timeframes as in non-distributed businesses. As a consequence, the role of project management
needs to be reconsidered in the light of this new way of working, with particular emphasis on real-time problem resolution. It is also necessary to revise the roles of project managers to ensure they maintain their ability to contribute in a timely manner to the product or service lifecycle.

1.1. Organisational Relationships

All organisations have two dimensions: that of function and that of project. The prime axis depends on the corporate strategy in terms of customer relationships and the creation of products or services. Most organisations are now structured to deliver high customer value, and they use their project organisation to assess their performance. From a management point of view, this can be seen as a series of projects that compose a project portfolio. Each project is managed independently, although maintaining mutual procedures. The portfolio summarises to the Board the states of projects and the returns on stakeholders' investments (ROI). The budget of every project is managed according to project objectives. Project management is responsible for the management of individual projects and for their delivery to the customers within previously agreed time, cost and quality constraints.

From a project perspective, particularly where capital goods are concerned, the process to delivery can be described as a product life cycle. This corresponds to a very high-level description of workflows, and it shows the evolution of products from concept to recycling. So the trend is to offer both the product and associated services through the life cycle.

The other dimension, usually described as the functional dimension of an organisation, depends on its competences or skill sets. It mainly corresponds to the soft assets of an organisation, and its profile can be changed according to the strategic sector the company serves. Increasingly this means a slow change in the electronic/computing capability as products become more intelligent. The stages of life cycles take different orientations and exploit the organisation's competences in different combinations, depending upon the skills required. For example, design will take advice from maintenance personnel to reduce the through-life cost, and maintenance personnel will require advice from design if an unexpected problem occurs during the operational phase of the life cycle. Hence internal interactions can be complicated, due to the distribution of the stakeholders and to their very different skill sets and working cultures. An increased involvement of suppliers also amplifies the need for collaborative infrastructures, with control and facilitation supplied by the project management capability.

Project managers' main role is to plan the resources required for a project, negotiate with the discipline leads, and then to control and maintain the project. To do this, they organise a series of progress meetings to monitor and assess any deviations. Identified deviations are then addressed by adding the issue to the agenda of the meeting for resolution. These meetings are predominantly a decisional space where multiple views are taken into account in order to agree on an optimised solution. The records of such a meeting include the agenda and the minutes/actions.
1.2. Current Issues

Meetings are becoming a focus in lean processes because much time and cost is spent in their execution. Tele- and videoconferences save travel time and costs, but they lack the ability to support the efficient discussion of problems, even where problem spaces can be adequately defined. For example, structural and aerodynamic analyses could be required when designing a building. In this case, both the problem and its context are known. Engineers should be able to make analyses and to share results in real time as the design evolves, so that the best design could be validated before the end of the meeting.

Participants attending without all the necessary information to contribute to the decision are another factor that leads to inadequate decisional quality or time delays in resolution. In addition, representative managers do not always possess the detailed knowledge of a problem that their discipline colleagues do. Their contribution must then be postponed to the next meeting, once the best solution has been explained to them.

A third significant problem is information incompatibility. Copies of information traditionally circulated with minutes are superseded, and participants receive different documents according to their roles during the meeting. This leads to confusion and frustration in the meeting, as participants have incomplete sets of information on which they must base their judgment and debate a solution. This is even worse when the information involves complex data of complex products, power stations, etc.

These three major points contribute to defects in plan execution and to the need for reworking at later dates, where rectification costs escalate exponentially with time. In summary, there is much waste in meetings, which has a negative effect on the attainment of the time, cost and quality objectives, leading to failed customer acceptance and satisfaction.
2. Project Manager Role

A number of key behavioural factors emerge when analysing successful project management. Slevin and Pinto [12] identified 12 of these factors and sorted them depending on whether they appeared at the micro (individual) or macro (organisational) level of an organisation (Table 1). These factors correspond to the main research areas in project management.

Table 1. The 12 Key Behavioural Factors for Successful Projects [2]

Level  | Twelve Key Behavioural Factors for Successful Projects
MICRO  | 1. Personal Characteristics of the Project Manager
       | 2. Motivation of the Project Manager
       | 3. Leadership and the Project Manager
MACRO  | 4. Communication and the Project Manager
       | 5. Staffing and the Project Manager
       | 6. Cross-functional Cooperation and the Project Manager
       | 7. Project Teams and the Project Manager
       | 8. Virtual Teams and the Project Manager
       | 9. Human Resources Policies and the Project Manager
       | 10. Conflict and Negotiation and the Project Manager
       | 11. Power and Politics and the Project Manager
       | 12. Project Organisation and the Project Manager
Pinto completed research at the very beginning of the 21st century to determine the trends of project management over the past decade and the future direction of the project management field based on project managers' opinions [7]. The main outcome of that research is that an increasing number of organisations are becoming 'totally projectised' as they attempt to cope with rapidly evolving technology and changing business environments. Consequently, the ability of project managers to address the above behavioural factors should impact on the successful completion of projects.

2.1. Proactive Behaviour

In order to achieve a sustainable competitive advantage, project managers should improve their ability to control and to make more accurate decisions [2]. As shown in Table 1, project managers' skills mainly involve their capability to interact with other participants, organisation members or project members. These interactions increase the understanding and trust between co-workers, so that they facilitate communication and collaboration [4]. Furthermore, this improved communication enables the establishment of more efficient and more effective partnerships. The role of proactive project managers is to understand how to reduce the degree of complexity in projects. Specifically, regarding trust, Estrin says that 'innovators must trust themselves, trust the people with whom they work, and trust the people with whom they partner, balancing their progress in an environment that demands both self-doubt and self-confidence'. Communication can be defined as a knowledge-sharing process during which people exchange information, assign meaning to this information and learn. Grant and Ashford [3] support the idea that trust, communication and collaboration are enhanced by the following skills: anticipation, change orientation and self-initiative. Hence these skills should lead to the development of proactive behaviours. Additionally, Grant and Ashford explain what forms a proactive behaviour: "It is anticipatory - it involves acting in advance of a future situation, rather than just reacting.
It is change-oriented - being proactive means taking control and causing something to happen, rather than just adapting to a situation or waiting for something to happen. The production operator has caused a change before the machines are routinely changed and prevented a failure. It is self-initiated - the individual does not need to be asked to act, nor do they require detailed instructions. For example, the new management consultant in the example has not waited to be given feedback, but has proactively sought it out."

Also, Parker, Williams and Turner [8] defined proactive behaviour as "self-initiated and future-oriented action that aims to change and improve the situation or oneself". Therefore, successful project managers need to be autonomous, future/change-oriented and anticipatory. This behavioural profile can be used as a driving force to initiate change in the operational and organisational system of a company. This approach will add value to the current state of the art in project management.

The proactivity concept assists project managers to think and act before, during and after a meeting takes place. In general, the purposes of meetings are: maturing information, providing solutions to problems, identifying risks, progressing control and operation maintenance, and finally reaching a decision on what to do next. Usually, the decision's outcome is inefficient and ineffective [7, 9, 10, 11]. This phenomenon unfortunately still exists within the majority of organisations. It happens mainly because either the information maturity or the managers' skills are not adequate for solving the current problems. So project managers' roles are to organise, moderate, control and direct the meeting progress. These roles are related to project managers' skills in order to keep alive the context and the content of a meeting. At this point it should be mentioned that the project manager has to address both project meeting and organisational objectives. In order to enable this proactive behaviour, collaborative environments are introduced below.

2.2. Collaborative Environments as Enablers of Proactive Behaviour

Collaborative environments are based on the creation of collaboration spaces where decisions and issues are discussed. This clearly aligns with the concept of a meeting decisional space. These collaboration spaces also allow the involvement of all stakeholders regardless of their geographical location or the devices they use to access the Internet and the collaboration environment. This should help to ensure the quality of decisions and reduce future costs and delays through the involvement of all stakeholders. The expense of time and money is also avoided by the reduction in travel. The CoSpaces capability of viewing the same information caters for the need to share the same document from a controlled source, avoiding the errors that occur as a result of information being referenced at incompatible issues. Below are further details of the benefits that can be realised from the deployment of CoSpaces technologies in enabling proactive skills in project managers.
3. Future Project Managers

The need for future project managers to address the 12 behavioural factors (Table 1) while working with advanced collaborative technologies requires a redefinition of the way project managers should work. The following table describes how these new roles could improve the success rates of projects by referring to the 12 behavioural factors from Slevin and Pinto. The statements in Table 2 are based on primary data collection, a literature review and the analysis of case studies from the construction industry. They will be validated during a PhD by means of semi-structured interviews and questionnaires.

Table 2. Project success and Future Project Managers

Project Management Role
Future Project Manager
Personal Characteristics of the Project Manager
Project managers will be able to control project progress from anywhere at any time thanks to mobile and advanced technologies. The risks of project failure will be diminished while procedures will be optimised throughout the whole PMLC (Project Management Life Cycle). Efficiency will also be increased before the official start of projects, so that projects will have a better chance of success. This should be facilitated by the analysis capabilities of project managers and by their increased access to project resources. Time and cost will be reduced thanks to real-time problem resolution and to the ability of project managers to be self-initiated and future-oriented. Project quality will be increased due to the flexibility and adaptability of the project manager.
Motivation, Leadership, Project Team and Virtual Team and the Project Manager
Project managers will increase creativity, as well as reduce costs and downsizing, by building strong and efficient teams. The ability to manage both co-located and distributed teams will allow them to better structure their organisations and teams, while keeping control over them thanks to collaborative technologies.
Communication and the Project Manager
Project managers will be able to communicate more efficiently with their teams thanks to collaborative technologies. Conflicts will be easier to manage thanks to better communication and understanding between co-workers. The ability to share knowledge between co-workers, instead of mere data, will overcome some interoperability issues that could otherwise hamper communication. As a consequence, project managers can make more effective and efficient decisions.
Table 2. continued

Staffing and Human Resources and the Project Manager
Problems will be solved faster, and better changes and decisions will be made thanks to increased access to stakeholders and advisers, at any time and from any location.
Cross-functional Cooperation and the Project Manager
Project managers will be able to set up meetings faster with the right participants thanks to an increased understanding of everyone's skills and roles in the project. Collaborative technologies will enable ad-hoc changes to the meeting space in order to address evolutions in the understanding of problems. There is also an environmental benefit: carbon reduction due to reduced air travel.
Conflict and Negotiation and the Project Manager
Project managers will be able to predict possible conflicts during meetings and to avoid them by assessing situations with the most relevant and accurate information available in the project. Better negotiation will be possible, with better deals resulting from better decisions.
Power and Politics and the Project Manager
Project managers will be able to anticipate project evolution thanks to a better understanding of their co-workers and of political issues. They will feel more confident in the decisions they make and will better control project progress, with increased knowledge and wisdom about the project status.
Project Organisation and the Project Manager
Project managers should collaborate both internally and externally. Indeed, they should share resources such as data, information or knowledge with partner organisations. The interactions between project managers and external organisations are governed by the same principles as within any organisation: a shared set of objectives and common work processes. As a consequence, these interactions occur within virtual organisations.
4. Conclusion

This paper shows the power of behavioural factors in the successful delivery of projects. These factors need to be addressed by project managers through the appropriate development of their skills. These skills are related to the self-initiated, anticipatory and change-oriented behaviours that form proactive behaviour. These skills are also most effective when working collaboratively and using collaborative technologies.
These collaborative technologies act as enablers of proactive behaviours among project managers. This will be key to project success, appearing as improved quality, optimised time and effective cost estimation. Also, increased information compatibility and accessibility will enhance the decision-making power of project managers and hence decrease organisational risks. Realistic scenarios developed in CoSpaces have been used to investigate the benefits of proactive behaviour as a project management methodology applied by industry during the planning, production and maintenance phases in the construction, automotive and aerospace sectors. Basic technologies are necessary for the implementation of proactive behaviour. A project management methodology has been developed, facilitated by the CoSpaces project's identification of Decisional Spaces.
5. Further Challenges

This particular approach to project management is being validated in the construction industry in the United Kingdom. We hope to determine the effort required for a change towards proactivity in the industry. This will involve education and the upgrading of PMBOK [9], as well as further software enhancements to support various information sources and to cover real-time data capture. It is also recognised that new technologies will arise. Among others, service architectures, RFID and the Internet of Things hold great promise to further enable these requirements.
6. Acknowledgments

The research reported in this paper is partly funded by the European Commission under contract IST-5-034245 through the CoSpaces project.
7. References

[1] Cleland, D.I. (1999). Project Management: Strategic Design and Implementation. 3rd ed. New York; London: McGraw-Hill.
[2] Gautier, G., Kapogiannis, G., Piddington, C., Polychronakis, Y., Fernando, T. (2009). "Pro-Active Project Management". I-ESA'09, China, Interoperability for Enterprise Software and Applications.
[3] Grant, A.M., Ashford, S.J. (2008). The dynamics of proactivity at work. Research in Organizational Behaviour, 28: 3-34. http://www.unc.edu/~agrant/publications.htm
[4] Kayser, T.A. (1994). Building Team Power: How to Unleash the Collaborative Genius of Work Teams. New York: Irwin Professional Publishing.
[5] Mohamed, L.C.a.S. (2008). "Impact of the internal business environment on knowledge management within construction organisations." Journal of Construction Innovation 8(1).
[6] Morris, P.W.G. (2004). The Irrelevance of Project Management as a Professional Discipline. Manchester: Indeco Management Solutions: 20.
[7] Morris, P., Pinto, J. (2007). The Wiley Guide to Project Organization & Project Management Competencies. John Wiley & Sons, Inc.
[8] Parker, S.K., Williams, H.M., Turner, N. (2006). Modelling the antecedents of proactive behavior at work. Journal of Applied Psychology, 91(3), 636-652.
[9] PMBOK Guide (2008). A Guide to the Project Management Body of Knowledge. Newtown Square, PA: Project Management Institute.
[10] PMBOK Guide (2004). A Guide to the Project Management Body of Knowledge. Newtown Square, PA: Project Management Institute.
[11] PMBOK Guide (2000). A Guide to the Project Management Body of Knowledge. Newtown Square, PA: Project Management Institute.
[12] Slevin, D.P., Pinto, J.K. (1987). "Balancing strategy and tactics in project implementation". Sloan Management Review, 29(1), 33-41.
[13] Thompson, L.L. (2008). Making the Team. 3rd International ed. Pearson Education.
[14] Tropman, J.E. (2003). Making Meetings Work: Achieving High Quality Group Decisions (2nd ed.). Thousand Oaks, CA: Sage.
Gaps to Fill Between Theoretical Interoperable Quality and Food Safety Environment and Enterprise Implementations

David Martínez-Simarro1, Jose Miguel Pinazo Sánchez1, Raquel Almarcha Vela1

1
Ainia Technological Centre, Benjamín Franklin 5-11, E46980 Paterna (Valencia) Spain
Abstract. Most current quality and food safety information systems in the industry do not meet the special needs of food companies, and the particular conditions of this sector pose a strong barrier to making deep changes in legacy systems. The need for effective risk assessment and communication is becoming increasingly recognized by many governments and the food industry. Although risk communication of food safety issues is still in its infancy, much can be learned from past experience. This research analyzes initiatives, standards and other solutions that can potentially cover food chain traceability and food safety and quality management needs, and determines the existing gaps to be filled in order to achieve an interoperability environment model for the food chain.

Keywords. Interoperability, Standards, Supply chain, Food safety, Risk, Contamination, Food quality, Nutrition, Food chain, Consumer
1. Introduction

In the past, food safety was often, but not always, addressed as a public health issue. In recent years, because of a chain of events comprising large-scale food-related crises of various degrees of severity, the public perception of the safety of our food supply has been shaken. Food safety issues are causing more concern than ever, if a glance at headlines over the past few years is any indication: mad cow disease, Escherichia coli contamination of green vegetables, dioxin in the food chain and ongoing concerns about mercury and pesticides in food. In recent years, globalization and consumers' interest in healthy lifestyles have considerably increased the importance of traceability, quality and food safety aspects; thus the interest of users and public administrations in information management systems for these purposes has increased too.

There are many definitions of the concept of "traceability". According to the European regulation (EC No 178/2002), traceability is the ability to find and keep
track, through all stages of production, processing and distribution, of food, feed, food-producing animals or substances intended to be, or likely to be, incorporated into food or feed. On this basis, we can define a traceability system as [1] "a coherent set of concepts, tools, working procedures and enabling technologies, that supports the tracking and/or tracing of goods in a production and distribution environment".

Food chain traceability will be relatively simple when all the processing is handled by a single organization, but becomes extremely complex for multiple-ingredient products which call upon a number of different systems for raw material production, processing and marketing. The environment of our chains of food supply is often composed of many steps, and at each stage there are numerous possible occasions for contamination of the food. Many food production methods have been developed without adequate foresight into the possible consequences of the application of non-traditional techniques. This has, for instance, led to the spread of the BSE (Bovine Spongiform Encephalopathy) epidemic, an epidemic whose course we could not predict. It is necessary, in this case, to identify and characterize all the material flows (raw materials, additives, semi-finished products, packaging materials, etc.) that converge into a given product, as well as all the organizations involved at each stage, in order to ensure that the product's history can effectively be retraced to ascertain the causes and responsibilities for any problems or defects. Food chain traceability is therefore a concept which can be defined as: "the identification of the organizations and material flows involved in the formation of a product unit that is individually and physically identifiable".

FAO has already adopted this food-chain approach1 and defines it as the recognition that the responsibility for the supply of safe, healthy and nutritious food is shared by all involved, from primary production to final preparation and consumption. Compositional changes (representing either risks or benefits) in food can be introduced at every link. Although developments may be largely beneficial, food composition needs to be monitored to ensure that no harm results to consumers. Finally, collaborative international efforts are needed in order to resolve issues of food quality and safety across boundaries in a global world trade context. A "holistic food chain approach" would recognize that responsibility for supplying safe and nutritious food lies with all those involved in food production.

However, current traceability systems do not meet all the needs for managing food quality and safety in a food chain approach, due to the specificity of the food sector. Information about food safety in the chain is spread over thousands of databases, often duplicating and reduplicating the same information and missing out on opportunities because data sources are not interconnected. Current practice is that corporations rely on costly manual code generation and point-to-point translation scripts for data integration with the value chain. In order to react efficiently to a
FAO's Strategy for a Food Chain Approach to Food Safety and Quality: a framework document for the development of future strategic direction. Item 5 of the Provisional Agenda. Committee on Agriculture (COAG), 17th session. Rome, 31 March–4 April (Document COAG/2003/5).
food safety alert, cooperation, a value chain vision and fast access to information are required. On the contrary, food companies rely on turn-key solutions for their traceability information systems; some of them just have a quality control system or other non-specific systems (ERP, MES) which manage information that is valuable for traceability, quality and food safety. But a very low proportion of them combine all these systems in an integrated food safety system. As a result, companies obtain neither a value chain vision, nor tools for cooperation or fast access to relevant information in case of an alarm or crisis. Moreover, there are key food players, such as laboratories, involved in food quality and safety management who are not involved in traceability management at the moment.

The aforementioned challenges in the agrofood sector require a re-evaluation of current practices in production and trade; the cooperation between enterprises along the supply chain; the relationships between enterprises at similar stages of production or trade; the sector's infrastructure in production and services; and the influence of governments on enterprises' management activities.

A food chain safety information standard helps map and document a product's history, creating trust and confidence among consumers. Furthermore, requirements for product identification, which are applicable in the case of non-GMO (Genetically Modified Organisms) products, organic products, or certification of product origin, demand a secure product identification system and reliable retrieval of related product information. Last but not least, in the case of an incident, an efficient, fast, and precise withdrawal or recall system is needed.

Information technologies not only have further potential to support the agrofood sector in coping with these challenges; they are also key enablers for some of the developments to take place. Today's drive towards globalization builds on modern communication technology, and it is also accelerated by that technology's communication ability. Interoperability solutions provide the flexibility, business alignment and responsiveness that allow legacy systems to be leveraged to meet the needs of value chain vision, cooperation and fast information access without increasing the efforts required from each actor. Adopting a food chain framework goes beyond ensuring the safety of food. It facilitates, more generally, an approach to quality in agriculture and to food safety and quality systems that will comprise government, industry and consumer involvement.
2. Current Interoperability Environment in Food Quality and Safety Management

2.1. Current State of Traceability Systems

Three different types of traceability systems can be distinguished [2], as outlined in Figure 2. In system "A", each link in the supply chain gets its relevant information from the previous link. The advantage of this type of system is that the amount of
information to be communicated remains small, which reduces transaction costs. The disadvantage is that this system is largely based on trust; each link must trust the previous link on both the quantity and the quality of the information passed. Furthermore, in case of an emergency, all links need perfect administration in order to act quickly.

In system "B", each link receives the relevant information from all former links. With these systems, the speed at which tracking and tracing can be handled is much higher than with systems of type "A". Moreover, because each link in the chain receives all the other information, the information can be checked for completeness. The chain transparency is greater than in system "A". A disadvantage of system "B" is that the amount of information to be transferred increases at each link, so food safety information is consequently duplicated to excess.

In the third type of system, "C", each link of the supply chain provides the relevant information to a separate organization. Such an organization is responsible for combining all the information for the entire supply chain. As a result, tracking and tracing can, in principle, be carried out very rapidly. Further, since the organization is dedicated to the system, the danger of the system not being well maintained is minimized. On the other hand, total costs may be larger and confidentiality issues are not solved in the best way. Companies must trust the third organization with which they share relevant information.

New approaches are needed for the generation of traceability systems that improve cooperation and reduce costs. New architectures such as service-oriented architectures (SOA), P2P environments or multi-agent systems allow cooperation in the food chain while avoiding the replication of information and the need for a central repository. In these environments, every company acts as a provider and consumer of information relevant to the whole. Food safety data in companies could then be exposed for access by third organizations.
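To make the contrast concrete, the following Python sketch models the three information-flow topologies; all identifiers (ChainLink, system_a, …) are invented for this illustration and are not taken from [2].

```python
# Illustrative comparison of the three traceability system types.
# Only the flow topologies follow the text; names are hypothetical.

class ChainLink:
    def __init__(self, name):
        self.name = name
        self.received = []  # traceability information available to this link

def system_a(chain, lot_info):
    """Type A: each link receives information only from the previous link."""
    for prev, link in zip(chain, chain[1:]):
        link.received.append((prev.name, lot_info))

def system_b(chain, lot_info):
    """Type B: each link receives the information of all former links."""
    for i, link in enumerate(chain[1:], start=1):
        link.received.extend((prev.name, lot_info) for prev in chain[:i])

def system_c(chain, lot_info, registry):
    """Type C: every link reports to one dedicated central organization."""
    for link in chain:
        registry.setdefault(link.name, []).append(lot_info)

chain = [ChainLink(n) for n in ("farm", "processor", "retailer")]
registry = {}
system_c(chain, {"lot": "L-042"}, registry)
print(registry)  # the central organization can retrace the whole chain at once
```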
2.2. Main Barriers for Interoperability Integration in Food Quality and Safety Management through the Supply Chain

EDI (electronic data interchange) is still the preferred messaging standard among medium-sized and large firms, but fewer than 5% of small companies use it (see Table 1). EDI users were asked whether they had migration plans to switch from standard to internet EDI. In all sectors, the majority of users said that they had no such plans.

Table 1. Types of e-standards used by companies (in % of firms; base: firms using computers, N = 7237, EU-10; source: e-Business Watch Survey 2006; questionnaire references G1a, G1b, G1c, G1f)

                              EDI-based    XML-based    Proprietary    Other
                              standards    standards    standards
Total (EU-10)                     3            5            12           4
By firm size
  Micro (1-9 empl.)               2            6            10           1
  Small (10-49 empl.)             4            5            13           2
  Medium (50-249 empl.)          10           10            24           2
  Large (250+ empl.)             29           27            31           7
By sector
  Food & beverages                6            4            11           3
  Pulp & paper                    6            5            15           2
  ICT manufacturing               3           10            14           3
  Consumer electronics            4            6            17           5
  Shipbuilding & repair           2            2            19           8
  Tourism                         2            6            10           1
  Hospital activities            19           21            30           4
The deployment of XML-based standards has been very dynamic over recent years, and their diffusion now approaches the same level as EDI-based standards. For e-business, ebXML is the most important standard within this group. ebXML [3] is a single set of internationally agreed technical specifications and common XML semantics to facilitate global trade. But, as the data in Table 1 show, these standards are not widely adopted by companies.

At the moment, food safety management is not carried out in a collaborative way, for the following reasons:

Organizational aspect: at the moment most information exchange is tailor-made or based on non-integrated peer-to-peer (P2P) communication.

Format aspect: information exchange formats for the food sector are not sufficiently standardized; different software and file formats are used and data are not well enough described. This makes information ambiguous and imprecise, and therefore difficult to interpret.

Means: data interchange is mainly carried out by physical means (paper) or by electronic means which do not allow the automatic reading of the information at the reception point.

Business aspect: in the food sector, enterprises stress that information privacy and confidentiality are essential to their business.
These points define the current collaboration model for food safety management in the food sector.
Fig. 1. Current collaboration model for food safety management in the food sector
In reaching towards a definition of interoperability that suits the needs of the food chain regarding food quality and safety, the term must be extended to include a capacity to manage and deploy coordinated expertise to the point of need, in a well-demonstrated manner. This appropriation of terminology is similar in nature to, and to an extent inspired by, the migration of the term "biosecurity" from a process of preventing infectious agents leaving a secure facility to a goal encompassing the safety and security of the world's supply of food and other key natural resources. So what are the dependencies for achieving interoperability at a cultural or operational level in the food chain? They may be summarized as:
• Common discourse
• Common standards
• Common values
• Common good.
2.3. Interoperability Resources for Food Chain Traceability

Multiple standards and tools have been developed in Europe to achieve efficient management of food safety and traceability. As regards standards, two different lines are pursued: food product identification and description, and standards for food information exchange. The European Food Information Resource (EuroFIR) has developed the CEN standard for food composition information exchange based on Eurofoods recommendations [4]. This standard covers food description using LanguaL [5] [6] and the full data generation process.
Another initiative, in the food information exchange line, is the extension of the GS1 GTS [7] for global traceability. It is a process standard that defines the traceability business process and the minimum data management requirements for all traceability systems, independently of the technology.

TraceCoreXML (TCX) is being developed as part of the EU project "Trace". The purpose of TCX is to define a format that includes the minimum elements needed to model traceability relations between organizations in a supply chain. Only some basic properties are included in the core, while extension mechanisms are meant to provide ways to include industry-specific properties, properties exchanged between specific parties, etc. [8]
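As a purely hypothetical illustration of this "small core plus extension points" idea, the sketch below builds a TCX-style message with Python's standard library; the element names are invented for this example and do not reproduce the actual TraceCore XML schema (see [8]).

```python
# Hypothetical TCX-style message: a minimal core identifying a trade unit
# and the organizations it moves between, plus an extension element for
# subsector-specific properties. All element names are invented.
import xml.etree.ElementTree as ET

msg = ET.Element("TraceMessage")
tu = ET.SubElement(msg, "TradeUnit", id="LOT-2009-042")
ET.SubElement(tu, "Sender").text = "DairyFarm-ES-123"
ET.SubElement(tu, "Receiver").text = "Processor-ES-456"
ET.SubElement(tu, "DispatchDate").text = "2009-11-05"

# Extension mechanism: properties outside the core, specific to one subsector
ext = ET.SubElement(tu, "Extension", sector="dairy")
ET.SubElement(ext, "Property", name="fatContent", value="3.5")

print(ET.tostring(msg, encoding="unicode"))
```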
Chifu et al. [9] propose an ontology model for the description of traceability services, to allow automatic Web service composition and dynamic service discovery. Bechini et al. [10] propose a generic data model for food traceability. This model's characteristics refer both to tracing and tracking requirements and to quality control.

These resources provide a solid base for an interoperability environment which would support a food chain quality and safety management system. However, there are still many issues to be covered.

The specificity of the food sector is not well covered by ongoing standards. Each subsector has different products, processes, needs and requirements, and there are no specific standards fulfilling these requests. As aforementioned, TCX for example proposes the minimum to cover traceability between organisations and gives some particularities for some food subsectors (mineral water, chicken, honey…), but what happens with internal traceability? Are food standards ready enough to guarantee the management of traceability and food safety along the food chain? Are current standards complete enough to deal with food safety management inside food companies and to enable in-company information access and sharing in an understandable way? To fit the food chain approach and react accordingly to an alarm, internal traceability information should be as available as the external kind, so as to offer reliable and comprehensible information for carrying out a proper action plan.

Additionally, the concept of food safety has traditionally excluded elements of nutrition, such as food components that are known risk factors for certain chronic diseases, and nutrients in the form of additives, functional foods and supplements. Indeed, the analysis of the composition of food is concerned not only with the nutrients in the human diet, but concomitantly and significantly with anti-nutrients, toxicants, contaminants and other potentially dangerous elements. More recently, requests have been made at international forums to include these elements in risk and safety activities. The impetus has come from concerns about genetically modified foods, functional foods, high levels of nutrient fortificants and nutrient supplements. Thus, as the global food supply evolves, certain aspects of food safety and nutrition must be seen as joint rather than separate fields of activity. As a consequence, ongoing interoperability developments and efforts in standard descriptions in the food chain should cover nutritional aspects of food information exchange as well.

On the other hand, food chains need to become more sustainable to regain and retain consumer trust after recent food incidents and scandals. One of the key components of sustainability is environmental care. Transparency of environmental care in the entire food chain is necessary to contribute to increased consumer awareness. To that end, environmental assurance, food miles and carbon footprint, responsible welfare practices and energy saving are current concerns of both companies and consumers. Nevertheless, existing interoperability solutions in the food chain are mainly limited to traceability for food safety for human health. Modern interoperability solutions should also handle this kind of information, which will contribute to supply chain value. Environmental safety is as relevant nowadays as food safety was in the 20th century.
3. Conclusions

To achieve interoperability between food companies in quality and food safety management, it is necessary to develop a specific information model. This model should be based on new standards, or on existing ones expanded for specific Q&S3 aims, which should define food Q&S items and their specific information exchange. Interoperability solutions provide the flexibility, business alignment and responsiveness that allow legacy systems to be leveraged to meet the needs of value chain vision, cooperation and fast information access without increasing the efforts required from each actor. A new model adapted to the Q&S management needs of food companies, specific standards allied with the beneficial characteristics of interoperability solutions, and a suitable methodology to convey their benefits to companies will break the current organizational, logical, physical and legal barriers.

It is perhaps premature to offer a formalized definition of interoperability in the food sector commensurate with the holistic vision we have described. Interoperability in food safety has a number of obvious enabling conditions or critical dependencies [11]:

• Systems connectivity/transparency
• Shared metadata along the food chain
• A single data dictionary regarding food identification and description, and the characteristics of products and production processes
• Common data interchange standards
• Security and confidentiality
• Cross-system standard procedures
• Trained users
• Automated entities able to make decisions and react according to the action plan in case of alarm
• Certification procedures

3 Quality and Food Safety
But these enabling conditions merely set the stage for the higher level of interoperability that we believe is of the essence in effective food safety prevention and control, a level described above as "cultural". In such a model there are three key types of input that interoperability can make transparent to all direct and indirect stakeholders. These may be characterized as:

1. Event and data stream
2. Knowledge flow
3. Expertise

In addition to improving communication with each other, food safety agencies need to improve communication with consumers. Outbreaks will move through the population with increasing speed, and agencies need to streamline their processes (and embrace social media like Twitter and Facebook) in order to keep up.
4. References

[1] Smith, I., Furness, A. (eds.), Improving Traceability in Food Processing and Distribution. Woodhead Publishing Limited, Cambridge, England. ISBN-10: 1-85573-959-3.
[2] Meuwissen, M.P.M., Velthuis, A.G.J., Hogeveen, H., Huirne, R.B.M., Traceability and Certification in Meat Supply Chains. Journal of Agribusiness 21(2) (Fall 2003): 167-181. © 2003 Agricultural Economics Association of Georgia.
[3] ebXML Requirements Specification: http://www.ebxml.org
[4] Schlotke, F., Becker, W., Ireland, J., Møller, A., Ovaskainen, M.-L., Monspart, J., Unwin, I. (2000). EUROFOODS Recommendations for Food Composition Database Management and Data Interchange. COST Action 99 – EUROFOODS Research Action on Food Consumption and Composition Data. COST Action 99 Report EUR 19538, European Commission, Luxembourg. See also: J. Food Compos. Anal. 2000; 13, 709-744.
[5] LanguaL official website, http://www.langual.org/
[6] EuroFIR public site – Food identification and description, http://www.eurofir.net
[7] GS1 Traceability, http://www.gs1.org/traceability, 2009.
[8] Tracefood Wiki, TraceCore XML Overview, http://www.tracefood.org/index.php/Tools:TraceCore_XML_Overview
[9] Chifu, V.R., Salomie, I., Chifu, E.Şt., Ontology-enhanced description of traceability services. In: ICCP 2007, Proceedings of the IEEE 3rd International Conference on Intelligent Computer Communication and Processing, art. no. 4352135, pp. 1-8.
[10] Bechini, A., Cimino, M.G.C.A., Lazzerini, B., Marcelloni, F., Tomasi, A., A general framework for food traceability. In: Proceedings – 2005 Symposium on Applications and the Internet Workshops, SAINT 2005, art. no. 1620050, pp. 366-369.
[11] Hilton, J., Rantsios, A., Rweyemamu, M., Knowledge Management and Systems Interoperability in Animal Health. Publication release 1.0, AVIS College, www.aviscollege.com, 53 Skylines, Limeharbour, London E14 9TS, UK.
How to Develop a Questionnaire in Order to Measure Interoperability Level in Enterprises

Noelia Palomares1, Cristina Campos1 and Sergio Palomero1

1 Grupo de Investigación en Integración y Re-Ingeniería de Sistemas (IRIS), Dept. de Llenguatges i Sistemes Informàtics, Universitat Jaume I, 12071 Castelló, Spain.
Abstract. Nowadays the market and the global economy are changing constantly, and enterprises need to adapt quickly to these changes in order to collaborate with each other. Sometimes there are problems in establishing these collaborations due to a lack of maturity in aspects related to interoperability. Unfortunately, enterprises do not know enough about the concept of interoperability. The main goal of this article is to shed light on this framework and to show how to develop an evaluation method, by means of a questionnaire, that allows the interoperability of an enterprise to be measured. The application of the questionnaire serves to determine the level of interoperability achieved by the enterprise and to detect the aspects to be improved.

Keywords: Interoperability, Enterprise Interoperability, Maturity Model, Questionnaire.
1. Introduction

When trade began, the only requirements needed to close a deal were words and simple calculations [1]. The revolution of information and communication technologies (ICT), together with other factors, has introduced a set of obligatory prerequisites which have made collaboration extremely complex. In spite of this, these relationships are effective and profitable if they are well managed. Nowadays the market and the global economy change every day, and this forces enterprises to find better methods of collaborating with each other. Organisations must be flexible in order to react to those changes. Usually the changes are external, but sometimes internal changes are also necessary: from a technical point of view (new software or hardware) or from an organisational point of view (reorganisation). The worst situation is when the collaboration is between enterprises of different sectors which work with different methodologies and each one wants to preserve its own essence. It is within this framework that the concept of interoperability appears as a new research theme. For example, many applications use XML but their data models and schemas are very different. The definition of terms like 'customer' or 'order' can change between different applications [2]. However, interoperability does not only affect the software or ICT
systems. Full interoperability is achieved only if it appears in all the layers of an enterprise [2] [3]: business processes in the first layer, knowledge in the second layer and ICT systems in the third layer. The semantic description is used across all the layers, as shown in Fig. 1.

Fig. 1. Layers in Enterprise Interoperability

Moreover, there are different interoperability levels. These levels are indicated in an interoperability maturity model. The question is: "How can the interoperability levels of an enterprise be measured?" This article sets out to explain how to develop a questionnaire, as an evaluation method, in order to measure the interoperability levels of an enterprise, since no existing maturity model does this. We describe a methodology and an example applied to a real enterprise.

This document is divided into five sections. In Section 2, we define the concept of interoperability and present the interoperability context. In Section 3, we describe how to develop a questionnaire by applying the methodology, together with some useful tips. In Section 4, we explain how to apply the questionnaire and how to analyze the results obtained. In Section 5, we present the conclusions reached and the future work to be done in this area.
2. Interoperability Context

2.1. Interoperability Concept

The concept of interoperability has many definitions [2] [3]. The IEEE Standard Computer Glossaries [4] define interoperability as the ability of two systems to 'exchange information and use the information exchanged'. This definition may raise doubts because the interaction is actually established between two enterprises, and an enterprise is considered to be more than a piece of software. Besides using different methodologies, two enterprises can be of different sizes, belong to different sectors, etc. Regardless of this, interoperability assumes that these two enterprises can be in touch. So, to sum up, it can be stated that interoperability allows two enterprise applications to work together in an efficient way.

2.2. Interoperability Domains

Three domains are defined in the interoperability context [2], [3]: Architecture & Platform (A&P), Enterprise Modelling (EM), and Ontologies (ONTO).
• The Architecture & Platform domain defines a set of technologies based on services or functions that allow enterprise applications to be implemented and executed in a distributed environment.
• The Enterprise Modelling domain describes, during the execution of the application, what parts of a model (data and process) are used in the interaction.
• Ontologies define the meaning of the terms used in a specific area of work. They are mainly used to cover the semantic aspects of interoperability.
There is a relationship between the layers identified in [2] and the domains of interoperability. Table 1 indicates this relationship. The notation '+++' means that there is a significant relationship, '++' means that the relationship is sufficient, and '+' means that the relationship is weak.

Table 1. Mapping domains to layers

                     EM     A&P    ONTO
Business Process     +++    +      ++
Knowledge            +      +++    +++
ICT Systems          ++     ++     ++
2.3. Maturity Models

In order to improve the interoperability level of an enterprise, it is first necessary to know the current situation of that enterprise. We use a maturity model for that. A maturity model is a method used to evaluate the processes of an organisation. Many maturity models exist in the Software Engineering environment, the most important of which is the Capability Maturity Model for Software (CMM). Building on it, other models of great importance were created in this area. Table 2 sums up the evolution of these models.

Table 2. Maturity models

Model                                                                        Year
Capability Maturity Model for Software (CMM), Carnegie Mellon [5]            1986
Levels of Information Systems Interoperability (LISI), Carnegie Mellon [6]   1998
Capability Maturity Model Integration (CMMI), Carnegie Mellon [7]            2000
Interoperability Maturity Model (EIMM), European Union [8]                   2005
The aim of CMM, created by the Software Engineering Institute (SEI), was to evaluate the processes related to the development and implementation of software. During the 90s, the Software Engineering Institute developed models in order to measure maturity levels in different disciplines, such as:

• CMM-SW: CMM for Software;
• P-CMM: People CMM;
• SA-CMM: Software Acquisition CMM;
• SSE-CMM: Security Systems Engineering CMM;
• T-CMM: Trusted CMM;
• SE-CMM: Systems Engineering CMM;
• IPD-CMM: Integrated Product Development CMM.
From these, new models were derived to represent the levels of enterprise interoperability, such as LISI and EIMM. Specifically, LISI was created to qualify the interoperability levels of information systems, while EIMM, based on the structure of the CMM and on enterprise modelling, was created to help enterprises improve their capabilities. Each of these models determines a classification into interoperability levels, from a lower to a higher scale, and good practices in order to progressively achieve the maximum level. LISI defines these levels as Isolated, Connected, Functional, Domain, and Enterprise. EIMM considers the following levels: Performed, Modelled, Integrated, Interoperable and Optimising. Nevertheless, maturity models do not show how to evaluate the characteristics that are relevant to knowing the interoperability level of an enterprise.
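One way to encode these ordinal scales, assuming only the level names listed above, is as ordered enumerations so that assessment results can later be compared numerically; this is a sketch, not part of either model's specification.

```python
# The two level scales mentioned above, encoded from lowest to highest.
from enum import IntEnum

class LISILevel(IntEnum):
    ISOLATED = 1
    CONNECTED = 2
    FUNCTIONAL = 3
    DOMAIN = 4
    ENTERPRISE = 5

class EIMMLevel(IntEnum):
    PERFORMED = 1
    MODELLED = 2
    INTEGRATED = 3
    INTEROPERABLE = 4
    OPTIMISING = 5

# Ordinal comparison of assessment outcomes
assert EIMMLevel.INTEROPERABLE > EIMMLevel.MODELLED
```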
3. Development of the Questionnaire

A questionnaire is a research instrument that allows the comparison of answers across standardized procedures and the quantitative measurement of a great variety of objective and subjective aspects relative to a population [9]. At first sight, a questionnaire seems easy to prepare, merely a list of questions, but there are methodologies for implementing them correctly. Here we enumerate and briefly explain the most significant stages in the development of a questionnaire.

3.1. Stage 1: Determine the Goal

To start with, it is necessary to establish the purposes of the questionnaire, that is, exactly what we want to know and what the aim of the study is. This requires specifying the objectives and the hypotheses we are trying to verify, delimiting the population and the specific subject. However, a questionnaire is not simply a set of direct questions about what we are researching.

3.2. Stage 2: Preview Interview

Secondly, before starting work, it is recommended to review the bibliographical material on the topic under research, because it can provide enough information to elaborate the questionnaire. On many occasions, questionnaires from other contexts can be adapted perfectly. However, if the literature review is not sufficient, it is advisable to carry out a (structured) preview interview with some members of the population under study.

3.3. Stage 3: Development

Thirdly, there are four basic aspects to bear in mind at this stage, namely: (i) the presentation of the questionnaire, (ii) the organisation of the topics or questions, (iii) the different types of questions, and (iv) the codification of the questionnaire.

3.3.1. Presentation of the Questionnaire

A formal questionnaire must start with a Protocol of Presentation which includes the following aspects:
• Presentation of the interviewer
• Justification of his/her purpose
• Defining the purposes of the questionnaire
• Announcing the organisation that supports it
• Guaranteeing the anonymity of the interviewee
• Acknowledging the collaboration
• Clarifying any aspects that could be ambiguous.
3.3.2. Organisation

The organisation of the information is essential to understanding the questionnaire. Therefore, we consider the correct sequence of topics and how to approach a specific topic. Topics must follow the logic of the interviewee and not that of the interviewer, and the interviewer must always inform the interviewee of the end of one topic and the beginning of a new one. Furthermore, the questions must obey the following rules: begin with simple questions, move gradually towards the central problem, and end up by concluding something.

3.3.3. Types of Questions

Choosing the type of question is one of the most delicate tasks in the development of a questionnaire. There are different types of questions as far as their formulation is concerned:

• Open questions: free response.
• Closed questions: the response belongs to a category. They can have only two categories of response (yes or no) or allow multiple choices.
Both types must contemplate the possibility that the interviewee does not know the answer and/or does not wish to answer.

3.3.4. Codification of the Questionnaire

Finally, the codification must facilitate the analysis of the answers given by the interviewee. There are two methods of codification:

• In open questions, the codification is undertaken by analyzing the meaning of the content of the response (more difficult).
• In closed questions, every possible category of response corresponds to a numerical value.
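A minimal sketch of such a codification for closed questions, using the five-category scale applied later in Section 4.3.3; the function name and the handling of missing answers are our own choices.

```python
# Closed-question codification: each response category maps to a number.
# A missing or unknown answer is coded as None so it can be excluded later.
RESPONSE_CODES = {
    "nothing": 1,
    "little": 2,
    "enough": 3,
    "quite good": 4,
    "very good": 5,
}

def code_answer(raw):
    """Return the numerical code of a response, or None if not codable."""
    return RESPONSE_CODES.get(raw.strip().lower())

print(code_answer("Quite good"))  # -> 4
print(code_answer("no answer"))   # -> None
```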
3.4. Stage 4: Piloting the Questionnaire

Lastly, this fourth stage, commonly known as the piloting of the questionnaire, is the one that allows the validity of the questionnaire to be properly verified. A questionnaire always needs some kind of modification until the final version is obtained. Therefore, it is advisable to follow an iterative process.
4. Application and Analysis

In this section we use an example to explain how to use the methodology to develop questionnaires, how to apply the questionnaire and how to analyze the results obtained. To this end, the phases below have been defined.
4.1. Phase 1: Definition/Choice of a Maturity Model

This phase consists in choosing a maturity model which is able to measure the interoperability levels of an enterprise. We have seen the main maturity models for interoperability in Section 2.3. However, another option is to define one's own maturity model. In this case, it is necessary to determine the structure of the maturity model, the areas of application and the new maturity levels, as shown in Table 3.

Table 3. Phase 1: Definition of a maturity model

Inputs                      Tools/Skills         Outputs
Existing maturity models    Theoretical study    Application areas
                            Consensuses          Maturity levels
                            Reviews
In our case, we used EIMM, which defines the following areas: Enterprise/Business, Processes, Services, Information/Data, Ontologies and Semantics; and the following maturity levels: Performed, Modelled, Integrated, Interoperable and Optimizing.

4.2. Phase 2: Analysis of the Enterprise

This phase presents the business processes and the organizational structure of the enterprise. Table 4 summarizes this phase.

Table 4. Phase 2: Analysis of the enterprise

Inputs                            Tools/Skills    Outputs
Information about the enterprise  Meetings        Enterprise modelling
                                  Study/Design    Flowchart
                                  IDEF0           Matrix of interactions
                                                  Structure of the questionnaire
In our case, we modelled the enterprise using IDEF0 and designed 'Interoperability Matrices' in order to know the relationships between departments, or between departments and external entities, in each business process. Table 5 shows an example of an Interoperability Matrix.
Table 5. Interoperability Matrix

Macro process: Plan

                                    Interoperability Departments
Business Process        Department   A    B    C    D    E    F    G    …
Quality policy          B            X         X    X    X    X    X    ...
Planning of the SGC     B            X         X    X    X    X    X    ...
Planning of aims        B            X         X    X    X    X    X    ...
Review of the system    B            X         X    X    X    X    X    ...
...
For example, in the business process 'Planning of aims', Department B (the Quality Department) interacts with all the other departments. After that, we determined the structure of the questionnaire. We decided to ask about the interoperability level of each interaction, that is, for each department or external entity in each business process.
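As a sketch, the interoperability matrix can be held as a simple data structure from which the list of questions is derived; the structure and names below are illustrative, with the 'Planning of aims' entry following Table 5 and the other partner set used as a placeholder.

```python
# Interoperability matrix as a dictionary: for each business process, the
# owning department and the departments/external entities it interacts with.
MATRIX = {
    "Planning of aims": {"owner": "B",
                         "interacts_with": {"A", "C", "D", "E", "F", "G"}},
    "Review of the system": {"owner": "B",
                             "interacts_with": {"A", "C", "D", "E", "F", "G"}},
}

def interactions(process):
    """(owner, partner) pairs to ask about in the questionnaire."""
    entry = MATRIX[process]
    return [(entry["owner"], d) for d in sorted(entry["interacts_with"])]

print(interactions("Planning of aims"))
# -> [('B', 'A'), ('B', 'C'), ('B', 'D'), ('B', 'E'), ('B', 'F'), ('B', 'G')]
```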
4.3. Phase 3: Development of the Questionnaire

This phase shows how to use the methodology described in Section 3 in order to develop the questionnaire. Table 6 summarizes this phase.

Table 6. Phase 3: Development of the questionnaire

Inputs                     Tools/Skills         Outputs
Development methodology    Piloting             Drafts
                           Meetings             Questionnaire
                           Improvement cycle

We present all the methodology stages from a practical point of view. The results obtained after three iterations are described below.

4.3.1. Stage 1: Determine the Goal

The goal was to develop a tool to measure the interoperability levels of the business processes of an enterprise.

4.3.2. Stage 2: Preview Interview

We conducted an interview in order to collect information about the business processes and the organisational structure of the enterprise, and reviewed bibliographical material on enterprise interoperability. In fact, we consider it advisable to do both.

4.3.3. Stage 3: Development

The most important aspects to bear in mind in the development of the questionnaire are summarized below.
The questionnaire is divided into two parts. The first one is made up of a set of questions directed to the management in order to learn global aspects of the enterprise (mission, vision and strategy), together with the interoperability matrices used to find the interactions among departments or between departments and external entities. The second one is the questionnaire itself (one per department). It consists of a list of questions organized in blocks and classified by levels, corresponding respectively to the areas and the levels of the maturity model used. The areas are delimited in the questionnaire, but the questions of each block are not classified by levels (sequential numbering is used). The questions are mixed: some are free-response questions and others are multiple-choice questions. Two examples are shown below in order to illustrate the questions we actually asked the company.

• Free-response question: Does the department have an organisational structure (hierarchy, etc.)?
• Multiple-choice question: Does the department have tacit procedures in order to carry out its business processes?
Table 7 shows the responses to this last question given by the Production Department.

Table 7. Example of responses

Macro process: Resource management

                                           Actual Situation
Business Process   Interaction with      1   2   3   4   5   OK   Comments
Maintenance        Logistics                         X        X
                   Industry                          X        X
Work environment   Quality                   X
                   Information systems       X
                   Logistics                 X
                   Industry                  X

Free-response questions do not have a special code. In contrast, multiple-choice questions present five categories. We use the following: 1 = nothing; 2 = little; 3 = enough; 4 = quite good; and 5 = very good. Besides, interviewees can indicate whether the actual situation needs to be improved, and they can also make comments. For example, the last question shows that the process Maintenance has a high rating (4 out of 5 for both interactions), while the process Work Environment is poor (2 for all interactions). Thus, the interviewee indicated that the first process was right, but the second one needed to be improved.

4.4. Phase 4: Application of the Questionnaire

This phase consists in applying the questionnaire. Table 8 summarizes this phase.
Table 8. Phase 4: Application of the questionnaire

Inputs           Tools/Skills    Outputs
Questionnaire    Interviews      Results
The application of the questionnaire is also divided into two parts. We used interviews to gather the results. It is important to take notes, for example: the name of the interviewee, the date, etc.

4.5. Phase 5: Summary of Information and Analysis of the Results

This phase involves the summary of the information and the analysis of the results. Table 9 summarizes this phase.

Table 9. Phase 5: Summary of information and analysis of the results

Inputs     Tools/Skills    Outputs
Results    Analysis        Quantitative analysis
                           Qualitative analysis
The analysis allows the interoperability level achieved for each process and interaction in each department of the enterprise to be measured. The levels are calculated by considering the range of answers that corresponds to each process and interaction: a level is achieved if the minimum score is obtained for all the questions of a block. For example, according to the results in Table 7, and considering that this block contains only one question, the level is Interoperable for Maintenance and Modelled for Work Environment. The last step is to develop a graphic representation and a qualitative analysis of the results, thus identifying the improvement needs.
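A possible reading of this rule, consistent with the Table 7 example; the exact mapping between scores and levels is our interpretation, not a formula given by the authors.

```python
# Level-calculation sketch: the level achieved by a block is bounded by the
# worst (minimum) coded answer among the block's questions, so a single low
# score keeps the whole block at the corresponding lower level.
LEVELS = ["Performed", "Modelled", "Integrated", "Interoperable", "Optimizing"]

def achieved_level(block_scores):
    """block_scores: coded answers (1-5) for all questions of one block."""
    return LEVELS[min(block_scores) - 1]

print(achieved_level([4, 4]))        # Maintenance -> 'Interoperable'
print(achieved_level([2, 2, 2, 2]))  # Work environment -> 'Modelled'
```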
5. Conclusions and Future Work

To sum up, a strategic plan is necessary to improve enterprise interoperability. This plan leads enterprises to redesign their organisational structure and business processes. In order to define this strategic plan, measurement of the current situation and identification of needs and gaps are required. Measurement techniques must consider all the domains and concerns involved in enterprise interoperability: the layers of the enterprise (Business Processes, Knowledge and ICT Systems) and the domains of interoperability (Architecture & Platform (A&P), Enterprise Modelling (EM), and Ontologies (ONTO)) must both be considered.

One of the techniques that can be applied to the measurement of interoperability is the use of questionnaires. Using an application example, this paper describes the development of such a questionnaire by defining the tasks and steps to be followed. The application in the enterprise has provided good experience for the improvement of the questionnaire. Another important aspect for the success of this application is the active collaboration of representative people from the departments involved. The results are quantifiable, but some subjective aspects must be considered, taking into account the
specific characteristics of each enterprise and each process. The interoperability level is not homogeneous across all the existing collaborations for the same business process. The improvement needs identified allow the enterprise to establish new objectives and plans for the future. Another important implicit result is that the people involved in this study have learned about interoperability and its concerns. Enterprise managers have noticed that knowledge of interoperability is quite low and that education and training plans must be defined. In order to complete the results obtained by the application of the questionnaire, other aspects and techniques must be considered in order to define priorities, interoperability policies and training plans.
6. Acknowledgments This work was funded by CICYT DPI2006-14708 and Bancaja.
7. References

[1] Benguria, G., Santos, I. (2008). SME Maturity, Requirement for Interoperability. Enterprise Interoperability III, 29-40.
[2] Chen, D., Doumeingts, G. (2003). European Initiatives to Develop Interoperability of Enterprise Applications – Basic Concepts, Framework and Roadmap. Annual Reviews in Control 27, 153-162.
[3] Chen, D., Doumeingts, G. (2003). Basic Concepts and Approaches to Develop Interoperability of Enterprise Applications. In: Camarinha-Matos, L.M., Afsarmanesh, H. (eds.): PRO-VE. Volume 262 of IFIP Conference Proceedings, Kluwer, 323-330.
[4] IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. Institute of Electrical and Electronics Engineers, New York, NY, 1990. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&isnumber=4148&arnumber=159342&punumber=2238
[5] Carnegie Mellon, Software Engineering Institute. Capability Maturity Model® (CMM). http://www.sei.cmu.edu/cmmi/start/faq/related-faq.cfm
[6] C4ISR Interoperability Working Group, Department of Defense (1998). Levels of Information Systems Interoperability (LISI). Washington, D.C.: United States Department of Defense. http://sei.cmu.edu/isis/guide/introduction/lisi.htm
[7] Carnegie Mellon, Software Engineering Institute. Capability Maturity Model® Integration (CMMI). http://www.sei.cmu.edu/cmmi
[8] ATHENA (2004). Enterprise Interoperability Maturity Model (EIMM). The European Commission (IST-507849). http://interopvlab.eu/ei_public_deliverables/athena-deliverables
[9] Amérigo, M. (1993). Metodología de cuestionarios: Principios y Aplicaciones [Questionnaire methodology: principles and applications]. Confederación de Asociaciones de Archiveros, Bibliotecarios, Museólogos y Documentalistas, ANABAD.
Knowledge Sharing and Communities of Practices for Intra-organizational Interoperability

Philippe Rauffet1, Catherine Da Cunha1 and Alain Bernard1

1
IRCCyN, Ecole Centrale Nantes, France
Abstract. This paper discusses the use of knowledge sharing and communities of practices so as to enable and support collaboration between units within the same organization. It proposes a framework and some tools so that an intra-organizational interoperability capability emerges without "special efforts". Keywords: organizational interoperability, Communities of practices, transfer of good practices
1. Introduction

Companies are nowadays dynamic structures, which are rearranged on a regular basis depending on projects and alliances. Organizations therefore become multi-functional, multi-product, and geographically distributed. These reconfigurations have significant consequences in terms of performance at all levels, and they come up against much organizational heterogeneity. Organizational interoperability is therefore a key challenge for companies in making their resources (sites, people, products, software…) able to communicate and work together.

ATHENA defines interoperability as "the ability of two or more systems or components to exchange information and to use [without any misinterpretation or loss of sense] the information that has been exchanged" [1]. Konstantas [2] enriches this definition by specifying that this communication and collaboration must take place without special efforts from the systems involved. Interoperability can be classified into three types [3]: (1) semantic interoperability, which aims at solving syntax and language problems (signification of speech, sense of the knowledge which can be processed and manipulated in the enterprise) through ontologies; (2) technical (or material) interoperability, which
focuses on the heterogeneity resulting from the use of incompatible technical means to transport, transmit, and operate on information; (3) organizational (or functional) interoperability, which attempts to reduce the gaps caused by differences in practices and business processes. Indeed, different enterprises or departments grow and develop their own organization in isolation from one another. Consequently, the same tasks may not be processed in the same way in two different organizational units, resources are not able to "understand" each other, and problems can occur at their cooperation border. In order to obtain this organizational interoperability, many works stress the importance of Enterprise Modeling, which aims at adapting, intermediating or formatting the activities and the processes of an organization [4, 5]. This process-centered view can nevertheless be balanced with a resource-centered view, that is, one in which the organization seeks to create a shared, practical knowledge library and to manage a coordinated competency growth of its entities. That enables mutual understanding between entities and the emergence of an operational culture facilitating collaboration. This paper focuses on intra-organizational interoperability (that is, organizational interoperability within a single organization). It proposes a resource-centered framework based on the broad transfer of good practices and the identification of communities of practice, so as to build a sustainable capability to collaborate. It builds on current work carried out in the ANR project Pilot2.0 around a system managing organizational capabilities [6]. Knowledge sharing and communities of practice are discussed in section 2 as means to support interoperability. Sections 3 and 4 present an operational method and a tool to implement this approach. The propositions are finally discussed in the last part of this paper.
2. Using Knowledge Sharing and Communities of Good Practices to Support Interoperability

Knowledge sharing is defined [7] as a twofold process which focuses on:
- the collection of relevant practical knowledge in "many to one" and "many to many" ways: the "doers" share their experience and their ideas among themselves and with the organization, so as to create organizational knowledge;
- the transfer of this organizational knowledge to all the organizational entities, in a "one to many" way: the "knowers", who formalize organizational knowledge from the collected local experience, share it with the "doers", so as to teach them the organization's standards.
Knowledge sharing is by definition linked with the concept of good practice, an approach that has been implemented and used locally and can be captured in such a way that others can reuse this experience [8]. So as to manage the lifecycle of good practices, Szulanski [9] describes five processes. They explain the
different transformation stages from the identification of a local solution, towards a conceptualized practice, and then towards a transferred, organizational and used practice (Fig.1, green boxes): (1) Acquisition: an organizational need is identified and knowledge is found locally (by experts or operational workers) to meet this requirement; (2) Adaptation: knowledge is modified and combined, so as to become organizational knowledge adapted to future learners; (3) Application: this adapted knowledge is communicated and transferred to the learners; (4) Acceptation: animation around the applied knowledge must be done so that knowledge is effectively acquired by learners and becomes an organizational practice; (5) Appropriation: the organization is mature on the transferred knowledge and skills, and entities use them autonomously. They adapt them locally or propose modifications at group level.
Fig.1. Knowledge sharing and good practices transfer mechanisms
These five processes are actually very similar to the SECI model [10], as emphasized by the red boxes in Fig. 1. There is only a slight difference brought by the Szulanski processes: knowledge "externalization" is split into two different processes, "application" and "acceptation". Thus an organization has to share the practices it wants to implement, but it also has to check whether these practices are well understood and well used in the operational field. As emphasized by [11, 12], this knowledge sharing is crucial for providing all units globally with a practical, organizational culture, and therefore for facilitating mutual understanding, synergy, and the capability to collaborate. This knowledge sharing must nevertheless be carefully implemented: it can become too standardizing, and the normalized organizational practices can prevent local entities from innovating and make the obtained organizational interoperability rigid. Indeed, if the organization changes deeply, it is not certain that the transferred
practices still work, and the collaboration capability can therefore be reduced, insufficiently adapted to the new context. A way to "open" knowledge sharing and maintain its efficiency could be the creation of communities of practice (CoPs). Defined by Wenger [13], CoPs are "groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis". These networks are established in order to build strategic capabilities within the organization by leveraging learning and knowledge sharing [14, 15]. The emergence of CoPs around the transferred good practices could allow a sustainable knowledge sharing. In addition to natively creating a collaboration framework, this ensures a collective appropriation by users, in action (they use practices because the others do likewise) as well as in innovation (adapting corporate practices to the CoP context, for instance through good stories, or proposing at organization level improvements that enhance the power of organizational good practices). Intra-organizational interoperability could thus be guaranteed by two means:
- knowledge sharing, based on the creation and transfer of good practice libraries, ensuring that entities within the same organization share the same values and the same practical solutions, and are therefore better able to collaborate;
- the emergence of CoPs around knowledge sharing, in order to support the appropriation and maintenance of good practice libraries, taking into account the specific context of use.
The next sections bring some propositions to make these levers operational.
3. Roadmapping for Good Practices Transfer and Collaboration Capability Assessment

Management roadmapping [16] is a learning method designed by MNM Consulting. Supported by a formalism, the roadmap, and a software tool, it is used to transfer good practices, to integrate new sites, and to assess locally and globally the practical knowledge acquisition by the organizational entities. It can also be used to determine whether entities are compatible, that is, whether they share enough common practices to be able to collaborate.

3.1. Roadmap Architecture and Lifecycle

So as to gather and structure the good practices around a particular concern, the method proposes a pattern, called a roadmap. An instance is illustrated by the roadmap extract in Fig.3, used for developing the capability of managing Information Systems in each organizational entity. This pattern has a 2-dimensional matrix architecture, composed of: (1) some "action levers" in the rows, that is, all the resources required at the
entity level for dealing with the particular concern defined in the roadmap; (2) five "knowledge maturity levels" in the columns, rather similar to CMMI, which enable a progressive learning path to be drawn towards the complete acquisition of the practices by an entity. The roadmap content is based on the capture of local good practices and innovations. These are organized in two levels of granularity: (1) the requirements express the general objectives for each "action lever" at each maturity level. For instance, "a manager is appointed" is a requirement. (2) The deliverables are a list of actions which bring some details on how the above requirement could be fulfilled. For instance, "a selection committee is created" and "a list of the applicants for the position exists" are two deliverables of the previous requirement.

The use of the roadmaps can be summed up in the lifecycle presented in Fig.2 [17]. As depicted below, the roadmapping processes follow very finely the five Szulanski processes [9]: (1) the roadmaps' subjects are generated by strategic managers, and the necessary knowledge is identified by functional experts; (2) roadmaps are written, in order to adapt and combine all the good practices in the roadmap's architecture; (3) the roadmaps are first transmitted to middle and operational managers, who discuss objectives in terms of level to reach and delay to respect for level achievement; then the roadmaps are deployed in all the concerned entities; (4) the roadmaps are used and self-assessed by local managers; their grades express the acceptation degree of the learning patterns; (5) in a process of continuous improvement, feedback is collected about the content and the deployment of roadmaps; new versions of roadmaps can be proposed, and some learning objectives can be modified. Roadmapping is therefore a complete system which is used to transfer good practices and to assess this transfer. It provides a means to share a practical, operational culture, and to determine the feasibility of potential collaborative activities, especially with the aid of the knowledge maturity indicators [18] described in the following paragraphs.
Fig.2. Roadmapping lifecycle (adapted from [16])
3.2. Roadmap Assessment

The roadmaps are used to model and transfer the good practices through a web platform. This platform also enables to assess whether good practices are well acquired by entities. Thus all the local managers who use a roadmap have to report the progress they achieve at least once a month. That enables a delocalized measurement of the practical knowledge acquisition. The grades follow some basic rules: if all the deliverables are achieved, then the requirement is considered as fulfilled (the roadmap's cells turn green); so as to reach a knowledge maturity level, all requirements on this level and on the previous ones must be done; the final grade is composed of the maturity level grade and an extra grade representing the completion of the first unfulfilled maturity level. In Fig.3 (left side), the maturity level is 2 and the final grade is 2.33 (33% of the third level is completed).
Fig.3. Roadmap architecture for knowledge sharing
All these measures are consolidated at the different group levels, to give an overview of the maturity levels reached by a site, a business unit, a functional network, or the whole group (on the right side of Fig.3). On the one hand, these two assessment levels provide indicators on the state of knowledge transfer, and enable to know how mature an entity is (whatever it is, a plant or a branch for instance). On the other hand, they can also be used to measure the collaboration capability between several entities [19]. Indeed, the local maturity grade illustrates the knowledge alignment of the different action levers, whereas the consolidated grades inform on the compatibility of functional networks or product branches on a roadmap, that is, on a particular concern on which they have to acquire and share practices. For instance, assuming that the IS roadmap (Fig.3, left side) must be managed by different organizational services, it could be observed in Fig.3 (right side) that the whole organization could collaborate on some simple activities (like creating a common intranet), but only the production and sales departments share enough practices to collaborate on more elaborate activities (like implementing and using an ERP together).
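To make these grading rules concrete, the short sketch below reproduces the Fig.3 example (maturity level 2, final grade 2.33). It is a minimal illustration only: the data layout and helper names are assumptions of ours, not the actual 5Steps® platform implementation, which is not published.

```python
# Sketch of the roadmap grading rules described above (assumed data layout):
# - a requirement is fulfilled when all of its deliverables are achieved;
# - a maturity level is reached when all requirements up to and including
#   that level are fulfilled;
# - the final grade adds the completion ratio of the first unfulfilled level.

def requirement_fulfilled(deliverables):
    return all(deliverables)

def roadmap_grade(levels):
    """levels: one list per maturity level (1..5); each level is a list of
    requirements; each requirement is a list of booleans (deliverables)."""
    maturity = 0
    for level in levels:
        if all(requirement_fulfilled(req) for req in level):
            maturity += 1  # a level counts only if all previous ones are reached
        else:
            fulfilled = sum(requirement_fulfilled(req) for req in level)
            return maturity + fulfilled / len(level)  # extra grade
    return maturity

# Fig.3 (left side): levels 1 and 2 fully achieved, one of the three
# requirements of level 3 fulfilled -> final grade 2.33.
levels = [
    [[True], [True, True]],            # level 1
    [[True, True], [True]],            # level 2
    [[True], [False, True], [False]],  # level 3
    [[False]],                         # level 4
    [[False]],                         # level 5
]
print(round(roadmap_grade(levels), 2))  # 2.33
```

Consolidated grades at site, business-unit or group level can then be derived by aggregating such grades over the corresponding sets of entities.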
4. A Tool for Identifying Communities of Practice and Determining the Entities Able to Collaborate

So as to improve this collaboration capability acquisition and detect precisely the entities able to interoperate, some enhancements are proposed, which focus on the organization and the animation of CoPs around good practice libraries:
- CoPs are composed according to: (1) the maturity levels of entities; (2) the impact of this maturity on their performance (by studying the statistical dependence between Key Performance Indicators and roadmap grades, and distinguishing regular and singular entities based on this criterion [20]); (3) their properties (type of produced goods, geographical zone). That enables to create some contextual sub-groups on which it would be possible to launch collaborative actions. Moreover, that establishes a progression by "neighbourhood" (according to the similarity on the criteria presented above), in addition to the progression by maturity levels, so as to better support the knowledge transfer.
- The collaboration capability becomes more sustainable thanks to contextual experience feedback from CoP members. Animation around good practice libraries focuses on the issues hindering collaboration, taking into account the CoPs' particularities.
In order to implement this point of view, a tool was developed. So as to ensure its portability within the company, it was coded with the aid of VBA Excel (for managing the database and extracting information) and the Google Maps API (for filtering and visualizing CoPs according to chosen criteria), which are more widespread than some academic solutions. As emphasized in Fig.4, all entities appear on a world map, and are colored according to their product type (colored disks) and their performance behavior ("play" marker for out-performance, "stop" marker for under-performance). Entities can be filtered with the aid of several property filters. Then CoPs, that is, groups of entities able to collaborate, can be visualized (Fig.5); a sketch of the underlying grouping logic follows Fig.5.
Fig.4. CoPs search
Fig.5. CoPs detection and visualization
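The grouping logic of the tool can be sketched as follows. This is a minimal, hypothetical rendering of the CoP composition criteria listed above: the actual tool was coded in VBA Excel with the Google Maps API, and the entity attributes, the dependence threshold and the grouping key used here are illustrative assumptions.

```python
# Sketch of CoP identification: group entities by shared properties and
# performance behavior, keeping only those mature enough on the roadmap
# concern. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from itertools import groupby

@dataclass(frozen=True)
class Entity:
    name: str
    product_type: str      # property criterion, e.g. produced goods
    zone: str              # property criterion, e.g. geographical zone
    maturity: int          # roadmap maturity level (0..5)
    kpi_dependence: float  # statistical dependence between KPIs and grade

def behavior(e, threshold=0.5):
    # "regular" entities: performance follows roadmap maturity [20]
    return "regular" if e.kpi_dependence >= threshold else "singular"

def communities(entities, min_maturity):
    eligible = [e for e in entities if e.maturity >= min_maturity]
    key = lambda e: (e.product_type, e.zone, behavior(e))
    return {k: [e.name for e in grp]
            for k, grp in groupby(sorted(eligible, key=key), key)}

sites = [
    Entity("Plant A", "wipers", "Europe", 3, 0.8),
    Entity("Plant B", "wipers", "Europe", 4, 0.7),
    Entity("Plant C", "lighting", "Asia", 2, 0.3),
]
print(communities(sites, min_maturity=3))
# {('wipers', 'Europe', 'regular'): ['Plant A', 'Plant B']}
```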
5. Discussion

A resource-centered view can be complementary to process-based Enterprise Modeling in supporting organizational interoperability. Indeed, managing organizational knowledge and competencies can help organizations to create and transfer a shared, operational culture which reduces the variety of practices, improving mutual understanding and facilitating collaborative actions. As emphasized by [20, 21], this practical organizational learning becomes the only sustainable competitive advantage, because it improves the collaboration capability and the synergy of resources, and the performance of the processes where they are involved. To ensure this new interoperability solution, the two levers described in this paper are necessary for:
- guaranteeing a shared, flexible organizational culture;
- reducing the adaptation and understanding efforts when collaboration actions actually occur;
- adapting this practical culture to changes of context so that the collaboration capability becomes sustainable.
These two levers must be implemented together, because CoP creation alone would not enable the creation of a shared organizational culture (the interests of entities do not always serve the organizational interests), and knowledge sharing alone could result in a normative and insufficiently flexible organizational culture.

Roadmapping has already been implemented in the majority of Valeo plants on some particular concerns (people involvement, production system, supplier integration, information systems…), and it begins to be adopted by the General Council of Vaucluse (quality of services delivered to citizens, creation of an integrated
information system…). It is used for communicating and animating the quality standards, and for launching new transversal and collaborative activities, given the maturity level of entities. The development of the tool for managing CoPs is almost achieved and should be tested soon.
6. Conclusion

This paper discusses how knowledge sharing and communities of practice can be used to support intra-organizational interoperability, and it provides a method and some tools to implement them. While collaboration capability within a single organization could be improved by these propositions, further research could study whether these tools still work in extended organizations (where several stakeholders, like suppliers, distributors, and retailers, are involved and have to collaborate).

The authors acknowledge the French National Agency of Research (ANR), which supports and funds the Pilot2.0 project [6] and the current research work. The project involves laboratories (IRCCyN and M-LAB), companies (MNM Consulting, Valeo) and institutional partners (General Council of Vaucluse). The aim of this partnership is to provide a generic method, to improve existing tools and to deploy them on other types of organizational structures.
7. References

[1] ATHENA, (2003), Description of Work, Sixth Framework Programme, European Integrated Project ATHENA
[2] Konstantas, D., Bourrières, J.P., Léonard, M., Boudjlida, N., (2006), Interoperability of Enterprise Software and Applications, Springer
[3] Blanc, S., Ducq, Y., Vallespir, B., (2007), Evolution management towards interoperable supply chains using performance measurement, Computers in Industry
[4] Vallespir, B., Chapurlat, V., (2007), Enterprise modelling and verification approach for characterizing and checking organizational interoperability, ETFA
[5] Chapurlat, V., Vallespir, B., Pingaud, H., (2008), An approach for evaluating enterprise organizational interoperability based on enterprise model checking techniques, 17th World Congress of IFAC
[6] ANR, (2007), Report on selected projects for Software Technology Program, http://www.agence-nationale-recherche.fr/documents/aap/2007/selection/Techlog2007.pdf
[7] Giannakis, M., (2008), Facilitating learning and knowledge transfer through supplier development, Supply Chain Management: An International Journal, Vol. 13, pp. 62-72
[8] INTEROP NoE, (2005), Practices, principles and patterns for interoperability, Deliverable D6.1
[9] Szulanski, G., (1996), Exploring internal stickiness: impediments to the transfer of best practices within the firm, Strategic Management Journal, Vol. 17, Winter Special Issue, p. 27
[10] Nonaka, I., (1994), A dynamic theory of organizational knowledge creation, Organization Science
[11] Bernard, A., Tichkiewitch, S., (2008), Methods and Tools for Effective Knowledge Life-Cycle-Management, Springer
[12] Candlot, A., Du Preez, N., Bernard, A., (2004), Synergy and Knowledge in an Innovative Project between Academia and Industry, International Conference on Competitive Manufacturing, Stellenbosch
[13] Wenger, E., McDermott, R., Snyder, W.M., (2002), Cultivating Communities of Practice, Harvard Business School Press
[14] Rauffet, P., Bernard, A., Da Cunha, C., Du Preez, N., Louw, L., Uys, W., (2008), Assessed, interactive and automated reification in a virtual community of practice, TMCE Symposium
[15] Prusak, L., Lesser, E.L., (1999), Communities of practice, social capital and organizational knowledge, Information Systems Review, pp. 3-9
[16] Monomakhoff, N., Blanc, F., (2008), La méthode 5Steps® : pour déployer efficacement une stratégie, AFNOR
[17] Rauffet, P., Da Cunha, C., Bernard, A., Labrousse, M., (2009), Progress management in performance-driven systems: study of the 5Steps® roadmapping, a solution for managing organizational capabilities and their learning curves, INCOM Conference
[18] Xu, Y., Bernard, A., (2009), Knowledge assessing in product lifecycle management, PLM Conference
[19] Rauffet, P., Da Cunha, C., Bernard, A., (2009), Designing and managing organizational interoperability with organizational capabilities and roadmaps, I-ESA Conference
[20] Rauffet, P., Da Cunha, C., Bernard, A., (2009), Sustainable organizational learning in group: a digital double-loop system based on knowledge maturity and performance assessment, DET Conference
[21] Diani, M., (2002), Connaissance et performance économique : une nouvelle vision de la firme dans une économie basée sur la connaissance, ACS Conference
Part VII
Standards for Interoperability
Aligning the UEML Ontology with SUMO

Andreas L. Opdahl1

1 Department of Information Science and Media Studies, University of Bergen, NO-5020 Bergen, Norway
Abstract. The Unified Enterprise Modelling Language (UEML) provides a hub for integrated use of the many different modelling languages available for representing enterprises and their information systems (IS). For this purpose, UEML centres around a common ontology that interrelates the semantics of existing modelling languages and their constructs. This paper uses ontology mapping to preliminarily compare UEML's ontology with the Suggested Upper Merged Ontology (SUMO). In addition to suggesting extensions of and improvements to both ontologies, the comparison paves the way for an eventual inclusion of the UEML ontology into SUMO or into one of its successors. The procedure used to compare and map the two ontologies is another contribution of the paper. Keywords: Ontology, Bunge-Wand-Weber model, BWW model, Unified Enterprise Modelling Language, UEML, Suggested Upper Merged Ontology (SUMO), ontology alignment, ontology engineering, enterprise modelling, information systems modelling, interoperability.
1. Introduction

The Unified Enterprise Modelling Language (UEML, see [1] for an overview) is a hub for integrated use of modelling languages for representing enterprises and their information systems (IS). In the longer run, UEML even aims to provide a hub for integrated use of models expressed using those different languages. A central component of UEML is a common ontology that interrelates the semantics of various existing modelling languages and their constructs (see [2]). In line with the current trend towards convergence of and interoperability between different ontology initiatives, there is a need to consider aligning – and possibly even merging – the UEML ontology more closely with other upper and domain ontologies. The first advantage is that the effort of ontology definition and maintenance can thereby be shared with other ontology initiatives, freeing resources to focus on the core purpose of UEML, that of supporting integrated use of different modelling languages and models. A second advantage is that modelling
languages and models incorporated into UEML may thereby become interoperable not only with one another, but also with other information resources that are based on the same common ontology. A third and final advantage is that the process of aligning ontologies may suggest improvements to the ontologies involved. For example, missing or weakly defined ontology concepts can be identified in either ontology, and misaligned concept structures may suggest how the ontologies can be better organised. Terminology can also be harmonised as a result. The purpose of this paper is to contribute to such an alignment, improvement and, perhaps, eventual merger by presenting a first systematic comparison of concepts from the UEML ontology and from SUMO. As a side result, the paper also sketches a new procedure for systematic ontology comparison and mapping.
2. Theory

2.1. UEML Ontology

UEML facilitates integrated model use by making semantic correspondences between the modelling constructs of different languages clear. Semantics are described by representation mappings of modelling constructs into a common ontology [4,5]. The mappings use separation of reference to break individual modelling constructs into their ontologically atomic parts [1]. As a result, the UEML ontology is divided into four taxonomies: (i) its classes are organised in a conventional generalisation hierarchy, in which a more specific class specialises one or more less specific ones; (ii) its properties form a precedence hierarchy, where a more specific property succeeds less specific ones; (iii) its states form a state hierarchy, so that a more specific state discriminates less specific states; and, finally, (iv) its transformations are organised in a transformation hierarchy, where a more specific transformation elaborates one or more less specific ones. The taxonomies are interrelated. For example, classes are related to the properties that characterise them; properties are related to the states they define; states are in turn entered and exited by transformations; and certain types of properties are laws that restrict other properties: specifically, state laws restrict states, whereas transformation laws effect transformations. The UEML ontology currently comprises 136 concepts (although there may still be a few redundancies to resolve). The concepts have resulted from analysing 130 modelling constructs from 10 enterprise/IS modelling languages [1].

2.2. Bunge-Wand-Weber (BWW) Model

Although UEML is currently represented and maintained as an OWL ontology [5], it adds a layer of more specific ontological assumptions and rules to complement the formal and representational focus of OWL. These more specific assumptions and rules are taken from Mario Bunge's ontological model [6,7] and from the Bunge-Wand-Weber (BWW) representation model [8,9]. The BWW model adapts central ontological concepts from Bunge's ontology to information systems, while
retaining Bunge's ontological commitment to scientific realism, i.e., to the ontological position that "identifies reality with the collection of all concrete things, [...] postulates the autonomous existence of the external world, admits that we are largely ignorant of it, and encourages us to explore it" [10].

2.3. Suggested Upper Merged Ontology (SUMO)

The Suggested Upper Merged Ontology (SUMO, [11], www.ontologyportal.org) is a family of upper and mid-level ontologies that build on and unify concepts taken from many and diverse sources. Together, the SUMO ontologies comprise around 20 000 concepts and 70 000 axioms. A browsable selection of 949 high-level SUMO concepts is available from http://virtual.cvut.cz/kifb/en/toc/root.html and has been used in this paper. There is also an OWL representation, and SUMO is aligned with other ontology-related projects, such as WordNet [12] and YAGO (Yet Another Great Ontology, [13]). This paper focusses on the upper level of SUMO, where most of the matches with UEML's ontology concepts have so far been found. As the UEML ontology grows to become larger, with more concepts that are specific to enterprise and IS modelling, however, it is likely to fit best as an additional mid-level ontology to be included in SUMO.
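Before turning to the comparison method, it may help to make the four interrelated UEML taxonomies of Section 2.1 tangible. The sketch below encodes them as simple data types; the encoding is our own illustration, not UEML's actual OWL representation, although the type and relation names mirror the prose above.

```python
# Illustrative (non-normative) encoding of the four UEML taxonomies and the
# relations that interconnect them, as described in Section 2.1.
from dataclasses import dataclass, field

@dataclass
class UemlClass:
    name: str
    specialises: list = field(default_factory=list)       # less specific classes
    characterised_by: list = field(default_factory=list)  # properties

@dataclass
class UemlProperty:
    name: str
    succeeds: list = field(default_factory=list)  # less specific properties
    defines: list = field(default_factory=list)   # states this property defines

@dataclass
class UemlState:
    name: str
    discriminates: list = field(default_factory=list)  # less specific states

@dataclass
class UemlTransformation:
    name: str
    elaborates: list = field(default_factory=list)  # less specific transformations
    enters: list = field(default_factory=list)      # states entered
    exits: list = field(default_factory=list)       # states exited

anything = UemlClass("Anything")
any_property = UemlProperty("AnyProperty")
anything.characterised_by.append(any_property)
changing_thing = UemlClass("ChangingThing", specialises=[anything])
```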
3. Research Method

The comparison has followed a systematic and iterative procedure for comparing ontologies, inspired by Wand and Weber's [8] representation theory. The procedure comprises the following steps:
1. Map the concepts of ontology A (in this case the UEML ontology) into ontology B (in this case SUMO).
2. Map the concepts of ontology B into ontology A.
3. Compare the two mappings and try to resolve inconsistencies.
4. Repeat steps 1-3 until the mappings are consistent and no longer change from one iteration to the next.
The procedure is iterative in order to encourage incremental learning about the two ontologies and their relationships in the first and second steps. The third step aids the process by identifying and resolving inconsistencies. An example of an inconsistency is when concept a in ontology A is mapped as a "match" to concept b in ontology B in the first mapping, but b is not mapped as a "match" to a in the second one. This means that either the "match" is wrong and must be relaxed in the first mapping, or a corresponding "match" must be added to the second. A first set of consistency criteria has already been formulated to capture this and similar cases. The detailed mappings in steps 1 and 2 are carried out according to the following steps:
a) To map a hierarchically organised ontology A into another hierarchically organised ontology B, start by comparing their root concepts.
b) Two concepts a and b are compared according to this recursive procedure:
   i. Check that the concept pair has not been compared already.
   ii. If the concepts "match" exactly, store the result and compare all the children of a with all the children of b.
   iii. If a is a "more specific" concept than b, store the result and compare a with all the children of b.
   iv. If a is a "more general" concept than b, store the result and compare all the children of a with b.
   v. If the two concepts "overlap", but they are not identical and one does not subsume the other, compare all the children of a with b.
   vi. If the concepts are "disjoint", store the result.
The advantage of this procedure is that it leverages the hierarchical ontology organisation to reduce the number of concept pairs that must be compared. In particular, it does not consider very specific concepts for which there can be no match in the other ontology. This is a necessity when comparing against a large ontology with tens of thousands of concepts like SUMO. A single mapping run may not identify all interesting relationships between the two ontologies. But when it is paired with a corresponding inverse mapping run and iterated until the mappings become consistent and stable, it offers an effective way of comparing ontologies. Space does not permit a more detailed discussion of the procedure. Further work is needed to explore, refine and validate it. Possibly, it should also be verified by formal means.

We compared the UEML ontology with SUMO through four iterations (comprising eight mappings), during which the overall comparison procedure and the detailed mapping procedures emerged. The first two iterations (comprising four mappings) were informal and limited to the highest levels of the two ontologies. The third iteration was formal but still reduced in scope. The fourth iteration was both formal and elaborate. It compared the full but somewhat redundant set of 136 UEML concepts with more than 300 concepts from the browsable SUMO hierarchy at http://virtual.cvut.cz/kifb/en/toc/root.html. A detailed trace of comparisons was kept. A selection of results will be presented and discussed in the following sections. We did not conduct a fifth iteration to ensure that the results from the fourth iteration were indeed stable. Hence the comparison in this paper must be considered preliminary.
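A minimal sketch of this detailed mapping procedure is given below, under stated assumptions: concepts expose their children, and the pairwise relation is supplied by a judgement function (in the study, a human judgement rather than an automated similarity measure). It illustrates the procedure; it is not the instrument actually used.

```python
# Sketch of the recursive comparison (steps a-b above). compare_pair(a, b)
# must return one of: "match", "more_specific", "more_general", "overlap",
# "disjoint" -- in the study this judgement was made by hand.

def map_ontologies(a, b, compare_pair, results=None, seen=None):
    results = {} if results is None else results
    seen = set() if seen is None else seen
    if (a, b) in seen:                       # step b.i: pair already compared
        return results
    seen.add((a, b))
    rel = compare_pair(a, b)
    if rel == "match":                       # b.ii: children of a vs children of b
        results[(a, b)] = rel
        for ca in a.children:
            for cb in b.children:
                map_ontologies(ca, cb, compare_pair, results, seen)
    elif rel == "more_specific":             # b.iii: a vs children of b
        results[(a, b)] = rel
        for cb in b.children:
            map_ontologies(a, cb, compare_pair, results, seen)
    elif rel == "more_general":              # b.iv: children of a vs b
        results[(a, b)] = rel
        for ca in a.children:
            map_ontologies(ca, b, compare_pair, results, seen)
    elif rel == "overlap":                   # b.v: children of a vs b, no result stored
        for ca in a.children:
            map_ontologies(ca, b, compare_pair, results, seen)
    else:                                    # b.vi: disjoint -- record and prune
        results[(a, b)] = rel
    return results

def inconsistencies(map_ab, map_ba):
    """Step 3 of the overall procedure: a 'match' must hold in both directions."""
    return [(a, b) for (a, b), rel in map_ab.items()
            if rel == "match" and map_ba.get((b, a)) != "match"]
```

Because the subtrees below a "disjoint" pair are never visited, the number of pairwise judgements stays far below the full cross-product, which is what makes comparison against an ontology with tens of thousands of concepts like SUMO feasible.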
4. Results

A selection of results from the fourth iteration is presented here. Table 1 shows the interpretations of 18 selected high-level UEML concepts in terms of SUMO.
Table 1. A selection of high-level UEML ontology concepts along with their closest interpretations as SUMO concepts (UEML concept, with definition → closest SUMO interpretation)

- Anything: the most general class of things, characterised by possessing AnyProperty → entity.physical.Object (match): roughly the class of ordinary objects
- ChangingThing: changes (undergoes events) → None (deficit)
- ActiveThing: changing thing that acts on at least one other thing → ...physical.object.Agent (match): can act on its own and produce changes
- AttributedThing: possesses regular property → None (deficit)
- AssociatedThing: possesses mutual property → None (deficit)
- Composite: has other things as components/parts … characterised by PartWholeRelation → ...physical.object.Collection (match): members like Classes ... position in space-time ... members can be added/subtracted
- Component: component (part) of other things → None (deficit)
- Resource: content … acted on by a process → ...object.SelfConnectedObject (more general): no disconnected parts
- InformationResource: resource that carries information (in addition to its resource type) → ...selfconnobj.ContentBearingObject (match): object that expresses information
- AnyProperty: belongs to a thing → entity.abstract.Attribute (perhaps more special, because some SUMO-relations may also match UEML-properties)
- PartWholeRelation: relates a composite to one of its components → part (match, although at the instance level of SUMO): basic mereological relation
- RegularProperty: not mutual, not law, not part-whole, nor class-subclass relationship → ...attribute.InternalAttribute (match): internal property of the entity
- MutualProperty: mutual property that is not law, nor part-whole relation, nor class-subclass relationship → ...attribute.RelationalAttribute (perhaps more special, as some SUMO-relations may also match UEML-mutual properties)
- Law: restricts the value of other properties of the same thing → ...relationalattribute...ObjectiveNorm (more special): normative attributes associated with an objective criterion
- StateLaw: restricts the combinations of properties a thing can possess in a state → None (deficit, although SUMO-relations can be used to describe UEML-state laws)
- TransformationLaw: restricts the combinations of properties that a thing can possess before and after an event → None (deficit, although SUMO-relations and -propositions can be used to describe UEML-transformation laws)
- AnyState: the most general state → None (deficit)
- AnyTransformation: the most general transformation → entity.physical.Process (match): things that happen ... temporal parts or stages
The corresponding Table 2 shows the representations of 15 selected high-level SUMO concepts in terms of UEML. Of course, space does not allow presenting all
the concepts we have considered. Instead, we have chosen concepts that together provide an acceptable overview of how the two ontologies are related at the highest levels. Nor does space allow us to include full definitions of each concept. The most recent publication that gives definitions of UEML's ontology concepts is Berio [2], whereas www.ontologyportal.org defines the concepts in SUMO. The concept definitions we sketch in the tables are based on these two sources.

Table 2. A selection of high-level SUMO concepts along with the UEML ontology concepts that represent them most closely (SUMO concept, with definition → closest UEML representation)

- Entity: universal class of individuals ... root node of the ontology → None (deficit, as the four taxonomies are disjoint without a common ancestor)
- entity.Physical: have location in space-time ... locations have a location in space-time → None (deficit)
- entity.physical.Object: roughly the class of ordinary objects: normal physical objects, geographical regions, process locations → Anything (match): the most general class of things, characterised by their possessing AnyProperty
- ...physical.object.Agent: can act on its own and produce changes in the world → ActiveThing (match): changing thing that acts on at least one other thing
- ...physical.object.Collection: position in space-time ... members added/subtracted → Composite (match): a thing that has other things as components (or parts)
- ...physical.object...ContentBearingObject: does not consist of ... disconnected parts → InformationResource (match): a resource that carries information
- entity.physical.Process: classes of things that happen ... have temporal parts or stages → AnyTransformation (match): the most general transformation
- entity.Abstract: properties/qualities as distinguished from [physical] embodiment → None (deficit)
- entity.abstract.Attribute: qualities not reified into subclasses of Object → AnyProperty, which perhaps also covers certain SUMO-relations (more general)
- ...attribute.Quantity: specification of how many or how much of something there is → None (deficit, as UEML so far has no taxonomy of property values and types)
- ...attribute.InternalAttribute: internal property of the entity ... e.g., shape, color ... → RegularProperty (match)
- ...internalAttribute.PerceptualAttribute: whose presence is detected by an act of perception → None (but Bunge (1977) distinguishes real properties of real things from attributes we ascribe to our mental models of things)
- ...attribute.RelationalAttribute: that an entity has by virtue of a relationship it bears; entity.abstract.Relation → MutualProperty perhaps covers both SUMO-relational attributes and -relations (more general, but SUMO-relations may also be restricted to describe UEML-mutual properties)
- entity.abstract.Proposition: express a complete thought or a set of such thoughts → None (deficit, but SUMO-propositions can be used to describe UEML-laws, in particular transformation laws)
5. Discussion

Our comparison raises the question of whether two ontologies that embed different fundamental ontological assumptions can indeed be compared at all. We acknowledge that it is very hard – and perhaps practically impossible – to compare ontologies with full philosophical rigour. But the problem we are trying to solve is a simpler and more practical one: is it possible to establish correspondences between the UEML ontology and SUMO that are practically useful, e.g., for such purposes as facilitating integrated use of different modelling languages and models of enterprises and their information systems? We think that establishing such correspondences is both possible and useful, even when it cannot be done with full philosophical rigour. For this reason, the term match should be understood with some caution. It is not intended to indicate a perfect "philosophical match", but a close similarity we assume will be useful, e.g., for practical concept mappings.

The comparison indicates that several of the highest-level concepts of the UEML ontology and of SUMO are well aligned. In particular, UEML-Anything matches SUMO-Object and UEML-AnyTransformation matches SUMO-Process. On the other hand, UEML-AnyProperty may be more general than SUMO-Attribute, because some mutual properties in UEML may better match subtypes of SUMO-Relation, a sibling of SUMO-Attribute. If so, UEML-Attributes are overloaded relative to SUMO, but further comparisons are needed.

The most conspicuous gap on the SUMO side is the lack of concepts that describe states. It may indicate that SUMO is deficient in this respect. It may also reflect that SUMO is a three-dimensional ontology (see below), in which a SUMO-Object is "something whose spatiotemporal extent is thought of as dividing into spatial parts roughly parallel to the time-axis" and hence conflates with the state concept in a four-dimensional ontology like Bunge's. Finally, it may even indicate that the property and state taxonomies in the UEML ontology should be collapsed. Indeed, the UEML verification rules state that for every UEML state there is a unique defining property, so the state taxonomy is in principle a surjection of the property taxonomy. The interpretation mapping from UEML to SUMO in Table 1 also reveals that UEML-Laws are not explicitly accounted for in SUMO, although SUMO-ObjectiveNorm covers a particular type of law. SUMO-Relations and -Propositions can be used to describe laws, though. SUMO also lacks concepts that are specific enough to interpret specialised UEML concepts, e.g., for detailed process executions and for systems and their behaviours. Although some of these UEML concepts would better belong in mid-level SUMO ontologies, others, such as the systemic concepts, might be introduced into upper SUMO.

Another observation is that, although UEML and SUMO reflect many of the same ontological distinctions, they sometimes organise them differently. For example, the distinctions between UEML-ActiveThing, -ActedOnThing and -InteractingThing are approximated in SUMO, not by
subclasses of SUMO-entity.physical.Object, but elsewhere in the concept hierarchy by ...physical.process.DualObjectProcess and its subclasses. This may indicate that neither UEML nor SUMO is yet complete. The analysis also suggests that the UEML ontology lacks a fifth taxonomy of property values and types. A case of missing concepts on the UEML side is the lack of concepts for SUMO-PerceptualAttribute, "whose presence is detected by an act of Perception". Indeed, Bunge [6] accounts for this by distinguishing between real properties of real things and attributes we ascribe to our mental models of things. It is quite possible that a corresponding distinction should be made in the UEML ontology. Another group of missing concepts in UEML are the ones corresponding to the highest levels of the SUMO hierarchy, above the four UEML taxonomies: Entity, entity.Physical and entity.Abstract (see Table 2). A final group of SUMO concepts that are missing in UEML have been left out of Table 2. These are abstract/mathematical concepts that have so far not been needed to represent enterprise and IS modelling languages, such as ...abstract.SetOrClass, ...abstract.Graph and ...abstract.GraphElement.

As already mentioned, a specific philosophical difference is that UEML and Bunge's [6] ontology on which it builds are four-dimensional, in that things are extended in three spatial dimensions as well as in time. According to Bunge, all processes occur in things and all things change. Hence, things and processes are two sides of the same coin. In contrast, objects in three-dimensional ontologies, such as SUMO [11], exist only at a single point in time and thus only have spatial extent. The definition of SUMO-Object states that "In a 4D ontology, an Object is something whose spatiotemporal extent is thought of as dividing into spatial parts roughly parallel to the time-axis", whereas the definition of SUMO-Process says that "In a 4D ontology, a Process is something whose spatiotemporal extent is thought of as dividing into temporal stages roughly perpendicular to the time-axis." We note that this distinction is softened by Bunge's "moderate" four-dimensionalism: although his ontology is fundamentally four-dimensional, it contains specific terms for both SUMO-Object (Bunge-State) and SUMO-Process (Bunge-Process).
6. Conclusion and Further Work

The paper has presented a preliminary comparison of the UEML ontology with SUMO. The comparison indicates that, although central concepts in the two ontologies match one another well, considerable effort will be required to include the UEML ontology into SUMO or one of its successors as a mid-level ontology. The presented work directly follows from the UEML roadmap [14], which proposed several research directions for UEML, of which validating and improving the UEML ontology through ontology comparison was one.

We believe the comparison has been useful in several ways. Most importantly, it has paved the way for an eventual inclusion of the UEML ontology into SUMO. Also, aligning the definitions of UEML concepts with the corresponding SUMO concepts can make the UEML ontology easier to understand and learn and thus more acceptable for people with knowledge of SUMO. Aligning the structure of
the UEML ontology with SUMO may improve how the UEML ontology is organised (and vice versa from the SUMO side). Elaborating the UEML ontology with additional concepts from SUMO may make UEML easier to understand and learn and hence increase its acceptability. The new procedure for systematic ontology comparison and mapping that has been outlined is a final contribution. Hence, we believe the present paper is a useful contribution to the further evolution of UEML and its ontology and potentially of SUMO too. At the same time we admit that the preliminary analysis presented is only a first step. A fuller comparison of the UEML ontology and SUMO is needed, in particular considering its mid-level ontologies more carefully. Also, the UEML ontology should be compared with other ontologies and ontology-related resources besides SUMO, which is only one among several contenders for standard ontology of the future. Other candidates include Chisholm's common sense ontology [15], OntoClean [16], Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE, [17]), General Formal Ontology (GFO, [18]), Unified Foundational Ontology (UFO, [19]), WordNet [12] and Yet Another Great Ontology (YAGO, [13]). Relevant domain-specific ontologies include FRISCO [20], the Edinburgh Enterprise Ontology [21], the Toronto Virtual Enterprise (TOVE, [22]), Dietz' speech-act– and systems-inspired enterprise ontology [23] and the ISO 15926 standard for production systems and processes [24]. For other future research paths, see the UEML roadmap [14] and [1].
7. References

[1] Anaya, V., Berio, G., Harzallah, M., Heymans, P., Matulevičius, R., Opdahl, A.L., Panetto, H. and Verdecho, M.J. (2010). The Unified Enterprise Modelling Language – Overview and Further Work. Computers in Industry 61(2), Elsevier, March 2010.
[2] Berio, G. (2005). Deliverable DEM2. Roadmap and UEML2.1. Contributions by Berio, G., Opdahl, A.L., Anaya, V., Verdecho, M., Heymans, P., Matulevicius, R., Panetto, H. and Harzallah, M. www.interop-vlab.eu.
[3] Opdahl, A.L. and Henderson-Sellers, B. (2004). A Template for Defining Enterprise Modelling Constructs. Journal of Database Management 15(2).
[4] Opdahl, A.L. and Henderson-Sellers, B. (2005). Template-Based Definition of Information Systems and Enterprise Modelling Constructs. In Ontologies and Business System Analysis, Peter Green and Michael Rosemann (eds.). Idea Group Publishing.
[5] Antoniou, G. and van Harmelen, F. (2003). Web Ontology Language: OWL. In Handbook on Ontologies in Information Systems, S. Staab and R. Studer (eds.). Springer.
[6] Bunge, M. (1977). Ontology I: The Furniture of the World (Treatise on Basic Philosophy 3). Reidel.
[7] Bunge, M. (1979). Ontology II: A World of Systems (Treatise on Basic Philosophy 4). Reidel.
[8] Wand, Y. and Weber, R. (1993). On the ontological expressiveness of information systems analysis and design grammars. Journal of Information Systems, 3:217–237.
[9] Wand, Y. and Weber, R. (1995). On the deep structure of information systems. Information Systems Journal, 5:203–223.
[10] Bunge, M. (1999). Dictionary of Philosophy. Prometheus Books.
[11] Niles, I. and Pease, A. (2001). Towards a Standard Upper Ontology. In Proc. 2nd Int. Conf. on Formal Ontology in Information Systems (FOIS-2001), Chris Welty and Barry Smith (eds.), Ogunquit, Maine, October 17-19, 2001.
[12] Miller, G.A. (1995). WordNet: a lexical database for English. Communications of the ACM 38(11):39-41.
[13] Suchanek, F.M., Kasneci, G. and Weikum, G. (2007). YAGO: A Core of Semantic Knowledge Unifying WordNet and Wikipedia. Proc. WWW'2007, pp. 697-706.
[14] Opdahl, A.L. and Berio, G. (2006). A Roadmap for the UEML. In Proc. 2nd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2006).
[15] Chisholm, R.M. (1996). A Realistic Theory of Categories – An Essay on Ontology. Cambridge University Press.
[16] Guarino, N. and Welty, C. (2002). Evaluating Ontological Decisions with OntoClean. Communications of the ACM 45(2):61-66.
[17] Gangemi, A., Guarino, N., Masolo, C., Oltramari, A. and Schneider, L. (2002). Sweetening Ontologies with DOLCE. In Knowledge Engineering and Knowledge Management: Ontologies and the Semantic Web, pp. 223-233, LNCS 2473, Springer.
[18] Herre, H., Heller, B., Burek, P., Hoehndorf, R., Loebe, F. and Michalek, H. (2007). General Formal Ontology (GFO): A Foundational Ontology Integrating Objects and Processes. Part I: Basic Principles. Research Group Ontologies in Medicine (Onto-Med), University of Leipzig. Latest intermediate revision: Version 1.0.1, Draft, 14.02.2007.
[19] Guizzardi, G. and Wagner, G. (2005). Some Applications of a Unified Foundational Ontology in Business Modelling. Chapter 14 in Green and Rosemann (2005a).
[20] Falkenberg, E.D., Hesse, W., Lindgreen, P., Nilsson, B.E., Oei, J.L.H., Rolland, C., Stamper, R.K., Van Assche, F.J.M., Verrijn-Stuart, A.A. and Voss, K. (1996). FRISCO: A Framework of Information System Concepts. IFIP WG 8.1 TG FRISCO, Dec. 1996.
[21] Uschold, M., King, M., Moralee, S. and Zorgios, Y. (1998). The Enterprise Ontology. Knowledge Engineering Review 13:31-89, Cambridge University Press.
[22] Fox, M.S. and Gruninger, M. (1998). Enterprise modeling. AI Magazine 19(3):109-121.
[23] Dietz, J.L.G. (2006). Enterprise Ontology: Theory and Methodology. Springer.
[24] PCA (2009). POSC Caesar Association: Introduction to ISO 15926. https://www.posccaesar.org, accessed 2009-08-24.
Emerging Interoperability Directions in Electronic Government

Yannis Charalabidis1, Fenareti Lampathaki1 and Dimitris Askounis1

1 National Technical University of Athens, Athens, Greece
Abstract. Being important at organizational, process and semantic levels, interoperability has become a key characteristic of the new electronic government systems and services over the last decade. As a crucial prerequisite for automated process execution leading to "one-stop" e-Government services, interoperability has been systematically prescribed since the dawn of the 21st century: standardization frameworks, including guidelines ranging from simple statements to well-defined international Web-Service standards, started to appear at national and cross-country levels, powered by governments, the European Union or the United Nations. In parallel, most international software, hardware and service vendors created their own strategies for achieving the goal of open, collaborative, loosely coupled systems and components. The paper presents the main milestones in this quest that shaped electronic government during the last years, describing National Frameworks, key Pan-European projects, international standardization and main industrial and research achievements in the EU. Moreover, the paper describes the next steps needed to promote interoperability at technical, semantic, organizational, legal or policy level – leading to the transformation of administrative processes and the provision of low-cost, high-quality services to citizens and businesses. Keywords: e-Government interoperability, interoperability standards, state of the art
1. Introduction

During the last decade, countries across Europe and internationally have invested heavily in e-Government and the modernization of the public sector: total (central, regional and local) public administration ICT expenditure in 2004 for the EU25 is estimated at about €36.5 billion, with €11 billion devoted to e-Government reforms [8]. Since the benefits of e-Government became apparent, the number of worldwide e-Government projects increased from three to more than five hundred national initiatives between 1996 and 2001 [1]. Throughout the years, e-Government has followed an evolutionary, yet controversial path: from the initial enthusiasm and e-xcitement spiraled out of
proportion, to losing its magic and standing at a crossroads between a number of other research domains, particularly computer science, information systems, public administration, and political science [12]. Despite the fact that government reality today is explicitly seen as what "would have seemed a utopian dream just a decade ago", e-Government is claimed to have fallen short of its potential to transform government service delivery and trust in government. In order to enable value innovation at the business level, though, organization systems of the future must be open to dramatic change, rather than lock in the status quo [10]. In this context, interoperability appears as a key enabler for unlocking the full potential of the public sector. Since its inception and through the years, interoperability has tended to acquire a broader, all-inclusive scope, as a repetitive, well-organized feature of organizations automated at ICT level, as indicated in the definition of the European Interoperability Research Roadmap [3] and the draft EIF 2.0 [13]: "the ability of disparate and diverse organizations to interact towards mutually beneficial and agreed common goals, involving the sharing of information and knowledge between the organizations via the business processes they support, by means of the exchange of data between their respective Information and Communication Technology (ICT) systems". Building on the 4-stage model of Layne and Lee [17] and the EU Measurement Reports [2], e-Government evolution is now measured against a five-stage model which designates interoperable governmental agencies as the last stage (Connected-Stage V). Without interoperable e-Government systems, today's public administrations struggle to keep pace with rapidly evolving economic conditions, advancements in technologies and the regular emergence of new legal settings [24]. Improved interoperability among public organizations and between public and private organizations is of critical importance to make electronic government more successful [11]. In this context, the present paper presents the main milestones in this quest that shaped electronic government interoperability in the European Union during the last years, and outlines the next steps needed to achieve interoperability at technical, semantic, organizational, legal or policy level. The remainder of the paper is structured as follows: in the second section, the progress in e-Government interoperability standardisation in the European region is discussed, providing the state of the art, background and related work. A discussion around the main observations follows in section 3, while section 4 proceeds with the conclusions and the way forward in e-Government interoperability.
2. Interoperability Initiatives in Electronic Government

Over time, a plethora of interoperability initiatives originating from the public sector, the standardization organizations or the industry has emerged, leading to a prevalent standards dilemma. There is indeed a diversity of initiatives developing standards that address particular interoperability requirements at legal, organizational, semantic and technical level, but these standards are designed on such different bases that the choice of a specific standard to be
adopted becomes a new challenge for organizations, one further complicated by the fact that the standards are constantly changing. In this paper, interoperability progress in Electronic Government is examined under the prism of:
• e-Government policies and strategic plans at national or cross-country (i.e. pan-European) level;
• governmental initiatives, summarized into National Interoperability Frameworks and e-Government project implementations;
• research results emerging from academia and industry that have been disseminated as academic publications or project deliverables.
Fig. 1. eGovernment Interoperability Initiatives in the EU
As indicated in Figure 1, further initiatives in the direction of interoperability include: standards maintained by international standardization bodies, such as ISO, UN/CEFACT, OMG, W3C and OASIS, which may be generic or come from other vertical domains conflicting with e-Government, such as e-Health, e-Defense and e-Payments; working groups and committees, such as IFIP WG 8.5 on Information Systems in Public Administration and the NESSI iGovernment WG; and future internet envisioning initiatives, such as the EC Enterprise Interoperability Research Roadmap (EIRR) and the Future Internet Enterprise Systems (FInES) Cluster, together with the Enterprise Interoperability Science Base (EISB). Finally, most international software, hardware and service vendors have already created their own strategies for achieving the goal of open, collaborative, loosely coupled systems and components, with IBM, Microsoft, Oracle and SAP being typical examples following this path.
2.1. Interoperability in the Pan-European and International Context

Today, e-Government is well embedded in policies and strategies across the world, defining their milestones and action plans at national and cross-country level. As stated in the latest Ministerial Declaration on e-Government [20], e-Government has not only become mainstream in national policies but has also reached beyond national boundaries to become an important enabler for delivering European-wide policy goals across different sectors, from justice to social security, to trading business services and beyond. In the European Union, interoperability has become the key issue in the agenda of the public sector [7], since providing one-stop services calls for collaboration within and across public authorities. i2010, the strategic action plan of the European Commission which replaced the eEurope initiatives in pursuit of the Lisbon Strategy objectives [6], [7], has explicitly addressed interoperability as a prerequisite for “devices and platforms that ‘talk to one another’ and services that are portable from platform to platform”, and identified it as one of the main building blocks of the single European information space of eServices (SEIS). In fact, the achievement of pan-European, cross-border interoperability is a key element and prerequisite of all the EU's ambitious e-Government initiatives, while new challenges (such as the EU Services Directive 2006/123/EC) appear that need novel approaches to solving long-standing cross-country interoperability issues. e-Government interoperability is also becoming an increasingly crucial issue, especially for developing countries that have committed to the achievement of the Millennium Development Goals by 2015 [25]. In this context, IDABC (Interoperable Delivery of European e-Government Services to public Administrations, Businesses and Citizens) was established as a European Programme for 2005-2009 in order to use the opportunities offered by information and communication technologies, to encourage and support the delivery of cross-border public sector services to citizens and enterprises in Europe, and to improve efficiency and collaboration between European public administrations. Its follow-on programme ISA (Interoperability Solutions for European Public Administrations) is anticipated to run for the period 2010-2015, focusing on back-office solutions supporting the interaction between European public administrations and the implementation of Community policies and activities. Today, with the 2010 targets nearing, many countries are revisiting their e-Government strategies. The political priorities that determine the way forward beyond 2010 as regards e-Government have been further outlined in preparatory orientation papers [9]: support to the Single Market, empowerment of businesses and citizens, administrative efficiency and effectiveness, and provision of key enablers, with interoperability being characterized as a core precondition.

2.2. Interoperability in Governmental Initiatives

Interoperability research is closely linked to the topic of standardization, since the ultimate goal of standards is to ensure interoperability and integration of different systems. Today, implementation standards for e-Government have been specified
and guided by National e-Government Interoperability Frameworks (NIFs), which stand today as the cornerstone for the resolution of interoperability issues in the public sector and the provision of one-stop, fully electronic services to businesses and citizens. Such interoperability frameworks aim at outlining the essential prerequisites for joined-up and web-enabled Pan-European e-Government Services (PEGS), covering their definition and deployment over thousands of front-office and back-office systems in an ever-extending set of public administration organizations [4]. According to the European Interoperability Framework [13], an interoperability framework describes the way in which organizations have agreed, or should agree, to interact with each other, and how standards should be used. In other words, it provides policies and guidelines that form the basis for the selection of standards and may be contextualized (i.e. adapted) according to the socio-economic, political, cultural, linguistic, historical and geographical situation of its scope of applicability in a specific circumstance (a constituency, a country, a set of countries, etc.). NIFs further provide the necessary methodological support to an increasing number of projects related to the interoperability of information systems, in order to better manage their complexity and risk and ensure that they deliver the promised added value [22], functioning as an umbrella for the harmonization of the various projects’ results. Current frameworks in this direction have been adopted across the European Union, as recorded by the National Interoperability Frameworks Observatory (NIFO) [19] and indicatively presented in Figure 1. Generally, the initiators of these frameworks have been practitioners or public administrations pursuing the goal of standardizing across distributed organizations and avoiding technology vendor lock-in [16], [18]. Typically, a NIF includes the context, the technical content, the management processes and the tools [25]. It provides guidance to practitioners on what to consider and do in order to enable seamless interaction within their public administration as well as with other public authorities. However, in most cases the scope of the NIFs needs to be extended, applying best practices drawn from other NIFs, in order to provide a thorough set of specifications that spans [4], [5]:

• the “Standards & Specifications” level, which includes the paper-based specifications in alignment with the levels of interoperability: legal, organizational, semantic and technical;
• the “Systems” level, which deploys the supporting infrastructures that store and manage the artifacts of the “Standards & Specifications” level;
• the “Coordination” level, which mainly deals with long-term envisioning, raising awareness and ensuring maintenance.
In this direction, interoperability research needs to focus particularly on those fields where compatibility is still low, i.e. areas with missing or conflicting standards, or with no uniform implementation of existing standards [15], [18].
2.3. Research Results

E-Government (or digital government) research has been in progress since the mid-1990s. Research is most extensive and advanced in Europe and the United States, but significant work is also now being conducted in Asia, India, Latin America, and other parts of the developing world. The first phase of e-Government research focused mostly on ways to devise, implement, and evaluate online information and services for citizens, as well as on citizen involvement in the decision-making processes of government [14]. According to Legner and Lebreton [18], the discussion of interoperability within the scientific community started in the early 1990s, with a significant increase of publications since 2004. A large number of e-Government-related publications on integration, interoperation, and interoperability have been published by and for distinct communities of practice, encompassing all levels and branches of government, both nationally and internationally [23]. Earlier research mainly focused on information structures and interfaces or on the communication and transport level, whereas recent work introduces a broader perspective on interoperability on the one hand, and increasingly addresses semantic aspects and business process compatibility on the other. Interoperability in e-Government is today acknowledged as the main driver for delivering effective cross-country electronic government services to citizens and businesses. Consequently, there has been increasing pressure on the academic and practitioner communities for research that focuses on bridging the gap between e-Government and interoperability theory and practice, and this attitude is anticipated to be increasingly reflected in emerging research results in the forthcoming years. Upon querying the most prominent research databases (i.e. ISI Web of Science, SCOPUS, DBLP (Faceted/Complete Search), Citeseer and Google Scholar) for the combination of the terms “interoperability” and “e-Government”, the results suggest that interoperability is recognized as an inter-disciplinary research topic with high political and technological value. The semantic aspect of interoperability has attracted the most attention through the years, yet the organizational and legal aspects have also gained momentum recently. Furthermore, it needs to be taken into account that, in order to enable cooperation of public administrations and to cross-link the corresponding information systems, the European Commission has launched several research projects in the Enterprise Interoperability Cluster (i.e. the FP6 Interoperability Cluster projects, such as ATHENA-IP, Interop-NoE, GENESIS, FUSION, ABILITIES, etc.) or with interoperability aspects, such as the FP7 COIN, COMMIUS, iSURF and NEXOF-RA projects, which have made significant advancements. The importance of interoperability in a pan-European context, with the active participation of the software and services industry, is also depicted in the recent Competitiveness and Innovation Programme (CIP) research initiatives to provide solutions in key infrastructures and interoperability standardization, such as the PEPPOL project (for e-Procurement in a pan-European context), STORK (dealing with eID management) and SPOCS (aiming to implement the Services Directive 2006/123/EC).
3. Observations

By looking at how policy making, national and international initiatives, research and standardization on interoperability have evolved over the last years, one can draw significant conclusions, as also mentioned in relevant recent studies [21], on the current shortcomings at the European level, but at national and international levels as well. Such observations cover the full spectrum of organizational, semantic and technical aspects of interoperability in governmental services provision and indicate that, although interoperability is well pursued in a national context, powered by National Interoperability Frameworks and country-wide initiatives for one-stop electronic service provision, cross-country collaboration issues remain an important barrier. Initiatives at the pan-European or international level do not yet have the necessary momentum to shape solutions that will break the national barriers and provide citizens and businesses with the ability to obtain or provide services outside their home country in a seamless way.

The transformation of national legal structures also poses an important gap that needs to be bridged, so that the necessary processes and final service outputs become an everyday practice for citizens and businesses. Organizational and semantic interoperability solutions usually ask for significant changes in the legal system if they are to be embodied into the everyday practice of public sector officials. This fact is becoming more of a problem than a mere challenge, as the systematic support needed for managing and guiding this legal transformation is not yet a common practice.

The diffusion of needed basic infrastructure, such as electronic IDs, base registries and fundamental governmental web services, has not reached the minimum needed threshold, directly affecting the adoption and sustainability of otherwise novel and applicable approaches emerging from the research community or the industry. Although e-Government investment is still strong, prioritization and goal-setting are still not efficient enough to prevent attempts to develop highly advanced, interoperable infrastructures for final service provision before going systematically after the basic enablers.

Although interoperability standardization is a very active field both for the industry and for national governments, the systematic creation, maintenance and diffusion of standards and best practices has not been achieved yet. As a result, stakeholders still fail to find their way within thousands of pages and web sites, usually settling for lower adoption of standards or, even worse, for reinvention of the wheel. A system-based support of interoperability standardization is clearly needed, especially as the practice communities increasingly look for service implementation patterns, open source software components or standardized XML schemas, and not for plain paragraphs of text.

Going further in the systematic provision of solutions to complex, multidisciplinary problems, we also need a completely new approach towards an interoperability science: formalization methods, assessment metrics, complexity algebra, conceptual theory, logic and rules, ontology engineering, simulation and stochastic methods are now to show their potential within this 'discipline of many disciplines'. Adding theory to practice, and being able to generalize this new knowledge, will soon be needed in e-Government interoperability. But mostly, as
the first prototypes of such approaches, driven by talented researchers, find their way into public administrations worldwide, it has to be shown how scientific excellence leads to better services for all.
4. Conclusions and Future Research

Recent works have shown that interoperability is a very useful capacity of organizations and systems: it can assist governments and enterprises in jumping onto the next streams of service delivery. By setting and solving specific, highly repeatable patterns of interoperability problems, scientists and practitioners can now greatly assist in achieving record-time, high-quality electronic service delivery for citizens and businesses. Moving towards the second decade of interoperability in e-Government, the e-Government research agenda needs to expand in order to achieve the resolution of interoperability at all levels, whether legal, organizational, semantic or technical, and to exploit the advancements achieved as the Web evolves from a global hypertext system to a distributed platform for end-user interaction with the help of Web 2.0 tools (mash-ups, service front-ends, social networking, etc.). In this quest of public sector administrations to achieve on-line collaboration among their systems and organizational units, orchestrated by international organizations and assisted by the research and practice communities, several important steps have been made in the last 10 years. However, the assessment of real applications and of adoption rates by citizens and businesses shows that there are still significant steps to be taken by administrations and industry worldwide in order to achieve the interoperable delivery of services in everyday life. Further research on interoperability in digital public services provision will have, in the next years, to face the multi-disciplinary nature of interoperability problems. Having a sound basis in information systems research, the interoperability community now has to reach out to other systemic and social sciences and follow new directions, such as:

• The design and deployment of federated infrastructures for managing the common semantic definitions and information elements, standards and folksonomies needed for service provision at national and international level. Such base registries, properly solving their maintenance challenge through content syndication, will soon become the key enablers of automated process execution, or executable interoperability.
• The development of simulation tools, assessment models and metrics for evaluating and forecasting the real benefits of service transformation and system-to-system collaboration, towards the reduction of administrative costs and the minimization of service yield time.
• The adoption of Web 2.0 and Government 2.0 approaches for greatly assisting the adoption of new one-stop services by citizens and businesses and the effective collaboration between stakeholders.
• The exploitation of Government clouds (“G-clouds”), in the direction of the emerging Cloud Computing paradigm, to dramatically reduce the cost of IT infrastructures and improve data sharing.
• The empowerment of the organizational and semantic aspects, as well as the incorporation of legal and business rules within information systems, allowing for fully automated service provision by interconnected systems and services.
• The development of formal tools for the description of the complex problem space and for associating interoperability solution patterns, thus providing scientifically sound approaches to recurring problem scenarios found within administrations and their information systems.
5. Acknowledgements

This work has been created in close connection with research activities of the EU-funded project GIC – Greek Interoperability Centre (Contract Number FP7-204999).
6. References

[1] Al-Kibsi G, de Boer K, Mourshed M and Rea N (2001) Putting Citizens Online not in Line. The McKinsey Quarterly 2, 65-73.
[2] Capgemini, Rand Europe, IDC, Sogeti and DTI (2009) Smarter, Faster, Better eGovernment, 8th Benchmark Measurement. Available at: http://www.epractice.eu/files/Smarter,%20Faster,%20Better%20eGovernment%20%208th%20Benchmark%20Measurement.pdf
[3] Charalabidis Y, Gionis G, Moritz Hermann K and Martinez C (2008) Enterprise Interoperability Research Roadmap, Draft Version 5.0. Available at: ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/enet/ei-roadmap-5-0-draft_en.pdf
[4] Charalabidis Y, Lampathaki F, Kavalaki A and Askounis D (2009) A Review of Interoperability Frameworks: Patterns and Challenges. International Journal of Electronic Governance, Inderscience Publications, Article in Press.
[5] Charalabidis Y, Lampathaki F and Askounis D (2009) A Comparative Analysis of National e-Government Interoperability Frameworks. In: Proceedings of the 15th Americas Conference on Information Systems (AMCIS), San Francisco, August 2009, AIS Electronic Library.
[6] Commission of the European Communities (CEC) (2006a) i2010 e-Government Action Plan: Accelerating e-Government in Europe for the Benefit of All, COM (2006) 173 final. Available at: http://europa.eu.int/eurlex/lex/LexUriServ/site/en/com/2006/com2006_0173en01.pdf
[7] Commission of the European Communities (CEC) (2006b) Interoperability for Pan-European eGovernment Services, COM (2006) 45 final. Available at: http://europa.eu.int/idabc/servlets/Doc?id=24117
[8] eGovernment Economics Project (eGEP) (2006) Expenditure Study, Final Version. Available at: http://82.187.13.175/eGEP/Static/Contents/final/D.1.3Expenditure_Study_final_version.pdf
[9] eGovernment Sub-group (2009) Visions and priorities for eGovernment in Europe: Orientations for a post 2010 eGovernment Action Plan. Available at: http://ec.europa.eu/information_society/activities/egovernment/docs/2015_background_doc-210-pvt.pdf
[10] European Commission (2008) Unleashing the Potential of the European Knowledge Economy: Value Proposition for Enterprise Interoperability, Version 4.0.
[11] Gottschalk P (2009) Maturity levels for interoperability in digital government. Government Information Quarterly 26 (1), 75-81.
[12] Heeks R and Bailur S (2007) Analyzing e-government research: Perspectives, philosophies, theories, methods, and practice. Government Information Quarterly 24 (2), 243-265.
[13] IDABC (2008) European Interoperability Framework, draft version 2.0. Available at: http://ec.europa.eu/idabc/servlets/Doc?id=31508
[14] Irani Z, Love P and Montazemi A (2007) e-Government: past, present and future. European Journal of Information Systems 16 (2), 103-105.
[15] Jakobs K (2006) ICT Standards Development — Finding the Best Platform. In: Doumeingts G, Müller J, Morel G and Vallespir B (eds) Enterprise Interoperability: New Challenges and Approaches, Proceedings of the Second International Conference on Interoperability for Enterprise Software and Applications (I-ESA 06), 543-552.
[16] Klischewski R (2004) Information Integration or Process Integration? How to Achieve Interoperability in Administration. In: Traunmüller R (ed) Proceedings of the EGOV 2004 Conference, LNCS 3183, pp. 57-65.
[17] Layne K and Lee J (2001) Developing fully functional E-government: A four stage model. Government Information Quarterly 18, 122-136.
[18] Legner C and Lebreton B (2007) Business Interoperability Research: Present Achievements and Upcoming Challenges. Electronic Markets 17 (3), 176-186.
[19] Malotaux M, Hahndiek F and Hazejager S (2009) National Interoperability Frameworks Observatory (NIFO Project). Available at: http://ec.europa.eu/idabc/servlets/Doc?id=32120
[20] Ministerial Declaration on eGovernment (2009), approved unanimously in Malmö, Sweden. Available at: http://www.se2009.eu/polopoly_fs/1.24306!menu/standard/file/Ministerial%20Declaration%20on%20eGovernment.pdf
[21] MODINIS (2007) Study on Interoperability at Local and Regional Level, Version 2.0. Available at: http://www.epractice.eu/files/media/media1309.pdf
[22] Ralyte J, Jeusfeld M, Backlund P, Kuhn H and Arni-Bloch N (2008) A knowledge-based approach to manage information systems interoperability. Information Systems 33, 754-784.
[23] Scholl HJ and Klischewski R (2007) E-Government Integration and Interoperability: Framing the Research Agenda. International Journal of Public Administration 30 (8), 889-920.
[24] Ziemann J, Matheis T and Werth D (2008) Conceiving interoperability between public authorities - A methodical framework. In: Proceedings of the 41st Annual Hawaii International Conference on System Sciences, Hawaii.
[25] United Nations Development Programme (UNDP) (2007) e-Government Interoperability: A Review of Government Interoperability Frameworks in Selected Countries. Available at: http://www.apdip.net/projects/gif/serieslaunch
Enterprise Collaboration Maturity Model (ECMM): Preliminary Definition and Future Challenges

Juncal Alonso, Iker Martínez de Soria, Leire Orue-Echevarria, Mikel Vergara
Fundación European Software Institute (ESI-Tecnalia). Parque Tecnológico Ed. #204. E48170 Zamudio. Spain.
Abstract. Collaboration and interoperability are pervasive topics today, as organizations struggle to attain a competitive edge in a global market fostered by new economies of scale. Moreover, the deep penetration of the Internet into our lives, at both the personal and the business level, is changing all aspects of our economies and societies and the way enterprises and individuals interoperate and collaborate. In the COIN IP project [IST-216256], a strategy based on readiness assessment for the adoption of best collaboration and interoperability practices has been implemented following the maturity models approach. The aim of this paper is to present a process improvement approach conceived as a maturity model for collaborative networked organizations, in which the organizations participating in a network are assessed both as stand-alone companies and with respect to the network. The result of this assessment provides the organization with a picture of where it stands at that moment and where it has to be if the maturity model is to be fully complied with. Additionally, a roadmap and an improvement plan are suggested, which will help the company fill the gap. Keywords: Enterprise Collaboration, Enterprise Interoperability, Maturity Models, Networked Environments, Readiness Assessment.
1. Introduction and Problem Description

In the current globalized and networked society, it is widely recognized that collaboration and interoperability are key issues for today's organizations. As professed in the introduction to the COIN IP project, the two concepts are different, but they are so interconnected that they can be considered two sides of the same COIN [1]. In this new situation, where enterprises have shifted towards networked enterprises, companies need to adopt innovative forms of collaboration in order to compete and maintain their position in the global market. These new ways of collaboration are mainly based on information technologies, and therefore interoperability capabilities at different levels have become crucial to create value
and success, combining technology and business approaches to catalyze and sustain added value for enterprises and customers. New economic activities have arisen alongside new classes of networks and services, new forms of enterprise collaboration, new business models and new value propositions. Business has changed as well [2]. As stated by the European Commission in its published Enterprise Interoperability Value Proposition, economies of scale can now reach worldwide, allowing firms to tap into the narrowest parts of the long tail of demand. In fact, collaboration is one of the global trends in business nowadays, and collaborative practices are gaining importance in firms. These collaborative practices are being carried out in different forms, from cohesive and stable networks like Collaborative Networked Organisations (CNOs) to more ephemeral and occasional cooperation like Virtual Business Ecosystems. Existing literature points out different definitions and analyses of the new types of collaboration forms [3], as well as numerous enterprise interoperability types and practices [4]. There are also existing proposals on readiness for certain types of collaboration forms, like the ARICON approach [5], where a methodology for Virtual Enterprises and product development is presented. However, it is still a hard task for enterprises to identify best practices and improvements for implementing collaboration and interoperability practices inside different types of networked environments. Presuming that an organization is collaborating in any of the aforementioned types of networks, the question it faces is: how is my company performing, alone and in the network; that is, how mature is my organization in terms of collaboration and interoperability, and what can be improved to perform better? In order to answer this question, a maturity model is being developed in the framework of the COIN project. This maturity model, named ECMM (Enterprise Collaboration Maturity Model), has the main objective of assessing organizations that wish to know their collaboration and interoperability maturity level with respect to a set of best practices. The results of these assessments will present, among other things, an improvement plan and a roadmap to increase the enterprise's collaboration and interoperability capabilities, instilling in organizations the benefits of excellence models.
2. ECMM

A maturity model is a framework that describes, for a specific area of interest, a number of levels of sophistication at which activities in this area can be carried out. In the case of ECMM, the specific areas of interest are collaboration and interoperability, since ECMM focuses on the different disciplines that an organization can address to improve its business in a networked environment. A number of existing interoperability maturity models already provide some guidance to enterprises interested in developing or improving their ability to work in networked environments; Table 1 lists some of these models.
Table 1. Existing Interoperability Maturity Model Examples

Area: Software development and systems engineering
Models: Capability Maturity Model for Software (CMM), Carnegie Mellon; Levels of Information Systems Interoperability (LISI), Carnegie Mellon; Capability Maturity Model Integration (CMMI), Carnegie Mellon; Business Process Orientation (BPO) Maturity Model, Carnegie Mellon

Area: Defense
Model: Organizational Interoperability Maturity Model for C2 (OIMM), Australian Defence Science and Technology Organization

Area: Government Services
Models: Enterprise Interoperability Maturity Model (EIMM), European Union; Government Interoperability Maturity Matrix (GIMM), Sarantis, Charalabidis and Psarras

Area: Industry
Models: Collaboration Engineering Maturity Model (CEMM), Santanen E.L., Kolfschoten G.L. and Golla K.; Supply Chain Management (SCM) Maturity Model, Lockamy A. and McCormack K.

Area: ICT
Model: IT-enabled Collaborative Networked Organizations Maturity Model (ICoNOs MM), Roberto Santana Tapia
Nevertheless, none of these maturity models covers both the interoperability and the collaboration concepts with sufficient depth. Furthermore, ECMM is focused on the specific collaborative form called Virtual Organizations. This new paradigm can be considered a great advance, especially for SMEs, which are currently facing new challenges caused by the present volatile economic situation. Hence, the maturity model approach elicited in the context of the COIN project will help organizations to evaluate and improve the capability for collaboration of an enterprise inside its collaborative network, and to support collaborative and interoperability practices in the scenarios defined within the project: collaborative networks, supply chains and business ecosystems.

2.1. Scope and Structure

One of the major challenges in defining the scope of the Enterprise Collaboration Maturity Model has been to determine and establish which domains, vital to collaborative and interoperable environments, need to be covered. Other maturity models, like CMMI, focus on measuring and assessing business processes, whereas they do not strive to measure and assess business strategies and business models. In the case of ECMM, since Enterprise Collaboration and
Enterprise Interoperability are not only influenced by business processes but are also tightly connected to business models and business strategies, it is clear that these aspects need to be treated as well. The ECMM requirements have been elicited from different sources. The first step in the requirements elicitation was the distinction among the different types of possible requirements, for example: general requirements, collaboration requirements and interoperability requirements. Furthermore, potential users of collaboration networks have also been selected to provide their needs, experience and vision on the issue of measuring and assessing organizations in order to evaluate their preparedness for collaboration. The figure below shows the different sources from which the ECMM has obtained its requirements and, thus, its structure:
Fig. 1. Maturity Model requirements sources
As shown in the previous figure, general requirements are those related to common characteristics of models (structure of the model, evaluation method, etc.). For the more specific collaboration and interoperability issues, concrete requirements coming from the ECOLEAD [3] and ATHENA [4] projects have been identified. End users are the fourth source identified to capture requirements for the maturity model. These end-user requirements cover the three previously explained areas of general, collaboration and interoperability requirements, but from the perspective of the potential users of the maturity model. Finally, after an in-depth analysis of the COIN end-users' answers based on the multiple requirements and the vision of other maturity models and European projects, it was decided to design the ECMM with four maturity levels and seven domains:

Table 2. Levels of the ECMM

Level: 1. Performed
Description: Collaboration with external entities is done, but in an ad-hoc and chaotic manner.

Level: 2. Managed
Description: Create a management foundation for collaboration. Network technologies are used to collaborate and interoperate.

Level: 3. Standardized
Description: Establish a common business strategy and business process infrastructure for collaboration.

Level: 4. Innovating
Description: Manage and exploit the capability of the networked organization process infrastructure to achieve predictable results with controlled variation.

Table 3. Domains of the ECMM

Domain: Project and Product Management
Description: Contains the cross-project and product activities related to defining, planning, developing, risk management and quality assurance.

Domain: Business Process and Strategy
Description: Covers areas that support business process management and financial aspects.

Domain: Customer Management
Description: Contains aspects related to the relationship with the customer and its evaluation.

Domain: Collaboration, Legal Environment and Trust
Description: Legal activities and terms of collaboration relationships.

Domain: Organisation
Description: Covers activities related to the management of resources, the development of competences, and measurement.

Domain: Systems and Technology
Description: Technologies and services for interoperability and collaboration.

Domain: Innovation
Description: Activities related to innovation processes.
2.2. Process Areas

Taking into account the previous requirements chosen by the COIN end-users, where a set of process areas was proposed according to the different types of collaboration, a brief description of each of the selected process areas is presented next:

• Business Management (BM): plans and manages the business and financial aspects of a CNO.
• Collaboration Agreement (CA): sets up the terms under which the collaboration within the CNO takes place, as well as the management of this collaboration throughout the whole life of the CNO.
• Collaborative Project Management (CPM): establishes and manages the project and the involvement of the relevant stakeholders.
• Configuration Management (CM): establishes and maintains the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits.
• Intellectual Property Rights (IPR): clarifies and agrees the terms of the intellectual property rights within the CNO.
• Measurement and Analysis (MA): develops and sustains a measurement capability of the CNO that is used to support management information needs.
• Process and Product Assurance (PPA): provides appropriate conformance guidance and objectively reviews the activities and work products of work efforts within the CNO to ensure that they comply with applicable laws, regulations, standards, organizational policies, business rules, process descriptions, and work procedures.
• Requirements Management (REQM): manages the requirements of the project's products and product components and identifies inconsistencies between those requirements and the project's plans and work products.
• Resource Management (RM): plans and manages the acquisition, allocation, and reassignment of people and other resources needed to prepare, deploy, operate, and support the CNO's products and services.
• Trust Management (TM): promotes the establishment of trust relationships among CNO participants, including the assessment of the trust level among members and between members and the CNO as a whole.
• Business Governance (BG): establishes executive accountability for the management and performance of the CNO's work and results.
• Collaborative Business Process (CBP): establishes and maintains a usable set of collaborative business process assets and work environment standards.
• Collaborative Customer Relationship Management (CCRM): manages the interaction of potential or actual customers with the CNO.
• Defect and Problem Prevention (DPP): identifies and addresses the causes of defects and other problems that are the primary obstacles to achieving a CNO's plans and quantitative improvement goals, so that these defects and problems do not recur.
• Organizational Innovation (OI): selects and deploys incremental and innovative improvements that measurably improve the CNO's processes and technologies.
• Requirements Development (RD): produces and analyzes customer, product and product component requirements.
• Risk Management (RSKM): identifies potential problems before they occur, so that risk-handling activities can be planned and invoked as needed across the life of the CNO, product or project to mitigate adverse impacts on achieving objectives.
• Interoperability and Collaboration Technologies (ICT): standardizes the usage of a set of baseline tools, techniques and methods for interoperability and collaboration.
• Technical Solution (TS): designs, develops, and implements solutions to the committed requirements.
• Customer Evaluation (CE): measures the customers' satisfaction with the delivered products and services and sets up a set of indicators internal to the CNO with respect to the customers.
• Open Innovation (OPI): systematically explores a wide range of internal and external sources for innovation opportunities, and integrates and exploits those opportunities through multiple channels.
• Organizational Process Performance (OPP): establishes and maintains a quantitative understanding of the performance of the CNO's set of standard processes in support of quality and process-performance objectives, and provides the process-performance data, baselines, and models to quantitatively manage the CNO's projects.
• Quantitative Project Management (QPM): quantitatively manages the project's defined process to achieve the project's established quality and process-performance objectives.
• Training and Competency Development (TCD): develops the competencies within the CNO's workforce that are needed to perform the organization's work using its standard processes, and develops the skills and knowledge of people so that they can perform their roles effectively and efficiently.
The development of these process areas is currently being finalized. Each process area consists of one or more specific goals; each goal is composed of several specific practices, which result in one or several work products. Additionally, a generic set of goals and practices is currently being developed for each of the defined maturity levels.
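To make this containment concrete, the short sketch below models it in code: process areas assigned to domains and maturity levels, specific goals composed of specific practices, and practices yielding work products. This is only an illustrative reading of the structure described above, not COIN project code; the class names, the example goal and the level assignment are our own assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SpecificPractice:
    name: str
    work_products: List[str] = field(default_factory=list)  # outputs of the practice

@dataclass
class SpecificGoal:
    name: str
    practices: List[SpecificPractice] = field(default_factory=list)

@dataclass
class ProcessArea:
    acronym: str
    name: str
    domain: str          # one of the seven ECMM domains
    maturity_level: int  # 1 (Performed) .. 4 (Innovating)
    goals: List[SpecificGoal] = field(default_factory=list)

# Illustrative instance (the level assignment is invented for the example):
tm = ProcessArea(
    acronym="TM",
    name="Trust Management",
    domain="Collaboration, Legal Environment and Trust",
    maturity_level=2,
    goals=[SpecificGoal(
        name="Establish trust relationships among CNO participants",
        practices=[SpecificPractice(
            name="Assess the trust level among members",
            work_products=["Trust assessment report"])])],
)
print(tm.acronym, "->", tm.domain, "/ level", tm.maturity_level)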
Fig. 2. Relation between process areas and domains
2.3. Assessment Method

A way to analyse the readiness of an organization and to set up an improvement plan is via an assessment. The objectives of the assessment method are to:

• provide a structured approach to assess the network's and the member organizations' processes against the selected ECMM domains and process areas;
• establish basic requirements for performing an evaluation, in order to ensure that different assessments are consistent and comparable with each other and can be repeated.
One of the main objectives of ECMM is the assessment of an enterprise network. Obviously, an enterprise network might be composed of several homogeneous or heterogeneous enterprises, such as SMEs, large companies or public administrations, and with different topologies, such as supply chains, collaborative networked organizations or business ecosystems. Therefore, the number of enterprises in the network is not a trivial issue, since it determines how marks should be distributed in order to evaluate the real collaboration and interoperability, and how many enterprises should obtain a green mark to pass the assessment. That is, if an enterprise network is composed of three SMEs, the impact of each one represents 33%, whereas if it is composed of ten SMEs, the impact of each one represents only 10%. On the one hand, the number of employees of each enterprise must be considered, because this parameter affects the collaboration; on the other hand, the role of each SME in the collaborative project should also be considered, since it is possible to find a network where some SMEs practically do not contribute to the collaborative project. All these reflections attempt to show the difficulty of evaluating an enterprise network objectively, because of the different nuances and weights that could be applied. The following table describes the marks with which an enterprise network would be evaluated:

Table 4. Marks in the assessment method

Mark: Red
Description: The enterprise network does not collaborate in a solid way, due to the low collaboration and interoperability level in most of the enterprises.

Mark: Yellow
Description: The enterprise network does not collaborate in a solid way, due to the low collaboration and interoperability level in some of the enterprises.

Mark: Green
Description: The enterprise network collaborates with an adequate collaboration and interoperability grade in all or most of the enterprises.
However, a set of indicators has been defined, such as the number of enterprises, the number of employees or the role of each enterprise (essential, important, normal or dispensable), in order to take into account the characteristics that might have an influence on the assessment. In addition, a set of rules is being created that combines the indicators described above so as to assign the marks accurately, as illustrated by the sketch below.
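The following sketch shows how such indicators and rules might interact, aggregating per-enterprise marks into a network mark while weighting each enterprise by its head count and role. The role weights, scores and thresholds are purely hypothetical; the actual ECMM rules were still being defined within COIN when this paper was written.

# Hypothetical aggregation of per-enterprise marks into a network mark.
# Role weights, mark scores and thresholds are invented for this sketch.
ROLE_WEIGHT = {"essential": 3.0, "important": 2.0, "normal": 1.0, "dispensable": 0.5}
MARK_SCORE = {"red": 0.0, "yellow": 0.5, "green": 1.0}

def network_mark(enterprises):
    """enterprises: list of dicts with 'mark', 'employees' and 'role' keys."""
    total_weight = weighted_score = 0.0
    for e in enterprises:
        weight = ROLE_WEIGHT[e["role"]] * e["employees"]
        total_weight += weight
        weighted_score += weight * MARK_SCORE[e["mark"]]
    score = weighted_score / total_weight
    if score >= 0.8:
        return "green"
    return "yellow" if score >= 0.5 else "red"

network = [
    {"mark": "green",  "employees": 40, "role": "essential"},
    {"mark": "yellow", "employees": 10, "role": "normal"},
    {"mark": "red",    "employees": 5,  "role": "dispensable"},
]
print(network_mark(network))  # -> green: the essential partner dominates the weighting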
3. Conclusions and Future Challenges

This paper has presented the research behind, and the basis and structure of, the Enterprise Collaboration Maturity Model (ECMM) developed in the context of the COIN IP project during its first two years. On the one hand, ECMM will strive to make enterprises (SMEs in particular) better prepared to react to business opportunities in a networked environment; on the other hand, it will make the construction of such collaborations smoother thanks to interoperability practices. Both challenges will entail a significant impact on the competitiveness of companies, since ECMM can be used as an opportunity for strengthening the cooperation between enterprises that belong to the same or different sectors, in order to manage bigger projects, fill a gap in the market or generate innovative products. Future work includes the validation of the model in enterprises belonging to a CNO, a supply chain or a virtual business ecosystem. As the validation of the complete ECMM is not viable, pilots will focus on a predefined set of process areas that the enterprises select as critical for their business. These piloting activities will allow us to update and improve the model, following an iterative approach, based on the input received in the assessments.
4. Acknowledgement

This work has been partly funded by the European Commission through the IST Project COIN: Collaboration and Interoperability for Networked Enterprises (No. IST-2007-216256). The authors wish to acknowledge the Commission for its support. We also wish to express our gratitude and appreciation to all the COIN project partners for their contribution to the development of the various ideas and concepts presented in this paper.
5. References

[1] COIN Consortium (IP-216256). Collaboration and Interoperability for Networked Enterprises. Annex I - Description of Work, 2007.
[2] Li M, Crave S, Grilo A, van den Berg R. Unleashing the Potential of the European Knowledge Economy: Value Proposition for Enterprise Interoperability, Version 4.0, European Communities, 2008.
[3] ECOLEAD Consortium (IP-506958). A Reference Model for Collaborative Networks. Deliverable D5.2.3, 2007.
[4] ATHENA Consortium (IP-507849). Guidelines and Best Practices for Applying the ATHENA Interoperability Framework.
[5] ARICON Project, http://cordis.europa.eu/data/PROJ_FP5/ACTIONeqDndSESSIONeq112242005919ndDOCeq1905ndTBLeqEN_PROJ.htm, 2008.
Towards a Conceptualisation of Interoperability Requirements

Sihem Mallek, Nicolas Daclin and Vincent Chapurlat
Laboratoire de Génie Informatique et d'Ingénierie de Production - LGI2P - site EERIE de l’Ecole des Mines d’Alès, Parc Scientifique Georges Besse, F30035 Nîmes cedex 5, France.
Abstract. Enterprises that aim to work together want, prior to any effective collaboration, to know whether they are able to interoperate. On the one hand, this requires the ability to define the particular needs that must be taken into account in order to demonstrate whether an enterprise can be, or must be, interoperable. On the other hand, it requires techniques and approaches for formalising these needs as a set of unambiguous and as formal as possible requirements, called here interoperability requirements. Finally, verification techniques can be used to detect how and where some of these requirements cannot be satisfied, which highlights interoperability problems. This paper focuses on the first two phases and describes an approach to define and formalise interoperability needs as interoperability requirements. These requirements are decomposed into three classes, named compatibility, interoperation and reversibility requirements. Keywords: Interoperability requirements, compatibility, interoperation, reversibility, verification, enterprise.
1. Introduction

Collaboration between enterprises to fulfil a common project remains difficult, and can be considered efficient only if these enterprises are really able to interact. In this context, the development of interoperability has become an important issue in today's globalized world, to facilitate, optimize and improve these interactions. Basically, interoperability is defined in [1] as “the ability of two or more systems or components to exchange information and to use the information that has been exchanged”. Therefore, interoperability can be seen as a set of needs that enterprises have to satisfy. However, these needs remain difficult to specify exhaustively, considering the nature, organisation, skills, etc. of each enterprise. Thus, the description of an interoperability need can be ambiguous, incoherent and incomplete, because such needs are often specified as complex
expressions in natural language. As a consequence, to obtain an unambiguous and clear description of the interoperability needs, a formalisation of these needs is necessary. On the one hand, this formalisation allows each need to be clarified and structured as a set of requirements represented in a requirements model; these requirements are expressed using a limited set of concepts and terms coming from natural language. On the other hand, this formalisation makes it possible to prove each need. This proof can be based on several techniques (expertise, behavioural simulation, formal approaches, etc.). The present research work proposes that formal approaches, such as verification techniques, be used to verify whether these requirements are satisfied by the enterprises. This verification is performed by applying the requirements model to a collaborative model of the enterprises. Moreover, to make this verification possible, the formalisation of requirements must rely on (1) formal languages, such as mathematical language, and (2) an enrichment of the modelling languages used to describe the enterprises and their collaborative processes. The proposed research focuses on interoperability requirements specification and verification. It allows:

• characterising the needs of an enterprise that wants to improve its interoperability and its ability to interact. The goal is to determine these needs by using different approaches, such as literature analysis and questionnaire-based interviews;
• formalising these needs in the form of a set of unambiguous, and as formal as possible, requirements called “interoperability requirements”. The main objective of this formalisation is to provide a requirements model for defining and structuring interoperability requirements, as proposed in different approaches such as [2], [3];
• using this formalisation to verify (via formal verification techniques) whether or not the enterprise components respect these requirements. This verification highlights interoperability problems. The goal is to adapt existing verification techniques, considering (1) the interoperability requirements model, and (2) the existing modelling languages commonly used in the enterprise and collaborative process modelling domain [4].
This paper is structured as follows. Section 2 introduces interoperability needs and presents relevant related work in the domain. An interoperability requirements model is presented and explained in section 3. To demonstrate the usefulness of the interoperability requirements model, a case study is briefly presented in section 4. Finally, section 5 presents a conclusion and outlines perspectives.
2. Interoperability Needs and Related Works

Enterprises face many challenges related to the lack of interoperability, such as loss of money due to the effort necessary for an effective collaboration. Thus, the development of interoperability has become a crucial question for enterprises that want to gain competitiveness in a globalized environment. As a consequence,
several works have been developed over the last years in order to respond to the interoperability needs expressed by enterprises. For instance, numerous maturity models (LISI, LCIM, OIM, etc.) [5], [6], [7] have been developed. These models allow a system to know its ability to interoperate (at a given moment) with regard to different interoperability levels. Some of these models give recommendations that allow a given system to evolve from its own interoperability level toward full interoperability. In [8], a methodology is proposed to measure the interoperability of enterprises. This measure includes two measurements: compatibility measurement and interoperation performance measurement. To measure compatibility, a matrix of compatibility is used to inform enterprises of the presence (or not) of interoperability problems. To measure interoperation performance, criteria such as the cost, time and quality of the interoperation are measured. Furthermore, in [9], a characterisation of interoperability is proposed to evaluate the performance of enterprise interoperability. This characterisation identifies requirements as “rules of interoperability”; five such rules are identified. However, these rules are generic and difficult to exploit (e.g. rule 1 states that a feedback loop is required between two interoperating systems). To structure and capitalise on these different works, they have been integrated into an enterprise interoperability framework developed in [10]. This framework considers two fundamental aspects for treating interoperability needs: the “interoperability barriers” and the “interoperability concerns”. The goal is to achieve an efficient collaboration by removing interoperability barriers (conceptual, organisational and technological) with regard to interoperability concerns (data, services, processes and business). Based on these different works, this research work proposes and provides a model of interoperability requirements, a structure and a method of work (such as REGAL [2]) that enables a user to be guided, with a minimum of omission, in the expression of its needs.
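The compatibility matrix mentioned above can be pictured with a minimal sketch: for every pair of enterprises, record the points where their profiles diverge. The attribute names and the naive difference test below are illustrative assumptions made for this sketch; [8] defines its own measurement procedure.

from itertools import combinations

# Toy enterprise profiles; the attribute names are illustrative only.
enterprises = {
    "E1": {"data_format": "EDIFACT", "platform": "Java", "process_model": "BPMN"},
    "E2": {"data_format": "XML",     "platform": "Java", "process_model": "BPMN"},
    "E3": {"data_format": "XML",     "platform": ".NET", "process_model": "EPC"},
}

def compatibility_matrix(profiles):
    """For each pair of enterprises, list the attributes where they differ
    (a naive stand-in for the presence of an interoperability problem)."""
    matrix = {}
    for a, b in combinations(sorted(profiles), 2):
        matrix[(a, b)] = [k for k in profiles[a] if profiles[a][k] != profiles[b][k]]
    return matrix

for pair, problems in compatibility_matrix(enterprises).items():
    print(pair, "->", problems or "compatible")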
3. Interoperability Requirements Model

As mentioned before, interoperability can be seen as a set of needs (often ambiguous and complex) and, furthermore, as requirements (more formal, clear and unambiguous) that enterprises must satisfy to be interoperable. A requirement can be defined as “a statement that specifies a function, ability or a characteristic that a product or a system must satisfy in a given context” [11]. Generally speaking, interoperability is related to the concept of compatibility. Compatibility means harmonizing enterprises (methods, organisation, tools, etc.) so that the heterogeneous information exchanged can be understood and exploited by each one without interfacing effort. However, compatibility is not sufficient to formulate interoperability requirements. In fact, during the collaboration between two enterprises, many problems other than compatibility can occur. For instance, two enterprises may use the same language to code their data, but one of them may be unable to send its data. In this sense, “two interoperable systems
are compatible, but two compatible systems are not necessarily interoperable” [12]. This statement leads to the consideration of a second kind of interoperability requirements, called “interoperation requirements”. These requirements are related to the interaction between enterprises during the collaboration: enterprises must be able to work together efficiently when interacting. Moreover, after working together, enterprises wish to recover their performance and autonomy while remaining efficient. For instance, after a collaboration, one of the enterprises may have lost performance that it had before the collaboration. In this case, the point is to be sure that a given enterprise will regain its own performance after the end of the collaboration. Thus, interoperability involves another concept, reversibility [10]. Reversibility means that enterprises can return to their original state and recover their autonomy at the end of the collaboration, with regard to their own performance and including any accepted positive and/or negative variation. As a consequence, requirements related to the reversibility of enterprises are also required. In summary, interoperability requirements can be decomposed into three categories: compatibility requirements, interoperation requirements and reversibility requirements. These three categories are fully in agreement with the virtual enterprise (VE) life cycle (creation, operation and dissolution), in which the implementation of interoperability plays a preponderant part [13]. The relationship between these three categories and the VE life cycle is highlighted in figure 1. Two enterprises that aim to work together must first create the collaboration, then collaborate efficiently, and finally end the collaboration. The creation phase takes place before the collaboration, when a business opportunity is detected; the need is to know whether the future collaboration can take place in good conditions, so compatibility requirements are deployed and verified (e.g. if a document in a given format is received by a person who only has an earlier version of the corresponding software, that person cannot read the document). During the runtime phase, a good interoperation is required for an efficient collaboration (e.g. if a product is received by an enterprise later than required, it can delay the rest of the collaboration). The dissolution phase takes place after the collaboration, where the reversibility of the enterprises is required so that they can return to their original state.
Fig. 1. The three interoperability requirement categories related to the VE life cycle [adapted from 13]
Towards a Conceptualisation of Interoperability Requirements
443
Furthermore, the set of requirements in each category (compatibility, interoperation and reversibility) must be represented clearly and in an unambiguous way. For this reason, the principle of a requirements model structured as a causal tree is developed. The concept of a causal tree makes it possible to represent the refinement of interoperability requirements by causal relations linking an abstract requirement to more precise requirements through a logical function. Through this refinement, the causal tree reduces the residual ambiguity of requirements that may exist at each level, as shown in figure 2 and presented in [15].
Abstract Requirement
node(i.1) node(i.2)
R
node i
node(i.k.1) node(i.k.2) node(i.k.n)
R’
node(i.k)
Causal relation Logical function: node i : node(i,1) ∧ node(i,2) ∧ ... ∧ node(i,k)
Fig. 2. Requirements refinement structured in a causal tree
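To make the refinement mechanism concrete, the following minimal Python sketch (ours, for illustration only; the requirement names are invented and it is not part of the cited framework) represents a causal tree whose inner nodes are refined by child requirements combined with a logical AND, and evaluates satisfaction bottom-up, as in figure 2.

class Node:
    # A requirement node: leaves carry a verdict, inner nodes are refined
    # by more precise requirements combined with a logical AND.
    def __init__(self, name, children=None, satisfied=None):
        self.name = name
        self.children = children or []
        self.satisfied = satisfied  # only meaningful for leaves

    def evaluate(self):
        if not self.children:                 # leaf: its own verdict
            return bool(self.satisfied)
        return all(c.evaluate() for c in self.children)  # node i = AND of node(i.k)

root = Node("be interoperable", [
    Node("compatibility", [Node("same data syntax", satisfied=True)]),
    Node("interoperation", [Node("receipt returned for each data item", satisfied=True)]),
    Node("reversibility", [Node("original performance retrieved", satisfied=False)]),
])
print(root.evaluate())   # False: the reversibility branch is not satisfied

In a real setting the leaf verdicts would of course come from verification techniques, not from hand-set flags.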
This research work proposes a causal tree to represent the refinement of interoperability requirements. The root node represents the overall interoperability requirement, i.e. that the enterprises aim to be interoperable. This first requirement is refined into the three categories of interoperability requirements: compatibility, interoperation and reversibility. The requirements of each category are then refined according to the concepts of the INTEROP framework (interoperability concerns and interoperability barriers) [10]: each category is refined by interoperability concerns, which are in turn refined by interoperability barriers, as shown in figure 3.
Fig. 3. The Interoperability requirements causal tree
The definitions and concepts related to the three categories of interoperability requirements are presented in the following sections.
3.1. Compatibility Requirements
A compatibility requirement is defined as "a statement that specifies a function, ability or a characteristic, independent of time and related to the interoperability barriers of each interoperability concern, that enterprises must satisfy before the collaboration becomes effective". Compatibility requirements are derived from the enterprise interoperability framework. They concern the compatibility of interoperability barriers, namely conceptual (syntax and semantics), technological (platforms, communication protocols) and organisational barriers (persons, organisational structure), at each level of the enterprise. Note that compatibility requirements are independent of time and have a static aspect. A set of compatibility requirements is established on the basis of these concepts; the following figure represents this set structured in the causal tree.
Fig. 4. Compatibility requirements structured into a causal tree
An example of a compatibility requirement at the level of services, related to the organisational barrier, is: "Services must have a clearly defined responsible person", so that it is known who is responsible for doing what. To help enterprises see where their compatibility problems lie, verification of compatibility requirements can be performed. From this verification, enterprises are informed of their compatibility problems with respect to the three barriers of interoperability and the four levels at which interoperability takes place in enterprises, and can then set about solving them.
3.2. Interoperation Requirements
An interoperation requirement is defined as "a statement that specifies a function, ability or a characteristic, dependent on time and related to the performance of the interaction, that enterprises must satisfy during the collaboration". In the runtime phase of a collaboration, enterprises want to know whether the interoperation is efficient. The performance of the interoperation can be measured by performance criteria such as the cost, the quality and the time of the interoperation, as mentioned in [8]. For instance, the time criterion can be related to the duration of the interoperation, and the quality criterion to the quantity received. Note that interoperation requirements are dependent on time and have a dynamic aspect.
As a consequence, a set of interoperation requirements is established according to these concepts and structured in the causal tree as shown in the following figure.
Fig. 5. Interoperation requirements structured into a causal tree
An example of an interoperation requirement at the level of data, related to the technological barrier and to the quality criterion, is: "for each data item received, a receipt must be returned", to be sure of the good reception of the data. To help enterprises see whether the interoperation is efficient, verification of interoperation requirements can be performed. From this verification, enterprises are informed of their interoperation problems with respect to their performance criteria and can then solve them.
3.3. Reversibility Requirements
A reversibility requirement is defined as "a statement that specifies functions, abilities or characteristics, related to the capacity of enterprises to retrieve their autonomy and to return to their original state (in terms of their own performance), that they must satisfy after the collaboration". During a collaboration, the enterprises working together are involved in a collaborative public process. This public process can involve part of (or the whole) activity of an enterprise. Enterprises must be able to ensure, on the one hand, the performance of their own private processes and, on the other hand, the performance of the public process, as shown in Fig. 6. As mentioned above, after the collaboration enterprises usually want to retrieve their original performance. This means that, if the implementation of interoperability leads to adaptations or modifications (working methods, tools, organisational structure...) in the enterprises, they must be able to return to their original state at the end of the collaboration. Enterprises therefore want to know whether their performance after the collaboration is in accordance with their performance before the collaboration, within variations (e.g. saved production time) that they are willing to accept. In this case, the enterprises are reversible and can resume their former activities.
(Figure 6 depicts the runtime phase of the collaboration and the situation after it: each enterprise i runs a private process with performance criteria Ci (cost), Qi (quality) and Ti (time), the collaborative public process has its own performance C3, Q3, T3, and after the collaboration each enterprise must retrieve Ci ± ε, Qi ± ε, Ti ± ε, where ε denotes the admitted variations.)
Fig. 6. Reversibility of enterprises
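As a rough arithmetical illustration of the reversibility test sketched in Fig. 6, the following Python check (ours; the criterion values and tolerances are invented) compares each performance criterion before and after the collaboration against an admitted variation ε.

def is_reversible(before, after, epsilon):
    # Reversible if every criterion (cost C, quality Q, time T) ends up
    # within +/- epsilon of its value before the collaboration.
    return all(abs(after[c] - before[c]) <= epsilon[c] for c in before)

before  = {"C": 100.0, "Q": 0.95, "T": 12.0}   # hypothetical pre-collaboration figures
after   = {"C": 103.0, "Q": 0.94, "T": 11.0}   # e.g. some production time was saved
epsilon = {"C": 5.0,   "Q": 0.02, "T": 2.0}    # admitted variations
print(is_reversible(before, after, epsilon))   # True: all deviations within tolerance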
A set of reversibility requirements, based on the performance of the enterprise as measured by cost, quality and time, is structured in the causal tree as shown in the following figure.
Fig. 7. Reversibility requirements structured into a causal tree
An example of a reversibility requirement at the level of services, related to the organisational barrier and to the cost criterion, is: "the cost of a given service after the collaboration corresponds to its cost before the collaboration, within the admitted variations". To help enterprises see whether they are able to retrieve their own performance, verification of reversibility requirements can be performed. In order to illustrate the usefulness of the proposed interoperability requirements model, a case study is presented in the next section.
4. Case Study
The collaborative process, taken from [14], represents the collaboration between a retailer and a provider. The sales department of the provider wants to collaborate with external and independent retailers in charge of delivering products to customers. The different activities and flows between the partners (retailer and provider) are shown in figure 8. The goal is to detect, thanks to the interoperability requirements model, whether compatibility, interoperation and reversibility problems can occur.
(Figure 8 depicts the process: following logistics rules, the retailer generates orders; the provider's sales department checks the available quantities; if they are sufficient the logistics department delivers, otherwise production is launched; the logistics department then makes out the invoice, the retailer pays, and the sales department collects the payment.)
Fig. 8. Collaborative process between provider/retailer
Before the retailer sends its order, several compatibility requirements have to be verified. As far as the organisational aspect is concerned, one requirement to verify could be: "A service must have a clearly defined responsible person". Indeed, if this requirement is not satisfied, mistakes may occur during the transmission of orders, which could degrade the performance of the collaboration (time lost conveying the right order to the right person...). Then, during the collaboration, some interoperation requirements must be verified. One requirement considered here can be: "For each data item received, a receipt must be returned". For instance, the reception of an invoice must, in all cases, induce the generation of an acknowledgement by the retailer. When the collaboration is over, it is necessary to verify that it has not deeply impacted the behaviour, organisation and practices of the partners; verification of reversibility requirements is therefore required. A reversibility requirement considered here could be: "Internal functioning rules (e.g. logistics rules) of a partner P1 must remain the same all along the collaboration and must not be modified suddenly under the control of another partner Pn, except if such modifications are proposed and validated by P1 itself". The verification of this requirement makes it possible to detect whether new (or deeply modified) logistics rules used by the sales department (P1) during the activity named 'to check available quantities' emerge independently of it.
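For illustration only, the acknowledgement requirement used above can be phrased as a check over a timestamped event trace. The Python sketch below is ours (event names, items and the deadline are invented) and only mimics, in a simplistic way, the dynamic verification that the authors delegate to formal techniques.

def receipts_returned(trace, deadline):
    # Every 'receive' event must be followed, within `deadline` time
    # units, by an 'ack' event for the same item.
    for t, event, item in trace:
        if event == "receive":
            acknowledged = any(e2 == "ack" and i2 == item and 0 <= t2 - t <= deadline
                               for t2, e2, i2 in trace)
            if not acknowledged:
                return False
    return True

trace = [(0, "receive", "invoice-1"), (2, "ack", "invoice-1"),
         (5, "receive", "invoice-2")]                # hypothetical trace
print(receipts_returned(trace, deadline=3))          # False: invoice-2 unacknowledged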
5. Conclusion
The current economic context forces enterprises to work in collaboration to improve their performance. The formalisation, structuring and verification of interoperability requirements, helping enterprises to find their interoperability problems, can therefore contribute to improving such collaborations. This paper focused on the conceptualisation of interoperability requirements. Three categories of interoperability requirements were identified and defined: compatibility, interoperation and reversibility requirements. To help enterprises find their interoperability problems, an analysis of the interoperability requirements can be performed. The provable requirements, which can have a static or a dynamic aspect, will be analysed by formal verification
techniques. Two techniques correspond to the two types of provable interoperability requirements (static or dynamic): the first verifies static requirements, as shown in [15], based on Conceptual Graphs; the second verifies dynamic requirements using model checkers (Uppaal, SMV...) [16]. Requirements that are not provable will be analysed by experts. Future work concerns the formalisation of interoperability requirements as properties in a formal language so that they can later be verified with formal verification techniques.
6. References
[1] IEEE, (1990) A compilation of IEEE standard computer glossaries. Standard computer dictionary, New York
[2] REGAL: Requirements Engineering Guide for All, Available online at: http://www.incose.org/REGAL
[3] Van Lamsweerde A., Dardenne A., Delcourt B., Dubisy F., (1991) The KAOS Project: Knowledge Acquisition in Automated Specification of Software. Proceedings AAAI Spring Symposium Series, Stanford University, American Association for Artificial Intelligence, pp. 59-62, March
[4] Petit M., (2002) UEML Thematic Network - Contract n°: IST-2001-34229 - Work Package 1 Report - Enterprise Modelling State of the Art, October
[5] C4ISR Architecture Working Group, (1998) Levels of Information Systems Interoperability (LISI). United States of America Department of Defense, Washington DC, USA, 30 March
[6] Tolk A., Muguira J.A., (2003) The Levels of Conceptual Interoperability Model. Proceedings of Fall Simulation Interoperability Workshop (SIW), Orlando, USA, 2003
[7] Clark T., Jones R., (1999) Organisational Interoperability Maturity Model for C2. Proc. of Command and Control Research & Technology Symposium, Newport, USA
[8] Daclin N., Chen D., Vallespir B., (2008) Methodology for enterprise interoperability. 17th IFAC World Congress (IFAC'08), Seoul, Korea
[9] Blanc S., Ducq Y., Vallespir B., (2007) A graph based approach for interoperability evaluation. Interoperability for Enterprise Software and Applications, IESA 2007, Madeira, Portugal
[10] INTEROP, (2007) Enterprise Interoperability - Framework and knowledge corpus - Final report, INTEROP NoE, FP6 - Contract n° 508011, Deliverable DI.3, May 21st
[11] Scucanec S.J., Van Gaasbeek J.R., (2008) A day in the life of a verification requirement. U.S. Air Force T&E Days, Los Angeles, California, February
[12] Kasunic M., Anderson W., (2004) Measuring systems interoperability: challenges and opportunities. Software engineering measurement and analysis initiative, Technical note CMU/SEI-2004-TN-003
[13] Camarinha-Matos L.M., Afsarmanesh H., Rabelo R.J., (2003) Infrastructure developments for agile virtual enterprises. Journal of Computer Integrated Manufacturing, ISSN 0951-192X, Vol. 16, No. 4-5, June-August
[14] Singular Enterprise, (2003) Implementation methodology - Blue Print - Demo TelCo
[15] Chapurlat V., Roque M., (2009) Interoperability constraints and requirements formal modelling and checking framework. Advances in Production Management Systems, APMS 2009, Bordeaux, France
[16] Behrmann G., David A., Larsen K.G., (2004) A tutorial on Uppaal. Department of Computer Science, Aalborg University, Denmark
Use Profile Management for Standard Conformant Customisation
Arianna Brutti1, Valentina Cerminara2, Gianluca D'Agosta1, Piero De Sabbata1 and Nicola Gessa1
1 ENEA XLab, via Martiri di Monte Sole, Bologna, 40129, Italy
2 FTI-Forum per la Tecnologia dell'Informazione, via Martiri di Monte Sole, Bologna, 40129, Italy
Abstract. The adoption of public applicative standards could improve eBusiness adoption, especially among networks of SMEs. Nevertheless, the adoption of such specifications encounters obstacles and hampering factors. This paper analyses some of these factors and the experience of promoting the adoption of eBusiness standards in sectors dominated by SMEs, and outlines some of the actions that can be pursued through the adoption of use profiles. The paper also presents an approach for the management of use profiles, which appear as a way to overcome some of the major problems arising from the nature of standardised specifications and to reduce the effort necessary to achieve true interoperability between systems. Keywords: interoperability, UBL, standard, use profile, eBusiness, co-constraints, SMEs.
1. Introduction
In the scenario of data exchange for eBusiness interoperability, standards play a relevant role in creating a favourable environment: they are perceived as a guarantee for the investments when companies buy eBusiness solutions. This is especially true when the collaborating companies are small and unable to impose a data format on the community. It cannot be ignored that standardisation initiatives and their results have partially collided with the complexity and variety of the business models and systems that regulate real business scenarios: the number of critical issues increases dramatically when the context changes from an environment characterised by a few large players to a widespread framework of networked organisations. Often in the past, the proposed solution to these problems was
increasing the complexity of the standard specifications in order to match a larger share of needs. This is especially true for large-spectrum standards, where a high degree of generality leads to greater complexity of the specifications and, in turn, of the software solutions that manage such information. The authors performed a deep analysis of the complexity of UBL (Universal Business Language, an OASIS standard for business documents [1]) in order to highlight the difficulties that exist in adopting this kind of specification in real business environments characterised by the presence of a large number of SMEs. The focus of the analysis in this paper is the Textile/Clothing and Footwear sectors, but other experiences are known to the authors, such as the furniture industry (FunStep, www.funstep.org) and the paper and wood industry (Papinet, www.papinet.org). Focusing on the Textile, Clothing and Footwear industry, we have recognised that it is characterised by:
• a large presence of SMEs in the network;
• a long and segmented value chain;
• the existence of different standards which are valid only for parts of the value chain.
The resulting main drawbacks for standard specifications that emerge in this scenario are:
• the uncertainty in the semantics of the standardised models;
• the increased complexity and cost of the systems that implement the standard specifications.
The aim of this paper is to analyse, in a specific use context, what standard complexity is and where the critical points for standard adoption lie; to present a different approach to eBusiness standard adoption in large SME networks, based on reusing already defined and well-established specifications; and to define innovative solutions for customising general standards to particular SMEs' needs.
2. The Problem and the State of the Art
Nowadays, one of the challenges for the industry is to adopt e-collaboration, combined with other manufacturing and supply chain paradigms, to strengthen or regain global competitiveness. Some traditional retailers and manufacturers try to solve the conflict between long lead times and efficient consumer response with a vertical integration of the value chain, where possible; where it is not, they pursue e-linking and e-collaboration in the value chain to obtain the same fast answers to consumer demand. But the central knot remains hard to untie: on the one hand, the real benefit of these solutions for the industries increases very quickly with the number of adopters while, on the other hand, this number remains small. In the context of SMEs the problems are tackled from different points of view:
Industry. Initiatives were promoted but were unable to achieve a critical mass of adopters: some communities were built around ASP providers of integration services or company portals [2]. These models appear less effective than in sectors with a lower predominance of SMEs, such as automotive [3]. We can observe only a few local islands of interoperability, and in many cases the electronic flow of data, even when established, does not seem to have a positive impact on the organisations involved. The key to wider connectivity seems to lie in the interoperability of systems based on commonly agreed open standards and in a strong role of the industry associations and other public actors.

Standardisation. Much standardisation effort has been devoted to the Textile/Clothing and Footwear industry over the years (the first initiative, EDITEX, dates back to the 90s); this is the target scenario of our study. Euratex and CEC together with their national member federations, as well as the EU Commission, CEN/ISSS, GS1 and others, have been involved in e-business standardisation initiatives for the fashion industry: B2B standards were developed within the framework of the ESOs (European Standardisation Organisations): CEN/ISSS TexSpin [4] and TexWeave [5] for Textile/Clothing, CEN/ISSS FINEC [6] for Footwear.

Innovation prototypes. Various 'user centric' demonstration activities collected requirements from the industry (mainly SMEs) and tried to adapt technology and, mainly, standards to the real needs: initiatives and projects like eTexML, Visit, Moda-ML [7], EFNET2/3, CecMadeShoe and ShoeNet [8], with a wide involvement of industry associations, prepared a background of analyses and specifications that was (almost) ready to be implemented by the industry. Yet wide adoption and overall harmonisation were lacking in the user community. Many firms and technology providers were reluctant to implement standards (and eBusiness technologies [9]): on the one hand, they feared that an excessive 'normalisation' of the applications would make them lose their assets towards the customers; on the other hand, they preferred to wait for the success of an initiative and to invest only when the risk was close to zero; meanwhile, they invested in their products. This situation was also evidenced by the eBusiness Watch reports for the TCF sectors [1][10][11]: IT and eBusiness uptake was below the average of other sectors in the EU. As a result, the landscape of existing B2B applications is extremely varied, spanning from P2P solutions to a variety of Internet-based solutions, all characterised by difficulties in achieving a critical mass of participants and in connecting small companies. Nevertheless, data at the European level [2] suggest the existence of an unsatisfied demand for a common standard architecture. To satisfy this demand, eBusiness standards were developed, depending on the specific requirements and following different approaches, seeking the best trade-off between completeness and rapid development. Many different standards exist, and the applicative-level standards related to data models can be grouped into two main categories: horizontal and vertical standards. While a horizontal standard (like EDIFACT, UBL, GS1 XML...) is cross-sector and aims to cover a generality of processes (and data), a vertical standard (Papinet, OTA, HL7, TexWeave/Moda-ML) is closer to a specific domain.
This means that a horizontal standard has specifications that are only partially used in a concrete scenario, while vertical standards (try to) provide data models that are properly tailored to the information exchanged in a specific business domain. But vertical standards too have to support a variety of data that are only partially in use: concrete implementations of vertical standards suffer from similar problems to horizontal ones, even if smaller in size. In fact, in real supply chain networks the enterprises need to define constraints that are stricter than the vertical standard specifications, to reflect the requirements arising from very specific and dynamic businesses. The adoption of standardised technologies should encourage early adopters, but another issue arises from the nature of the standards themselves. When a network of organisations wants to adopt a common standard, the balance between the resources necessary to map the local organisation/systems onto the standard and the standard's capacity to fit the real business is critical. More in detail, two aspects are critical:
• the mapping of local (internal) processes and data models onto the standardised models (and the issues arising when they do not appear to fit each other);
• the reconciliation between different implementations of the same standardised specifications managed by different organisations.
Note that to tackle the first point, that is, to maintain an adequate degree of generality, a standard must model much potential information, supporting a wide variety of data that are used only in certain scenarios; increasing the richness of the semantics leads to more complexity in the ICT systems that manage such information and to a plurality of different potential implementations. In general, a vertical standard appears more focused and effective in supporting real eBusiness, but certain industrial domains lack a sectorial standard to adopt for their eBusiness transactions. This case was experienced, for example, in the eBIZ-TCF project, where no sectorial specification was available for the relationships between producers and retail organisations in the footwear and clothing sectors. In any case, with both horizontal and vertical standards, and at different levels, one of the problems is to find a trade-off between generality and complexity of the specifications. To address these criticalities, a possible approach is based on issuing profiles related to a specific use/domain. It is followed, for example, by UML, one of the best known modelling languages, released by the OMG consortium: it provides rules for the definition of profiles, intended as extensions of the modelling language that make it fit specific application domains. In the world of EDI standards there is also the possibility of defining use profiles, called EDI subsets. These subsets are tailored to specific industrial sectors, but the specifications are released only as paper manuals, without a clear and simple machine-readable format that could ease their adoption. Another possible approach, based on semantic reconciliation through a domain ontology without relying on a standardised specification, was not pursued by eBIZ-TCF because the largest part of the IT suppliers of the industry were not
able to manage such technologies, while the more widespread XML-based technologies are accepted and recognised as a means to publish a common semantics.
3. A Method to Analyse XML Data Model Specifications
We relate the increase in "complexity" of the specifications to two parameters that can be observed in a business document template:
• Redundancy: the total number of possible distinct data containers in a document, whether necessary or unnecessary to support a specific business (for example, the possible XPaths identifying the leaves of an XML instance tree).
• Uncertainty: the number of distinct data containers that exist for a single specific type of information in a document (for example, the possible alternatives for specifying the Order ID in an XML instance).
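A rough way to observe these two parameters on concrete documents is to count the distinct leaf paths occurring in XML instances. The Python sketch below is ours (it assumes the lxml library and invented file names) and is not the method used for Table 2, which was computed from the official UBL 2.0 XPath information files.

from lxml import etree

def leaf_paths(xml_file):
    # Collect the index-free paths of all leaf elements of an instance:
    # each distinct path is one data container actually used.
    tree = etree.parse(xml_file)
    paths = set()
    for elem in tree.iter():
        if isinstance(elem.tag, str) and len(elem) == 0:   # element leaf
            steps = []
            node = elem
            while node is not None:
                steps.append(etree.QName(node).localname)
                node = node.getparent()
            paths.add("/" + "/".join(reversed(steps)))
    return paths

corpus = ["order1.xml", "order2.xml"]                  # hypothetical sample instances
observed = set().union(*(leaf_paths(f) for f in corpus))
print(len(observed), "distinct leaf XPaths observed")

Comparing such observed counts with the number of paths allowed by a schema gives a feel for how much of the potential redundancy is actually exercised.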
To adapt the specifications to the real business, the standard specifications need something that defines use restrictions (use profiles) for specific domains or processes, in order to limit these two parameters. Some samples, taken from our analysis of UBL documents, can clarify the problems. Table 1 shows an example of uncertainty in the usage of the UBL 2.0 Order Response: the element SalesOrderID can appear in the document in different positions with the same meaning, so the Order ID assigned by the Seller could be inserted indifferently in either of the two XPaths [13].

Table 1. Different elements with the same semantics (Uncertainty)

# | XPath of the element | Description | Occ.
1 | OrderResponse/cbc:SalesOrderID | An identifier for the Order issued by the Seller. | 0..1
2 | OrderResponse/cac:OrderReference/cbc:SalesOrderID | Identifies the referenced Order assigned by the Seller. | 0..1
Table 2 reports a comparison of the number of potential distinct XPaths (calculated from the official UBL 2.0 XPath Information files [15]) identifying the data containers allowed by different document specifications. In this table, the second column reports the number of information items a generic document should carry for data exchange, derived from our analysis of the eBIZ-TCF Textile/Clothing scenario. These numbers can be compared with those in the third and fourth columns, which represent the number of data containers ('leaves' in the 'tree' of the XML structure) made available by UBL 2.0 and by the Moda-ML specification. We assume that a high number of possible XPaths in a document schema is a warning sign of higher development cost and of a potentially slow process in setting up data exchanges, together with ambiguities within subsets of these XPaths (Table 1): the same information could be represented in different ways, and the software should manage all of them. With hundreds, or thousands, of potential alternatives the cost becomes prohibitive. The data reported in Table 2 show that this risk is lower in a vertical, domain-specific scenario (like Moda-ML): in a vertical standardised specification the number of possible XPaths is much lower. The benefit in terms of reduced semantic uncertainty and redundancy is evident.

Table 2. Comparison between Horizontal Specifications, Vertical Specifications and Use Profiles in terms of potential XPaths (Redundancy)
Document template | eBIZ-TCF Textile/Clothing scenario: data to be transferred | UBL 2.0 XML Schemas: # of XPaths containing data | Moda-ML XML Schemas for a fabric purchase process: # of XPaths containing data | UBL Use Profile for a retail-side purchase process from eBIZ-TCF: # of XPaths containing data
catalogue | 55 | 38.630 | 99 | 60
order | 22 | 2.893.732 | 163 | 36
order response | 28 | 2.895.909 | 163 | 39
despatch advice | 27 | 915.815 | 136 | 40
receipt advice | 29 | 913.812 | 69 | 41
invoice | 37 | 61.162 | 148 | 66
4. The Approach for Business Document Customisation
According to this analysis, in business document modelling we envisage an approach that not only supports the reduction from a general horizontal specification (UBL 2.0) to a sectorial scenario without losing conformance with the standard, but also simplifies the implementation process and limits the complexity of UBL documents for the final users; this approach leads to the definition of use profiles (Figure 1). The fifth column of Table 2 gives an idea of the size of the UBL eBIZ-TCF Use Profile we have defined: the reduction of document complexity for the final users is evident, and the comprehension and adoption of the specification are simplified. Depending on the characteristics of the business sector, a second aspect emerges: the tension between standardised models and the need for customisation at company level [14]. On the one hand, the collaborating enterprises need a common and shared "language" (the standard
specifications) between heterogeneous partners; on the other hand, each enterprise, with its information systems, needs a mechanism that can reflect and resolve its specific requirements. At first glance these two visions seem incompatible. The defined approach therefore also supports a second type of customisation: from the sectorial specification to the specific applications running between two or more systems. Finally, in order to support the reconciliation phase between different 'standard based' applications of different companies, a clear documentation of the customisations implemented in this second step is necessary. As Figure 1 shows, the customisation proceeds in two steps: (A) customisation to the domain, from general (horizontal) specifications at the general level (i.e. UBL) to sectorial (vertical) specifications (i.e. eBIZ-TCF), and (B) customisation to the applications, from the sectorial specifications to local use (inter-company agreement) at the local level.
Fig. 1. Use profile definition
Applying this approach in the eBIZ-TCF project, we observed that in the Textile/Clothing and Footwear sectors different specifications were in place for inter-manufacturer relationships (Moda-ML and Shoenet), but no sectorial specifications were available for the relationships between manufacturers and retailers: these relationships could be covered using already defined standards, like UBL. To overcome this issue we took as a starting point an abstract data model provided by the sectorial experts and compared it with the UBL standardised documents. The result of this comparison was a set of use profiles. The obtained use profiles reduce a general standard (UBL) to a size comparable with the corresponding data structures of an enterprise information system (Table 2). Still, although in this way it is possible to limit and specify the set of abstract information to manage, different data models could still exist and slow down the implementation process.
5. Profile Customisation and Validation
During the analysis, together with the activities related to profile definition, a further need emerged: a more powerful customisation mechanism that, exploiting co-constraint definitions, can fully guarantee a complete specification of all the data format constraints that reflect the needs of the enterprise information system.
Usually, XML Schema is the XML technology adopted for document template definition. Nevertheless, in the context we are considering, and after our analysis of the eBIZ-TCF scenarios, XML Schema is not enough to support profile customisation and validation. Customisation issues are also strictly related to the structure of the reference standard, in this case UBL 2.0. UBL 2.0 defines a component library that is shared by the set of main documents (Order, Invoice, etc.). Moreover, both the XML elements and the XML types are declared globally, following the Garden-of-Eden design style. It is therefore impossible to redefine an element (while maintaining full compatibility with the UBL standard), since all the components are global. In this situation, we needed two levels of restrictions on the UBL 2.0 components:
1. A global restriction of the eBIZ-TCF component library with respect to the UBL component library. This restriction is performed at the sectorial level; its basis is an analysis, carried out by sectorial experts, of the needs (in terms of the data model to adopt) shared by all the actors.
2. Ad hoc restrictions of specific components in the eBIZ-TCF profiles with respect to the general eBIZ-TCF component library. This restriction is performed at the enterprise level by personnel who know the information system of the enterprise itself.
Performing this kind of restriction is extremely important in order to maintain a single library that remains common to all the profiles, for maintenance reasons: it is not feasible for each profile to carry its own different library specifying its ad hoc restrictions. With a single shared component library, however, XML Schema cannot properly model every possible use of the different components. Moreover, XML Schema does not allow the specification of co-constraints, as emerged from the analysis of the eBIZ-TCF Textile/Clothing scenario. To overcome these issues, we have introduced a three-step mechanism for eBIZ-TCF library and profile definition:
1. Reduction of the XML Schema elements in the library, in order to delete the components unused in the set of profiles and to refine the cardinalities of the elements.
2. Definition in the library, using Schematron code, of the co-constraints that are common to every eBIZ-TCF document.
3. Definition in the main documents, using Schematron code, of the co-constraints that are specific to a single eBIZ-TCF document.
This set of restrictions is defined for the whole industrial sector targeted by the eBIZ-TCF project. For example, the UBL 2.0 "Item" element is composed of 28 child elements (10 with simple type, 18 with complex type), whereas in the eBIZ-TCF catalogue profile the "Item" element is restricted to 13 child elements, and in other eBIZ-TCF documents it has only one child element. In this way, elements that are optional in UBL but not useful in the eBIZ-TCF use context are absent, and the documents are easier to manage.
Concerning step two, the following code, included in the definition of the component library, shows the mechanism adopted to invoke a validation pattern defined for a specific component (in this case, the cac:ReceiptLine/cac:Item component). We do not report the abstract definition of the pattern.

<sch:pattern id="RecpLineItem" is-a="item">
  <sch:param name="mypath" value="cac:ReceiptLine/cac:Item"/>
</sch:pattern>
Concerning step three, the following code defines a validation pattern only for a specific document (the eBizORD:Order document). These rules are included in the definition of the main document. (Note that, since both messages state that a child is required, the assertions below are expressed with sch:assert, which fails when its test is false.)

<sch:rule context="/eBizORD:Order/cac:Delivery">
  <sch:assert test="cbc:ActualDeliveryDate">In this path, the 'cac:Delivery' element requires the 'cbc:ActualDeliveryDate' child.</sch:assert>
  <sch:assert test="cac:DeliveryLocation">In this path, the 'cac:Delivery' element requires the 'cac:DeliveryLocation' child.</sch:assert>
</sch:rule>
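To give an idea of how such a profile could be applied in practice, the following sketch (ours, not part of the eBIZ-TCF deliverables; it assumes Python with the lxml library, and the file names are invented) chains the two validation stages implied by the approach: structural validation against the restricted XML Schema, then co-constraint validation with the Schematron rules.

from lxml import etree, isoschematron

schema = etree.XMLSchema(etree.parse("eBIZ-Order.xsd"))          # restricted library + document schema
rules  = isoschematron.Schematron(etree.parse("eBIZ-Order.sch")) # co-constraints of the profile

doc = etree.parse("order-instance.xml")                          # instance to check
if not schema.validate(doc):
    print("structural errors:", schema.error_log)
elif not rules.validate(doc):
    print("co-constraint violations:", rules.error_log)
else:
    print("instance conforms to the use profile")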
Following this procedure, 17 use profiles have been designed within eBIZ-TCF, starting from 8 official UBL 2.0 documents; 13 of these profiles are fully conformant with the UBL documents, while 4 of them, being completely new documents that do not exist in the UBL 2.0 document set (built following the UBL customisation rules and exploiting the UBL library where possible), are only compatible. These documents can be used as common data models for data exchange between the enterprises of the sector. According to the last column of Table 2, such documents have a very low number of XPaths, which makes them much more similar to the documents of a vertical standard; they are thus expected to show less redundancy and uncertainty when implementers design their applications.
6. Conclusions
The problem of achieving a critical mass of adopters in sectors characterised by a large presence of SMEs is challenging both for policy makers and for IT research. Sector-specific initiatives based on public open standards appear to be one of the levers for tackling the problem, but standards are traditionally perceived as costly and resource-consuming elements of possible technological solutions. The analysis of different approaches to standardisation has pointed out that some of the problems arise from the intrinsic nature of standardised specifications. The first idea at the basis of this work has therefore been to characterise the intrinsic properties of standardised specifications, firstly by identifying some metrics. Secondly, this paper has shown that some of the disadvantages of using horizontal rather than vertical (sector-related) specifications can be overcome by using use profiles, provided they are automatically processable (expressed through XML Schema and Schematron).
Thirdly, we expect that in some cases the same approach could prevent an excessive proliferation of specialised data model structures even in vertical specifications. Future developments are in the field of:
• improvement of the metrics related to the properties of standard specifications;
• exploitation of co-constraint languages to enforce business rules at a smaller granularity (inter-company level instead of sector level);
• automatic management of the alignment between different customisation descriptions, to speed up the implementation processes.
7. References
[1] "Special report - e-Business interoperability and standards - A cross sectorial perspective and outlook", e-Business w@tch, September 2005, Brussels
[2] eBiz-TCF project: Analysis report on eBusiness adoption in Textile/Clothing and Footwear sectors, Deliverable D2.1, Bruxelles, June 2008
[3] Gerst M., Jakobs K., "E-business standardisation in the automotive industry - two approaches towards the integration of SMEs", Seventh IEEE International Conference on E-Commerce Technology (CEC 2005), 19-22 July 2005, pp. 118-125
[4] "TexSpin, Guidelines for XML/EDI messages in the Textile/Clothing sector", CWA 14948:2004, CEN/ISSS, March 2004, Bruxelles
[5] "TexWeave: Scenarios and XML templates for B2B in the Textile Clothing manufacturing and retail", CWA (CEN Workshop Agreement) 15557:2006, CEN/ISSS, 2006, Bruxelles; http://www.TexWeave.org
[6] "FINEC", CWA 14746:2003, CEN/ISSS, 2003, Bruxelles
[7] Gessa N., De Sabbata P., Fraulini M., Imolesi T., Cucchiara G., Marzocchi M., Vitali F., "Moda-ML, an interoperability framework for the Textile Clothing sector", IADIS International Conference WWW/Internet 2003, pp. 61-68, ISBN 972-98947-1-X, November 2003, Carvoeiro, Portugal
[8] Gonçalves R.J., Müller J.P., Mertins K., Zelm M., "Enterprise Interoperability II - New Challenges and Approaches", ISBN 978-1-84628-857-9, Springer, October 2007, London
[9] "Unleashing the Potential of the European Knowledge Economy, Value Proposition for Enterprise Interoperability, Version 4, 21 Jan 2008", EC, DG-INFSO, EI Cluster
[10] "Electronic Business in the Textile, Clothing and Footwear Industries", Sector Report No. 01-II, e-Business w@tch, August 2004, Brussels
[11] "ICT and e-Business in the Footwear Industry, ICT adoption and e-business activity in 2006", Sector Report No. 02, e-Business w@tch, 2006, Brussels
[12] Bosak J., McGrath T., Holman G.K., "Universal Business Language v2.0, Standard", OASIS Open, 12 December 2006; http://docs.oasis-open.org/ubl/os-UBL-2.0/
[13] "XML Path Language (XPath)", version 1.0, W3C
[14] De Sabbata P., Gessa N., Novelli C., Frascella A., Vitali F., "B2B: Standardisation or Customisation?", in "Innovation and the Knowledge Economy: Issues, Applications, Case Studies", e-Challenges 2005 conference, Ljubljana, October 19-21 2005, pp. 1556-1566, edited by Paul Cunningham and Miriam Cunningham, IIMC International Information Management Corporation Ltd, Dublin, Ireland, IOS Press, ISBN 1-58603-563-0
[15] Universal Business Language XPath Information Files - 2006-10-06: http://docs.oasis-open.org/ubl/submissions/XPath-files/
Index of Contributors
Alarcón, F. ...................................... 25 Alawamleh, M. ............................. 265 Alemany, M.M.E. ........................... 25 Almarcha Vela, R. ........................ 377 Alonso, J. ...................................... 429 Anjum, N. ..................................... 303 Askounis, D. ................................. 419 Bajric, A. ...................................... 357 Baken, N. ...................................... 123 Bénaben, F. ................................... 187 Bernard, A. ................................... 397 Berre, A.-J. ................................... 313 Bourey, J.P. .................................. 135 Brutti, A. ....................................... 449 Buschle, M. .................................... 81 Campos, C. ............................. 35, 387 Case, K. ................................ 147, 303 Cerminara, V. ............................... 449 Chalmeta, R. ................................... 35 Chapurlat, V. ........................ 187, 439 Charalabidis, Y. .................... 245, 419 Chen, D................................... 57, 111 Cho, H. ........................................... 47 Cucchiarelli, A.............................. 323 Da Cunha, C. ................................ 397 Daclin, N. ..................................... 439 D’Agosta, G.................................. 449 Dai, X. .................................. 179, 235 D’Antonio, F. ............................... 323 de Juan-Marín, R. ......................... 169 de Man, H. .................................... 313 De Sabbata, P. .............................. 449
den Hartog, F. ............................... 123 Ducq, Y. ....................................... 101 Eckert, K-P.. ................................. 331 Elvesæter, B.................................. 313 Fan, Z. ............................................ 91 Fernando, T. ................................. 367 Franco, R.D. ................................. 169 Franke, U. ....................................... 81 Gao, H. ........................................... 91 Gao, S. ............................................ 69 Gautier, G. .................................... 367 Gessa, N. ...................................... 449 Gonçalves, R.J. ............................. 245 Gopalakrishnan, S. ......................... 13 Grangel, R. ..................................... 35 Gu, X. ........................................... 225 Hall, J. .......................................... 331 Han, Z. .......................................... 289 Hanachi, C. ................................... 187 Hans, C. ........................................ 157 Harding, J. ............................ 147, 303 Hernández, J. ................................ 213 Hofman, W. .................................. 341 Holtkamp, M. ............................... 341 Hribernik, K.................................. 157 Ivezic, N. ........................................ 47 Jaekel, F.-W.................................. 357 Jean-Pierre, L.................................... 3 Johnson, P....................................... 81 Kagioglou, M................................ 367 Kapogiannis, G. ............................ 367 Kramer, C. .................................... 157
Krogstie, J....................................... 67 Lampathaki, F............................... 419 Lario, F.C. ...................................... 25 Levashova, T. ............................... 279 Leymann, F................................... 255 Li, M.-S. ....................................... 313 Ling, J. .......................................... 289 Liu, A. .......................................... 199 Liu, H. .......................................... 135 Lorre, J.-P. ........................................ 3 Lu, Y............................................. 225 Ma, C. ........................................... 199 Madureira, A. ............................... 123 Mallek, S. ..................................... 439 Martínez de Soria, I. ..................... 429 Martínez-Simarro, D. ................... 377 Mertins, K..................................... 357 Mula, J. ......................................... 213 Ni, Y. ............................................ 225 Nie, L. ........................................... 111 Nikos, P. ........................................... 3 Opdahl, A.L. ................................. 409 Orue-Echevarria, L. ...................... 429 Palomares, N. ............................... 387 Palomero, S. ........................... 35, 387 Papageorgiou, N. .............................. 3 Piddington, C. ............................... 367 Pignon, J.P. ................................... 187 Pinazo Sánchez, J.M. .................... 377 Pingaud, H. ................................... 187 Poler, R. .................................. 25, 213 Popplewell, K. ...... 179, 235, 245, 265
Prats, G. ........................................ 169 Rabe, M. ....................................... 357 Rauffet, P. ..................................... 397 Roller, D. ...................................... 255 Salatgé, N. ................................ 3, 187 Scheibler, T. ................................. 255 Shen, J. ........................................... 91 Shilov, N....................................... 279 Silva, E. ........................................ 123 Sindre, G......................................... 13 Smirnov, A. .................................. 279 Thoben, K.-D................................ 157 Truptil, S....................................... 187 Tu, Z. .............................................. 57 Ullberg, J. ....................................... 81 Usman, Z. ..................................... 147 Vallespir, B................................... 101 van Bekkum, M. ........................... 341 Vergara, M. .................................. 429 Verginadis, Y.................................... 3 Vicien, G. ..................................... 101 Wang, H. ...................................... 225 Wang, Z. ............................... 199, 225 Woo, J............................................. 47 Wulan, M. ............................. 179, 235 Xu, X. ................................... 111, 199 Yiannis, V......................................... 3 Young, B. ..................................... 303 Young, R. ..................................... 147 Zacharewicz, G. ...................... 57, 111 Zhan, D. ........................................ 111 Zhang, L. ................................ 91, 289