Preface

Kai Mertins (1), Rainer Ruggaber (2), Keith Popplewell (3) and Xiaofei Xu (4)

(1) Fraunhofer IPK Berlin, Pascalstr. 8-9, 10587 Berlin, Germany; General Chairperson of I-ESA'08; [email protected]
(2) SAP AG, Vincenz-Prießnitz-Straße 1, 76131 Karlsruhe, Germany; General Co-Chairperson of I-ESA'08; [email protected]
(3) Future Manufacturing Applied Research Centre, Coventry University, Priory Street, Coventry, CV1 5FB, UK; Chairperson of the I-ESA'08 International Programme Committee; [email protected]
(4) School of Computer Science and Technology, Harbin Institute of Technology, 92 West Dazhi Street, Harbin, P.R. China 150001; Co-Chairperson of the I-ESA'08 International Programme Committee; [email protected]
Interoperability, in the context of enterprise applications, is the ability of a system or an organisation to work seamlessly with other systems or organisations without any special effort. The capability to interact and exchange information both internally and with external organisations (partners, suppliers, customers) is a key issue in the global economy. It is fundamental in order to speed up the production of goods and services at lower cost, while ensuring higher levels of quality and customisation. Despite the many efforts spent in the past decade to overcome interoperability barriers in industry, non-interoperability still causes enormous costs for all business partners. Studies show that more than 40% of IT costs are devoted to solving interoperability problems. This book provides knowledge for cost savings and business improvement as well as new technical solutions. I-ESA'08 (Interoperability for Enterprise Software and Applications) is the fourth of a series of conferences, this time under the motto "Science meets Industry". The I-ESA'08 conference was organised by Fraunhofer IPK and DFI (Deutsches Forum für Interoperabilität e.V.), jointly promoted by INTEROP-VLab (European Virtual Laboratory for Enterprise Interoperability, www.interopvlab.eu) and the EIC (Enterprise Interoperability Centre, www.eiccommunity.org). The world's leading researchers and practitioners in the area of Enterprise Interoperability contributed to this book. You will find integrated approaches from different disciplines: Computer Science, Engineering and Business Administration.
The structure of this book, Enterprise Interoperability III: New Challenges and Industrial Approaches, was inspired by the ATHENA Interoperability Framework. The House of Enterprise Interoperability is designed top-down from Business to Process and Service Execution, aligned with the important topics of Semantics and Ontologies. The common basis of the levels comprises the aspects of Systems Engineering, Modelling as well as Architectures and Frameworks.
Business & Strategies, Cases Cross Organisational Collaboration and Cross Sectoral Processes Service Design and Execution
Ontology’s and Semantics for Interoperability Interoperability in Systems Engineering Modelling and Meta Modelling Methods and Tools for Interoperability Architectures and Frameworks for Interoperability Fig.: Enterprise Interoperability House of I-ESA’08
Kai Mertins, Berlin
Rainer Ruggaber, Karlsruhe
Keith Popplewell, Coventry
Xiaofei Xu, Harbin

January 2008
Acknowledgements
We would like to thank all the authors, invited speakers, reviewers, Senior Programme Committee members, and participants of the conference who made this book a reality and I-ESA'08 a success. We express our gratitude to all organisations which supported the I-ESA'08 preparation, especially INTEROP-VLab and its German pole DFI as well as the Enterprise Interoperability Centre. We are deeply thankful for the local organisation support, notably to Thomas Knothe, Amparo Roca de Togores, Anett Wagner, Sebastian Zabre and Nikolaus Wintrich, for their excellent work in the preparation and management of the conference.
Contents
Part I - Business and Strategies, Cases

An Approach for the Evaluation of the Agility in the Context of Enterprise Interoperability
S. Izza, R. Imache, L. Vincent, and Y. Lounis .......... 3

Industrialization Strategies for Cross-organizational Information Intensive Services
Ch. Schroth .......... 15

SME Maturity, Requirement for Interoperability
G. Benguria, I. Santos .......... 29

Information Security Problems and Needs in Healthcare – A Case Study of Norway and Finland vs Sweden
R.-M. Åhlfeldt and E. Söderström .......... 41

Impact of Application Lifecycle Management – A Case Study
J. Kääriäinen, A. Välimäki .......... 55

Part II - Cross-organizational Collaboration and Cross-sectoral Processes

A Service-oriented Reference Architecture for Organizing Cross-Company Collaboration
Ch. Schroth .......... 71

Patterns for Distributed Scrum – A Case Study
A. Välimäki, J. Kääriäinen .......... 85

Understanding the Collaborative Workspaces
G. Gautier, C. Piddington, and T. Fernando .......... 99

Models and Methods for Web-support of a Multi-disciplinary B2(B2B) Network
H. Weinaug, M. Rabe .......... 113
Platform Design for the B2(B2B) Approach
M. Zanet, St. Sinatti .......... 125

Trust and Security in Collaborative Environments
P. Mihók, J. Bucko, R. Delina, D. Palová .......... 135

Prototype to Support Morphism between BPMN Collaborative Process Model and Collaborative SOA Architecture Model
J. Touzi, F. Bénaben, H. Pingaud .......... 145

Heterogeneous Domains’ e-Business Transactions Interoperability with the use of Generic Process Models
S. Koussouris, G. Gionis, A. M. Sourouni, D. Askounis, K. Kalaboukas .......... 159

Matching of Process Data and Operational Data for a Deep Business Analysis
S. Radeschütz, B. Mitschang, F. Leymann .......... 171

Methods for Design of Semantic Message-Based B2B Interaction Standards
E. Folmer, J. Bastiaans .......... 183

Part III - Service Design and Execution

An Adaptive Service-Oriented Architecture
M. Hiel, H. Weigand, W.-J. Van Den Heuvel .......... 197

FUSE: A Framework to Support Services Unified Process
N. Protogeros, D. Tektonidis, A. Mavridis, Ch. Wills, A. Koumpis .......... 209

Adopting Service Oriented Architectures Made Simple
L. Bastida, A. Berreteaga, I. Cañadas .......... 221

Making Service-Oriented Java Applications Interoperable without Compromising Transparency
S. De Labey, E. Steegmans .......... 233

A Service Behavior Model for Description of Co-Production Feature of Services
T. Mo, X. Xu, Z. Wang .......... 247

An Abstract Interaction Concept for Designing Interaction Behaviour of Service Compositions
T. Dirgahayu, D. Quartel, M. van Sinderen .......... 261

Preference-based Service Level Matchmaking for Composite Service
Y. Shiyang, W. Jun, H. Tao .......... 275

Ontological Support in eGovernment Interoperability through Service Registries
Y. Charalabidis .......... 289

Towards Secured and Interoperable Business Services
A. Esper, L. Sliman, Y. Badr, F. Biennier .......... 301
Part IV - Ontologies and Semantics for Interoperability

Semantic Web Services based Data Exchange for Distributed and Heterogeneous Systems
Q. Wang, X. Li, Q. Wang .......... 315

Ontology-driven Semantic Mapping
D. Beneventano, N. Dahlem, S. El Haoum, A. Hahn, D. Montanari, M. Reinelt .......... 329

Self-Organising Service Networks for Semantic Interoperability Support in Virtual Enterprises
A. Smirnov, M. Pashkin, N. Shilov, T. Levashova .......... 343

Semantic Service Matching in the Context of ODSOI Project
S. Izza, L. Vincent .......... 353

Ontology-based Service Component Model for Interoperability of Service Systems
Z. Wang, X. Xu .......... 367

Supporting Adaptive Enterprise Collaboration through Semantic Knowledge Services
K. Popplewell, N. Stojanovic, A. Abecker, D. Apostolou, G. Mentzas, J. Harding .......... 381

Part V - Interoperability in Systems Engineering

Semantic Web Framework for Rule-Based Generation of Knowledge and Simulation of Manufacturing Systems
M. Rabe, P. Gocev .......... 397

Semantic Interoperability Requirements for Manufacturing Knowledge Sharing
N. Chungoora, R.I.M. Young .......... 411

Collaborative Product Development: EADS Pilot Based on ATHENA
N. Figay, P. Ghodous .......... 423

Contribution to Knowledge-based Methodology for Collaborative Process Definition: Knowledge Extraction from 6napse Platform
V. Rajsiri, A-M. Barthe, F. Bénaben, J-P. Lorré, H. Pingaud .......... 437

SQFD: QFD-based Service Quality Assurance for the Lifecycle of Services
S. Liu, X. Xu, Z. Wang .......... 451

Coevolutionary Computation Based Iterative Multi-Attribute Auctions
L. Nie, X. Xu, D. Zhan .......... 461

Knowledge Integration in Global Engineering
R. Anderl, D. Völz, Th. Rollmann .......... 471
Part VI - Modelling and Meta-modelling Methods and Tools for Interoperability

A Framework for Executable Enterprise Application Integration Patterns
Th. Scheibler, F. Leymann .......... 485

Experiences of Tool Integration: Development and Validation
J-P. Pesola, J. Eskeli, P. Parviainen, R. Kommeren, M. Gramza .......... 499

Interoperability – Network Systems for SMEs
K. Mertins, Th. Knothe, F.-W. Jäkel .......... 511

Engineer to Order Supply Chain Improvement based on the GRAI Meta-model for Interoperability: an Empirical Study
A. Errasti, R. Poler .......... 521

Proposal for an Object Oriented Process Modeling Language
R. Anderl, J. Malzacher, J. Raßler .......... 533

Enterprise Modeling Based Application Development for Interoperability Problem Solving
M. Jankovic, Z. Kokovic, V. Ljubicic, Z. Marjanovic, Th. Knothe .......... 547

IS Outsourcing Decisions: Can Value Modelling Be of Help?
H. Weigand .......... 559

Process Composition in Logistics: An Ontological Approach
A. De Nicola, M. Missikoff, L. Tininini .......... 571

Interoperability of Information Systems in Crisis Management: Crisis Modeling and Metamodeling
S. Truptil, F. Bénaben, P. Couget, M. Lauras, V. Chapurlat, H. Pingaud .......... 583

A Novel Pattern for Complex Event Processing in RFID Applications
T. Ku, Y. L. Zhu, K. Y. Hu, C. X. Lv .......... 595

Part VII - Architectures and Frameworks for Interoperability

Enterprise Architecture: A Service Interoperability Analysis Framework
J. Ullberg, R. Lagerström, P. Johnson .......... 611

Logical Foundations for the Infrastructure of the Information Market
M. Heather, D. Livingstone, N. Rossiter .......... 625

Meeting the Interoperability Challenges of eTransactions among Heterogeneous Business Partners: The Advantages of Hybrid Architectural Approaches for the Integrating Middleware
G. Gionis, D. Askounis, S. Koussouris, F. Lampathaki .......... 639

A Model-driven, Agent-based Approach for a Rapid Integration of Interoperable Services
I. Zinnikus, Ch. Hahn, K. Fischer .......... 651
BSMDR: A B/S UI Framework Based on MDR
P. Zhang, S. He, Q. Wang, H. Chang .......... 665

A Proposal for Goal Modelling Using a UML Profile
R. Grangel, R. Chalmeta, C. Campos, R. Sommar, J.-P. Bourey .......... 679

Index of Contributors .......... 691
Index of Keywords .......... 693
Part I
Business and Strategies, Cases
An Approach for the Evaluation of the Agility in the Context of Enterprise Interoperability

S. Izza (1), R. Imache (2), L. Vincent (1), and Y. Lounis (3)

(1) Ecole des Mines de Saint-Étienne, Industrial Engineering and Computer Science Laboratory, OMSI Division, 158 cours Fauriel, 42023 Saint-Etienne Cedex 2, France. {izza, vincent}@emse.fr
(2) University of Boumerdes, Department of Informatics, LIFAB, Boumerdes, Algeria. [email protected]
(3) University of Tizi-Ouzou, Department of Informatics, Tizi-Ouzou, Algeria. [email protected]
Abstract. Today the concept of agility has become popular and is still growing in popularity in the management and technology literatures. How do we really define an agile information system? How do we know if an information system is agile? And how do we evaluate agility in the context of enterprise information systems? This paper tries to answer some of these questions. It aims to present the concept of agility of information systems and provides an approach for evaluating this agility, namely the POIRE approach, which evaluates agility as an amalgamation function combining the agility measures of five complementary aspects: Process, Organizational, Informational, Resource and Environmental. Finally, this paper studies the role of interoperability in achieving agility and the rapprochement of the concept of interoperability with the concept of agility.

Keywords: Enterprise Information System; Business; Information Technology; Agility; POIRE; Agility Evaluation; Fuzzy Logic; Interoperability.
1 Introduction

In the last few years, the concept of agility has become popular and is still growing in popularity in the management and technology literatures. On the management front, enterprises face daily pressures to demonstrate agile behaviour in a dynamic environment. On the technology front, the resources deployed to manage and automate information systems face flexibility and leanness issues in order to support the former.
The purpose of this paper is to investigate the notion of agility and the role of interoperability in achieving business agility. This paper also provides some agility measurement principles that can be exploited in the context of enterprise interoperability, and some discussion of the relation between interoperability and agility. The paper is organized as follows: Section 2 introduces some of the important related work on the concept of agility. Section 3 presents the POIRE (Process, Organization, Information, Resource and Environment) approach for measuring agility in general, with a focus on the relation that exists between interoperability and agility. Section 4 describes agility evaluation in the context of enterprise interoperability. Finally, Section 5 presents some conclusions and outlines some important future work.
2 Related Work

2.1 The Concept of Agility

The concept of agility originated at the end of the eighties and the early nineties in the manufacturing area in the United States. Agile Manufacturing was first introduced with the publication of a report by [8] entitled "21st Century Manufacturing Enterprise Strategy". Since then, the concept was used for agile manufacturing and agile corporations. The concept was extended to supply chains and business networks [2], to enterprise information systems [15] and also to software development [1]. Despite the age of the concept, there is no consensus yet on a definition of agility. According to [4], most of the agility concepts are adaptations of elements such as flexibility and leanness, which originated earlier. In developing their definitions, [4] draw on the concepts of flexibility and leanness to define agility as the continual readiness of an entity to rapidly or inherently, proactively or reactively, embrace change, through high quality, simplistic, economical components and relationships with its environment. According to [5], being agile generally results in the ability to (i) sense signals in the environment, (ii) process them adequately, (iii) mobilize resources and processes to take advantage of future opportunities, and (iv) continuously learn and improve the operations of the enterprise. In the same vein, [9] interpreted agility as creativity and defined enterprise agility as the ability to understand the environment and react creatively to both external and internal changes. In the same way, [11] interpret agility of information systems as the ability to become vigilant. Agility can also be defined in terms of the characteristics of the agile enterprise [20]: (i) sensing: the ability to perceive environmental conditions, gather useful information from the system, readily detect changes and anticipate changes; (ii) learning: the ability to effectively modify organizational behaviours and beliefs through experience and the ability to use information to improve the organization; (iii) adaptability: the ability to effect changes in systems and structures in response to (or in anticipation of) changes in environmental conditions; (iv) resilience: robustness to diversity and variability, and the ability to recover from changes; (v) quickness: the ability to
accomplish objectives in a short period of time; the pace at which changes are accomplished, and the rate of movement within organizational systems; (vi) innovation: the ability to generate many solutions to a problem; (vii) concurrency: the ability to effectively perform related activities at the same time with the same assets; and (viii) efficiency: the ability to use minimal resources to accomplish desired results. Currently it is well established that agility can be studied through two complementary perspectives: the management (or business) perspective and the IT (information technology) perspective [3] [5]. First, the concept of agility in the management perspective is often synonymous with the concept of alignment, of effective and efficient execution of business processes [7]. Another view of agility can be expressed in terms of an enterprise's abilities to continually improve business processes [18], or as "the capacity to anticipate changing market dynamics, adapt to those dynamics and accelerate change faster than the rate of change in the market, to create economic value". Second, in the IT perspective, agility of information systems is often studied through the IT solutions that compose information systems. Currently, it is well established that IT and business are related, and enterprises invest more and more in IT to drive current business performance, enable future business initiatives, and create business agility [17]. There are many works, in both academia and industry, that have studied the impact of IT on business agility [17]. These works present two distinct and opposing views in relation to the impact of IT investment on business agility. The first view is that IT can stabilize and facilitate business processes but can also obstruct the functional flexibility of organizations, making them slow and cumbersome in the face of emerging opportunities. In such systems, business processes are often hardwired by rigid predefined process flows. The second view portrays IT applications as disruptive influences, often dislodging efficiently working processes and leading to widespread instability that reduces the effectiveness of the organization's competitive efforts [13]. These two opposing views need resolution for managers who face a continually changing IT-driven business environment. Being agile is a compelling catch cry but may lead to a complex and confusing interaction between stability and flexibility [17].

2.2 Approaches for Characterizing and Measuring Agility

There are also some works that treat agility issues within enterprises; they mainly concern the strategizing of agility [7], the identification of the capabilities of agility [19], the identification of agility levels [14], the proposition of conceptual agility frameworks [16], and the measurement of agility [21]. [7] studied agility from the strategy point of view and mentions that there are three main points for strategizing agility: (i) the exploitation strategy, (ii) the exploration strategy, and (iii) the change management strategy. The exploitation strategy concerns the environmental and organizational analysis, the enterprise information and knowledge systems, the standardized procedures and rules, and the information services. The exploration strategy is related to the alternative futures of information systems, the existing communities of practice, the flexibility of project teams, the existence of knowledge brokers, and the possibility of cross-project learning.
The change management strategy depends on the ability to incorporate ongoing learning and review. [19] distinguish three interrelated capabilities of agility: (i) operational agility: the ability to execute the identification and implementation of business opportunities quickly, accurately, and cost-efficiently; (ii) customer agility: the ability to learn from customers, identify new business opportunities and implement these opportunities together with customers; and (iii) partnership agility: the ability to leverage a business partner's knowledge, competencies, and assets in order to identify and implement new business opportunities. This distinction is in line with the multiple initiatives proposed in the literature: (i) internally focused initiatives (operational agility), (ii) demand-side initiatives (customer agility), and (iii) supply-side initiatives (partnership agility). Concerning the identification of agility levels, [14] argues that systems can be agile in three different ways: (i) by being versatile, (ii) by reconfiguration, and (iii) by reconstruction. Being versatile implies that an information system is flexible enough to cope with changing conditions as it is currently set up. If current solutions are not versatile enough, reconfiguration will be needed; this can be interpreted as pent-up agility released by a new configuration. If reconfiguration is not enough, reconstruction will be needed; this means that changes or additions have to be made to the information system. Furthermore, [14] proposed a framework that discusses how agility is produced and consumed. This is closely related to the level of agility, which can be interpreted as the result of an agility production process to which resources are allocated. These agility levels are then used in order to consume agility when seizing business opportunities. Additionally, he outlines that when consuming agility within a business development effort, in many situations agility is reduced. This means that we are confronted with negative feedback that indicates how much the enterprise's agility is reduced by this business development effort. An important agility framework, which concerns the management perspective, is that proposed by [16]. In this framework, we begin with the analysis of the change factors, where a required response of the enterprise is related to the enterprise's IT capability. Then, an enterprise's agility readiness is determined by its business agility capabilities. These capabilities are the reasons behind the existence or non-existence of agility gaps. If there is a mismatch between the business agility needs and the business agility readiness, there is a business agility gap. This has implications for the business agility IT strategy. Another important work is by [12], who studied agility from the socio-technical perspective. In this perspective, the information system is considered as composed of two sub-systems: a technical system and a social system. The technical subsystem encompasses both technology and process. The social subsystem encompasses the people who are directly involved in the information systems and the reporting structure in which these people are embedded. To measure information system agility using the socio-technical perspective, [12] use the agility of the four components: technology agility, process agility, people agility, and structure agility.
Hence, [12] argue that agility is not a simple sum of the agility of the four components, but depends on their nonlinear relationship. Furthermore, [22] mention the importance of the preservation of agility through audits and people education. This
latter aspect is important because most organizations continually need education for continuous agility. Finally, [21] proposed a fuzzy logic knowledge-based framework to evaluate manufacturing agility. The value of agility is given by an approximate reasoning method taking into account the knowledge that is included in fuzzy IF-THEN rules. By utilizing these measures, decision-makers have the opportunity to examine and compare different systems at different agility levels. For this purpose, agility is evaluated according to four aspects: (i) production infrastructure, (ii) market infrastructure, (iii) people infrastructure and (iv) information infrastructure. Although all these works are important, our work is most closely related to those proposed by [12] and [21]. In the following, we propose to extend this research to the evaluation of the agility of information systems, in particular in the context of the enterprise interoperability perspective.
3 The POIRE Framework

Basing our research on the work of [12] and [21], we suggest the following framework, called POIRE (Process, Organization, Information, Resource and Environment). In the following, we will briefly expose the main dimensions of POIRE and then we will focus on the evaluation of agility in the context of enterprise interoperability.
3.1 POIRE Dimensions

We suggest for our agility framework (POIRE) the following dimensions that are necessary in the context of measuring the agility of enterprise information systems (Figure 1):
Fig. 1. POIRE dimensions for enterprise information system agility (the dimensions Environment, Organization, Information, Resource and Process, connected by relations such as interacts with, provides, uses, contains and manipulates)
- Process dimension (P): This dimension deals with the enterprise behaviour, i.e. business processes. It can be measured in terms of the time and cost needed to counter unexpected changes in the processes of the enterprise. An agile process infrastructure enables in-time response to unexpected events such as correction and reconfiguration. Processes can be measured by their precision, exhaustiveness, non-redundancy, utility, reliability, security, integrity, actuality, efficiency, effectiveness, and feasibility.
- Organization dimension (O): This dimension deals with all the organizational elements involved in industry, i.e. structure, organization chart, etc. It can be measured by the hierarchy type, management type, range of subordination, organizational specialization, intensity of the headquarters, non-redundancy, flexibility, turnover, and exploitability.
- Information dimension (I): This dimension deals with all the information stored and manipulated within the enterprise. It concerns the internal and external circulation of information. It can be measured by the level of information management tasks, i.e. the ability to collect, share and exploit structured data, and by accuracy, exhaustiveness, non-redundancy, utility, reliability, security, integrity, actuality, publication, and accessibility.
- Resource dimension (R): This dimension is about the resources used within the enterprise, mainly people, IT resources, and organizational infrastructures. Resources can be measured by their usefulness, necessity, use, reliability, connectivity and flexibility. Concerning people, who constitute in our opinion the main key to achieving agility within an enterprise, agility can be assessed by the level of training of the personnel, the motivation/inspiration of employees and the data accessible to them.
- Environment dimension (E): This dimension deals with the external factors of the enterprise, including customer service and marketing feedback. It can be measured by the ability of the enterprise to identify and exploit opportunities, customize products, enhance services, deliver them on time and at lower cost, and expand its market scope, as well as by reactivity, proactivity and accuracy.
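The abstract characterizes overall agility as an amalgamation function over these five dimensions, but the paper does not give that function in closed form. The following Python sketch is therefore only one plausible reading: the dimension names come from POIRE, while the equal default weights and the weighted geometric mean (chosen to reflect the nonlinearity stressed in Section 2) are our own assumptions.

```python
# Illustrative sketch only: the amalgamation function is not prescribed
# by the paper; the weights and the geometric-mean form are assumptions.

def amalgamate_agility(scores, weights=None):
    """Combine per-dimension agility scores in [0, 1] for the five POIRE
    dimensions into one overall agility value.

    A weighted geometric mean is used so that one weak dimension (e.g.
    rigid processes) drags overall agility down more strongly than a
    simple average would, mirroring the claimed nonlinearity.
    """
    dims = ["P", "O", "I", "R", "E"]
    if weights is None:
        weights = {d: 1.0 / len(dims) for d in dims}  # equal weights by default
    agility = 1.0
    for d in dims:
        agility *= max(scores[d], 1e-9) ** weights[d]
    return agility

# Example: strong informational agility, weak organizational agility.
print(amalgamate_agility({"P": 0.7, "O": 0.3, "I": 0.9, "R": 0.6, "E": 0.5}))
```

With these inputs the result (about 0.56) lies below the arithmetic mean of 0.6, illustrating how the weak Organization dimension penalizes the overall value.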
3.2 POIRE Metamodel

Figure 2 shows the POIRE metamodel. As illustrated, agility is evaluated according to a certain number of agility factors, which are determined using a set of agility evaluation criteria for each dimension of the information system. These criteria are measured thanks to identified metrics that concern a given aspect (or dimension) of the information system. The evaluation of the metrics is practically based on the evaluation of a certain number of questions that are defined within the questionnaire of the corresponding dimension. Furthermore, agility factors and criteria are not independent, in the sense that they may mutually influence each other.
Fig. 2. POIRE metamodel (questionnaires contain questions; questions concern information system dimensions and evaluate metrics; metrics evaluate agility criteria; criteria determine agility factors; factors and criteria mutually influence each other and feed the agility grid)
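To make the metamodel concrete, the sketch below renders its entities as Python data structures. The class names mirror Fig. 2; the plain averaging used to roll questions up into metrics and metrics up into criteria is an illustrative simplification (the methodology in Section 3.3 evaluates metrics with fuzzy linguistic variables instead).

```python
# Minimal data-structure sketch of the POIRE metamodel (illustrative).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    answer: float = 0.0          # normalised answer in [0, 1]

@dataclass
class Metric:
    name: str
    questions: List[Question] = field(default_factory=list)

    def value(self) -> float:
        # A metric is evaluated from the questionnaire questions
        # that concern it (here: their mean answer).
        return sum(q.answer for q in self.questions) / len(self.questions)

@dataclass
class AgilityCriterion:
    name: str
    metrics: List[Metric] = field(default_factory=list)

    def value(self) -> float:
        return sum(m.value() for m in self.metrics) / len(self.metrics)

@dataclass
class AgilityFactor:             # determined by one or more criteria
    name: str
    criteria: List[AgilityCriterion] = field(default_factory=list)

@dataclass
class Dimension:                 # one of P, O, I, R, E
    name: str
    factors: List[AgilityFactor] = field(default_factory=list)

m = Metric("non-redundancy", [Question("Are data duplicated across systems?", 0.8)])
print(m.value())   # 0.8
```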
3.3 POIRE Methodology

In order to evaluate agility, we begin with the analysis of the information system and the determination of the target information system grid. Then, we customize the questionnaire and apply the linguistic variable concept of fuzzy logic to evaluate the different metrics, which allows us to determine the agility criteria and also the real agility grid. Once the real and target grids are compared, we conclude with an HAIS (High Agility of the Information System) message, or we make the necessary adjustments in the case where there is an AAIS (Average Agility of the Information System) or LAIS (Low Agility of the Information System). Figure 3 briefly illustrates the main principle of the proposed methodology.
Fig. 3. POIRE methodology (analyze and/or modify the IS; determine the target agility grid; customize and fill in the questionnaire; evaluate the agility metrics; evaluate the real agility grid; compare the target and real agility grids; conclude HAIS, or define corrections and adjustments in the case of AAIS or LAIS)
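The comparison step can be sketched as follows. The linguistic terms and all numeric thresholds below are illustrative assumptions, not values from the paper; a full implementation would use fuzzy membership functions and IF-THEN rules as in [21].

```python
# Hedged sketch of the POIRE comparison step (thresholds are assumptions).

def to_linguistic(value: float) -> str:
    """Map a normalised metric value onto a linguistic variable,
    in the spirit of the fuzzy-logic evaluation of the metrics."""
    if value < 0.4:
        return "low"
    if value < 0.7:
        return "average"
    return "high"

def classify(real_grid: dict, target_grid: dict) -> str:
    """Compare the real agility grid against the target grid and
    conclude HAIS, AAIS or LAIS."""
    ratio = sum(min(real_grid[c] / target_grid[c], 1.0)
                for c in target_grid) / len(target_grid)
    if ratio >= 0.9:
        return "HAIS"   # high agility: no adjustment needed
    if ratio >= 0.6:
        return "AAIS"   # average agility: define corrections
    return "LAIS"       # low agility: define corrections

print(to_linguistic(0.65))                              # average
print(classify({"precision": 0.8, "flexibility": 0.5},
               {"precision": 0.9, "flexibility": 0.8})) # AAIS
```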
4 Agility Evaluation in the Context of Enterprise Interoperability

A basis for assessing agility in a context of interoperability is still in its beginnings; the concept of agility is still being investigated and refined with time and with the context of the enterprise. In the present work, according to the preceding sections, agility evaluation concerns all the dimensions of agility; hence, for each agility dimension, we suggested a list of pertinent points that must be considered in order to evaluate the agility of enterprise information systems. In the same way, we conducted a study in order to learn about the role of interoperability in achieving agility, and also in order to better identify the rapprochement that may exist between these two concepts, which in our opinion has not yet been discussed in the literature. Let us recall that interoperability is the ability of two heterogeneous components to communicate and cooperate with one another despite differences in languages, interfaces, and platforms [23] [24]. In order to ensure true
interoperability between components or systems, syntactic interoperability [25] and semantic interoperability [10] [23] must be specified. For the purpose of studying the rapprochement between agility and interoperability, we identified several metrics within the five above dimensions that are related to the interoperability aspect. Then we evaluated them using maturity grids, based on the CMMI model (with five maturity levels) and using examples taken from industrial realities. The idea is to understand how to endow the enterprise with the ability to maintain and adjust its interfaces, in terms of agility of interoperability, at different levels of abstraction of its dimensions under unexpected changes and conditions. Figure 4 illustrates the principle of the rapprochement between interoperability and agility using maturity grids.
Fig. 4. Evaluating the role of interoperability in achieving the agility with grids (for each metric, interoperability and agility maturity levels from 0 to 5 are plotted as real, target and ideal profiles)
Due to the data complexity and the large number of details concerning the conducted experiment, we briefly describe the main results obtained. First of all, we can retain that interoperability can be seen as one important property of agility. Furthermore, agility can be considered as a nonlinear function of interoperability. This may be explained in theory by the fact that interoperability, which is correlated with complexity, influences agility in two ways: with no interoperability there will be no agility, while with excessive interoperability, and thus excessive complexity, agility will also decline, as illustrated in Figure 5. We also notice that there is an asymptotic equilibrium, defined as the tendency of the agility of the system during its life cycle.
Fig. 5. Relating agility and interoperability - theoretical tendency (agility rises with interoperability and then declines towards an asymptotic equilibrium)
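The paper presents this curve only graphically. As a worked illustration, one functional family with exactly these properties (zero agility at zero interoperability, a hump at moderate interoperability, decay towards an asymptotic equilibrium) is the following; the parameters A-infinity, alpha, beta and gamma are our own illustrative choices, not quantities from the paper:

```latex
% Illustrative functional form for Fig. 5 (an assumption, not from the paper)
\[
  A(i) \;=\; \underbrace{A_{\infty}\bigl(1 - e^{-\beta i}\bigr)}_{\text{asymptotic equilibrium}}
  \;+\; \underbrace{\alpha\, i\, e^{-\gamma i}}_{\text{transient hump}},
  \qquad \alpha, \beta, \gamma > 0 .
\]
```

Here A(0) = 0 (no interoperability, no agility), the hump term dominates at moderate interoperability, and A(i) tends to the asymptotic equilibrium as interoperability, and with it complexity, grows further.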
In addition, we notice that in practice there is some turbulence in the agility grids due to the occurrence of changes (technology changes, business changes, strategic changes, organizational changes) in the enterprise (Figure 6). These changes lead to breaking zones that need appropriate management in order to steer the enterprise through this state of transition.
Fig. 6. Relating agility and interoperability – practical tendency (breaking zones, e.g. changes in technology or in the IT teams, cause turbulence around the asymptotic equilibrium)
5 Conclusions

We have presented in this paper an approach for the evaluation of agility in the context of enterprise interoperability. We have presented the main principles of the POIRE framework. We also studied the role of interoperability and its rapprochement with agility. We notice that implementing syntactic and semantic interoperability yields an increase in agility by easing the process of reconfiguration when adapting the information system to unpredictable changes. However, there is an asymptotic equilibrium for the agility level after a certain degree of interoperability. Future work will concern the exploitation of this framework in large real-world settings in order to validate it, and the investigation in more detail of other forms of agility properties such as vigilant information systems, lean information systems, outsourced information systems and also information systems in the context of the Enterprise 2.0 wave.
References

[1] Abrahamsson P., Warsta J., Siponen M.T. and Ronkainen J., "New Directions on Agile Methods: A Comparative Analysis". Proceedings of ICSE'03, 2003. pp. 244-254.
[2] Adrian E., Coronado M. and Lyons A. C., "Investigating the Role of Information Systems in Contributing to the Agility of Modern Supply Chains". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, ISBN 10: 0-7506-8235-3, 2007. pp. 150-162.
[3] Ahsan M. and Ye-Ngo L., "The Relationship between IT Infrastructure and Strategic Agility in Organizations". In Romano N. C. Jr. (ed.), Proceedings of the Eleventh Americas Conference on Information Systems, Omaha, NE, 2005.
[4] Conboy K. and Fitzgerald B., "Towards a Conceptual Framework of Agile Methods: A Study of Agility in Different Disciplines". ACM Workshop on Interdisciplinary Software Engineering Research, Newport Beach, CA, November 2004.
[5] Desouza K. C., "Preface". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, ISBN 10: 0-7506-8235-3, 2007.
[6] Dove R., "Response Ability: the Language, Structure and Culture of the Agile Enterprise". New York, Wiley, 2001.
[7] Galliers R. D., "Strategizing for Agility: Confronting Information Systems Inflexibility in Dynamic Environments". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 1-14.
[8] Goldman S. et al., "21st Century Manufacturing Enterprise Strategy". Bethlehem, PA: Iacocca Institute, Lehigh University, 1991.
[9] Goranson H. T., "The Agile Virtual Enterprise: Cases, Metrics, Tools". Quorum Books, 1999.
[10] Heiler S., "Semantic Interoperability". ACM Computing Surveys, vol. 27, issue 2, 1995. pp. 271-273.
[11] Houghton R. J. et al., "Vigilant Information Systems: The Western Digital Experience". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 222-238.
[12] Lui T-W. and Piccoli G., "Degrees of Agility: Implications from Information Systems Design and Firm Strategy". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 122-133.
[13] Lyytinen K. and Rose G. M., "The Disruptive Nature of IT Innovations: The Case of Internet Computing in Systems Development Organizations". MIS Quarterly, 27 (4), 2003. p. 557.
[14] Martensson A., "Producing and Consuming Agility". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 41-51.
[15] Mooney J. G. and Ganley D., "Enabling Strategic Agility Through Agile Information Systems". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 97-109.
[16] Oosterhout M. V., Waarts E., Heck E. V. and Hillegersberg J. V., "Business Agility: Need, Readiness and Alignment with IT Strategies". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 52-69.
[17] Ross J. W. and Beath C. M., "Beyond the Business Case: New Approaches to IT Investment". MIT Sloan Management Review, 43 (2), 2002. pp. 51-59.
[18] Rouse W. B., "Agile Information Systems for Agile Decision Making". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 16-30.
[19] Sambamurthy V., Bharadwaj A. and Grover V., "Shaping Agility through Digital Options: Reconceptualising the Role of Information Technology in Contemporary Firms". MIS Quarterly, 27 (2), 2003. pp. 237-263.
[20] Stamos E. and Galanou E., "How to Evaluate the Agility of Your Organization: Practical Guidelines for SMEs". VERITAS, 2006. Available at: http://www.veritaseu.com/files/VERITAS_D6_1_Agility_Evaluation_Handbook.pdf
[21] Tsourveloudis N. et al., "On the Measurement of Agility in Manufacturing Systems". Journal of Intelligent and Robotic Systems, Kluwer Academic Publishers, Hingham, MA, USA, 33 (3), 2002. pp. 329-342.
[22] Wensley A. and Stijn E. V., "Enterprise Information Systems and the Preservation of Agility". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007. pp. 178-187.
[23] Wegner P., "Interoperability". ACM Computing Surveys, vol. 28, issue 1, 1996.
[24] Wileden J. C. and Kaplan A., "Software Interoperability: Principles and Practice". Proceedings of the 21st International Conference on Software Engineering (ICSE), ACM, 1999. pp. 675-676.
[25] Wileden J. C. et al., "Specification Level Interoperability". Proceedings of the 12th International Conference on Software Engineering (ICSE), ACM, 1990. pp. 74-85.
Industrialization Strategies for Cross-organizational Information Intensive Services

Christoph Schroth (1, 2)

(1) University of St. Gallen, MCM Institute, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
(2) SAP Research CEC St. Gallen, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
[email protected]
Abstract. Cross-organizational collaboration is about to gain significant momentum and facilitates the emergence of a globally networked service economy. However, the organization and implementation of business relationships which span across company boundaries still shows considerable weaknesses with respect to productivity, flexibility and quality. New concepts are therefore required to facilitate a comprehensive industrialization by improving the formalization, standardization and automation of related concepts and methodologies. In this work, we briefly elaborate on a specific case of governmental administration in Switzerland, which represents a cross-organizational service industry with significant potential for performance enhancement. We present a reference architecture for service-oriented business media which allow the different involved stakeholders to organize and implement cross-company collaboration as efficiently as possible. By applying this reference architecture to the case of public administration, we show that “Lean” service consumption and provision between organizations can be realized: Similar to the manufacturing context, the seven major kinds of “waste” (defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing) are reduced. Keywords: Engineering interoperable systems, Requirements engineering for the interoperable enterprise, Interoperability issues in electronic markets, Interoperability performance analysis, Interoperable enterprise architecture
1 Introduction

Cross-organizational collaboration is about to gain significant momentum and facilitates the emergence of a globally networked service economy. The relentless march of improvements in the cost-performance ratio of information technology already provides companies with the opportunity to execute such cross-organizational business relationships electronically and thus to extend market
reach, save time, cut costs and respond to customer queries more agilely [1, 2]. However, the organization of such business relationships still shows considerable weaknesses with respect to productivity, flexibility and quality. Business processes are often unstructured, unclear terminology prevents a common understanding, functional as well as non-functional parameters are rarely formalized and standardized, and the frequently manual execution leads to huge variability and poor manageability of results. New concepts are therefore required to facilitate a comprehensive industrialization by improving the formalization, standardization and automation of related concepts and methodologies. In this work, we first of all present clear definitions of relevant terms and elaborate on the research approach applied (Section 2). We also briefly revisit traditional product manufacturing with respect to the major advancements which facilitated its industrialization over time. In Section 3, we present a reference architecture for service-oriented, electronic business media which allows for improving the organization and implementation of cross-organizational business relationships. Section 4 is devoted to the application of this reference architecture to an exemplary use case in the field of public administration. We show that "Lean" service consumption and provision between organizations can be realized: similar to the manufacturing context, the seven major kinds of "waste" (defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing) are reduced significantly. Section 5 concludes the work with a brief summary and an overview of selected related research work.
2 Definition of Terms and Research Approach

Numerous scholars [3, 4, 5] have worked on differentiating services from products, resulting in various kinds of characteristics which they consider unique to the world of services. The service definition underlying this work is as follows: a service is considered an activity that is performed to create value for its consumer by inducing a change of the consumer himself, his intangible assets or his physical properties. In particular, information intensive services (IIS) can be described as activities that are conducted by either machines or humans and deal with the "manipulation of symbols, i.e. collection, processing and dissemination of symbols – data, information and decisions" [4, p.492]. The focus of this study lies on IIS which are provided and consumed across corporate boundaries (e.g., the collaborative creation of documents or order and invoice processes). Studies of cross-organizational collaboration can be traced back several centuries: in the 18th century, Smith [6] argued that the division of labour among agents facilitates economic wealth. Through their specialization on limited scopes of duties, productivity can be improved, as the repeated execution of the same activity helps tap existing potential for optimization. Malone [1] describes in which way decreasing costs for communication and transportation (also referred to as transaction costs [7]) enabled this division of labour and made organizations evolve from isolated small businesses via large hierarchies to decentralized markets. Today, organizations try to lower the costs of collaborating through information
exchange even more. The term industrialization is used in various contexts and with very different meanings. In this work we refer to industrialization as the application of scientific means to formalize, standardize and automate the creation of customer value, with the final goal of improving performance (measured on the basis of the indicators productivity, quality and flexibility [8]).

Fig. 1. Industrialization of traditional product manufacturing (18th/19th century, Watt, Whitney, Bessemer: power machinery, interchangeable parts, division of labour, but no interrelation of different processes and no consideration of the single workers; early 20th century, Taylor, Ford, Sloan: holistic view on production processes, introduction of the "production-line approach", separation of process planning and execution, worker specialization, focus on both productivity and flexibility; late 20th century, Ohno, Krafcik: Lean Manufacturing to reduce waste, i.e. superfluous overhead, and simultaneous improvement of productivity, quality and flexibility)
The industrialization of traditional manufacturing (Fig. 1) can trace its roots back to the work of inventors such as James Watt in the 18th century, who developed techniques to make steam engines significantly faster, safer, and more fuel-efficient than existing devices. During the 18th and also the 19th century, the general focus of industrialization was mainly on the development of power machinery, interchangeable parts, and the division of labour. A more holistic perspective that takes into account overall production processes and also the management of workers did not emerge until the early 20th century: Taylor [9], Ford and Sloan introduced scientific means to the analysis of production processes, promoted the production-line approach (to enable mass production) and focused on both productivity and manufacturing flexibility. In the second half of the 20th century, the first "Lean" manufacturing systems emerged [8]. As opposed to the traditional, buffered approach (where inventory levels are high, assembly lines have built-in buffers against partial system breakdowns and utility workers are kept on the payroll to buffer against unexpected demand), the Lean Manufacturing approach aims to reduce superfluous overhead (represented by the seven kinds of "waste": defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing). In this way, all relevant performance indicators (productivity, quality, flexibility) could be improved.
3 A Reference Architecture for Seamless Cross-Organizational Collaboration

As briefly introduced above, electronic, cross-organizational collaboration shows significant weaknesses with respect to performance indicators such as quality, flexibility and productivity. The reference architecture introduced in this section facilitates the organization and implementation of electronic media for efficient and effective collaboration. For this purpose, we leverage Schmid's Media Reference Model (MRM) [10]: the concept of media can first of all be defined as enabler of interaction, i.e. they allow for exchange, particularly the communicative exchange, between agents. The MRM comprises the three major components Physical Component (physical basis of the medium), Logical Component (logical sphere between interacting agents) and Organizational Component (social interaction organization). It further consists of a dedicated layer and phase model which builds upon these three components and allows for systematically modelling, understanding and reorganizing media.
Fig. 2. Service-Oriented Business Medium Reference Architecture (the Organizational Component, comprising information objects, interaction rules, a registry, roles and coordination, builds on the Physical Component, a service bus)
Fig. 2 visualizes the major components of our service-oriented reference architecture: As a part of the MRM’s Organizational Component, the structural as well as process-oriented organization of the agents’ interaction is defined (“Coordination” in Fig. 2). For the structural organization, electronic media need to provide a registry of participating agents as well as a role model defining rights and obligations for each of them. Also, the information objects (e.g., documents) need to be specified in a common way to ensure seamless interaction between the agents (who assume the above mentioned roles). In terms of process-oriented organization, a cross-organizational status- and action approach is leveraged: Each of the roles may execute certain actions (consume or provide services) depending on the current status of the collaboration and on a set of generally valid business rules. Adhering to the principle of modularization, atomic interaction patterns are introduced which can be assembled towards more high-level business processes [11]. Pieces of the process-oriented organization can be automated on the basis of finite state machines (FSM): Perceived events may change their state and trigger a sequence of further events, representing a sub-process.
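The status-and-action approach lends itself to a compact illustration. The sketch below is our own minimal construction, not code from the paper or the HERA project: a finite state machine whose transitions are gated by both the current collaboration status and the acting role, of the kind the coordination services introduced in the next paragraph could automate. All state, action and role names are invented for illustration (anticipating the tax declaration case of Section 4).

```python
# Minimal sketch of a role-gated, cross-organizational FSM (illustrative).

TRANSITIONS = {
    # (current status, action, role) -> next status
    ("STATEMENT_DRAFTED",     "verify", "auditor"):    "STATEMENT_AUDITED",
    ("STATEMENT_AUDITED",     "submit", "accountant"): "DECLARATION_SUBMITTED",
    ("DECLARATION_SUBMITTED", "assess", "canton"):     "ASSESSMENT_ISSUED",
}

class CollaborationFSM:
    def __init__(self):
        self.status = "STATEMENT_DRAFTED"

    def act(self, action: str, role: str) -> None:
        key = (self.status, action, role)
        if key not in TRANSITIONS:
            # Business rule violated: this role may not perform this
            # action in the current collaboration status.
            raise PermissionError(f"{role} may not '{action}' in {self.status}")
        self.status = TRANSITIONS[key]

fsm = CollaborationFSM()
fsm.act("verify", "auditor")
fsm.act("submit", "accountant")
print(fsm.status)   # DECLARATION_SUBMITTED
```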
This organizational component is complemented by the Logical Sphere (L-Component), which ensures that agents can seamlessly interact on the basis of a common understanding. The semantics and the structure of exchanged pieces of information represent one major element of this component. The description and the underlying understanding of interaction patterns as well as role model semantics represent further elements of this component. As a third component, the C-Component enables the interaction of the different stakeholders by providing the technology (the physical means) for exchanging messages and relies on the principles of Event-Driven Architectures. It may be considered a cross-organizational operating system, as it provides services for Information Routing, Error and Exception Handling, Data Validation and Transformation, Security, User Directory (supporting the retrieval of adequate business partners), Event/Catalogue Services (storing information about all supported events and related user rights) as well as services allowing for diverse kinds of information dissemination (Unicast, Publish-Subscribe etc.). Besides these operational services, the C-Component also features coordination services which implement the logic of the above outlined finite state machines (devoted to automating specified pieces of the process-oriented organization). Finally, the actual agents (stakeholders interacting with the help of an electronic medium) encapsulate the services they offer with the help of Web Services-based, standardized adapters which enable them to connect and thus use existing, often proprietary applications. For stakeholders who do not use local software applications, Web-based clients are provided. This reference architecture is elaborated in more detail in [12]. Adapters play a crucial role with respect to both scalability and flexibility of our reference architecture: in the course of the HERA project [13], different "ecosystems" of agents have been found to interact on the basis of proprietary information objects, different role models and interaction patterns as well as rules. To account for such individual requirements in electronic collaboration while still ensuring global connectivity and interoperability, adapters are essential to mediate between the diverse spheres. Adapters may thus not only be used to shield specificities of one agent from the common medium (see Fig. 2), but may also act as mediators between two or more different electronic business media: in this way, users who are connected to different media (e.g., because they belong to different business ecosystems) can interact with the help of an adapter which translates between different data formats or other elements of our reference architecture. In [12], a thorough analysis of this cross-media connectivity (architectural recursivity) is provided.
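As a hypothetical illustration of the adapter role, the sketch below translates a proprietary accounting record into a common information object and back. The field names and the dict-based representation are our own assumptions chosen for brevity; a real adapter in this architecture would map standardized XML information objects and rely on the medium's data validation services.

```python
# Illustrative adapter: proprietary legacy format <-> common medium format.

COMMON_SCHEMA = {"taxpayer_id", "fiscal_year", "net_profit"}

class LegacyAccountingAdapter:
    """Shields one agent's proprietary application from the common medium."""

    def to_common(self, legacy: dict) -> dict:
        doc = {
            "taxpayer_id": legacy["KundenNr"],
            "fiscal_year": int(legacy["Jahr"]),
            "net_profit": round(float(legacy["Reingewinn"]), 2),
        }
        assert set(doc) == COMMON_SCHEMA   # validate before dissemination
        return doc

    def from_common(self, doc: dict) -> dict:
        return {"KundenNr": doc["taxpayer_id"],
                "Jahr": str(doc["fiscal_year"]),
                "Reingewinn": str(doc["net_profit"])}

adapter = LegacyAccountingAdapter()
print(adapter.to_common({"KundenNr": "CH-4711", "Jahr": "2007",
                         "Reingewinn": "125000.50"}))
```

The same translation logic, deployed between two media rather than between an agent and a medium, realizes the cross-media connectivity (architectural recursivity) mentioned above.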
4 Industrialization of IIS in the Case of Swiss Public Administration

4.1 The Collaborative Process of Creating Tax Declarations in Switzerland

In this section, we elaborate on a case study that has been conducted in the course of the Swiss government-funded project HERA [13], which aims at an improvement of the tax declaration procedure in Switzerland. It serves as a specific case for the interaction of certain stakeholders who mutually consume and provide IIS in order to achieve a common goal. There are mainly four stakeholders involved in the cross-organizational creation of a tax declaration. First, a company itself aims at submitting, as efficiently as possible, a tax declaration that complies with the law, is consistent with the forms issued by the various cantons (Swiss states) and is optimized with respect to the resulting tax load. Accountants can either be company-internal departments or external service providers. They create comprehensive financial statements and also provide consulting services with respect to profit appropriation strategies. Auditors have to be organizationally separated from accountants (by law) to ensure their independence. They examine and verify the compliance of financial statements and profit appropriations. Finally, the cantons (states) receive the completed tax declaration and initiate the assessment/enactment process. Municipalities play a certain role within the tax declaration process in some of the Swiss cantons, but are left out of this work due to space constraints.

During this procedure of creating a tax computation, the division of labour among the players induces the need for coordination and information exchange between them, which follows certain choreographies. As a consequence, numerous documents are passed from one stakeholder to the other and are thereby processed in different ways until they reach the end of their respective “lifecycles”. Today, all stakeholders interact with each other via different communication channels. Some information is exchanged in paper format; other documents are transferred via e-mail or proprietary electronic interfaces. The resulting media breaks, the lack of standardized interfaces and the strong involvement of humans in information processing induce high transaction costs and increase the risk of errors, thereby limiting service quality. Also, services are only rarely subject to quantifiable performance metrics. The study has shown that especially non-functional properties of services, such as delivered quality or the exact time required for completion, are usually not specified in a clear, formal and quantifiable way. Also, the cross-organizational process varies from canton to canton, as the individual states determine the boundary conditions of the tax declaration procedure. This heterogeneity prevents standardization with respect to terminology, processes and pieces of information, and therefore deteriorates the productivity of seamless collaboration across the stakeholders’ boundaries. Frequently, decisions have been found to be made on the basis of best practices instead of formalized rule sets.
4.2 “Lean” IIS Management through the Reduction of Waste

By applying the reference architecture introduced above and deploying an electronic business medium (referred to as the HERA Bus) to support the collaborative tax declaration scenario, “waste” can be reduced significantly. The following paragraphs elaborate on the respective reductions which are to be achieved in the course of the HERA project [13].

Waste of defects: In the governmental administration case outlined above, defects are one of the most relevant sources of waste and therefore of reduced service performance. Paper-based information exchange, proprietary interfaces and application interconnections as well as a high degree of human involvement induce the risk of information processing errors. Especially the mapping between different data formats (which requires significant effort) may lead to the incorrect interpretation of certain pieces of information (as they are rarely subject to widely accepted standards), which, in turn, results in defective documents (“waste”). The introduction of an electronic business medium which complies with the reference architecture outlined above will reduce error rates dramatically: As a part of the O-Component, first of all, the information objects the agents deal with are clearly specified and standardized. Through formalization and standardization, uncertainty in the mutual interaction, and thus also errors, are reduced. The semantics of exchanged pieces of information (part of the L-Component), for example, are also clearly determined through a standard which has been created and agreed upon prior to the medium’s establishment. The “semantic gap” which frequently exists between the different information objects the diverse agents are used to working with is bridged through the above-mentioned software adapters, which account for the translation from proprietary, locally used standards into the commonly comprehensible format and the other way around. Besides elements of the O- and L-Components of our framework, the C-Component also contributes to the reduction of “defective” results of electronic interaction: The service bus acts similarly to a cross-organizational operating system and features services for error detection (on a physical level, e.g. XML schema processing errors, time limit overruns, routing errors etc.), error removal and exception handling. Data Validation Services account for the integrity of transmitted data.

Overproduction and Excessive Inventory: As opposed to other scholars [4], we found that inventory does not only exist in the case of physical goods, but also in the information processing context. According to our definition, service inventory represents all the work-in-progress and steps toward service accomplishment that have been performed and stored before the actual service consumer arrives. According to [14], such inventory must be reduced to a minimum to avoid costs such as those incurred for storage, management, processing and maintenance, and the decrease of its value over time. On the other hand, it allows firms to “buffer their resources from the variability of demand and reap benefits from economies of scale” [14, p.56] and to avoid a phenomenon which is frequently referred to as “Lean Sinking”. To find an adequate balance between the two, the “push-pull boundary” [14, p.56] must be easily adjustable (and should be
placed in a way that ensures that as much work as possible is done only in response to actual demand). The reference architecture which is currently being set up in the course of the HERA project supports firms in optimally managing their information service inventory and in preventing it from becoming excessive. The business medium, which can be considered an “intermediary”, will, for example, provide accountants with services that reduce their information inventory while increasing their responsiveness to customer demands: A dedicated service allows them to access up-to-date, electronic forms for completing tax computations (which is within the scope of the service they perform for their clients). In this way, they are supported in streamlining their inventory (which usually consists of large piles of possibly out-dated, paper-based forms that need to be filled in manually). For their key customers, they may already pre-fill relevant information, which reduces service lead times on the one hand, but also costs time (for the pre-performed process step) and may eventually represent waste in case the client decides to engage another accountant. With the help of our novel architecture, agents such as accountants may also improve their services’ quality over time. Old client data can be stored at the intermediary to allow for plausibility checks and automated pre-filling of forms the next time the service is consumed. However, this information needs to be maintained, requires storage capacity and security mechanisms, must be processed correctly and may finally go to waste, inducing the need for a thorough and individual trade-off once again. The increased automation through service orchestration (e.g., by means of the presented finite state machines), high transparency and adherence to clear interaction rules (part of the O-Component) render redundant certain activities which today are done ahead of demand. The creation of a proposal for the tax “Ausscheidung” (tax allocation) across the different states which accommodate at least one of a taxable company’s premises, for example, is today often done although it is not necessarily needed by the entities involved in later process steps. This kind of overproduction can be avoided by the establishment of the institutional intermediary, which enforces the application of clear rules and also provides high transparency with respect to the work that has already been done and the process steps that still need to be performed.

Transportation: In the traditional product manufacturing context, the transportation of goods is regarded as an activity that must be reduced to a minimum, as it does not add customer value but costs time and bears the risk that goods are lost or damaged. In the field of IIS, transportation means the transfer of symbols (or pieces of information) from one agent to another. Transportation can also occur internally at a specific agent, as data is transmitted from one employee to another before being sent out to an external party. Especially during the transportation of paper-based information, as it is frequently conducted today, information may get lost. Apart from that, the routing of the different dossiers (comprising all the documents currently relevant for one dedicated client) represents considerable effort, as people need to seek out and contact the next destination. Our approach reduces the transportation of information considerably: First of all, paper-based information representation is fully avoided and replaced by
electronic information systems (the electronic service bus and the software adapters for connecting local applications to it). In this way, the transportation of information becomes standardized and also much quicker. Intermediate information storages are reduced as well. In the heterogeneous IT application landscape that is in place today, documents often reside in diverse folders, databases and outboxes until they are transferred (and possibly also transformed) for processing in other information systems on the path to the actual receiver. In this way, both the above-discussed risk of errors and the time required for “transportation” are increased, limiting the quality and productivity of the agents’ interaction. The electronic business medium which is implemented in the course of the HERA project will now provide a uniform means for exchanging documents. Events (encapsulating pieces of information) are created by local applications, are then transformed into the commonly comprehensible format and can then be transmitted via the bus without any other intermediate stops. Besides this, the reduction of routing uncertainty also contributes to improved transportation conditions. Today, significant time is spent identifying the desired receiver for a certain piece of information among a number of agents. With the help of a comprehensive role model (featuring related rights and obligations), formalized business rules which represent constraints for the agents’ interactions, and an unambiguous user registry with corresponding addressing schemes, routing uncertainty can be avoided: Agents can only send specified pieces of information to determined receivers, and the routing itself is, from a C-Component perspective, performed automatically by a routing service, thereby reducing the time required for information transportation.

Waiting: The idle time between single steps towards service accomplishment, leading to human resources or equipment that do not operate at full capacity, represents another kind of waste. In the IIS context, such waiting times also exist and are often induced by the unpredictability of the workload arriving through customer requests, and also by the lack of formally defined non-functional properties of services. As opposed to the highly productive manufacturing sector, formal and quantifiable parameters defining service quality or the exact time required for completion rarely exist [14]. Our reference architecture, which is now being applied in the public administration context, provides all stakeholders involved in a collaborative tax computation with a high degree of transparency and thus allows them to schedule their activities accordingly. Again, different architectural elements contribute to the overall reduction of this kind of waste: As part of the O-Component, formalized and standardized service descriptions (regarding functional as well as non-functional parameters) will facilitate a more accurate collaboration. The electronic business medium is enabled to automatically monitor and also enforce such parameters and can take immediate action (such as sending reminders or offering alternative services) in case a certain service does not work appropriately. This transparency of service characteristics, combined with services for monitoring and enforcement (provided as part of the C-Component), enhances the predictability and dependability of electronic interaction and thus reduces the “waste” of waiting.
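The rule-based routing described above can be sketched as follows; the role model, the registry entries and the document types shown are invented placeholders rather than the formalized artefacts of the HERA role model.

```python
# (sender role, document type) -> permitted receiver role (illustrative rules)
ROLE_MODEL = {
    ("accountant", "financial_statement"): "auditor",
    ("auditor", "audit_report"): "company",
    ("company", "tax_declaration"): "canton",
}

# unambiguous user registry: role -> address of the registered agent
REGISTRY = {"auditor": "agent-17", "company": "agent-03", "canton": "agent-42"}

def route(sender_role: str, doc_type: str) -> str:
    """Resolve the receiver automatically; senders never address manually."""
    receiver_role = ROLE_MODEL.get((sender_role, doc_type))
    if receiver_role is None:
        raise PermissionError(f"{sender_role} may not send {doc_type}")
    return REGISTRY[receiver_role]

print(route("accountant", "financial_statement"))  # agent-17
```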
Waste of motion: In manufacturing, ineffective work procedures are one of the causes of manual effort that does not add value from a customer’s point of view. In the context of IIS, it is not physical motion but the superfluous processing of information (which can well be compared to motion), whether by human users or machines, that represents a significant source of waste. One example of the reduction of inefficient information processing is a service which orchestrates all the above-discussed operational services of the medium as soon as a message is to be transferred via the bus. The sequence of services for security (e.g. decryption), data validation, routing, publish-subscribe message dissemination, re-encryption and transfer to the software adapter of the message receiver is highly automated and thus very efficient. Human involvement and superfluous information processing can thus be reduced to a minimum.

Over-processing: Over-processing, induced for example by the over-fulfilment of customer requirements, may result in the useless consumption of resources such as time or inventory. As described above, the definition of clearly formalized services which inform their potential consumers about non-functional properties will reduce the risk of over-processing.
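As an illustration of such an orchestrated sequence, the sketch below chains trivial stand-ins for the operational services named above; the real services of the medium are, of course, considerably more elaborate.

```python
def decrypt(msg):     return {**msg, "encrypted": False}
def validate(msg):    assert "payload" in msg, "schema violation"; return msg
def route_msg(msg):   return {**msg, "receiver": "agent-17"}  # via role model
def publish(msg):     print("notifying subscribers of", msg["topic"]); return msg
def re_encrypt(msg):  return {**msg, "encrypted": True}

# fixed, fully automated sequence: no human involvement between the steps
PIPELINE = [decrypt, validate, route_msg, publish, re_encrypt]

def transfer(message: dict) -> dict:
    for step in PIPELINE:
        message = step(message)
    return message

transfer({"payload": "...", "topic": "tax_declaration", "encrypted": True})
```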
Waste of defects
• Electronic, standardized encapsulation of services, interoperable data formats and uniform terminologies will reduce information processing errors

Overproduction
• Process automation, high transparency and adherence to clear rules render redundant certain process steps which today are done ahead of demand

Excessive inventory
• Services that allow for the electronic download and filling of tax computation forms reduce existing, paper-based inventory and allow for pre-performing certain steps

Transportation
• Information transportation is automated, controlled centrally and performed according to strict, previously defined routes

Waiting
• Formalized service parameters, combined with monitoring and enforcement services provided by the medium, will improve transparency of the process-oriented organization

Waste of motion
• Superfluous processing of information can be reduced through standardization and partial automation of service orchestrations

Over-processing
• The formalization and standardization of non-functional as well as functional service properties will facilitate the reduction of over-processing

Fig. 3. The reference architecture facilitates the reduction of seven kinds of waste
5 Conclusion

The growing relevance of information intensive services for most of the developed countries’ economies on the one hand, and their weaknesses with respect to key performance indicators on the other hand, induce an immediate need to identify and apply means to facilitate a comprehensive wave of industrialization [15, 16, 17, 18, 19]. Challenges such as high variability or unquantifiability do not represent inherent characteristics, but major hurdles which need to be taken down on the path
to a global industrialization. To identify proper means for the industrialization of IIS, we first analyzed methods that have been successfully applied in the traditional product manufacturing context. In particular, we elaborated on the key principles and the central impacts of the Lean Manufacturing concept. Diverse scholars have already presented interesting approaches to the challenges of increasing the productivity, flexibility and quality of services in general and of IIS in particular. In [19], the authors present an elaborate framework for measuring service performance which supports the enhancement of service productivity. In [20], process analysis, a method typically applied in traditional operations management, is transferred to the field of information intensive services. In this way, the authors argue, business processes that are performed as parts of a service can be analyzed and reorganized in a systematic and detailed fashion. In [14], Chopra and Lariviere introduce the notion of “service inventory” and elaborate on its significant role for the level of service performance. According to the authors, basically three factors can be considered general determinants of service performance: the placement of the push-pull boundary (the size of inventory, i.e. work-in-progress or unfinished, stored information which is created prior to a customer’s request), the level and composition of resources (i.e. the people and the equipment that the provider utilizes to perform a service), and finally the service access policies (used to govern how customers are able to make use of a service).

In [22], Levitt proposed a comprehensive production-line approach “as one by which services could significantly improve their performance on both cost, largely through improved efficiency, as well as quality.” [23, p. 209] He believed that service businesses which adopt the (mass) production-line approach could gain a competitive advantage by leveraging a low-cost leadership strategy. Levitt [22] regarded the following four key principles as crucial for the realization of his idea: First, employees should perform limited discretionary action to drive “standardization and quality, i.e. consistency in meeting specifications.” [23, p. 209] Second, by following the principle of division of labour, processes are broken down into groups of tasks which allow for the specialization of skills. “This narrow division of labour made possible limited spans of control and close supervision. It also minimizes both worker skill requirements and training time.” [23, p. 209] Third, the systematic “substitution of equipment for labour aligned with well conceived use and positioning of technology led to efficient, high-volume production at acceptable quality levels.” [23, p. 209] Last, standardization “allows predictability, preplanning, and easier process control which, in turn, provides uniformity in service quality.” [23, p. 209]

In this work, we follow a rather operational approach and propose a reference architecture for the adequate organization and implementation of electronic media for the provision and consumption of services across company boundaries.
The architecture comprises an organizational component (role models, associated rules, registries and information objects), a logical component (enabling a common understanding between interacting agents rather than only allowing for the exchange of mere data) as well as a physical component (the actual services which implement the “ideas” of the organizational component). We applied this reference architecture to the case of governmental administration in Switzerland and showed
that thereby defects, overproduction and excessive amounts of inventory, unnecessary transportation, waste of time, superfluous effort of employees as well as over-processing can be significantly reduced (see Fig. 3). Future work will be devoted to applying and investigating the reference architecture and its benefits in other scenarios in order to gain more insights into optimal organization and implementation of cross-organizational information intensive services.
References

[1] Malone, T. (2001). The Future of E-Business. Sloan Management Review, 43 (1), 104.
[2] Porter, M. (2001). Strategy and the Internet. Harvard Business Review, 79 (3), 63-78.
[3] Hill, T. P. (1977). On goods and services. Review of Income and Wealth, 23, 315-38.
[4] Apte, U., Goh, C. (2004). Applying Lean Manufacturing Principles to Information-Intensive Services. International Journal of Service Technology and Management, 5 (5/6), 488-506.
[5] Fitzsimmons, J. A., Fitzsimmons, M. J. (2003). Service Management. New York: McGraw-Hill.
[6] Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. U.K.
[7] Coase, R. H. (1937). The Nature of the Firm. Economica, 4 (16), 386-405.
[8] Krafcik, J. F. (1988). Triumph of the lean production system. Sloan Management Review, 30 (1), 41-52.
[9] Taylor, F. W. (1911). The Principles of Scientific Management. New York: Harper Bros.
[10] Schmid, B. F., Lechner, U., Klose, M., Schubert, P. (1999). Ein Referenzmodell für Gemeinschaften und Medien. In: Englien, M., Homann, J. (eds.): Virtuelle Organisation und neue Medien. Workshop GeNeMe 99, Gemeinschaften in neuen Medien, Dresden. Lohmar: Eul Verlag, ISBN 3-89012-710-X, 125-150.
[11] Müller, W. (2007). Event Bus Schweiz. Konzept und Architektur, Version 1.5. Eidgenössisches Finanzdepartement EFD, Informatikstrategieorgan Bund (ISB).
[12] Schmid, B. F., Schroth, C. (2008). Organizing as Programming: A Reference Model for Cross-Organizational Collaboration. Proceedings of the 9th IBIMA Conference on Information Management in Modern Organizations, Marrakech, Morocco.
[13] HERA project, available online at: http://www.hera-project.ch, accessed in 2007.
[14] Chopra, S., Lariviere, M. A. (2005). Managing Service Inventory to Improve Performance. MIT Sloan Management Review, 47 (1), 56-63.
[15] Schmid, B. F. Elektronische Märkte - Merkmale, Organisation und Potentiale, available online at: http://www.netacademy.org, accessed in 2007.
[16] McAfee, A. (2004). Will Web Services Really Transform Collaboration? Sloan Management Review, 46 (2), 78-84.
[17] Karmarkar, U. S. (2004). Will you survive the services revolution? Harvard Business Review, 82 (6), 100-110.
[18] Malone, T. W. (2004). The Future of Work: How the New Order of Business Will Shape Your Organization, Your Management Style, and Your Life. Boston, MA: Harvard Business School Press.
[19] Schroth, C. (2007). Web 2.0 and SOA: Converging Concepts Enabling Seamless Cross-Organizational Collaboration. Proceedings of IEEE CEC'07/EEE'07, Tokyo, Japan.
[20] Harmon, E., Hensel, S., Lukes, T. (2006). Measuring performance in services. The McKinsey Quarterly, No. 1.
[21] Karmarkar, U. S., Apte, U. M. (2007). Operations Management in the information economy: Information products, processes, and chains. Journal of Operations Management, 25, 438-453.
[22] Levitt, T. (1972). Production-line approach to service. Harvard Business Review, 50 (5), 20-31.
[23] Bowen, D. E., Youngdahl, W. E. (1998). “Lean” service: in defense of a production-line approach. International Journal of Service Industry Management, 9 (3), 207-225.
SME Maturity, Requirement for Interoperability

Gorka Benguria, Igor Santos

European Software Institute, Parque Tecnológico de Zamudio # 204, E-48170 Zamudio, Spain
[email protected], [email protected]
Abstract. Nowadays and in the future, the capacity to use the latest means of interaction has become a critical factor not only to increase earnings but also to stay in the market. The objective of this paper is to present a strategy for becoming and staying interoperable in SME (Small and Medium Enterprise) environments. SMEs tend to be flexible, adaptable and innovative in their products, but they have difficulty adopting ICT in disciplines not directly related to their products. This strategy relies on three pillars: an Improvement Cycle to guide the establishment and the maintenance of the interoperable status; an Interoperability Maturity Model as a repository of good practices for being interoperable; and an Assessment Method to measure the level of interoperability and to establish feasible goals. Keywords: Strategy and management aspects of interoperability, Managing challenges and solutions of interoperability, Business models for interoperable products and services
1 Introduction

The development and delivery of products and services implies the establishment of relationships with external parties like providers, partners, administrations and clients. This has not changed since the appearance of the ancient Sumerian civilization, or even before (some authors [1] date the history of commerce back 150,000 years). What has changed greatly is the way in which those relationships are carried out, and their complexity. In most cases, the motivation for this evolution lies in the business need for increasing earnings and staying competitive. The evolution of the way in which the different roles interact to complete the delivery of a product or service introduces further prerequisites that the interacting roles must meet in order to take part in the information exchange. Not long ago, it was only required to be able to speak and perform some basic
calculations to engage in any kind of trade. However, the introduction of written documents, the more recent introduction of the telephone and finally the advent of information and communication technologies (ICT) have brought about a huge set of prerequisites in some trading models (e.g. eCommerce, eBanking, etc.). This huge set of prerequisites has made these relationships extremely complex, but on the other hand, effectiveness and profitability have increased dramatically. These prerequisites may include the acquisition of hardware and software, the establishment of new procedures, the modification of existing ones, and even changes in the organisational culture. Nowadays and in the future, the capacity to use the latest means of interaction has become a critical factor, not only to increase earnings but also to stay in the market. This is true for virtually any kind of organisation, from large to micro enterprises. It is important to underline that the adoption of new means of interaction is not a one-shot effort: new means of interaction will appear in the future, and organisations should be able to integrate them in line with their business objectives.

Interoperability is defined as “the ability of two or more systems or components to exchange information and to use the information that has been exchanged” [2], but how can we support SMEs to better interact with their partners, clients, providers, administrations, and so on? The objective of this paper is to present a strategy for becoming and staying interoperable in SME (Small and Medium Enterprise) environments. In this paper, the definition of the SME given by the European Commission is taken as a reference, that is, enterprises which employ fewer than 250 persons and which have either an annual turnover not exceeding 50 million euro, or an annual balance sheet total not exceeding 43 million euro [3][4]. SMEs are a key element of the European economy [5], as they employ half of the workforce and generate half of the value added. The European Commission has stressed their importance several times, as they are the focus of the second action of their strategy in line with the Lisbon strategy [6]. Besides, they have some characteristics that require specific strategies, different from those applicable to large companies. SMEs tend to be flexible, adaptable and innovative in their products, but they have difficulty adopting ICT [7] in disciplines not directly related to their products (e.g. how they sell them, how they offer them, etc.). This strategy for becoming and staying interoperable relies on three pillars, as shown in the next figure (Fig. 1):

• An Improvement Cycle to guide the establishment and the maintenance of the interoperable status. SMEs are flexible by definition, but even though they are used to change, they do not perform that evolution in a managed way. Becoming and staying interoperable is an ambitious goal, and SMEs usually do not have the budget to address such large improvement initiatives. Therefore it should be approached progressively, with small and result-oriented iterations.
• An Interoperability Maturity Model as a repository of good practices for being interoperable: practices that can be adopted by organisations in the continuous effort of being interoperable. The maturity model establishes a roadmap that is capable of guiding the SME from chaos to the higher stages of interoperability. It takes into account the characteristics of the SME, and tries to prioritise outdoor interoperability, in contrast to indoor interoperability.
• An Assessment Method to measure the level of interoperability of an organisation, in order to know the current state and to be able to establish feasible goals. This method takes into account SME constraints and establishes mechanisms to reduce the time and effort required to complete the assessment. Instead of performing one huge assessment, it focuses the assessment on different stages of interoperability. The focus of the assessment can be determined by a short questionnaire-based auto-evaluation.
Fig. 1. Elements of the Strategy
The paper is structured in three sections presenting the three pillars of the strategy. Then the case study that will be used to validate the strategy is presented. Finally, the paper is concluded with a description of the expected results and benefits, the related work and the preliminary conclusions from the strategy definition.
2 Improvement Cycle

Increasing the organisational capability of being interoperable is not a question of jumping directly from the bottom level to the top level. Although often perceived as such, it is imperative that organisational personnel refrain from viewing process improvement as a one-time investment. The marketplace is constantly changing its needs, which means that organisational requirements must change to meet the needs of the market [8]. For a long time, improvement cycles have been used to support organisations in the process of constantly adapting their way of working whenever business needs change. Improvement cycles help organisations in performing this continuous evolution in a systematic and controlled way.
Becoming and staying interoperable is an ambitious goal, but SMEs usually do not have the budget to address such large improvement initiatives. Therefore it should be approached progressively, with small and result-oriented iterations. Improvement cycles describe the set of phases to be carried out by organisations willing to enhance their capabilities. Besides, they usually provide a set of good practices for each phase that increase the probability of succeeding with the improvement initiative. Over time, several improvement cycles have appeared in different domains:

• PDCA (Plan-Do-Check-Act): In 1939 the statistician Walter Shewhart published a continuous cycle (the Shewhart Cycle) [9] for following improvement efforts at any stage. W. Edwards Deming popularised this cycle by applying it to the manufacturing industry in Japan in the 1950s. The Japanese referred to the approach as the Deming Cycle, and nowadays it is often referred to as the Plan-Do-Check-Act (PDCA) cycle [10].
• IDEAL Model [11] (Initiating, Diagnosing, Establishing, Acting and Learning): The Software Engineering Institute (SEI) developed the IDEAL model for managing CMM®-based (Capability Maturity Model) software process improvement programmes. The IDEAL model has gained widespread acceptance in the CMM® community based on its close link to the Capability Maturity Model for Software.
• RADAR (Results, Approach, Deploy, Assessment and Review): This is the improvement cycle of the EFQM excellence model. In the same way that IDEAL is closely related to CMM®, this improvement cycle is closely related to the EFQM excellence model [12].
Taking the Shewhart cycle as a basis and adding some inputs from other improvement cycles, such as IDEAL and RADAR, a customised improvement cycle was developed as the strategy for becoming and staying interoperable in SME environments. This cycle takes into account the SMEs’ features (specialisation, constrained resources, etc.) and the interoperability issues (security, changing standards, etc.) in the identification and definition of the good practices.

• Identify Objectives: In line with the business needs and priorities, this phase identifies the interoperability objectives that will be addressed in the next improvement initiative. SMEs do not have the budget or patience to address long-term improvement projects. These objectives must be valuable, concrete and achievable in the short term. Of course, these small initiatives should be managed and aligned in a common direction.
• Measure Current State: It is necessary to clearly understand the current situation with respect to the capability of being interoperable, in order to establish coherent improvement objectives and to check the advance in the interoperability practices once the improvement initiative has finished. For SMEs this measurement should be fast and focussed on specific issues, trying to avoid extensive and expensive assessments of all the organisational aspects.
• Plan: Once the current state and the interoperability objectives are known, it is possible to define and schedule the set of activities that will bring the organisation from the current to the objective situation. In SMEs the most probable situation is not to have dedicated resources; we will have to use shared resources, that is, resources that should continue performing their day-to-day activity while participating in the initiative. Along the same lines, it is useful to plan activities that keep attention on the initiative, like prototype evaluations, presentations and so on, to avoid the loss of interest due to day-to-day pressures.
• Act: Follow the defined plan, monitor the performance and re-plan when needed. In an SME environment it is crucial to keep the motivation of the people working on the initiative and the interest of the people that could be potentially affected.
• Check Results: Once the improvement activities have been carried out, the next step is to confirm that those activities have achieved the expected results.
• Transfer Results: At the end of each improvement initiative it is advisable to stop for a while and gather the lessons learnt from the improvement initiative. Besides, based on the analysis of the experience, in some special cases it could be decided to institutionalise the improvement initiative to other areas of the organisation.
Usually, in large organisations the most critical aspect of the improvement initiative is to obtain the management commitment. Fortunately, in SMEs the improvement initiative usually starts from the management, and commitment is guaranteed from the beginning. In those special cases in which this condition is not met, it is important to gain and maintain that commitment. The key at this point is to ensure the alignment of the improvement initiative with the business objectives. On the other side, we have the commitment of the workers. This commitment is also difficult to gain and maintain. The key at this point is to ensure the alignment of the improvement initiative with the personal and professional objectives of the staff affected by the improvement initiative.
3 Maturity Model

The term maturity model was popularised by the SEI when it developed the CMM® in 1986. These models are aimed at evaluating and measuring processes within organisations and at identifying best practices useful in helping them to increase the maturity of their processes. Several models have been developed in different disciplines and focusing on different levels of the enterprise: the Service-Oriented Architecture Maturity Model [13], the European Interoperability Framework [14], the Extended Enterprise Architecture Maturity Model [15], the Levels of Information Systems Interoperability [16] and the Organisational Interoperability Maturity Model [17] are the main models surveyed for this work. Unfortunately, existing maturity models do not address the issue of interoperability directly or, if they do, they focus on certain levels (e.g.
organisational, systems) and are not directly applicable to SMEs. The maturity model introduced here establishes a roadmap for the adoption of better interoperability practices. The model takes into account that an SME could be in a chaotic state where neither processes nor roles are defined, and it defines a staged approach that guides the SME from the lower, chaotic levels to the interoperable state. It depicts six interoperability stages of the organisation and four process areas subject to improvement. The next figure (Fig. 2) shows the structure of this model. The process areas are common to each interoperability level. At each level, different objectives are set within each process area. Each objective is achieved through the fulfilment of certain practices that in the end determine the interoperability level. These practices are in turn defined by sub-practices and work products.
Fig. 2. Structure of the Maturity Model
The process areas have been defined according to the main dimensions of the enterprise defined in [18]:

• Business processes: Specification, execution, improvement and alignment of business strategy and processes.
• Organization: Identification, specification, enactment and improvement of all organizational structures.
• Products and services: Specification and design of the organisation’s products and services.
• Systems and technology: Identification, specification, design, construction or acquisition, operation, maintenance and improvement of systems.
As shown in the previous figure (Fig. 2), the following maturity levels are proposed:

• Initial: no processes have been defined; they are performed based on memory.
• Performed: informal definitions of strategies and processes on a per project/department basis.
• Modelled: definition of the collaboration and e-business strategy on a per project/department basis and the consequent definition of most processes.
• Integrated: definition of the strategy concerning interoperability and the institutionalization of formal and complete processes.
• Interoperable: processes have well-defined measures, and data from monitoring is analyzed and used for further improvement iterations.
The process areas contain the good practices for being interoperable, while the interoperability levels propose an improvement roadmap. The roadmap is necessary because the amount of good practices contained in the model is too large to be adopted all at once. Moreover, as has already been mentioned, this is not advisable for SMEs, as it would take a lot of time, stop the business activity, consume a great amount of resources and have a high probability of failure.
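To make the structure of the model more concrete, the following sketch encodes levels, process areas, objectives and practices as nested data in Python; the objective and practice texts are invented examples and do not reproduce the actual content of the model.

```python
MATURITY_LEVELS = ["Initial", "Performed", "Modelled",
                   "Integrated", "Interoperable"]

PROCESS_AREAS = ["Business processes", "Organization",
                 "Products and services", "Systems and technology"]

# (level, process area) -> {objective: [practices]}; practices would in turn
# be refined into sub-practices and work products
MODEL = {
    ("Performed", "Business processes"): {
        "Define key processes informally": [
            "Document the order-handling process per department",
        ],
    },
    ("Modelled", "Systems and technology"): {
        "Define the e-business strategy per project": [
            "Agree data-exchange formats with the main customers",
        ],
    },
}
```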
4 Assessment Method

This section describes the method that will be used to assess the degree to which an organisation has adopted the good practices contained in the interoperability maturity model introduced in this paper. The assessment method defines the activities, roles and work products to be used during the assessment in order to produce a profile representing the interoperability maturity of the organisation. Besides, the assessment produces a set of strengths and improvement actions that can be used to identify improvement opportunities that could foster the organisation’s capacity of being interoperable. When the assessment method is used inside the improvement cycle, it provides a sound basis for the initial evaluation and the final confirmation of the results. The method takes into account the SME constraints and establishes mechanisms to reduce the time and effort required to complete the assessment. Instead of performing one huge assessment, it focuses the assessment on different stages of interoperability. The focus of the assessment can be determined by a short questionnaire-based self-assessment (illustrated in the sketch below).

There exist other formal assessment methods that have been used as a basis for the definition of this method, such as the ones used with CMMI® [19][20][21] and SPICE [23]. CMMI® and SPICE are software maturity models, and they provide formal assessment methods to support the evaluation of a software-intensive organisation against the model practices. A similar approach will be used for the evaluation of the interoperability maturity model. There are other assessment approaches usually used in business improvement initiatives, such as the SWOT Analysis (Strengths, Weaknesses, Opportunities and Threats) [24], but these approaches are not specifically designed for working with a reference model. Therefore, they are not useful for benchmarking the improvements achieved by the initiative against an exemplar model. The activities of the method are structured in four main phases:

• Assessment Preparation: The purpose of the assessment preparation is to ensure that all aspects of the assessment planning have been carried out properly. Taking into account SME constraints, this phase includes activities to reduce the scope of the assessment in order to be able to complete it in less than a week. This is achieved through a self-assessment that helps to obtain a fast image of the strengths and weaknesses of the SME.
• Assessment Execution: The purpose is to determine in a systematic way the capacity of the processes under evaluation.
• Assessment Reporting: The purpose is to ensure that the results of the assessment have been correctly communicated and archived.
• Assessment Follow-up: The purpose of the assessment follow-up is to learn from the experiences gathered along the evaluation in order to improve future evaluations.

Of these four main phases, the first three are mandatory and should be performed in any evaluation. The last phase is optional but recommended. It is possible to perform an assessment without this phase, but recording and archiving the lessons learnt will improve the quality of subsequent assessments.
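As an illustration of how the questionnaire-based self-assessment could focus the subsequent one-week assessment, the sketch below maps questionnaire scores per process area to a focus stage; the scoring scale and the focusing rule are assumptions made for illustration only.

```python
LEVELS = ["Initial", "Performed", "Modelled", "Integrated", "Interoperable"]

def assessment_focus(scores: dict) -> str:
    """scores: process area -> 0..4 from the short questionnaire.
    Focus the detailed assessment on the stage just above the weakest area."""
    weakest = min(scores.values())
    return LEVELS[min(weakest + 1, len(LEVELS) - 1)]

focus = assessment_focus({"Business processes": 2, "Organization": 1,
                          "Products and services": 2,
                          "Systems and technology": 0})
print(focus)  # Performed
```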
5 Case Study

Nowadays, one of the trends in many markets is the customised provisioning of end products to the final customer. The perception that the purchased product perfectly meets the needs and desires of customers is an essential value for them. For example, in the automotive industry OEMs (Original Equipment Manufacturers) can produce up to 100,000 distinct cars, and there is a choice of 10,000 different combinations of colours, engine, equipment and other aspects for each model [22]. Similar cases can be found in industries like furniture, textile, etc. Besides, organisations continue to have the constant need to increase earnings and stay competitive. The implications of this market trend are the increasing complexity of products and of the relationships among providers. In order to keep pace with this demanding environment, organisations are forced to increase efficiency along the supply chain. The main objective is to minimise costs through reduced stocks, achieved through the integration of the different partners in the supply chain, from the raw material suppliers to the final vendors. Information and communication technologies focused on B2B (Business to Business) processes can be really helpful at this stage. Large firms adopted standards for data interchange like EDI (Electronic Data Interchange) years ago and now, with the widespread use of web technologies, they are starting to adopt web-enabled B2B platforms. In this scenario, the smaller companies providing materials to these large-scale enterprises are obliged to conform to the newly established rules if they do not want to be ruled out of the market. The focus of this case study is the middle-tier SME that provides assembled materials (doors) to OEMs or retailers (furniture development organisations) (Fig. 3). This SME in turn also interacts with upper-tier suppliers which may provide it with raw materials (wood) or components (handles). All the companies in this chain comprise the extended supply chain of industries like automotive, furniture, textile, electronics, etc.
Fig. 3. Mid-tier SME supplier
The final aim of the case study is to understand the interoperability barriers of the mid-tier supplier with its upper-tier suppliers and the final customer, and to provide a solution that will overcome the identified problems. For this purpose, these are the steps that will be followed:

• Identification of the business objectives of the SME, in this case the doors development company; here, those related to the improvement of the relationship with its clients, the furniture development companies.
• Identification of the different types of interoperability barriers at different enterprise levels: mechanisms and channels for issuing RFQs (requests for quotation), and mechanisms for monitoring the state of issued orders.
• Definition of an improvement plan based on the desired situation.
• Deployment of the necessary ICT infrastructure for the fulfilment of the plan.
• Verification and validation of the results.
• Transfer of the results through dissemination activities.
This case study is primarily focused on the testing of the interoperability maturity model and the assessment method. Therefore, only one improvement cycle will be performed in the SME. The testing of the improvement cycle is a much longer activity that could not be performed in the context of the project.
6 Results and Business Benefits

The expected result from the application of the strategy for becoming and staying interoperable in SME environments is the modification of the organisational culture towards a more interoperability-oriented attitude. The organisation should proactively find, evaluate and integrate new ways of interacting with other organisations in order to support its business objectives. Besides, the strategy takes into account the changing nature of the business context and provides a continuous improvement cycle that accommodates this evolution and supports the organisation in this constant activity. The usage of an interoperability maturity model, and the performance of assessments against that model, will allow the organisation to benchmark its progress over time towards the ideal situation represented by the interoperability maturity model.
7 Related Work

The approach introduced in this paper is in line with many existing quality models. These quality models are composed of a set of good practices that characterise leading organisations in a certain domain. Initiatives like the MBNQA (Malcolm Baldrige National Quality Award) [25], EFQM (European Foundation for Quality Management) [12], ISO 9000 [26] and Six Sigma [27] are focused on business process quality. On the other hand, initiatives like ITIL (Information Technology Infrastructure Library) are focused on the delivery of good-quality IT services. Other initiatives like CMMI (Capability Maturity Model Integration) [28] or SPICE [29], and other examples mentioned in previous sections of this document, are models focused on improving certain areas of organisations, and their focus is not always on interoperability. The strategy introduced here aims to serve as a holistic approach to interoperability in SMEs. It introduces interoperability practices embracing all the areas within an organisation, and structures them in a staged approach to support all kinds of SMEs, from those that remain in chaos to those that have already adopted some of the best practices that support interoperability. Besides, it provides practices that take into account the characteristics of the SME when applying the model to make them more interoperable.
8 Conclusion

SMEs are a key element in the European community: they employ the majority of the people and produce half of the gross value added. However, they seem to have problems adopting ICTs. ICTs are evolving very fast, and they are changing the way of doing business. Moreover, today there are certain kinds of trade that cannot be conducted without using ICT (e.g. eCommerce, eBanking, etc.). The capacity to use the latest means of interaction has become a critical factor, not only to increase earnings but also to stay in the market. This is true for virtually any kind of organisation, from large-scale enterprises to micro enterprises. It would be an error to think that buying proper computers and software is enough to support those new ways of doing business. Those new ways of doing business usually require changes in the way the organisation works; in fact, they can affect the processes, the people and the infrastructures of the organisation. Therefore, an improvement effort for increasing the interoperability of an organisation should be carefully evaluated, planned and executed. It is important to underline that this is not a one-shot effort: new means of interaction will appear in the future, and organisations should be able to integrate them in line with their business objectives.
This paper has presented a strategy for becoming and staying interoperable in SME environments. The motivation for the development of this strategy was the lack of a suitable strategy for introducing interoperability practices in an SME in a continuous way. This strategy rests on three pillars that have been shown to be successful in other domains:

• an Improvement Cycle to guide the improvement activities;
• a Maturity Model as a source of good practices;
• an Assessment Method to establish a basis for comparison.
These three pillars have been adapted to the introduction of interoperability practices in SMEs, taking into account the interoperability issues and the SMEs’ advantages and difficulties. In order to verify the adequacy of these instruments as building blocks of the strategy for becoming and staying interoperable in SME environments, a scenario for the use case has been identified. It has been decided to centre the validation of the strategy on an SME taking part in a supply chain, since supply chain management is nowadays one of the most demanding domains with respect to interoperability-related technologies and approaches.
Acknowledgement

This paper summarises some early results from the ServiciosB2B project. This work was partially funded by the Ministry of Industry, Commerce and Tourism of the Spanish Government through the Programme for the Promotion of Technical Investigation (PROFIT). We would like to thank our partners for their great feedback and collaboration during the execution of the project.
References

[1] Watson, P. (2005). Ideas: A History of Thought and Invention from Fire to Freud. HarperCollins. ISBN 0-06-621064-X.
[2] IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. Institute of Electrical and Electronics Engineers, New York, NY, 1990.
[3] European Commission, “The new SME definition: User guide and model declaration”, http://ec.europa.eu/enterprise/enterprise_policy/sme_definition/sme_user_guide.pdf
[4] European Commission, “Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises” (2003/361/EC).
[5] Schmiemann, M., “SMEs and entrepreneurship in the EU”, Statistics in Focus, European Commission, Industry, Trade and Services, 24/2006.
[6] European Commission, “Time to move up a gear: The new partnership for growth and jobs”, Communication from the Commission to the spring European Council, 2006.
[7] European Commission, “i2010 – A European Information Society for growth and employment”, Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions, 2005.
[8] Reo, D. A., Quintano, N., Buglione, L., “Measuring software process improvement: there’s more to it than just measuring processes”, ESI, FESMA 99, September 1999.
[9] Shewhart, W. A., “Statistical Method from the Viewpoint of Quality Control”, New York: Dover, 1939. ISBN 0-486-65232-7.
[10] Deming, W. E., “Out of the Crisis”, MIT Center for Advanced Engineering Study, Cambridge, MA, 1986. ISBN 0262541157.
[11] CMU/SEI-96-HB-001, “IDEAL(SM): A User’s Guide for Software Process Improvement”, Bob McFeeley, February 1996.
[12] EFQM, “The EFQM Excellence Model – Improved Model”, European Foundation for Quality Management, 1999.
[13] SONIC, “SOA Maturity Model”, http://www.sonicsoftware.com/soamm
[14] European Commission (2004). “European Interoperability Framework for Pan-European e-Government Services”.
[15] Schekkerman, J., “Extended Enterprise Architecture Maturity Model”, Version 2.0, 2006, http://www.enterprise-architecture.info/Images/E2AF/Extended%20Enterprise%20Architecture%20Maturity%20Model%20Guide%20v2.pdf
[16] C4ISR Interoperability Working Group, Department of Defense, “Levels of Information Systems Interoperability (LISI)”, Washington, DC, 1998.
[17] Clark, T., Jones, R., “Organisational Interoperability Maturity Model for C2”, 1999 Command and Control Research and Technology Symposium, June 1999.
[18] ATHENA Integrated Project (2005). “Framework for the Establishment and Management Methodology”, Deliverable D.A1.4.
[19] CMU/SEI-2006-HB-002, “Standard CMMI® Appraisal Method for Process Improvement (SCAMPI(SM))”, Version 1.2, SCAMPI Upgrade Team.
[20] CMU/SEI-95-TR-001, ESC-TR-95-001, “CMM Appraisal Framework (CAF)”, Version 1.0, Steve Masters, Carol Bothwell, February 1995.
[21] CMU/SEI-2001-TR-033, ESC-TR-2001-033, “CMM®-Based Appraisal for Internal Process Improvement (CBA IPI)”, Version 1.2, Donna K. Dunaway, Steve Masters, November 2001.
[22] Sachon, M., Albiñana, D. (2004). “Sector español del automóvil: ¿Preparado para el e-SCM?”. EBCenter PwC-IESE, online at www.ebcenter.org
[23] ISO/IEC TR 15504-3:1998(E), “Information technology - Software process assessment - Performing an assessment”.
[24] Learned, E., Christensen, C., Andrews, K., Guth, W., “Business Policy: Text and Cases”, Homewood, R. Irwin, 1969.
[25] NIST, “Baldrige National Quality Program, Criteria for Performance Excellence”, http://www.quality.nist.gov/Business_Criteria.htm
[26] ISO/IEC, “Quality management systems - Fundamentals and vocabulary”, ISO 9000:2005, 2005.
[27] Pande, P. S., Neuman, R. P., Cavanagh, R. R., “The Six Sigma Way: How GE, Motorola, and Other Top Companies are Honing Their Performance”, McGraw-Hill, April 27, 2000. ISBN 0071358064.
[28] CMU/SEI-2006-TR-008, ESC-TR-2006-008, “CMMI® for Development, Version 1.2”, CMMI-DEV, V1.2.
[29] ISO/IEC TR 15504-2:1998(E), “Information technology - Software process assessment - Reference Model”.
Information Security Problems and Needs in Healthcare – A Case Study of Norway and Finland vs Sweden

Rose-Mharie Åhlfeldt and Eva Söderström

School of Humanities and Informatics, University of Skövde, P.O. Box 408, Skövde, Sweden
rose-mharie.ahlfeldt;[email protected]
Abstract. In healthcare, the right information at the right time is a necessity in order to provide the best possible care for a patient. Patient information must also be protected from unauthorized access in order to protect patient privacy. It is also common for patients to visit more than one healthcare provider, which implies a need for cross-border healthcare and a focus on the patient process. Countries address these issues differently. This paper focuses on three Nordic countries, Norway, Sweden and Finland, and their information security problems and needs in healthcare. Data was collected via case studies, and the results were compared to show both similarities and differences between the countries. Similarities include the overly wide availability of patient information, an obvious need for risk analysis, and a tendency to focus more on patient safety than on patient privacy. Differences include the degree of patient involvement in their own care and the approach to exchanging patient information.

Keywords: Information security, healthcare informatics, patient safety, patient privacy
1 Introduction

Information Technology (IT) in healthcare has the potential to increase the welfare of citizens as well as improve the efficiency of healthcare organizations. The demands on the healthcare sector in the Nordic countries come from an aging population, a need for seamless service processes, an increasing demand for care in patients' homes, a demand for more information and participation, etc. [1]. Even if the Nordic countries are at the forefront with regard to the use of IT and the Internet in society as a whole, the implementation of IT in healthcare has been slow [1]. When IT solutions are applied in healthcare, especially in a distributed fashion, information security is a critical issue [2], [3].
The aim of this paper is to identify similarities and differences concerning the problems and needs of information security in a distributed healthcare domain. The research method consisted of two minor case studies in Norway and Finland. The results were compared with a previous, similar study in Sweden [4], also incorporating an existing information security model. The contribution is a holistic view of information security, which is necessary when preparing for and alleviating future problems and needs. In particular, the holistic view is a necessary contribution when patient information is transferred between different healthcare providers. The paper is structured as follows: information security and the Info Sec model are presented in chapter 2, while chapter 3 introduces the state of the art of information security in healthcare. Chapter 4 presents the research approach, and chapter 5 presents the results of the study. Chapter 6 discusses and compares the results with the Swedish study. A summarizing discussion is given in chapter 7.
2 Information Security Model

Information security concerns security issues in all kinds of information processing. According to SIS [5], information security is defined as the protection of information assets, and its aim is to maintain the confidentiality, integrity, availability and accountability of information. In order to achieve these four characteristics, both technical and administrative security measures are required. Administrative security concerns the management of information security: strategies, policies, risk assessments, etc. It includes structured planning and implementation of security work, and concerns the organizational level and thus the organization as a whole. Technical security concerns the measures to be taken in order to achieve the overall requirements. It is subdivided into physical security (physical protection of information) and IT security (security for information in technical information systems). IT security is further divided into computer security (protection of hardware and its contents) and communication security (protection of networks and media used for communication). A visual overview of the characteristics and security measures is given in an information security model (Figure 1), which combines the mentioned concepts.
Fig. 1. Information Security Model
With information security in the middle, the four characteristics are at the top and the security measures at the bottom. The latter are taken directly from the SIS conceptual classification [5].
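The model's two-level structure lends itself to a simple machine-readable form, which the study effectively uses when it indexes interview answers (A1a, B1b, ...) into model categories in Figures 2 and 3. The following Python sketch is purely illustrative and not part of the original study; all class, field and category names are our own assumptions.

```python
# Illustrative sketch (not from the paper): the Info Sec model as a
# two-level taxonomy, used to index interview answers such as "A1a"
# into model categories, as the study does in Figures 2 and 3.
from dataclasses import dataclass

CHARACTERISTICS = {"confidentiality", "integrity", "availability", "accountability"}

# Security measures as a child -> parent map mirroring Figure 1.
MEASURES = {
    "administrative security": "information security",
    "technical security": "information security",
    "physical security": "technical security",
    "IT security": "technical security",
    "computer security": "IT security",
    "communication security": "IT security",
}

@dataclass
class Answer:
    respondent: str  # "A".."D"
    question: str    # e.g. "1a"
    category: str    # a characteristic or a security measure

    @property
    def code(self) -> str:
        # Index notation used in the paper's figures, e.g. "A1a".
        return self.respondent + self.question

def classify(answers):
    """Group answer codes by the model category they were placed in."""
    groups = {}
    for a in answers:
        if a.category not in CHARACTERISTICS and a.category not in MEASURES:
            raise ValueError(f"unknown category: {a.category}")
        groups.setdefault(a.category, []).append(a.code)
    return groups

# Example: respondent A's spontaneous answer (1a) concerned IT security.
print(classify([Answer("A", "1a", "IT security")]))  # {'IT security': ['A1a']}
```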
3 Information Security in Healthcare

Swedish healthcare is currently undergoing extensive changes. To some extent, care is moving out of the hospitals and is instead being carried out in other forms and locations. Patients with complex care needs require extensive care activities, for instance, in their homes. They have contact with several different healthcare providers, both within municipalities and county councils. These changes increase the requirements for secure communication, as well as for cooperation between several different organizations and authorities. The National Board of Welfare [6] identifies cooperation and information transfer between different healthcare providers as a risk area for patient safety. Computerized systems, as well as all media used for transmitting patient information between different healthcare providers, constitute risk factors. The Board states that healthcare providers must take systematic measures in order to achieve sufficiently secure routines for the exchange of patient information [6].

IT systems are being extended to more and more users, but proper functions that control unauthorized access to patient information are still lacking. The Swedish Data Inspection Board declares in its report that county councils, in practice, have little or no control over who has access to information about specific patients [7]. Depending on the authorization method used, various additional measures and routines may be required to force users into making active choices when accessing sensitive data about a specific patient. These demands are not unique to Swedish healthcare. Even if traditions and legal aspects differ between countries, the main problems are the same. Strong authentication, derived services such as authorization, access control, accountability, integrity and confidentiality are pressing demands to fulfil [8], [9], [10]. In a distributed healthcare domain, there is also a particular need for
a process-oriented approach [11]. Our research focuses on the patient process, since it expands beyond the boundaries of one organization and consequently leads to patient information being available to more actors. There is a need for better awareness, improved procedures, improved software, and so on [12]. The main purpose of information security in healthcare is twofold: to achieve a high level of patient safety, i.e. to provide patients with the best care with the right information at the right time; and to achieve a high level of patient privacy, i.e. to protect sensitive patient information from unauthorized access. It is difficult to achieve both aims simultaneously, and one of them is often compromised. Hence, a balance between them is necessary in healthcare [13]. Patient safety is here related to availability and integrity, while patient privacy is related to confidentiality and accountability. We return to the relationship between the two concepts in the results.
4 The Case Studies

The research was constructed as two minor case studies with a total of four interviews. The aim of the case studies was twofold: to investigate how healthcare management perceives current information security from a holistic perspective, and to explore their view of information security when patient information is exchanged. The studies were conducted at a national level in Norway and Finland. Three groups of questions were in focus:

1) What main problems and needs, or alternatively positive effects, of information security exist in healthcare from a national perspective?
2) What problems and needs, or alternatively positive effects, exist when patient information is exchanged between different healthcare providers?
3) How can the present balance between patient safety and patient privacy be characterized, and what tendency can be discerned for the future?

From each country, one interest organization (in Norway KITH – The Norwegian Centre for Informatics in Health and Social Care [14]; in Finland STAKES – The National Research and Development Centre for Welfare and Health [15]) and one information security manager from a large hospital were selected. The respondents were selected for their good knowledge and experience of Norwegian and Finnish healthcare respectively. The studies used semi-structured interviews based on a set of main questions. Nine questions, derived from the Info Sec model, concern information security from a holistic view, the exchange of patient information between different healthcare providers, and the balance between the need for availability of patient information and the protection of patient privacy. The questions are numbered from 1 to 9; the first is divided into four parts, 1a – 1d. The four respondents are denoted A – D, with A and B from Norway, and C and D from Finland. The answers from A to the first question are hence denoted A1a, A1b and so on in Figure 2. All the answers are structured in a similar manner.
5 Results

This chapter presents a summary of the results for the three main question groups of the case studies.

5.1 Problems and Needs

The questions were structured to address general problems and needs first, before the specific questions about security during patient information exchange. We follow this structure when accounting for the results. The answers to questions 1, 3, 4, 6, 7 and 8 concerning problems and needs have been classified and placed in the Info Sec model in Figure 2.

1a Information security problems – spontaneous

Both Norwegian respondents mentioned the problem of unavailable patient information, within as well as across organizations. They also emphasized that only those who need the information should have access to it. Respondent C identified employees as the weakest link, although this is difficult to pin down. There are too many deficiencies concerning, for example, audit logs, regulations for encryption of information inside healthcare provider organizations, and medical doctors being reluctant to ask for consent. Respondent D claimed that the major problem is a lack of knowledge concerning the applications' security level: suppliers may have said one thing, but the reality is something else.
Fig. 2. Results from questions 1, 3, 4, 6, 7 and 8 indexed and classified in the Info Sec Model
1b Information security problems – administrative

The respondents mentioned that lack of knowledge, human behavior and comprehension are the main administrative problems. There is a lack of resources
for information security activities, and routines are insufficient to meet the security requirements. Furthermore, systems monitoring is also a problem: 24-hour monitoring is too expensive, although it really is necessary.

1c Information security problems – physical security

"Open" hospitals are a major physical security problem in both Norway and Finland. Although new hospitals in Norway equip their entrances with cameras etc., it is still very easy to enter the wards, where much information is visible, both physically and on screens, according to respondent B. Hospitals in Finland use group logins to buildings, and computers have been stolen from the central store, according to respondent D. Respondent C pointed out that physical security in small and medium size healthcare centers is not adequate.

1d Information security problems – IT security

Authentication and access level problems are obvious in both Norway and Finland. Norway has implemented smart cards for the network, but it is an expensive technique with usability problems, such as users forgetting the cards at home or not bringing them to other computers. Norway also lacks an automatic tool for checking logs, which is unacceptable since the access levels are too wide, according to respondent B. Respondent C claims that their whole password system is not very secure at all. Outsourcing is also a tricky area: "You cannot easily see what you can outsource. You can outsource the technology but you cannot outsource the rest of the responsibility. It has to be included in the contract" (respondent C). Respondent D mentioned problems such as too many uncontrolled output sockets in the systems and external partner access to the systems.

3 Present substantial security problems with cross-border healthcare information from a patient process view

In Norway, information is attached to one organization, and sharing it is largely restricted for legal reasons. Technically, systems are hard to integrate, and structured information is difficult to transfer between the systems. According to respondent B, although they can transfer information, this is not allowed under their legislation. In one region, all healthcare organizations (except municipalities) have the same system, making it easy to transfer information between them. Finland also has interoperability problems, which are technical in a short-term sense; the main problems are semantic, according to respondent C. Respondent D claims that sending information between hospitals and healthcare centers is a minor problem; instead, the main problem is serving the systems of the many organizations.

4 and 6 Future substantial security problems with cross-border healthcare information from a patient process view, and security measures for solutions

The Norwegian respondents agree on the importance of the technology working and being thought through from the start. Risk analysis is needed for the whole healthcare sector, and consequences should be dealt with. The main problem is the daily work of ordinary co-workers, according to respondent A: "It is important to find a balance". Responsibility issues are also important for the
future, such as obtained consent, documentation and information duty, and distribution to the research community. In Finland, respondent C states that the next generation of health record systems will be more consistent and more accessible, but changing IT will take more than one generation. According to respondent C, finding one general definition for merging data is too difficult. Information overflow is another problem: "The medical doctors still have a maximum of 15 minutes for the patient and five minutes for discussion. They do not have time to check all the information" (respondent C). Respondent D also mentioned the problem of demanding service level agreements. To solve these problems, the Norwegian respondents suggested supporting as well as presentation tools, improved analysis tools for checking logs, and more education concerning network applications. The Finnish respondents emphasized the need for electronic healthcare information, and respondent C suggested a reorganization of the whole healthcare sector. Furthermore, tools for checking quality will become more common and useful, as well as necessary for the healthcare sector in order to improve the quality of care. Respondent D suggested the need for standards and service agreements.

7 and 8 Future security problems concerning availability of health information, and measures for solutions

Confidentiality aspects are a common risk factor concerning the availability of healthcare information for all respondents. The Norwegians stress that the availability of many patients' health records is too broad, revealing a need to revise authorization allowances. Respondent A pointed out the risk of criminal blackmail, particularly for well-known persons: "At present, we have a peaceful regime both in Norway and Sweden, but in other types of political regimes, our openness can be of great importance and be misused". The Finnish respondents also worry about how to control access levels, for example: "We have to think about how we control the access to patient information and other data because the access is very wide" (respondent C). The political situation was also considered: "Political situations outside Scandinavia, health and fitness records and of course the access, is controlled by law, but we have seen that a law can be changed in some weeks like in the UK" (respondent C). The Norwegian respondents suggest maintaining basic principles concerning privacy and data protection. The access control systems must be improved: "We must implement a 'need-to-know' model even if it is administratively both intensive and expensive" (respondent A); and "We have to do risk analyses in order to set requirements for privacy" (respondent B). In Finland, respondent D emphasizes that patients must have control over their own records, while respondent C is more focused on technical issues: "New technology must be implemented, for instance the new generation of PKI".
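The "need-to-know" model that respondent A calls for, combined with the automated log checking the Norwegian respondents miss, can be sketched in a few lines. The sketch below is our illustration only, with assumed identifiers, relationship data and log format; it is not drawn from any of the systems discussed.

```python
# Illustrative sketch (ours, not the respondents' systems): a
# "need-to-know" access check with audit logging, plus the kind of
# automated log analysis the Norwegian respondents ask for.
from datetime import datetime, timezone

# Assumed data: active care relationships (clinician_id, patient_id).
care_relationships = {("dr_a", "pat_1")}
audit_log = []

def read_record(clinician_id, patient_id):
    """Grant access only on an active care relationship; log every attempt."""
    allowed = (clinician_id, patient_id) in care_relationships
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": clinician_id,
        "patient": patient_id,
        "granted": allowed,
    })
    return allowed

read_record("dr_a", "pat_1")  # granted: an active care relationship exists
read_record("dr_b", "pat_1")  # denied: no need-to-know

# Automated log check: flag denied (or, in a fuller model, anomalous) access.
suspicious = [entry for entry in audit_log if not entry["granted"]]
print(suspicious)
```

Note that such a model is, as respondent A says, administratively intensive: the care-relationship data must itself be kept current for the check to be meaningful.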
5.2 Positive Aspects

This section follows the same structure as the previous one: the general positive aspects are presented first, in Figure 3, before proceeding to the specifics about the exchange of patient information. The answers to questions 2 and 5 concerning the positive aspects are presented in Figure 3.
Fig. 3. Results from questions 2 and 5 concerning the positive aspects
2 Present positive aspects of information security in healthcare

In both Norway and Finland, patients are allowed to read their own patient records, implying "power of citizens" (respondent C). When paper-based records were used, patients did not have this opportunity. The computerization of patient records is the most positive aspect of information security in healthcare according to all the respondents: "More people can have access to information quickly and simultaneously, and depending on IT, we can log everyone who has accessed the record; compared with the paper-based system, this is very positive" (respondent B); "People cannot do whatever they want, with IT you have more orderliness" (respondent D).

5 Future positive aspects of information security in healthcare

Future positive aspects will be more or less the same as those already mentioned. Even though improvements are still needed, there is a sense of faith that the healthcare sector will succeed. Information is available to the healthcare actors who really need it, both internally and externally. The information duty is improved and easier to fulfil with IT than with paper-based systems. Hence, IT development must continue in healthcare.

5.3 Patient Safety and/or Patient Privacy?

The answers to question 9 are classified according to patient safety and patient privacy. These concepts have extended the upper part of the Info Sec
model and have become related to its four upper characteristics (Figure 4). The results indicate how patient safety and patient privacy often contradict one another, but also the desire to balance them. Three respondents claim the current focus is on patient privacy, since legislation is focused on protecting this issue. One respondent considers that patient safety dominates healthcare, while patient privacy dominates legislation.
Fig. 4. Results from question 9 – the balance between patient safety and patient privacy
All the respondents state the necessity of achieving a balance between the two concepts, even if two of them want it to lean more towards safety. It should be possible to achieve a high level of patient safety and an acceptable level of patient privacy. Respondents A and D claim that in the future there will be a balance between privacy and safety: "We must try to find the right way. Sometimes we have to focus more on patient safety and sometimes on patient privacy" (respondent A); and "Patient safety is more technically focused and therefore easier to solve. The privacy part is more complex" (respondent D). Respondents B and C state that the focus will move towards patient safety in the future: "Safety and quality is coming and will be measured" (respondent C). Respondent B adds that even if privacy seems to be less in focus at present, the pendulum will probably swing back in the near future.
6 Discussion and Comparison with the Swedish Study

The results from the case studies are now compared to a similar study in Sweden; for details about the Swedish study, we refer to [4]. The analysis is organized according to the categories: information security problems and needs, positive aspects of information security in healthcare, and patient safety and patient privacy.

6.1 Problems and Needs

The three countries have approximately the same social structure. They also have basically the same problems and needs concerning information security in healthcare. With regard to technical security, they share the problems concerning
authentication techniques, even if Norway is somewhat further along with its smart cards. All three countries also lack authorization techniques and tools for log management. In administrative security, the main problems are: an overly wide availability of patient information, incomplete work routines, a lack of security awareness and a lack of risk analyses. The integration dilemma is also a common problem. Legislation must be adapted to the new requirements, and the responsibilities for patient information must be revised. Thus far, Norway and Finland have implemented more standards and policies than Sweden, while the Swedish study did identify the need for strategies, policies and standards. In Norway and Finland, patients have the right to access their logs and consequently have more control over their own information. This is currently not possible in Sweden, but under the proposed Data Protection Act patients may be granted access to their logs where possible, although they will not have the right to demand them.

6.2 Positive Aspects

All countries agree that the most positive aspect concerning information security in healthcare is the computerization of healthcare records. Security issues still remain, but the positive aspects are clear: information availability for healthcare staff both internally and externally, a more efficient care flow, and improved quality of care and patient safety. Norway and Finland emphasize the positive aspect of more power to the patients, but this is, as mentioned, not yet possible in Sweden.

6.3 Patient Safety and Patient Privacy

Our research shows similarities concerning the balance between patient safety and patient privacy, even though the respondents disagree on where the present focus is directed. They agree that legislation focuses on privacy, while organizations focus on safety. The Norwegian and Finnish respondents all state the necessity of achieving a balance between the two concepts. The Swedish respondents are divided; three want a balance while two claim the focus should be on patient safety. All the countries claim that in the future the focus will shift to patient safety, while one respondent also mentioned the importance of privacy.

6.4 Comparison Summary

The similarities between the countries concern problems and needs, positive aspects, and patient safety versus patient privacy, as illustrated in Table 1.
Table 1. Similarities between the three countries

Problems and needs:
- Lack of authorization techniques (technical security)
- Lack of log management techniques (technical security)
- Too wide availability of patient information (administrative security)
- Incomplete work routines (administrative security)
- Lack of security awareness (administrative security)
- Need for risk analyses (administrative security)

Positive aspects:
- Computerization of healthcare records

Patient safety and patient privacy:
- No consensus on the current focus between the countries
- Legislation is focused on privacy while organizations emphasize safety
- Tendency to focus more on safety in the future
The computerization of healthcare records, which is a main issue in all three countries, results in availability of information, better care flow, and improved quality of care. There are also differences, illustrated in Table 2.
Table 2. Differences between the three countries

Authentication technique problems:
- Sweden: Ongoing pilot project with smart cards
- Norway: Further advanced with smart card implementation, e.g. communication networks
- Finland: Further advanced with smart card implementation

Exchange of patient information:
- Sweden: Few standards and policies
- Norway: More standards and policies
- Finland: More standards and policies

Patients' own access to information:
- Sweden: Patients lack the right to access their logs, thus little power to patients
- Norway: Patients have the right to access their logs, thus more power to patients
- Finland: Patients have the right to access their logs, thus more power to patients

Patient privacy and patient safety:
- Sweden: No consensus among respondents
- Norway: Balance is needed
- Finland: Balance is needed
The first two rows of the table indicate problems and needs, the third row covers positive aspects, and the fourth concerns the discussion of privacy versus safety. Even though the countries share the problems of authentication techniques, they differ in how far they have advanced in addressing them. Information exchange is also, in part, a common dilemma, but the countries again differ in how far they have come in dealing with the issue. Interestingly, Norway and Finland both allow patients to access their logs. The general similarities suggest that Sweden could do the same, but as mentioned, this will require legislative changes.
7 Summarizing Discussion

The aim of this paper is to identify problems and needs concerning information security in a distributed healthcare domain. Two minor case studies were conducted in Norway and Finland, and the results were compared to a similar pre-existing Swedish study. Consequently, this research provides a more holistic, international overview. The results were presented and classified according to the Info Sec model. Furthermore, the need to relate and analyze patient safety and patient privacy was also emphasized.

There are many similarities between Norway, Finland and Sweden. The differences are mainly found at a more abstract level. Sweden has only recently introduced a national strategy for IT in healthcare, while both Norway and Finland have worked in a more structured way for a longer period of time. Both Norway and Finland are also more centralized in their healthcare IT development. Another interesting difference is the lack of patient involvement in Sweden compared to both Norway and Finland. The respondents claim that patient involvement is a useful preventive measure for protecting patient privacy: when patients have the right to see who has accessed their records, the healthcare staff is more careful about avoiding misuse. However, protecting privacy is a very individual task, because people have different attitudes towards privacy, and even personal opinions can change depending on the situation. The involvement of patients could therefore be a good complement to the protection of their own privacy. This is a challenge for all three countries, but particularly for Sweden, since it does not allow patients to access their records in the same way as Norway and Finland.

Patient safety and patient privacy in healthcare must both be accomplished in order to achieve good quality of care and maintain the trust of patients. This implies that information security must be taken seriously and that the balance between the two concepts should be given careful consideration. Moreover, the healthcare sector in general needs to examine and implement, more extensively, the existing security standards, frameworks and best practices, such as ISO/IEC 17799, ITIL, etc. Work is ongoing to adapt ISO/IEC 17799 to the healthcare sector (ISO 27799), but further efforts are needed to adopt other frameworks in the sector as well. Furthermore, future research needs to follow how the countries address both the problems and the differences nationally, as well as how they collaborate to identify common solutions. In addition, guidelines for establishing and achieving successful collaboration across national borders should be developed, which can also help establish better links with other countries. Such activities will be useful, not least since the mobility of students, the workforce and organizations keeps increasing.
References

[1] Norden, 2005. Health and Social Sectors with an "e". A Study of the Nordic Countries. TemaNord 2005:531. Nordic Council of Ministers, Copenhagen. ISBN 92-893-1157-6.
[2] Computer Sweden, 2006. Computer Sweden/Dagens Medicin, IT i vården, 2006-11-22 (in Swedish).
[3] Ministry of Health and Social Affairs, 2006. Nationell IT-strategi för vård- och omsorg. ISBN 91-631-8541-5 (in Swedish).
[4] Åhlfeldt, R-M. and Söderström, E., 2007. Information Security Problems and Needs in a Distributed Healthcare Domain - A Case Study. In Proceedings of the Twelfth International Symposium on Health Information Management Research (iSHIMR 2007), Sheffield, UK, July 18-20, 2007, pp. 97-108. ISBN 0 903522 40 3.
[5] SIS, 2003. SIS Handbok 550. Terminologi för informationssäkerhet. SIS Förlag AB, Stockholm (in Swedish).
[6] National Board of Health and Welfare, 2004. Patientsäkerhet vid elektronisk vårddokumentation. Rapport från verksamhetstillsyn 2003 i ett sjukvårdsdistrikt inom norra regionen. Artikelnr 2004-109-11 (in Swedish).
[7] Data Inspection Board, 2005. Ökad tillgänglighet till patientuppgifter. Rapport 2005:1 [online]. Available from: http://www.datainspektionen.se [Accessed 1 November 2005] (in Swedish).
[8] CEN TC 251, prENV 13729, 1999. Health Informatics - Secure User Identification - Strong Authentication using Microprocessor Cards (SEC-ID/CARDS).
[9] Smith, E. and Eloff, J. H. P., 1999. Security in healthcare information systems - current trends. International Journal of Medical Informatics 54, pp. 39-54.
[10] Blobel, B. and Roger-France, F., 2001. A systematic approach for analysis and design of secure health information systems. International Journal of Medical Informatics 62, pp. 51-78.
[11] Poulymenopoulou, M., Malamateniou, F. and Vassilacopoulos, G., 2003. Specifying Workflow Process Requirements for an Emergency Medical Service. Journal of Medical Systems, 27(4), pp. 325-335.
[12] Louwerse, K., 1998. Availability of health data; requirements and solutions - Chairpersons' introduction. International Journal of Medical Informatics, 49, pp. 9-11.
[13] Utbult, M., Holmgren, A., Larsson, R., and Lindwall, C. L., 2004. Patientdata - brist och överflöd i vården. Teldok rapport nr 154. Almqvist & Wiksell, Uppsala (in Swedish).
[14] KITH, 2007. Web page. Available from: http://www.kith.no [Accessed September 2007].
[15] STAKES, 2007. Web page. Available from: http://www.stakes.fi [Accessed September 2007].
[16] ISO/IEC 17799, 2000. Information Technology - Code of practice for information security management. Technical Report. International Organization for Standardization, Geneva, Switzerland.
[17] ITIL, 2008. Web page. Available from: http://www.itilofficialsite.com/home/home.asp [Accessed January 2008].
Impact of Application Lifecycle Management – A Case Study

J. Kääriäinen and A. Välimäki
Abstract. Lifecycle management provides a generic frame of reference for the systems and methods needed to manage all product-related data during the product's lifecycle. This paper reports experiences from a case study performed in the automation industry. The goal was to study the concept of Application Lifecycle Management (ALM) and to gather and analyse first experiences as a company moves towards distributed application lifecycle management. The results show that several benefits were gained when introducing an ALM solution in the case company. The research also produced a first version of an ALM framework that can be used to support practical ALM improvement efforts. In this case, the experiences show that lifecycle activities should manage the artefacts produced at different stages of a project lifecycle and keep all activities synchronised. The challenge resides in how to generate efficient company-specific implementations of ALM for complicated real-life situations.

Keywords: Industrial case studies and demonstrators of interoperability, Interoperability best practice and success stories, Tools for interoperability
1 Introduction

The ability to produce quality products on time and at competitive cost is important for any industrial organization. Nowadays, companies are seeking systematic and more efficient ways to meet these challenges. Globalization and the use of subcontractors are becoming the norm in current business environments. The shift from traditional one-site development to a networked development environment means that product development is becoming a complex global undertaking with several stakeholders and various activities. In this kind of environment, people are likely to have difficulties understanding each other and problems sharing a common view of product-related data. Therefore,
companies have to search for more effective procedures and tools to coordinate their increasingly complex development activities. Modern approaches to product development need to take the business environment into account, and the product's whole lifecycle must be covered, from initial definition up to maintenance. Such a holistic viewpoint requires efficient deployment of lifecycle management. Setting up a comprehensive, smoothly running and efficiently managed product development environment requires effective lifecycle processes and tool support. In practice, this means deploying the concepts of Product Lifecycle Management (PLM) and, from a SW development point of view, Application Lifecycle Management (ALM). In the literature and among tool vendors, the term PLM is established [1]. Within the past few years, the concept of "Application Lifecycle Management" (ALM) has emerged to denote the coordination of activities and the management of artefacts during a SW product lifecycle, such as requirements, designs, source code, builds, documents, etc. ALM is quite a new term and therefore there are no extensive publications that deal with it. However, tool vendors have recently used the term for tool suites or approaches that support the various phases of a SW development lifecycle. While ALM is quite a new term, there is an apparent need to report practical ALM experiences from industry.

This paper reports experiences from a case study performed in a company operating in the automation industry. The case study is part of a broader ALM improvement effort in the case company, where the aim is to systematize ALM in distributed development projects that develop SW for SW-intensive systems. The previous solution for ALM was not good enough for future needs. The company expects that the new ALM solution will improve project visibility and efficiency. In this case study, the goal was to gather and analyse first experiences as the company moves towards distributed ALM. Since the whole concept of ALM is somewhat unclear, the research also aimed to create an ALM framework that can be used to evaluate the current state of an ALM solution in a target organization and to detect ALM elements that may need to be improved. The experience data was collected from the real users of ALM in the case company. The results reported in this paper comprise the results of the current state analysis and the perceptions of the project members when they estimated the impact of the new ALM solution on their daily work.

This paper is organised as follows: in the next section, the background and concept of ALM are identified. Then, in Section 3, the research approach is described, comprising a description of the industrial context and research settings. In Section 4, the results are introduced. Finally, the results are discussed and conclusions are drawn with some final remarks.
2 Related Research

Application Lifecycle Management (ALM) is quite a new term and therefore it is difficult to find definitions for it. One approach is offered by standards, such as ISO/IEC 12207, that present lifecycle approaches for software development. Doyle [2] argues that ALM is a set of tools, processes and practices that enable a
development organization to implement and deliver software according to such lifecycle approaches. In practice, this means that some kind of ALM solution exists in every company, even though toolsets specifically marketed as ALM suites were introduced only a few years ago. Schwaber [3] defines the three pillars of ALM as traceability, process automation and reporting. The role of ALM is to act as a coordination and product information management discipline. The purpose of ALM is to provide integrated tools and practices that support project cooperation and communication throughout a project's lifecycle (Fig. 1). It breaks the barriers between development activities with collaboration and a fluent flow of information. For management, it provides an objective means to monitor project activities and generate real-time reports from project data.
Fig. 1. Application Lifecycle Management facilitates project cooperation and communication.
Doyle & Lloyd [4] identify requirements definition and management as one of the most critical disciplines in ALM. One focal point of requirements management is requirements traceability, which was actively researched in the 1990s and early 2000s; see, for example, [5, 6, 7]. Traceability provides a means to identify and maintain relationships between developmental artefacts and therefore facilitates reporting, change impact analysis and information visibility throughout the product lifecycle. The roots of ALM tools lie in Configuration Management (CM) and Integrated Development Environments [8]. However, the problem with CM systems is that they are overly focused on code and fail to provide a more comprehensive view of the project, especially in terms of its requirements [9]. An important landmark on the road towards ALM tools was the rethinking whereby requirements and other developmental artefacts started being taken seriously by CM tool vendors [10]. Therefore, to meet the changing needs of the industry, configuration management systems had to be merged into infrastructures that support the entire SW development lifecycle [9, 11, 12].
Shaw [13] argues that with traditional toolsets, traceability and reporting across disciplines is extremely difficult. The same data is often duplicated in various applications, which complicates the traceability and maintenance of data. According to Doyle & Lloyd [4], when the results of processes are recorded in the same repository in which the development artefacts are managed, it is easy to produce a relevant set of related metrics that span the various development phases of the lifecycle. Traditionally, vendors have attempted to support lifecycle management with ALM tools by moving to a single repository or by suggesting practitioners move to a single vendor for all ALM tools [13]. These solutions can be very comprehensive and well integrated; their downside is that they lock a company into a single vendor. Another, more vendor-independent approach is the "application integration framework", used as a platform to integrate the several applications needed during the software development lifecycle. Examples of such frameworks are the Eclipse and ALF projects [14]. Schwaber [3] states that ALM does not necessarily require tools. Traditionally, lifecycle activities have been handled partly using manual operations and solutions; for example, handovers between different functions can be controlled using a paper-based approval process. However, these activities can be made more efficient through tool integration with process automation.
3 Research Approach

This section discusses the industrial context, phases and methods used in this research.

3.1 Industrial Context

The case company operates in the automation industry. The company produces complex automation systems in which SW development is a part of system development. Product development in the case company is organized according to product lines. This research focuses on two product lines consisting of several products that are partly developed at different sites. The case was further focused on two teams, each having five to six projects running in parallel. Depending on the project, work is geographically distributed over two or three sites. The same processes and tools are used in all projects. Previously, projects followed a partly iterative development process. Currently, projects have adopted the agile development method Scrum.

The company is in its first iteration of ALM improvement. Previously, the company's ALM solution for distributed development comprised several somewhat isolated databases for managing project-related data, such as version control, document management and fault management systems. The geographic distribution, as well as increasing complexity and efficiency demands, forced the company to seek more integrated solutions to coordinate distinct project phases and to provide a centralised project database for all project-related data. In practice, this meant the deployment of a commercial ALM tool together with the Scrum process.
3.2 Research Settings

This research was carried out in the following phases: research planning, questionnaire, interviews and analysis of results. A two-step approach was used for data collection. First, a questionnaire was given to project managers and project members. The respondents were asked about current practices related to ALM, their opinions on how the introduced ALM solution has affected daily work compared to previous solutions, and their opinions about things that are important for efficient application lifecycle management. They were also asked what kinds of challenges a distributed environment sets for operation. Based on the related research, the elements of ALM were identified. This framework has been used for organizing the questionnaire and interviews and for analysing the case results. The elements of the preliminary ALM framework are:

- Creation and management of project artefacts: how are different data items created, identified, stored and versioned in the various phases of a project lifecycle? All project data should be securely and easily shared with all stakeholders, and team communication should be supported.
- Traceability of lifecycle artefacts: how is traceability handled in a project lifecycle? Traceability provides a means to identify and maintain relationships between artefacts and therefore facilitates reporting, change impact analysis and information visibility throughout the product lifecycle.
- Reporting of lifecycle artefacts: how does the solution support reporting across a project lifecycle? The solution should facilitate the gathering, processing and presentation of process- and configuration-item-related information for an organization.
- Process automation and tool integration: how well do the tools support lifecycle processes and what kinds of tool integrations are there? An ALM solution should support the procedures of the project and facilitate fluent data exchange and queries between the various development and management tools.
After the questionnaire, project managers from the projects were selected as representatives and interviewed using semi-structured interviews. The main structure of the interview framework was similar to the structure of the questionnaire discussed above.
4 Results

This section presents the results obtained from the industrial case study. Fig. 2 and Fig. 3 illustrate the respondents' satisfaction with the earlier and new solutions based on the ALM elements. After that, each element is analyzed and presented as follows:

- ALM framework element: the issue to be addressed in an ALM solution.
- Solution: a description of how the issue is supported by the previous ALM solution and the new ALM solution.
- Experiences: experiences gained from the usage of the solution; strengths, weaknesses and improvement ideas for the solution in general, based on the case data.

Fig. 2. Respondents' satisfaction with the previous and new solutions (rated from 1 = poor to 10 = excellent).

Fig. 3. Respondents' satisfaction with the previous and new solutions per ALM element (creation and management of project artefacts; traceability of lifecycle artefacts; reporting of lifecycle artefacts; process automation and tool integration).
ALM framework element: Creation and management of project artefacts.
Solution:
- Previous solution: Local CM systems, a central document management database, programming tools, a central system/SW fault management database and a central test document database. Design tools, word processing tools and project management tools, for example, were also used.
- New ALM solution: An integrated ALM suite with a central database for storing and versioning lifecycle artefacts. It covers project management, requirements management, configuration management, the programming environment, a project portal and document management. In addition, there is a central test document database and a central system fault management database. Design tools and word processing tools, for example, are also used.
Experiences: The new ALM solution is a single-vendor integrated solution. Previously, projects used separate, somewhat isolated databases, e.g. for document
management, configuration management and task management, which was found to be complicated and difficult to keep synchronised. The ALM tool selection was influenced by the fact that the vendor of the new solution was the same as the vendor of the old CM system; therefore, the terminology of the new tool was somewhat familiar to the users. In the new solution, SW project artefacts such as management documents, requirements, tasks, design files, source code, builds and test documents can be stored in a central repository. Fig. 4 presents ratings of how well the respondents felt that the previous and new solutions supported the management of different project artefacts.
Fig. 4. Respondents' satisfaction with the previous and new solutions based on the management of SW project artefacts (project planning and management documents; requirements specifications; design documents; source code and builds; defects and bugs; test plans, cases and reports; rated from 1 = poor to 10 = excellent).
The respondents felt that the solution facilitates information sharing and communication because there is a single project portal for sharing project data. It is worth noting that globalization is nowadays a must; there is no real alternative. The company therefore faced the challenge of working in a global environment "locally", i.e. without the problems of geographical dispersion, and information technology had a big role in reducing geographic barriers. One downside is that the requirements management features of the solution are insufficient, even if considerably improved compared to the old solution. The templates used in the solution for data capture were not sufficient and will possibly need to be extended or tailored in the near future to meet the needs of the project. This is understandable, because each organisation, product or project is unique, with its own needs and characteristics. For example, attributes for requirements classification are missing from the standard process template, and the visualisation of the information hierarchy (requirements – tasks) was poor. The results also show that the use of communication tools can facilitate informal team communication and, in some cases, store the discussion history (e.g. a chat tool). Furthermore, since overall management has to coordinate development efforts and resources, it is important to be able to synchronize and prioritize different development projects with each other.
ALM framework element: Traceability of lifecycle artefacts.
Solution:
- Previous solution: Traceability information was collected and stored as logical links (e.g. embedded in file/item names or a comment field) or physical links (physical links between databases) between artefacts. Instructions for traceability existed.
- New ALM solution: The solution provides technical capabilities for linking the various lifecycle artefacts stored in the central ALM database, in addition to logical and physical links between other databases. Instructions for traceability exist.
Experiences: According to the results, traceability was previously more difficult because project data was dispersed over several somewhat isolated databases. In practice, this meant that traceability information was mostly collected and stored as logical links between artefacts. Respondents stated that insufficient traceability causes various problems: it slows down testing, complicates bug fixing and makes it harder to find the reasons for changes. To make traceability workable, trace capture and reporting should be as easy and automated as possible; otherwise the capture and maintenance of traces is too time-consuming, which leads to worthless, out-of-date information. Respondents also felt that a distributed development environment requires instructions for traceability to be well documented and communicated, to ensure that traceability information remains up to date. Respondents stated that the new ALM solution provides good possibilities for more automated traceability once the process templates are tailored to better support traceability and reporting according to the needs of the company and project. This makes it possible to gather traceability information as a part of the development process. A central project database provides central trace storage in a global environment, which means that project personnel have a consistent view of project data and its interrelations. Examples of lifecycle artefact traceability needs that came up in this study were: requirements – tasks – subtasks – code – build; bug – code – build; and traceability of changes. In this case, SW is just one part of a complex system product. Therefore, the ALM solution should interface with HW- and system-level information management systems, e.g. with a system requirements management system and a system version database.
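The traceability chains mentioned above (requirements – tasks – code – builds) amount to a directed graph over artefacts, and change impact analysis is a reachability query over that graph. The sketch below is our illustration of this idea; the artefact identifiers and the link store are assumptions, not the case company's tool.

```python
# Illustrative sketch (ours, not the case tool): lifecycle traceability
# as a directed graph; change impact analysis is downstream reachability.
from collections import defaultdict

links = defaultdict(set)  # artefact -> set of downstream artefacts

def trace(src, dst):
    """Record a traceability link between two artefacts."""
    links[src].add(dst)

# Assumed identifiers for one chain: requirement -> task -> code -> build.
trace("REQ-1", "TASK-7")
trace("TASK-7", "src/motor.c")
trace("src/motor.c", "BUILD-42")

def impact(artefact):
    """Return every artefact reachable downstream of the given one."""
    seen, stack = set(), [artefact]
    while stack:
        for nxt in links[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(impact("REQ-1"))  # {'TASK-7', 'src/motor.c', 'BUILD-42'}
```

When the links are captured as a side effect of the development process, as the respondents hope, such queries stay cheap and the trace data stays current.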
ALM framework element: Reporting of lifecycle artefacts.
Solution:
- Previous solution: Predefined and tailored reports from the databases.
- New ALM solution: The ALM solution provides predefined reports, e.g. based on process templates. It is also possible to modify existing reports and create new reports based on the items stored in the central repository.
Experiences: The ALM solution generates predefined or tailored project lifecycle reports from the central ALM database and thus facilitates data consolidation. There are now better project management reports than before. Previously, reports were based on schedules and feature statuses. Now, Scrum-based reports
are based, e.g., on effort, remaining effort, iterative development and working SW. Report sharing and visibility are supported in the distributed development environment, so project personnel and management can easily follow the project status. However, the overall process template was not sufficient for the project, so the reports will also need to be extended in the future to better meet the project's needs. Respondents gave some examples of project reports that are useful for their work: a sprint burndown chart, product backlog composition, found/resolved/inspected bugs, and a report on remaining work (hours).
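To make the first of these reports concrete: a sprint burndown chart is simply the remaining task effort summed per sprint day and drawn over time. The sketch below is our illustration of that computation; the record layout is an assumption, not the schema of the case company's ALM tool.

```python
# Illustrative sketch (ours, not the ALM tool's schema): a sprint
# burndown is the per-day sum of remaining task effort (hours).
tasks = [
    {"id": "TASK-1", "remaining_by_day": [8, 6, 3, 0]},
    {"id": "TASK-2", "remaining_by_day": [5, 5, 4, 2]},
]

def burndown(tasks):
    """Total remaining work per sprint day, summed over all tasks."""
    per_day = zip(*(t["remaining_by_day"] for t in tasks))
    return [sum(day) for day in per_day]

print(burndown(tasks))  # [13, 11, 7, 2] -> the points of the burndown chart
```

A central repository makes such reports cheap to regenerate in real time, which is exactly the data-consolidation benefit the respondents describe.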
ALM framework element: Process automation and tool integration.
Solution:
- Previous solution: A separate process guide, tool-specific process support and automation. Some integrations between databases (point-to-point integrations built within the company).
- New ALM solution: Tight integration between the different tools provided by the ALM tool: requirements/task management, version control, build, defect management, project management, word processing and a spreadsheet application. The solution provides standard process templates and possibilities for tailoring them; tailored templates can be used to configure the ALM system. Process guidance can be accessed from the project portal (integrated into the ALM tool). There are wiki-based company-internal process instructions, and sprint retrospectives according to Scrum to continuously improve practices.
Experiences: The respondents felt that an integrated toolset with process support is important. The new ALM solution supports this by providing an integrated environment that utilises a central project repository. Furthermore, it was stated that tool integration should also work in a distributed development environment; for example, developers should have fluent access to a distributed version control system via their local SW development tools. The solution also enables the use of predefined or tailored process templates for configuring the ALM system. According to the respondents, the use of standard process templates is problematic, since the needs of the company and project may vary. The current standard templates represent both ends of the process spectrum: agile (a lightweight process that adheres to the principles of the Agile Alliance) and CMMI (supporting appraisal at CMMI level 3) [19]. Respondents felt that some kind of combination of the basic templates and practices could be the answer. One respondent noted that tailoring could also cause problems if the ALM solution is updated to a new version in the future and the new version does not work with a tailored template. Furthermore, the project produces SW that is part of a broader system product, and some respondents therefore also raised the question of integration with system/HW-level information management solutions (e.g. a system fault database, test document database and system version database). Thus, ALM interfaces with system lifecycle management systems (i.e. PLM systems). The Scrum process instructions can be accessed from the ALM solution; however, they were too general, and therefore there are also wiki-based company-internal process instructions.
5 Discussion

This paper presents a case study in which the impact of application lifecycle management was evaluated. Previously, the teams used several separate systems to manage project-related data, which caused challenges in coordinating development work and maintaining product-related data. Therefore, the case company decided that the previous solution was not good enough for its future needs. The company addresses these challenges with the new ALM solution, which provides a more integrated frame for the management of project data. The company expects the new ALM solution to improve efficiency through better project decisions and decreased quality costs in distributed projects. This research also aimed to clarify the concept of ALM by producing a framework that can be used to evaluate the current state of an ALM solution in a target organization and to detect ALM elements that may need improvement. Based on the results presented in the previous sections, the ALM framework elements are shown in Fig. 5. The "Creation and management of project artefacts" element is the foundation for ALM; the other elements enable an efficient cooperation and communication environment. Future research should study the framework's relation to maturity and appraisal models, since models such as SW-CMM (Capability Maturity Model) or ISO 15504 provide an organization with the ability to assess its work process practices.
Tool integration
Communication Creation Creationand andmanagement management ofofproject projectartefacts artefacts
Reporting of lifecycle artefacts
Process automation
Fig. 5. Principal elements of ALM framework.
The results of the case study show that successful implementation of a centralized ALM solution has many advantages for SW projects. The project members are more satisfied with the new ALM solution than with the previous one, even though some challenges still need to be solved in future improvement efforts. The results show that a central project database was the foundation for successful ALM. The most successful ALM issues were traceability and reporting; the importance of these issues for ALM has also been discussed in [4] and [2]. Communication-related arguments were also strong in this study: communication solutions and practices are needed to facilitate informal communication and project data visibility. The project used a predefined Scrum process template to configure the ALM solution to support agile working methods.
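Traceability and reporting, named above as the most successful ALM issues, essentially amount to maintaining typed links between lifecycle artefacts and querying them. The following sketch illustrates the idea with invented artefact identifiers and link types; it is not taken from the case company's solution.

# Illustrative traceability store: typed links between lifecycle artefacts.
# Artefact identifiers and link types are invented for the example.
links = [
    ("REQ-1", "implemented-by", "TASK-7"),
    ("TASK-7", "changed-in", "COMMIT-a41f"),
    ("REQ-1", "verified-by", "TEST-3"),
]

def trace(artefact: str) -> list:
    """Follow links forward from one artefact (one hop)."""
    return [(rel, dst) for src, rel, dst in links if src == artefact]

def status_report() -> dict:
    """Lifecycle report: which requirements are verified by a test."""
    verified = {src for src, rel, _ in links if rel == "verified-by"}
    reqs = {src for src, _, _ in links if src.startswith("REQ")}
    return {req: ("verified" if req in verified else "open") for req in reqs}

print(trace("REQ-1"))     # [('implemented-by', 'TASK-7'), ('verified-by', 'TEST-3')]
print(status_report())    # {'REQ-1': 'verified'}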
However, the responses show that predefined process templates may need to be adapted according to the needs of the project. The worst deficiencies are in the requirements management activities of the solution. This is critical, since requirements management has been identified as one of the most critical disciplines in ALM [4]. The adaptation of product information management practices to a project environment has been discussed in the literature, e.g. from an SCM point of view in [15] and [16]. The challenge, therefore, is how to generate efficient company-specific implementations of ALM. The results also show that if SW development is part of system product development, then interfaces with system/HW lifecycle management (i.e. PLM, CM) need to be handled, or the traceability of SW to system or HW entities remains insufficient. This viewpoint has also been noted e.g. in [17] and [18].
6 Conclusions

Planning and deployment of lifecycle management is challenging. It is a common belief that lifecycle management solutions are restricted to supporting the creation of very large and complicated systems. The important aspect of lifecycle management is its conception as a generic frame of reference for the systems and methods needed to facilitate product lifecycle activities. This concept provides a rich framework for the coordination of all kinds of engineering and development work, from agile development to traditional waterfall development. Since each organization and project tends to have its own characteristics and needs, the challenge resides in how to generate efficient, company-specific implementations of lifecycle management for complicated, real-life situations.

This paper reports experiences from a case study performed in a company operating in the automation industry. The case study is part of an improvement effort in the case company, where the aim is to systematize Application Lifecycle Management (ALM) in distributed development projects that develop SW for SW-intensive systems. Even though the new ALM solution is considerably better than the previous solution and the direction is right, some challenges remain to be handled in upcoming ALM improvement efforts. As a next step, the projects will study the possibilities to tailor standard process templates in the near future. According to the results, the following benefits of ALM were gained in this case:

- Central storage place for all project-related data provided a foundation for project cooperation and communication.
- Better communication within the team members of the project (data sharing, visibility even in a distributed environment).
- Better lifecycle traceability of development artefacts to facilitate e.g. bug fixing and testing.
- Easier lifecycle reporting to increase project transparency (what is the real status of the project).
- Clear processes with fluent tool support that streamline time-consuming lifecycle activities.
- Better integration of the applications that are used for producing, managing and reporting project-related data.
References

[1] Sääksvuori A, Immonen A, (2004) Product Lifecycle Management, Springer-Verlag, Berlin
[2] Doyle C, (2007) The importance of ALM for aerospace and defence (A&D), Embedded System Engineering (ESE magazine), Vol. 15, Issue 5, 28-29
[3] Schwaber C, (2006) The Changing Face of Application Life-Cycle Management, Forrester Research Inc., White paper, August 18
[4] Doyle C, Lloyd R, (2007) Application lifecycle management in embedded systems engineering, Embedded System Engineering (ESE magazine), Vol. 15, Issue 2, 24-25
[5] Gotel O, Finkelstein A, (1994) An Analysis of the Requirements Traceability Problem, Proceedings of the First International Conference on Requirements Engineering, 94-101
[6] Ramesh B, Dhar V, (1992) Supporting systems development by capturing deliberations during requirements engineering, IEEE Transactions on Software Engineering, Vol. 18, No. 6, 498-510
[7] Ramesh B, Jarke M, (2001) Toward Reference Models for Requirements Traceability, IEEE Transactions on Software Engineering, Vol. 27, No. 1, 58-93
[8] Weatherall B, (2007) Application Lifecycle Management - A Look Back, CM Journal, CM Crossroads – The configuration management community, January 2007, http://www.cmcrossroads.com/articles/cm-journal/application-lifecycle-management%11-a-look-back.html (available 18.10.2007)
[9] Kolawa A, (2006) The Future of ALM and CM, CM Journal, CM Crossroads – The configuration management community, January 2006, http://www.cmcrossroads.com/articles/cm-journal/the-future-of-alm-and-cm.html (available 18.10.2007)
[10] Estublier J, Leblang D, van der Hoek A, Conradi R, Clemm G, Tichy W, Wiborg-Weber D, (2005) Impact of software engineering research on the practice of software configuration management, ACM Transactions on Software Engineering and Methodology (TOSEM), ACM Press, New York, USA, Vol. 14, Issue 4, 383-430
[11] Heinonen S, Kääriäinen J, Takalo J, (2007) Challenges in Collaboration: Tool Chain Enables Transparency Beyond Partner Borders, In proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications 2007, Funchal, Portugal
[12] Yang Z, Jiang M, (2007) Using Eclipse as a Tool-Integration Platform for Software Development, IEEE Software, Vol. 24, Issue 2, 87-89
[13] Shaw K, (2007) Application lifecycle management for the enterprise, Serena Software, White Paper, April 2007, http://www.serena.com/Docs/Repository/company/Serena_ALM_2.0_For_t.pdf (available 18.10.2007)
[14] Eclipse web pages, www.eclipse.org (available 18.10.2007)
[15] Buckley F, (1996) Implementing configuration management: hardware, software, and firmware, IEEE Computer Society Press, Los Alamitos
[16] Leon A, (2000) A Guide to software configuration management, Artech House, Boston
[17] Crnkovic I, Dahlqvist AP, Svensson D, (2001) Complex systems development requirements - PDM and SCM integration, Asia-Pacific Conference on Quality Software
[18] Välimäki A, Kääriäinen J, (2007) Product Managers' Requirement Management Practices As Patterns in Distributed Development, 8th International PROFES (Product Focused Software Development and Process Improvement) Conference, Latvia
[19] Guckenheimer S, (2006) Software Engineering with Microsoft Visual Studio Team System, Addison-Wesley
Part II
Cross-organizational Collaboration and Cross-sectoral Processes
A Service-oriented Reference Architecture for Organizing Cross-Company Collaboration

Christoph Schroth

University of St. Gallen, MCM Institute and SAP Research CEC, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
[email protected]
Abstract. Today, cross-company collaboration is about to gain significant momentum, but still shows weaknesses with respect to productivity, flexibility and quality: point-to-point, Electronic Data Interchange (EDI)-based approaches or Managed File Transfer installations provide only limited functional richness and low reach, as they often represent proprietary island solutions. A new generation of providers of software (Multienterprise/Business-to-Business (B2B) Gateway solutions) and services (Integration-as-a-Service and B2B Project Outsourcing) for multienterprise interaction is emerging today and allows for richer interaction while heavily reducing the costs of electronic transactions. However, these products and services still exhibit weaknesses with respect to both managerial and technological aspects. In this work, we present and thoroughly discuss a service-oriented reference architecture for business media that overcome the drawbacks of today's B2B software products and services. Based on the IEEE Recommended Practice for Architectural Description (IEEE 1471-2000) in combination with Schmid's Media Reference Model, this reference architecture provides four main views: community (structural organization), process (process-oriented organization), services and infrastructure.

Keywords: Support for cross-enterprise co-operative work, Service-oriented architectures for interoperability, Interoperable enterprise architecture, Architectures and platforms for interoperability, Interoperable inter-enterprise workflow systems
1 Motivation

Cross-organizational electronic collaboration is about to gain significant momentum, but still shows weaknesses with respect to productivity, flexibility and quality [1, 2, 3, 4]. Existing point-to-point, EDI-based approaches or Managed File Transfer (MFT) installations provide only limited functional richness and low reach, as they often represent proprietary island solutions. Such message-oriented
B2B communities are frequently industry-driven or led by a large, dominant participant, which has led to a multitude of different standards of different scope and granularity over time [5]. These substantially different standards prevent a common understanding of exchanged data among a wide mass of organizations, while the high cost and complexity of existing solutions impede fast adoption by potential users. Today, a new generation of providers of software (Multienterprise/B2B Gateway solutions) and services (Integration-as-a-Service and B2B Project Outsourcing) for multienterprise interaction is emerging and allows for richer interaction while heavily reducing the costs of electronic transactions. Integration service providers such as Advanced Data Exchange, GxS, Seeburger, Sterling Commerce and TietoEnator already offer hosted multitenant environments for reliable and secure communication, trading-partner management, technical integration services and application services [6, 7, 8, 9]. However, these products and services still exhibit weaknesses with respect to both managerial and technological aspects. Limited functional richness, a focus on automation rather than business innovation, and an inherent enterprise rather than multienterprise perspective represent only some of the remaining challenges towards business media for efficient and effective cross-organizational interaction. In this work, we present and thoroughly discuss a service-oriented reference architecture for business media that overcome the drawbacks of today's B2B software products and services. Based on the IEEE Recommended Practice for Architectural Description (IEEE 1471-2000) in combination with Schmid's Media Reference Model, this reference architecture provides four main views: community (structural organization), process (process-oriented organization), services and infrastructure. The remainder of this work is organized as follows: in section two, we elaborate on the state of the art in multienterprise electronic interaction and discuss the major shortcomings of existing solutions. In section three, after presenting an adequate formal foundation, we propose our service-oriented reference architecture. Section four concludes the work with a brief summary and an outlook on future work.
2 State-of-the-Art in Multienterprise Electronic Interaction and Shortcomings

A first differentiation (see Fig. 1) of existing business media for the support of cross-organizational electronic interaction can be made between e-commerce solutions (automating a company's information transactions with its buyers and sellers) and solutions for IT extension (extensions of corporate IT infrastructures to third-party service providers, e.g. Business Process Outsourcing (BPO), Software-as-a-Service (SaaS)). Besides their intended purpose, a second differentiation can be made along their functional capabilities. Batch-oriented point-to-point integration has been the predominant integration scenario for many years [7]. Stand-alone EDI translators and Managed File Transfer solutions (MFTs) represent major examples of solutions from this era: "A stand-alone EDI translator is a software application that an enterprise typically licenses from an EDI software provider or subscribes
from a Value-Added Network (VAN). The translator interprets incoming EDI information […] and converts it into the format or formats used by the enterprise's in-house systems and applications" [8, p. 8-9]. Such approaches stem from the time when EDI was the predominant cross-organizational transaction message format (as opposed to the now-proliferating XML standard). "MFT software enables companies to automate, compress, restart, secure, log, analyze and audit the transfer of data from one endpoint to another" [8, p. 7]. Such message-oriented B2B software products have a limited functional scope, are frequently tailored to proprietary needs and are not adequate to support heterogeneous processes or applications. To allow for richer functionality and thus high "reach" to a multitude of trading partners with different requirements, generic multienterprise/B2B gateway software is currently gaining momentum [8]. "Multienterprise/B2B gateway software (see Fig. 1) is a form of integration middleware that is used to consolidate and centralize a company's multienterprise data, application and process integration and interoperability requirements with external business partners." [8, p. 5]

Fig. 1. Classification of cross-organizational electronic interaction: companies interact via a business medium, differentiated by purpose (supply chain/e-commerce vs. IT extension such as SaaS and BPO) and by offering (software: EDI translators, Managed File Transfer, multienterprise/B2B gateway SW; services: IaaS, B2B project outsourcing).
A remarkable market has emerged for "B2B integration capabilities that are hosted in a multitenant environment and delivered as a service rather than as software. Traditionally known as EDI VANs, we now call these hosted offerings IaaS [Integration-as-a-Service], in the spirit of SaaS, and we call vendors that offer such services (usually in one role relative to other roles) integration service providers. By definition, to be considered an integration service provider, a vendor must offer hosted multienterprise integration and interoperability services." [8, p. 9]. IaaS providers usually offer services in the fields of communication, trading partner management, technical integration (towards internal systems) and different application services. As opposed to the mere outsourcing of technical infrastructure, B2B Process Outsourcing (B2BPO) also comprises the outsourcing of a complete B2B project (including the workforce and their structural as well as process-oriented organization).

Table 1. B2B Software and Services Market Overview (extracted from [6, 7, 8, 9])
(Table 1 marks, for each of the 19 vendors — Accenture, Advanced Data Exchange, Axway, Click Commerce, Covisint, Crossgate, DICentral, E2Open, EasyLink, eZCom Software, GxS, Hubspan, Inovis, nuBridges, Seeburger, Sterling Commerce, SupplyOn, TietoEnator and Tumbleweed — its offerings across the software categories multienterprise/B2B gateway SW, MFT suites and EDI translators, and the service categories IaaS and B2BPO.)
Table 1 provides an overview of 19 selected vendors (mainly drawing on an analysis of more than 100 vendors conducted by Gartner [8]) which are active in at least one of the five fields of activity discussed above. A clear trend can be identified towards IaaS, i.e. the Internet-based, hosted offering of B2B integration capabilities (ideally in a multitenant environment). More and more vendors (worldwide already more than 100) are entering this promising market, which is expected to grow significantly. In this work, we focus on managerial as well as technological aspects of IaaS. Many of the existing solutions in this area show weaknesses mainly with regard to the following three aspects:
Cross-enterprise view: Many solutions focus on shared business functionality rather than bridging existing gaps between enterprises and allowing them to interact seamlessly.

Innovation potential: As elaborated in [7], existing approaches support the automation of cross-corporate interaction but do not enable users to tap the full potential of business innovation. First trends towards innovation enablement can be "observed by looking at communities of data exchange where third parties are beginning to offer a range of business services as value added to the [mere] document exchange services" [7, p. 8].

"Richness" and "reach": Many "B2B communities" are still being set up as stand-alone island solutions for specific purposes. However, the frustration of organizations in establishing and supporting multiple single-purpose portals and partner communities grows. "These communities and their underlying networks are not configured to support heterogeneous processes or applications." [7, p. 6]. According to Gartner research, firms desire integration services supporting "multiple protocols, multiple data formats, multiple onboarding approaches, higher-order integration features (for example, in-line translation, data validation and business process management), BAM (for example, process visibility and compliance management) and hosted applications (for example, catalogues and global data synchronization)" [6, p. 4].

Summing up, besides a lack of cross-enterprise perspective and innovation potential, existing solutions provide only limited richness (functional scope) and reach (number of connected organizations).
3 A Service-Oriented Reference Architecture

3.1 The IEEE Recommended Practice for Architectural Description

To thoroughly elaborate on our reference architecture, we leverage the "IEEE Recommended Practice for Architectural Description" (IEEE 1471-2000 [10]), which has also been used by Greunz [11] to describe electronic business media. The standard considers architecture as the "fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution" [10, p. 3]. As depicted in Fig. 2, systems are subject to an architecture. To describe these adequately and systematically, there are architectural descriptions. As a central element, viewpoints exist: in order to reduce complexity and to maximize the benefit for certain groups of stakeholders, an architectural description selects one or more viewpoints from which the system is then analyzed. A viewpoint is considered as a "pattern or template from which to develop individual views by establishing the purposes and audience for a view and the techniques for its creation and analysis" [10, p. 4]. The goal is to codify a set of concepts and interrelations in order to be able to adequately present and analyze certain concerns. "A concern expresses a specific interest in some topic pertaining to a particular system under consideration […]. Concerns arise in relation to human interests. These humans are called system stakeholders" [11, p. 16], who are individuals, teams or whole organizations that have a specific interest in a system. A viewpoint defines the conventions which underlie the creation, symbolization and analysis of a view "by determining the language (and notations) to be used to describe the view, and any associated modelling methods or analysis techniques to be applied to these representations of the view" [11, p. 17]. Corresponding to the selection of viewpoints, an architectural description is divided into one or more views. Each view is thereby a representation of the overall system, but from the perspective of a set of specific, interrelated concerns. The relation between viewpoint and actual view can thus be compared with the relation between a class and one of its instances in the programming context. Due to space constraints, the remaining IEEE 1471-2000 artefacts shall not be discussed in detail. For the description of the reference architecture presented in this paper, the differentiation between viewpoints, concerns and actual views is of central importance.
Fig. 2. IEEE Recommended Practice for Architectural Description (IEEE 1471-2000) [10]: the meta-model relates Mission, Environment, System, Architecture, Architectural Description, Stakeholder, Rationale, Concern, Library Viewpoint, Viewpoint, View and Model.
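The class/instance analogy drawn above can be made concrete: a viewpoint fixes the concerns and notation, and each view instantiates a viewpoint for one particular system. The following is a loose illustration only; the viewpoint and view names are invented examples, not part of the IEEE standard.

# Loose illustration of the IEEE 1471-2000 distinction between
# viewpoint (template: concerns + notation) and view (instance).
from dataclasses import dataclass

@dataclass(frozen=True)
class Viewpoint:
    name: str
    concerns: tuple      # stakeholder concerns the viewpoint addresses
    notation: str        # language/notation used to express views

@dataclass
class View:
    viewpoint: Viewpoint # every view conforms to exactly one viewpoint
    system: str
    models: list         # the representations making up the view

community_vp = Viewpoint("community",
                         ("who interacts?", "which roles and rights?"),
                         "role/registry diagrams")

# One view per system under consideration, instantiating the viewpoint:
example_view = View(community_vp, "an electronic business medium",
                    ["registry model", "role model"])
print(example_view.viewpoint.name, "view of", example_view.system)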
3.2 Schmid's Media Reference Model

After discussing the essentials of describing architectures, Schmid's Media Reference Model shall be used to specify and adapt the so far generic IEEE 1471-2000 meta-model to electronic business media, in particular in cross-organizational contexts. According to Schmid [12, 13, 14], media can basically be defined as follows: they are enablers of interaction, i.e. they allow for exchange, particularly the communicative exchange between agents. Such interaction enablers can be structured into three main components (Fig. 3): first, a physical component (C-Component) allows for the actual interaction of physical agents. This component can also be referred to as carrier medium or channel system. Second, a logical component (L-Component) comprises a common "language", i.e. symbols used for the communication between agents and their semantics. Without such a common understanding, the exchange of data is possible (with the help of the C-Component), but not the exchange of knowledge. Third, an organizational component (O-Component) defines a structural organization of agents, their roles, rules which impact the agents' behaviour, as well as the process-oriented organization of agents' interactions.
Fig. 3. Schmid's Media Reference Model [12]: four layers/viewpoints — community (structural organization), process (processes/interactions), service (services/interfaces) and infrastructure (channel/service platform) — mapped onto the O-, L- and C-Components and crossed with the four phases inform, signal, negotiate and execute.
Together, these three basic components have been identified as constituting various kinds of media. Among others, they are appropriate for describing electronic media such as those deployed to support cross-organizational collaboration. Based on these components, which already represent a first scientific approach to modelling, understanding and reorganizing media, a layer/phase reference model has been introduced as well. The Media Reference Model (MRM) [12, see Fig. 3] comprises four different layers (which all represent dedicated views on media) and structures the use of media into four sequential phases. Similar to the emerging field of software engineering in the software context, the MRM aims to provide a comprehensive, coherent and systematic framework for the description and analysis of various media. The Community View (first layer) thereby accounts for the set of interacting agents, the organization of the given agents' population, i.e. the specific roles of involved stakeholders, the situations in which they act, as well as the objects with which they deal. Summing up, it models the structure of the social community sphere in a situation-dependent but static fashion. The Process View (Implementation Aspects) deals with the modelling of the process-oriented organization of agents and can also be referred to as "Interaction Programming" [12]. It is also called the implementation view, as it connects the needs of the community with the means provided by the carrier medium and thus implements the "community plot" on the basis of the carrier medium. The Service View (Transaction View) models the services provided by the carrier medium, which can be used in the different interaction steps to reach the respective interactions' goals. The Infrastructure View models the production system which creates the services provided by the service view, i.e. in the case of electronic media the actual underlying information technology. The three major components discussed above can seamlessly be integrated into the MRM: the upper two views (Community Aspects and Implementation Aspects) represent the organizational component (O-Component), which accounts for the structural as well as process-oriented organization. The lower two layers are mapped to the physical component (C-Component), which focuses on the creation and provision of services. Last, the logical component (L-Component) concerns all four layers, as it ensures that the interaction of agents is based on a common understanding of exchanged symbols.

3.3 Service-Oriented Reference Architecture

In this section, we describe the service-oriented reference architecture from the four views which have been identified as essential in the context of electronic media: community, processes, services and infrastructure.

3.3.1 Community View

The concerns [10] addressed by the community view comprise the following: Who are the agents interacting over the electronic business medium? As a first element of our reference architecture, a registry of the different stakeholders must be available to ensure that organizations can publish their business profiles and are also enabled to find adequate trading partners. Particularly in multitenant environments, groups of agents frequently emerge which are also referred to as B2B communities: they represent a set of organizations that have negotiated certain business conditions as well as access rights to possibly sensitive information. As part of IaaS solutions, applications are thus required to manage profiles and access rights. What are their exact roles, and which rights and obligations are associated with those roles? The different agents using the medium are assigned certain roles in order to reduce the complexity of community management. Different agents may assume the same role, depending on their properties and capabilities. Associated with these roles are clearly formalized rights and obligations which impose constraints on the agents' activities. Which services are provided or consumed by these roles? Besides the agents' identities/profiles and the specified roles which they may assume, the services they provide need to be determined to ensure an efficient organization. The merely process-oriented view which is often leveraged when analyzing an organization is not adequate for modelling services in the context of our reference architecture. In fact, collaborative tasks which have been identified shall be structured and decomposed into subtasks (i.e. services) according to the criteria proposed by Parnas [15] in the software programming context: first, rather than starting with a process or workflow and determining subtasks/services as sequential parts of this process, organizational engineers are supposed to
encapsulate those design decisions which are difficult or likely to be subject to change in the future: "We have tried to demonstrate by examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others. Since, in most cases, design decisions transcend time of execution, modules will not correspond to steps in the processing" [15]. Second, organizational services need to hide as much proprietary information as possible. By shielding information and complexity from the outside world, services become quickly exchangeable (since service interaction is conducted only via simple interfaces) and their specialization is facilitated. As a third important criterion which we transfer from software engineering to the organizational context, hierarchies need to be adopted where appropriate: tasks are first of all broken down into subtasks/services which reside on a first level. Similar to composite Web Services, such services can then often be composed of other, more basic and focused services. "We have a hierarchical structure if a certain relation may be defined between the modules or programs and that relation is a partial ordering. The relation that we are concerned with is "uses" or "depends on" […]. The partial ordering gives us two additional benefits. First, parts of the system are benefited (simplified) because they use the service of lower levels. Second, we are able to cut off the upper levels and still have a usable and useful product" [15]. Which information objects are exchanged between the agents? Today, pieces of information are frequently specified from a rather technical point of view. Countless standards already exist for XML-based electronic documents. In our reference architecture, we propose a holistic view which comprises both semantic and syntactical considerations and considers the actual implementation (e.g. on the basis of XML) a separate issue (to be coped with in the service view). In [16], we thoroughly describe the meta-model as well as an implementation of a novel approach to modelling electronic documents based on generic, semantic building blocks (thereby complying with the principle of modularization).

3.3.2 Process View

The process view mainly deals with two major concerns: How do individual agents behave, and how is the overall service choreography (i.e. the interaction from a comprehensive, bird's-eye view) defined? With respect to this process-oriented organization, a first differentiation must be made between imperative and declarative organizational styles: the collaboration of agents (across corporate boundaries) can be organized by prescribing both the logic and the control (adhering to an imperative style) or by pre-determining only the logic (i.e. "what" has to be accomplished). The choice for an imperative or rather declarative process-oriented organization depends on certain external requirements. In case stable, standardized processes determine the interaction of agents, an imperative organization is appropriate.
In cases where the actual interaction of stakeholders depends on various situational, unforeseeable factors, context-sensitive parameters or individual preferences and restrictions which vary over time, a process-oriented organization which prescribes both logic and control elements is not adequate. In
such environments, a more declarative "interaction programming" is needed, which is merely based on specifications of "what" has to be achieved during the interaction rather than "how" (e.g. in which exact order). Independent of the decision for an imperative or a declarative process-oriented organization, our reference architecture foresees the utilization of atomic and generic process building blocks (also referred to as interaction patterns) which can be assembled into complex cross-organizational processes. In this way (again complying with the principle of modularization), both operational flexibility and interoperability can be improved. On the basis of commonly agreed building blocks (such as those used in the case of the UN/CEFACT Modelling Methodology [16]), different parties can seamlessly model and negotiate their collaborative business processes. In the context of the HERA research project [17], an exemplary declarative process-oriented organization is currently being set up with the help of fine-granular interaction patterns defined in the "Event Bus Schweiz" standard [18].

3.3.3 Services View

The two major concerns addressed by the services view are: Which services are provided by the electronic medium, and which services are required to connect to it? Second, what are the interfaces of these services and how are they described? In the course of several studies, two basically different service classes have been identified. Operational services enable interaction (they provide basic communication functionality), while coordination services facilitate interaction (as they read and interpret exchanged information and act accordingly). We propose the following (non-exhaustive) set of operational services [18] as part of our reference architecture: services supporting diverse information dissemination patterns (e.g. publish/subscribe, unicast, etc.), directory services (allowing for publishing and retrieving business partners and their respective profiles), event catalogue services (especially in the case of declarative process-oriented organizations, the documentation of all events (messages) which may be disseminated via the medium is of central importance), transformation services (accounting for the mediation of electronic artefacts which adhere to different format standards), security services (encryption and decryption), operating services (for media administration purposes), error services (automatic failure detection and removal), routing services, and validation services (e.g. for evaluation of the correctness and integrity of exchanged information). Besides these enabling services, a set of coordination services is required to partially automate the process-oriented organization and to provide certain application functionality. In the course of the HERA project [17], which aims at improving the collaborative scenario of creating tax declarations, coordination services such as automated document completeness control, due date monitoring (each exchanged document is evaluated with respect to due dates for a response message; in case time limits are not met, the service issues reminders and may also take other, pre-determined action) and process visibility (visualization of the status and other key parameters of the cross-organizational interaction in order to improve transparency and thus also manageability) have been employed.
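To make the distinction between operational and coordination services concrete, the following toy sketch combines a publish/subscribe dissemination service with a due-date-monitoring coordination service of the kind described above. It is an illustration under invented assumptions (event names, envelope fields, deadlines), not the HERA implementation.

# Toy business medium: an operational publish/subscribe service plus a
# coordination service that interprets events (due-date monitoring).
# Event names and the example deadline are invented for the sketch.
from collections import defaultdict
from datetime import date, timedelta

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Declarative organization: agents state *what* they react to,
        # not *when* in a prescribed control flow.
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

class DueDateMonitor:
    """Coordination service: tracks open documents and their deadlines."""
    def __init__(self, bus):
        self.open_items = {}
        bus.subscribe("document.sent", self.on_sent)
        bus.subscribe("document.answered", self.on_answered)

    def on_sent(self, doc):
        self.open_items[doc["id"]] = doc["due"]

    def on_answered(self, doc):
        self.open_items.pop(doc["id"], None)

    def overdue(self, today):
        return [i for i, due in self.open_items.items() if due < today]

bus = EventBus()
monitor = DueDateMonitor(bus)
bus.publish("document.sent", {"id": "TAX-42", "due": date.today() - timedelta(days=1)})
print(monitor.overdue(date.today()))   # ['TAX-42']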
3.3.4 Infrastructure View

The two main concerns addressed by the infrastructure view are: Which technology shall be used to implement the medium as defined before? Which design principles are to be employed during implementation? Again, to account for the principle of modularization, our reference architecture foresees a decentralized organization of electronic media which are primarily devoted to fulfilling the requirements of their respective ecosystems. In the HERA case, for example, a dedicated medium is currently being established for the purpose of collaborative tax declaration scenarios in Switzerland. The requirements with regard to all four views of our architecture vary highly between different ecosystems and (today) prevent a globally standardized common electronic medium for seamless interaction. We rather rely on the principles of decentrality and modularization. To allow for connectivity across the different media, though, we propose the standardization of a minimal set of common services. The "Event Bus Schweiz" initiative [14, 18] represents a first infrastructural example of a nationwide network of (sub-)event buses which only need to adhere to common routing protocols and message formats to allow for interaction across sub-buses. As a further part of the infrastructure view, we propose the separation of the actual media technology and the access channel technology. For accessing instances of our reference architecture, dedicated software adapters (for connecting legacy software to the medium) or Web-based interfaces exist.

Fig. 4. Main artefacts of the reference architecture: an organizational component (registry, roles, interaction rules, information objects), a logical component, and a physical component (service bus with operational and coordination services).
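The minimal set of common services proposed above can be pictured as a small contract that every sub-bus implements. The interface and envelope fields below are hypothetical assumptions for the sketch, not those of the Event Bus Schweiz standard.

# Hypothetical minimal contract that every sub-bus implements so that
# decentralized media can still interoperate (routing + common envelope).
from typing import Protocol

class SubBus(Protocol):
    bus_id: str
    def deliver(self, envelope: dict) -> None: ...

class PrintBus:
    def __init__(self, bus_id: str):
        self.bus_id = bus_id
    def deliver(self, envelope: dict) -> None:
        print(self.bus_id, "received:", envelope["body"])

def route(buses: dict, envelope: dict) -> None:
    # Only 'to_bus' and 'body' are standardized in this sketch; everything
    # else is left to the individual medium (principle of decentrality).
    buses[envelope["to_bus"]].deliver(envelope)

buses = {"tax-bus": PrintBus("tax-bus"), "customs-bus": PrintBus("customs-bus")}
route(buses, {"to_bus": "tax-bus", "body": "declaration XY"})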
4 Conclusion

In this work, we have analyzed the weaknesses of existing approaches to supporting electronic interaction across corporate boundaries. On the basis of the IEEE Recommended Practice for Architectural Description as well as Schmid's Media Reference Model, we then proposed a comprehensive reference architecture for business media which surpass today's solutions. Fig. 4 visually summarizes the main artefacts of this architecture. As part of the organizational component, a registry, role models, business rules and specified information objects are
leveraged to determine the structural and the process-oriented organization. The logical component ensures that all artefacts are subject to standards and thus enables seamless interaction on the basis of knowledge exchange rather than mere data exchange. As a third component, the service bus actually enables and facilitates interaction on the basis of operational as well as coordination services. Finally, the actual agents encapsulate the services (and their related specifics) they offer with the help of standardized software adapters (alternatively, the medium can be accessed with the help of Web clients).
References

[1] McAfee, A. (2004). Will Web Services Really Transform Collaboration. Sloan Management Review, 46 (2), 78-84.
[2] Malone, T. (2001). The Future of E-Business. Sloan Management Review, 43 (1), 104.
[3] Porter, M. (2001). Strategy and the Internet. Harvard Business Review, 79 (3), 63-78.
[4] Schroth, C. (2007). Web 2.0 and SOA: Converging Concepts Enabling Seamless Cross-Organizational Collaboration. Proceedings of the IEEE Joint Conference CEC'07 and EEE'07, Tokyo, Japan.
[5] Frenzel, P., Schroth, C., Samsonova, T. (2007). The Enterprise Interoperability Center – An Institutional Framework Facilitating Enterprise Interoperability. Proceedings of the 15th European Conference on Information Systems, St. Gallen, Switzerland.
[6] Lheureux, B. J., Malinverno, P. (2006). Magic Quadrant for Integration Service Providers, 1Q06. USA: Gartner Research Paper.
[7] White, A., Wilson, D., Lheureux, B. J. (2007). The Emergence of the Multienterprise Business Process Platform. USA: Gartner Research Paper.
[8] Lheureux, B. J., Biscotti, F., Malinverno, P., White, A., Kenney, L. F. (2007). Taxonomy and Definitions for the Multienterprise/B2B Infrastructure Market. USA: Gartner Research Paper.
[9] Lheureux, B. J., Malinverno, P. (2007). Spider Diagram Ratings Highlight Strengths and Challenges of Vendors in the B2B Infrastructure Market. USA: Gartner Research Paper.
[10] IEEE Computer Society (2000). IEEE Recommended Practice for Architectural Description; IEEE Std. 1471-2000.
[11] Greunz, M. (2003). An Architecture Framework for Service-Oriented Business Media. Dissertation, University of St. Gallen. Bamberg, Germany: Difo-Druck.
[12] Schmid, B. F., Lechner, U., Klose, M., Schubert, P. (1999). Ein Referenzmodell für Gemeinschaften und Medien. In: M. Englien, J. Homann (eds.): Virtuelle Organisation und neue Medien. Lohmar: Eul Verlag – Workshop GeNeMe 99, Gemeinschaften in neuen Medien, Dresden. ISBN 3-89012-710-X, 125-150.
[13] Schmid, B. F. Elektronische Märkte – Merkmale, Organisation und Potentiale, available online at: http://www.netacademy.org, accessed in 2007.
[14] Schmid, B. F., Schroth, C. (2008). Organizing as Programming: A Reference Model for Cross-Organizational Collaboration. Proceedings of the 9th IBIMA Conference on Information Management in Modern Organizations, Marrakech, Morocco.
[15] Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15 (12), 1053-1058.
[16] Schroth, C., Pemptroad, G., Janner, T. (2007). CCTS-based Business Information Modelling for Increasing Cross-Organizational Interoperability. Proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications, Madeira, Portugal.
[17] HERA project, available online at: http://www.hera-project.ch, accessed in 2007.
[18] Müller, W. (2007). Event Bus Schweiz. Konzept und Architektur, Version 1.5, Eidgenössisches Finanzdepartement (EFD), Informatikstrategieorgan Bund (ISB).
Patterns for Distributed Scrum – A Case Study

A. Välimäki and J. Kääriäinen
Abstract. System products need to be developed faster in a global development environment, and more efficient project management becomes more important to meet strict time-to-market and quality constraints. The goal of this research is to study and find best practices for distributed Scrum, an agile project management method. The paper describes the process of mining distributed Scrum organizational patterns. The experiences and improvement ideas of distributed Scrum have been collected from a global company operating in the automation industry. The results present issues that were found important when managing agile projects in a distributed environment. The results are further generalized in the form of organizational patterns, which makes it easier for other companies to reflect on the results and to apply them to their own cases.

Keywords: Industrial case studies and demonstrators of interoperability, Interoperability best practice and success stories, Tools for interoperability, The human factor in interoperability
1 Introduction

Globalization is the norm in current business environments. The shift from traditional one-site development to a networked development environment means that product development is becoming a global, complex undertaking with several stakeholders and various activities. People operating in global development environments may have difficulties communicating and understanding each other, and the overall coordination of activities is more complicated. Therefore, companies have to search for more effective procedures and tools to coordinate their ever more complicated development activities. Distributed development has recently been actively researched, for instance in [1] and [2], and lately in a special issue of IEEE Software [3]. From a project management point of view, distributed development has been studied e.g.
in [4], which presents drivers, constraints and enablers that are leading organizations to invest in project management systems in today's increasingly distributed world. As one interesting result, it divides the top issues in distributed development as follows: strategic issues, project and process management issues, communication issues, cultural issues, technical issues, and security issues. One response to an ever more complicated environment is the rise of so-called agile methods [5]. These methods or approaches value the following [6]:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
At first glance, these issues seem suitable only for small teams operating in a local environment. For some time agile methods were indeed applied just to local development, but their potential for supporting more effective global development environments has since been recognized. The usage of agile methods in distributed development environments has been studied and reported e.g. in [7, 8, 9]. Ramesh et al. [7] report case studies conducted in three companies that combined agile methods with distributed development environments. The article first lists the challenges in agile distributed development that relate to aspects of communication, control and trust, and then reports successful practices observed in the three organizations that address these challenges. The conclusion is that the success of distributed agile development relies on the ability to blend the characteristics of both environments, agile and distributed. Sutherland et al. [8] report a case study of a distributed Scrum project; they analyze and recommend best practices for distributed agile teams and report integrated Scrum as a solution for distributed development. Farmer [9] reports experiences of developing SW in a large, distributed team that worked according to slightly modified Extreme Programming (XP) practices. Farmer [9] states that the major factors that contributed to their success were:

- The team consisted of some of the top people in the company
- The team was permitted to find its own way
- The team got support from management when it was needed
Our paper reports experiences from a case study performed in a company operating in the automation industry. The goal of our research is to study the challenges of, and find practices to support, project management in distributed Scrum projects (Fig. 1) which use an Application Lifecycle Management (ALM) solution (Fig. 2). The Scrum approach has been developed for managing the systems development process. It is an empirical approach applying the ideas of industrial process control theory to systems development, resulting in an approach that reintroduces the ideas of flexibility, adaptability and productivity. In the development phase the system is developed in Sprints: iterative cycles in which functionality is developed or enhanced to produce new increments. Each
Sprint includes the traditional phases of software development: requirements, analysis, design, evolution and delivery [5]. Doyle [10] presents ALM as a set of tools, processes and practices that enable a development organization to implement software lifecycle approaches.

Fig. 1. The Scrum process [5]: a pregame phase (planning with a product backlog list, effort estimates and priorities; high-level design/architecture, standards, conventions, technology and resources), a development phase of iterative Sprints (sprint backlog list, goals of the next Sprint, regular updates; analysis, design, evolution, testing, delivery) and a postgame phase (no more requirements; integration, system testing, documentation, final release).

Fig. 2. Application Lifecycle Management facilitates project cooperation and communication through traceability, reporting, process automation and tool integration.
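As a rough illustration of how a sprint backlog supports status visibility, the following sketch computes the remaining effort ("burn-down") over a sprint. The tasks, estimates and unit of effort are invented for the example.

# Sprint burn-down arithmetic over a sprint backlog (invented estimates).
# Remaining effort on a given day is the sum of still-open task estimates.
sprint_backlog = {"task A": 8, "task B": 5, "task C": 3}   # hours

def remaining(backlog: dict, done: set) -> int:
    return sum(h for task, h in backlog.items() if task not in done)

day1 = remaining(sprint_backlog, done=set())                 # 16
day3 = remaining(sprint_backlog, done={"task B"})            # 11
day5 = remaining(sprint_backlog, done={"task A", "task B"})  # 3
print(day1, day3, day5)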
This paper is organised as follows: in the next section, the research approach is described, comprising a description of the industrial context and research settings. In Section 3, the results are presented. Finally, the results are discussed and conclusions are drawn.
2 Research Approach

This section discusses the industrial context and the research settings used in this research.

2.1 Industrial Context

This case study has been carried out in a global company operating in the automation industry. Previously, projects followed a partly iterative development process. Based on the positive experiences, the case project has gone further and deployed the Scrum method. The company operates in a multi-site environment and, in the future, the work will globalize more and more. Therefore, the challenges of a global development environment were also under investigation. This study reports first experiences of distributed Scrum in the case company. Distributed Scrum practices are applied with the Application Lifecycle Management (ALM) solution. The paper discusses experiences gained from distributed Scrum projects and reports successful practices as patterns to make them more easily exploitable for other companies.

2.2 Research Settings

The research has been carried out as a case study with the following phases: research planning, questionnaire, interviews, and analysis of results. A two-step approach was used for data collection. First, a questionnaire was sent to scrum masters and other team members. The questionnaire was organized according to the following Scrum practices and artifacts:

- Product backlog
- Sprint planning
- Sprint backlog
- Sprint and Daily Scrum
- Sprint review
- Scrum of Scrums
The respondents were asked about their opinions on how Scrum practices and artifacts have affected their daily work in a distributed development environment. They were also asked for their ideas regarding what kinds of challenges a distributed environment sets for operation. After the questionnaire, a few key persons from the scrum teams were selected as representatives and were interviewed using semi-structured interviews.
The results from the questionnaire and interviews were filtered and organized according to the Scrum practices and artifacts and the following analysis viewpoints (analysis framework):

- Practice or artifact:
  - People: issues that relate to people and organization.
  - Process: issues that relate to working methods and practices.
  - Tool: issues that relate to tools.
After this, patterns [11] were used to present suggested solutions to overcome the challenges in the organization. Patterns make it easier for other companies to reflect on the results and to apply them to their own cases. The patterns are presented according to the following format:

- An ID number of the pattern.
- A short name of the pattern.
- A detailed description of the problem.
- The context where the problem exists.
- Activities that will solve the problem.
- Results and trade-offs when the pattern is applied.
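The pattern format above maps directly onto a simple record structure; the following sketch mirrors the listed fields. The example values paraphrase the P Rel pattern presented in Section 3.2, and the context string is an illustrative assumption.

# The pattern format above, expressed as a record type for reference.
from dataclasses import dataclass

@dataclass
class OrganizationalPattern:
    pattern_id: str      # ID number, e.g. "P Rel(2)"
    name: str            # short name
    problem: str         # detailed description of the problem
    context: str         # context where the problem exists
    solution: str        # activities that solve the problem
    consequences: str    # results and trade-offs when applied

p_rel = OrganizationalPattern(
    "P Rel(2)", "Make a release plan with some sprints",
    problem="One big project plan could be a risk in distributed development.",
    context="Distributed Scrum project",  # paraphrased for the example
    solution="Split the project into many sprints.",
    consequences="Easier planning and change; better visibility per sprint.")
print(p_rel.pattern_id, "-", p_rel.name)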
3 Results

This section presents the results obtained from the industrial case study. Issues that present needs or challenges for distributed Scrum are classified according to the analysis framework presented in the previous section. Patterns that describe suggested practices to overcome the challenges of distributed project management were developed based on these results and a literature inventory. In this paper, some patterns are presented as examples.

3.1 Experiences from an Industrial Case Study

This section presents the results of the questionnaire and interviews, organized according to the analysis framework. The results reflect issues that the respondents and interviewees found important for distributed project management. For each Scrum practice or artifact, issues are mapped to the people, process or tool viewpoint based on the analysis (see Tables 1 to 6). At the end of each issue there is a link to a related organizational pattern: (P Org) abbreviates the pattern "Organize needed Scrum roles in each site", (P Kick-off) is "Have a kick-off meeting", (P Rel) is "Make a release plan with some sprints", (P ALM) is "Establish Application Lifecycle Management tool", (P Infra) is "Establish a fast and reliable infra", (P Comm) is "Establish efficient communication methods", (P Know) is "Knowledge transfer", (P Visual) is "Visualize status of project", and (P Daily) is "Have many Daily Scrums".
Table 1. Scrum artifact: Product Backlog.

People:
- The Product Owner role is very important, since it acts as an internal customer for the development project.
- In distributed development, the replication of Scrum roles across sites is essential to ensure fully functional Scrum teams at every site. (P Org)
- Knowledge transfer is especially important in a distributed environment. More detailed information about the domain and the product backlog items is needed to ensure understanding in a global environment. (P Know)
- In a global environment, team members have to rely more on formal communication and IT means. (P Know & P Comm)
- Project personnel need to learn new community working methods in order to make Scrum work. (P Org & P Comm & P Kick-off)

Process:
- The product backlog process should allow collecting and processing ideas. Ideas, if accepted for development, will be used for creating backlog items. (P ALM)

Tool:
- Common, secure and integrated repositories for all project data. (P ALM)
- Items need to be classified and organised hierarchically. (P ALM)
- Process support should enable process and form tailoring, e.g. new item attributes, tailored status models, adding hierarchy, etc. (P ALM)
- Infrastructure and efficient as well as reliable network connections are a must when using a central project database in a distributed environment. (P Infra)

Table 2. Scrum practice: Sprint Planning.
People:
- Estimation of tasks is challenging, especially for new developers, when team support or support from an architect is needed. It is challenging to make a task estimation when there are other responsibilities such as maintenance work.
- All participate, all exchange ideas and information, and all share responsibility, which is good teamwork.
- Constant travelling of team members is needed to have a good Sprint Planning meeting in distributed development. (P Comm)

Process:
- A presentation by the product owner is important for understanding the goal of a sprint. (P Kick-off)
- Preparation for the next sprint planning is needed, and all related information is needed to clarify items in distributed development. (P Know)

Tool:
- Better video conference and net meeting tools will decrease the need for travelling. (P Comm)

Table 3. Scrum artifact: Sprint Backlog.
People:
- Different concepts are in use among architects and programmers. (P Know)

Process:
- Task splitting is a good method to clarify the size of a sprint.

Tool:
- It is easy to see what is needed from the sprint backlog list. (P ALM)

Table 4. Scrum practice: Sprint and Daily Scrum.
People:
- The "Daily Scrum" practice was found useful for internal communication and information sharing (e.g. ongoing tasks, problems, etc.). (P Daily)
- The Sprint practice was found good, since it reinforces the attitude that the SW needs to work (for every sprint).

Process:
- These practices make the scrum team follow a procedure that has many advantages (e.g. it values functional SW, increases communication and increases personal responsibility).

Tool:
- Respondents felt that a common sprint overview report would be good (a common template to show the status). (P Daily)
- If a distributed Daily Scrum is practiced, then adequate IT means are needed to facilitate formal and informal communication (e.g. chat tool, project news, central ALM). (P Comm)

Table 5. Scrum practice: Sprint Review.
People:
- The Product Owner is a very important person in a review.

Process:
- A review provides a good possibility to find weaknesses in software and plans. It is easier to illustrate how the system works and what kinds of deficiencies there are.
- The sprint retrospective collects improvement ideas for the project (e.g. the process).

Tool:
- Efficient communication tools are needed. (P Comm)
Table 6. Scrum practice: Scrum of Scrums.

People:
- —

Process:
- If the team is large, it is more efficient to have local daily scrums and then a global Scrum of Scrums with key persons present (over a network connection). (P Daily)
- In a distributed environment, some sort of summary report would be good to show and share the project status between teams. This is especially important when people have different accents, even though the language is the same. (P Daily)

Tool:
- A net meeting is a good tool to use. (P Comm)
- A chat tool could be an alternative to a conference phone. (P Daily)
Fig. 3 illustrates respondents' opinions related to the following Scrum claims, rated on a scale from 1 (disagree) to 5 (fully agree):

1. Distributed Scrum improves the visibility of the project status in relation to the previous procedure.
2. Distributed Scrum accelerates the handling of changes in relation to the previous procedure.
3. Distributed Scrum improves the management of features through the entire project lifecycle in relation to the previous procedure.
4. Distributed Scrum improves understanding of requirements in relation to the previous procedure.
5. Distributed Scrum improves the team communication in relation to the previous procedure.
6. Distributed Scrum improves the utilisation of the knowledge possessed by the entire team in relation to the previous procedure.
7. Distributed Scrum improves the commitment to the goals of the project in relation to the previous procedure.

Fig. 3. Respondents' opinions about Scrum-related claims (rate: 1 = disagree, 5 = fully agree).
3.2 Preliminary Patterns Derived from a Case Study

The answers were analyzed, and some organizational patterns (each referred to below simply as a pattern) were created based on the gathered information and related research materials. Some of the important patterns from the viewpoint of distributed project management and Scrum are described below.

ID: P Kick-off(1) & Name: Have a kick-off meeting (only name)

ID: P Rel(2) & Name: Make a release plan with some sprints
- Problems: One big project plan can be a risk in distributed development.
- Solution: Split your project into many sprints. Iterative and incremental sprints improve the visibility of a project and improve the motivation of team members. Working software is also a good measurement of a sprint.
- Consequences: Sprints make it easier to plan your project and to change it if needed. The visibility of a project is also better when you can see the results at the end of each sprint, in a sprint review meeting. At the end of a sprint it is also possible to have a retrospective meeting, in which the process can be changed if really needed.
ID: P Org(3) & Name: Organize needed Scrum roles at each site
- Problems: Communication between different sites.
- Solution: Replicate the needed Scrum roles at every site (e.g. Scrum Master, a colleague of the Product Owner, Architect, IT Support, Quality Assurance). A person with the same kind of role can communicate efficiently between sites and inside his/her own site. The architecture of a product also affects the division of responsibilities and roles, as well as which phase of the development work is done at which site.
- Consequences: Distributed development needs formal communication and a clear communication organization. A colleague of the Product Owner helps, for example, with the understanding of feature lists and related specifications. A Scrum Master at each site is also a key person. One person can have many roles in a project.
ID: P ALM(4) & Name: Establish an Application Lifecycle Management tool
- Problems: Separate Excel files are difficult to manage, and project data is difficult to find, manage and synchronize between many sites.
- Solution: A common Application Lifecycle Management (ALM) solution for all information in a project, e.g. the Product Backlog, Sprint Backlog and Sprint Burn Down Chart, and also e.g. source code, fault data, requirements, test cases and other documents. An ALM solution includes, for example, storage places for artifacts, guidelines for common processes, and effective user rights methods with role-based views for seeing certain data. It also provides global access regardless of time and place. An ALM solution can consist of database-based tools, which might have too simple a GUI and other properties, or it can be a group of dedicated tools that have been integrated with each other.
- Consequences: An ALM solution costs quite a lot of money, and the use of ALM needs training. With ALM, time is saved because information is found more quickly, and common processes make work more efficient. However, a lot of work is needed to ensure that only the needed information is visible to different user groups.
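To make the role-based-view idea of this pattern concrete, the following minimal sketch shows one way such filtering could work. The artifact fields, role names and visibility rules are illustrative assumptions, not taken from the case study or from any particular ALM product.

# Minimal sketch of role-based views over ALM artifacts (illustrative only).
from dataclasses import dataclass

@dataclass
class Artifact:
    kind: str      # e.g. "backlog_item", "fault", "test_case"
    site: str      # owning site, e.g. "Berlin", "Tampere"
    data: dict

# Hypothetical visibility rules: which artifact kinds each role may see.
VISIBILITY = {
    "developer":     {"backlog_item", "fault", "test_case"},
    "product_owner": {"backlog_item", "fault"},
    "subcontractor": {"backlog_item"},   # only the items assigned to them
}

def visible_artifacts(artifacts, role, site=None):
    """Yield the artifacts a user with the given role (and optional site) may see."""
    allowed = VISIBILITY.get(role, set())
    for a in artifacts:
        if a.kind in allowed and (site is None or a.site == site):
            yield a

repo = [
    Artifact("backlog_item", "Berlin", {"id": 1, "title": "Login feature"}),
    Artifact("fault", "Tampere", {"id": 7, "title": "Crash on save"}),
]
print([a.data["id"] for a in visible_artifacts(repo, "subcontractor")])  # -> [1]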
ID: P Infra(5) & Name: Establish a fast and reliable infrastructure (only name)

ID: P Comm(6) & Name: Establish efficient communication methods (only name)

ID: P Know(7) & Name: Knowledge transfer
- Problems: Lack of knowledge about the domain and the features to be developed.
- Solution: Domain knowledge can be distributed by visitors to the different sites. Features with short specifications and possible diagrams in the Sprint Backlog also improve knowledge transfer. Detailed specifications should be made at the latest during the preceding sprint.
- Consequences: Distributed development requires more travelling and an exchange of information. Specifications are also needed to clarify the contents of each feature.
ID: P Visual(8) & Name: Visualize the status of the project
- Problems: The status of the project is not known.
- Solution: Use Sprint Burn Down Charts and trends of bugs, tasks, test cases etc. from the ALM solution. Working software at the end of a sprint is also a good measurement.
- Consequences: Iterative development with good measurements and the requirement of working software at the end of each sprint visualize the status of a project very well.
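As a minimal illustration of the kind of report meant here, the sketch below computes the data points of a sprint burn-down chart from remaining task estimates. The data layout and numbers are assumptions for illustration, not taken from the case projects' ALM tooling.

# Sketch: compute sprint burn-down points (total remaining effort per day).
# Input: for each day of the sprint, the remaining estimate of every open task.
daily_estimates = {
    1: [8, 5, 13, 3],   # day 1: remaining hours per task (illustrative numbers)
    2: [6, 5, 10, 1],
    3: [4, 2, 8, 0],
    4: [2, 0, 4, 0],
}

def burndown(daily_estimates):
    """Map sprint day -> total remaining effort, ready for plotting."""
    return {day: sum(tasks) for day, tasks in sorted(daily_estimates.items())}

print(burndown(daily_estimates))  # {1: 29, 2: 22, 3: 14, 4: 6}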
ID: P Daily(9) & Name: Have many Daily Scrums
- Problems: One Daily Scrum is not always enough.
- Solution: A Scrum of Scrums is used to manage big groups. One Daily Scrum can be enough for a small group with good communication tools, e.g. conference phones, web cameras, video conferences and chat tools. With non-native speakers, written logs can be one solution, e.g. chat logs or common documents.
- Consequences: Written logs are sometimes easier to understand. Scrum can be scaled with the use of a Scrum of Scrums in distributed development.
5 Discussion

The results obtained from this case study indicate some important issues for distributed Scrum in distributed development. The importance of distributed Scrum
has also been reported in other studies in industry, e.g. in [8]. Fairly simple solutions are sufficient when operating in a local environment with Scrum. However, in this distributed Scrum case study, secure shared information repositories such as an ALM solution and electronic connections (e-meetings, teleconferencing, web cameras, chat, wiki) were seen as essential solutions to support a collaborative mode of work. This has also been indicated in other case studies related to global product development, for instance in [1] and [7] (e.g. intranet data sharing, teleconferencing). The results of this case study show that a successful implementation of distributed Scrum has many benefits for the case projects. The team members are more satisfied with the new distributed Scrum than with the previous solution, even though there are still some challenges that need to be solved in future improvement efforts. The results show that the most successful distributed Scrum issues have been improvements in visibility, management of features, communication, and commitment to the goals of the project. The importance of these issues for distributed development has also been discussed in [12]. Communication problems have been resolved by utilizing the Scrum of Scrums, Daily Scrums and other Scrum practices, as well as other communication means. These issues have also been discussed in [13] and [14]. Other major issues emerged from the need for more effective distributed agile project management:

- Enforcement of a global common process that is tool-supported.
- The same ALM solution for every site.
- Training in new methods and tools is important for the consistent usage of tools and practices.
- The Product Owner should act as a customer representative and interact with development personnel.
- Resolve knowledge transfer problems in a distributed development environment:
  - Local Product Owners and Scrum Masters are needed at each site to facilitate knowledge sharing.
  - Clear Product/Sprint Backlog items that are specified in more detail during the previous sprint.
  - Specifications should be able to contain rich information (figures, diagrams) in order to facilitate understanding.
  - The ability to travel when needed.
  - Efficient communication means.
- Resolve visibility problems by using Scrum-based reports, e.g. burn-down charts and trends of bugs, tasks, test cases, remaining effort etc. Working software at the end of a sprint visualizes the status of a project very well.
6 Conclusions

An efficient project management process is very important for companies. The goals and plans of a project must be communicated, and working software must be developed faster and more efficiently than before, even in a distributed development environment. This paper presents a research process for mining organizational patterns in order to improve an agile project management process, Scrum, in distributed development. The research method included a framework based on Scrum practices and artifacts, and the results of questionnaires and interviews have been described in the form of tables. Furthermore, the results have been generalized in the form of organizational distributed Scrum patterns and a list of important issues. The important issues from the viewpoints of people, processes and tools in the different phases were studied. The results emphasise that distributed Scrum can be implemented with the help of an ALM solution and distributed Scrum patterns. A key point is that the ALM solution provides global access and thus partly turns distributed development back into centralized development. Of course, not all distributed Scrum and other distributed agile patterns have been found yet, and distributed Scrum does not solve all the problems in distributed development. But this is at least a start for research to find distributed Scrum patterns. Organizational patterns seem to be one good method for describing solutions to a problem. Generalized patterns make it easier for other companies to reflect on the results and to apply them to their own cases. Future research directions will be the analysis of experiences with the current patterns in future development projects, the improvement of the patterns and the creation of new patterns according to the feedback gained from projects.
References

[1] Battin RD, Crocker R, Kreidler J, Subramanian K, (2001) Leveraging resources in global software development, IEEE Software, Vol. 18, Issue 2, 70-77
[2] Herbsleb JD, Grinter RE, (1999) Splitting the organisation and integrating the code: Conway's law revisited, Proceedings of the 1999 International Conference on Software Engineering, 16-22 May 1999, 85-95
[3] Damian D, Moitra D, (2006) Guest Editors' Introduction: Global Software Development: How Far Have We Come?, IEEE Software, Sept.-Oct. 2006, Vol. 23, Issue 5, 17-19
[4] Nidiffer E, Dolan D, (2005) Evolving Distributed Project Management, IEEE Software, September/October 2005
[5] Abrahamsson P, Salo O, Ronkainen J, Warsta J, (2002) Agile software development methods: Review and Analysis. Espoo, Finland: Technical Research Centre of Finland, VTT Publications 478, http://www.inf.vtt.fi/pdf/publications/2002/P478.pdf
[6] www.agilealliance.org, (available 10.07.2007)
[7] Ramesh B, Cao L, Mohan K, Xu P, (2006) Can distributed software development be agile? Communications of the ACM, Vol. 49, No. 10
[8] Sutherland J, Viktorov A, Blount J, Puntikov J, (2007) Distributed Scrum: Agile Project Management with Outsourced Development Teams, Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS)
[9] Farmer M, (2004) DecisionSpace Infrastructure: Agile Development in a Large, Distributed Team, Proceedings of the Agile Development Conference (ADC'04)
[10] Doyle C, (2007) The importance of ALM for aerospace and defence (A&D), Embedded System Engineering (ESE magazine), June 2007, Vol. 15, Issue 5, 28-29
[11] Coplien JO, Harrison NB, (2005) Organizational Patterns of Agile Software Development, Pearson Prentice Hall
[12] Leffingwell D, (2007) Scaling Software Agility, Addison-Wesley
[13] Schwaber K, (2004) Agile Project Management with Scrum, Microsoft Press
[14] Schwaber K, (2007) The Enterprise and Scrum, Microsoft Press
Understanding the Collaborative Workspaces

Gilles Gautier, Colin Piddington, and Terrence Fernando

Future Workspaces Research Centre, University of Salford, Salford, M5 4WT
{g.gautier, c.piddington, t.fernando}@salford.ac.uk
Abstract. Although the concept of working collaboratively is very popular, its implications for organisational structures and human requirements are not completely understood. As a result, the advanced technologies used to enhance collaborative work have often had a limited or even a negative impact on work efficiency. Common issues often involve human factors, such as the acceptance of the new product by the users, the lack of communication between users, and the working culture. A better understanding of human needs would therefore facilitate the development of better tools to support group work, as well as more efficient organisations. Hence, this paper compares the organisational and the human perspectives on collaboration in order to identify the barriers to implementing it. In particular, it focuses on issues related to life cycles, organisation structure, information flow and human motivation. It also introduces the case of virtual organisations and their difficulty in generating efficient collaboration.

Keywords: Socio-technical impact of interoperability, The human factor in interoperability, Knowledge transfer and knowledge exchange
1 Introduction

Due to a lack of understanding of the implications of collaboration, it is often believed that new technologies will enhance collaboration simply by supporting the organisational processes [1]. However, psycho-sociologists have been demonstrating the importance of social relationships between workers since the 19th century [2]. Indeed, it appears clearly in the literature that social interactions are necessary to build trust and to generate a shared understanding of the project aims and processes, which are among the main features of collaboration [3, 4]. Efficient technological workspaces for collaboration should therefore support both the organisational processes and the workers. So far, these two objectives have usually been considered independently because of the difficulty of integrating both [5]. Consequently, the social and psychological features of the
workers are not represented in enterprise models, where the humans are only seen through their roles in supporting the processes. As a result, workers are often reluctant to integrate new technologies into their workspaces, because these are associated with a change of their working culture. A better understanding of the working environment and its influence on work efficiency would therefore permit the creation of better workspaces and the integration of more user-friendly technologies. In addition, it might be necessary to limit the intrusion of IT into the workspace in order to better respect human nature. These issues are discussed in this paper, which starts with an introduction to collaboration before presenting the organisation's and the worker's viewpoints on the need for collaborative workspaces. Finally, the role of technology in supporting group work is discussed in the last section. The discussions of this paper are based on a competency-oriented decomposition of the organisation structure, which is presented in the third chapter.
2 Understanding Collaboration

As defined by Montiel-Overall [4], "collaboration is a trusting, working relationship between two or more equal participants involved in shared thinking, shared planning and shared creation". This notion of collaboration is difficult for enterprises to implement, because it implies quite open communications and social relationships between the participants, as explained in the group working theories. Consequently, organisations rarely achieve collaboration when they try to generate it. Instead, as will be explained in the next section, their inflexible structures and IPR constraints limit them to working cooperatively, which is not as beneficial for the group work. Collaboration requires the building of a team over the four stages (Fig. 1) identified by Tuckman [6]. First, the forming phase is when people join and establish each other's roles. This is an observation phase, strongly influenced by the leaders of the group and the structure of the participants' roles [7]. In the second phase, called the storming phase, everyone tries to assure his position, and the authority of the leaders is challenged. Conflicts arise between the members of the group. In the norming phase, people become closer and often start sharing a social life. The roles in the group are clear, and discussions become productive. Finally, the performing stage is when the collaborators are fully aware of the group structure, of each other's roles and objectives, as well as of their capacities and ways of working. Like other researchers before him, Tuckman realised that as the group becomes closer, the efficiency of the work increases in parallel with the motivation of the collaborators. Those stages can be explained partially by the need to share an awareness of the working group between the collaborators, as explained by the Johari window [8]. This common knowledge, which is called the open area, is limited when the group forms, and it soon integrates a shared understanding of the roles of everyone (forming phase) and the objectives of all the participants related to the collaborative work (storming phase). In the norming phase, the group becomes stronger and shares some social experiences outside work that allow them to
understand each other better, so that they can communicate more efficiently and trust each other more easily. This social need is the main difference between collaboration and cooperation, where only the understanding of the working group structure is essential. As a consequence, the performing phase can hardly be achieved through cooperation alone.
[Figure: Forming -> Storming -> Norming -> Performing]
Fig. 1. The evolution of the roles in a collaborative team
Of course, not every set of people can become a highly collaborative group. Among others, Kurt Lewin has shown that there is a need for an interdependence of fates and tasks between collaborators [9]. This corresponds to Vroom's theory [10], which is introduced later. Indeed, people not only need to share a common objective, but also a shared reward. If the work of everyone benefits only a few, this contradicts the instrumentality element of Vroom's theory. The interdependence of the collaborators is then called negative, because their objectives conflict with each other. This can lead to the end of the group work due to a lack of communication and a lack of information sharing. Conversely, positive interdependence enhances participation, discussions and friendship, and decreases aggressive behaviours. It enables the collaborators to achieve Tuckman's performing phase and the high productivity observed by Fayol [2] and Mayo [11].
3 The Organisational Layers

Collaboration is usually required for particular projects in order to support interoperability between the stakeholders. During these projects, organisations usually adopt traditional pyramidal structures, with managerial roles at the top and functional roles at the bottom. The top-level roles mainly concentrate on the project objectives and processes, while the bottom ones usually focus on one skill or competence. Consequently, project organisational structures can be decomposed into three layers, which are introduced here from top to bottom in order to explain the environment of the collaboration:
- The Project Manager (PM) has a management and control function in the overall project. He is responsible for the achievements of the project, and he must make sure that the organisation's project objectives will be met. Therefore, he defines the processes that will be followed during the project in order to meet a customer or product demand (time and cost), and he also defines the conditions that allow each phase of the project to be started or finalised. Finally, he is responsible for finding and managing the functional resources necessary to support the project processes (Fig. 2). He is not usually concerned with human factors aspects and mainly considers the employees at the functional levels as the actors of the processes.

[Figure: Identify a set of objectives -> Define the project processes -> Determine resource allocation]
Fig. 2. Simplistic view of the Project Manager's successive tasks
- The Management Team (MT), led by the project manager, is in charge of the integration of several competencies/functions during a particular phase of the project. A Competency Representative is present in the team for each competence involved in this phase. The first task of the management team is to match the processes defined by the project manager with the corporate competencies. Consequently, the MT must build a common understanding between its members in order to allow them to take better decisions when a conflict has to be solved. If some competencies are missing in the organisation, then a virtual organisation is created by acquiring competencies through outsourcing or sub-contracting (Fig. 3). The management team is where collaboration is the most likely to appear, because a shared understanding is required to take better decisions and the competency representatives have full control over the IPRs of their competencies.

[Figure: Build up a shared understanding -> Identify relevant competencies -> Build up a VO if required]
Fig. 3. Simplistic view of the Management Team's successive tasks
- The Skills Silos are vertical structures responsible for the development of a competence. Thus, they contain the organisation's knowledge for each competency involved in the project. The competency representative, who represents the silo in the management team, is at the top of the silo, possibly supported by other levels of managers. At the bottom of the silo, the employees with the functional competencies/skills act according to the decisions made by their managers. They have a limited understanding of the overall project, and they usually have little contact with other silos, which prevents the building of an inter-silo team. Consequently, the functional levels do not usually get involved in any collaboration with other silos, and most of the wrong decisions and interoperability issues appear at these levels.
The above components permit a simplistic representation of an organisation structure for one phase of a project. This simplistic view is nonetheless sufficient to highlight the need for collaboration in the management team, so that its members share a common understanding of the project and ensure that every viewpoint concerned is considered equally throughout the project. In fact, late problem discovery has proved to be extremely costly in many industrial cases, and it is regularly due to a limited understanding of the consequences of decisions made during previous management meetings. Similarly, collaboration at the silo level is essential to permit the workers to share cross-project knowledge and experience in order to allow faster problem resolution or innovation [12]. The organisation structure presented here corresponds to a snapshot in time, but the management teams change throughout the project, as does the involvement of the skills silos.
4 The Life Cycles

The Product Life Cycle is the time-line representation of a set of sequential phases from the birth to the death of a product or service. This addresses conceptualisation through design, production and deployment, and finally replacement (Fig. 4). Product Life Cycles are extended by maintenance and recycling to face the increasing demand for "through life services" [13, 14] as well as environmental issues. In this line, some advanced work on standards and PLCS (Product Life Cycle Support) has already been done in the USA Department of Defense and in the UK, as traditional manufacturers change their business from product manufacture to through-life service (leasing). In addition, the building sector is following a similar path with PPP (Private-Public Partnerships). Therefore, the outcome of a life cycle is often not only a product but also a service. This demands a far-reaching collaborative way of working across organisations, in order to consider the requirements at the maintenance and recycling levels as early as the design phase.

[Figure: Conceptualisation -> Design -> Production -> Deployment -> Maintenance -> Recycling]
Fig. 4. Example of product life cycle
In order to save time, these phases are often parallelised, which means that a phase can start before the previous one has been completed. This specific adaptation of the product life cycle is used by the project management processes. The overlapping period between the start of a phase and the termination of the previous one is only possible after a Maturity Gate meeting (Fig. 5), where the representatives of several competencies meet to assess the risk of change when starting a new phase of the project. Each overlapping phase corresponds to a high-risk period, where any change in the previous phase can become extremely time-consuming and costly to accommodate in later phases. The chairman of these meetings is the project manager, so that he can control the project from the clients' perspective.

[Figure: Conceptualisation -M-> Design -M-> Production -M-> Deployment -M-> Maintenance -M-> Recycling, where M denotes a Maturity Gate]
Fig. 5. Example of project management process
Decisional Gates occur within planned meetings chaired by the project manager, and their outcomes are critical for the project because they aim at solving or avoiding problems. As a consequence, they usually happen within a phase of the project, and they might even involve representatives of later phases, such as maintenance managers. The intent of a decisional gate is to limit delays in the project's progress by identifying optimised ways to proceed. Nevertheless, the participants of these meetings are the competency representatives, and they usually need to consult with their skills silos in order to resolve specific difficulties or assemble additional data. Since these consultations cannot usually be made during the meetings, the decisional gates often consist of a series of meetings that result in action lists for the competency representatives and their skills silos. If the meeting workspace could be connected to the skills silos' workspaces, then it would be possible to make decisions faster by discussing solutions with the relevant silos during the course of the meeting. Two types of meetings can be differentiated: the planned meetings that are held throughout the project management process to assess progress and to mediate on problem resolution, and the reactionary ones that are needed to address urgent issues. They only influence the technology used and not the roles of the stakeholders, as the solution requires near-real-time performance. For reactionary meetings, mobile technologies might be useful to improve the reactivity of the organisation when meeting participants are not available or needed locally, even if these technologies could limit the level of interaction between the participants in the meeting.
5 The Processes and the Motivation

Once the objectives of each phase of the project and the conditions for parallelising phases have been defined, the project manager aims at generating the processes that will have to be followed within each phase of the project in order to achieve the corresponding objectives. Roles are defined and assigned in accordance with the management teams. The view of the organisation, which is often that of the project manager, is project-based and sees the employees as the actors of the processes. As a consequence, human factors are little considered by the higher levels of the organisation, and the structure of the organisation tends to become rigid.

However, Henri Fayol [2] identified as early as the end of the 19th century the role of the administrative part of an organisation as supporting the employees. He also highlighted the importance of giving structure to the employees, and the benefits of self-organisation, which is often supported by social relationships. As an example, his miners organised themselves in small groups according to their friendships and relationships. The groups were more stable and more productive thanks to the increased motivation of the employees. This is consistent with the group-building theories discussed above, which note the increasing importance of informal relationships over time, while structure is essential at the beginning (Fig. 1). In contrast, the scientific management of work proposed by Taylor [15], which corresponds to the decomposition of work into processes, has shown some limits. Indeed, it increases sickness and absenteeism due to the lack of motivation of the employees. Nowadays, the French automotive sector and its high suicide rate is a telling example of the consequences of such an approach. Therefore, it is crucial to support both structured and informal relationships when trying to develop collaboration.

It is also essential to make the distinction between an employee and his roles in the organisation. This permits, for example, the consideration of the worker's motivation by understanding his personal objectives, interests or culture. Indeed, it is sometimes believed that humans are not naturally inclined to work and must see a personal gain in order to do it [10]. The most studied theory in motivation science is the hierarchy of needs developed by Maslow [16]. He presents the human needs in a five-level hierarchy. Those needs are determinant for human behaviours, because they act as motivators as long as they are not fulfilled. From bottom to top, the five levels are: physiological needs, safety needs, love needs, esteem needs, and self-actualisation needs. The physiological needs are the ones that trigger homeostatic and instinctive behaviours. The safety needs include security, order and stability. The love needs correspond to our social lives and look for relationships. The esteem needs involve appreciation and recognition concerning abilities, capabilities and usefulness. Finally, the self-actualisation needs refer to our desire for self-fulfilment. The needs at the highest levels only act as motivators if the lower levels are already fulfilled. After a level of needs is fulfilled, lower needs become less important and higher ones are predominant. Even if the hierarchy is supposedly independent of any culture, the importance of each need can depend on the past
experiences of the people. As an example, a lack of love during childhood can lead to psychological illness and to a depreciation of the love needs. As demonstrated by Herzberg et al. [17], the lowest-level needs are often seen as prerequisites by the workers. He considers those as hygiene factors, which are responsible for dissatisfaction at work. As a consequence, unfulfilled hygiene factors can lead an employee to perform his tasks poorly or to leave his job. The highest levels of Maslow's hierarchy of needs act as motivators that make the employees produce more effort to achieve the objectives associated with their roles in the organisation. In order to keep employees motivated, any work must therefore target the three factors that influence human motivation according to Vroom [10]: expectancy (being sure that one's role is valuable in the project, and that the others' participation will lead to the achievement of the general objective), instrumentality (effort is rewarded in proportion to the effort produced) and valence (the reward is appropriate to the employee's needs). To summarise, supporting the motivation of the employee must be considered as important as supporting the processes he is involved in when implementing collaborative tools. However, motivation evolves over time, and it is difficult to predict. It starts with a need for security before moving to self-development. Besides, an innovative organisation will benefit from more flexibility, because a major issue is to integrate innovative methods and techniques in the organisation. Not being able to integrate the innovation could limit the user's motivation, because he would have the feeling that his efforts were in vain, which contradicts Vroom's requirements.
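Vroom's three factors are commonly summarised in the literature as a multiplicative relation; the following standard textbook formulation is added here for clarity and is not an equation given in this paper:

F = E \times I \times V, \quad \text{where } E = \text{expectancy},\; I = \text{instrumentality},\; V = \text{valence}.

The multiplicative form makes the point explicit: if any one factor drops to zero (for instance, a reward that has no value to the employee), the motivational force F collapses, however high the other two factors are.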
6 Virtual Organisations

Contractual terms surround most collaborations and describe the constraints that protect the organisation and its customers and build trust. Indeed, from an organisation's viewpoint, trust mainly refers to the respect of the contractual agreement, because this data is quantifiable and therefore tangible [18]. As part of the contract, the organisation must define what can be shared, which is usually easier than identifying what needs to be protected (Intellectual Property Rights). However, this limits collaboration, because companies tend to share the minimum of information, and it becomes difficult to build a common understanding of the project. Consequently, innovation is reduced, and decisions are easily made with an incomplete understanding of their consequences. This is particularly relevant in cross-organisational collaborations.
Fig. 6. The roles of an employee
Additional issues appear when considering the employee's viewpoint, such as the workload or conflicts between roles in several organisations. Indeed, enterprise models are often project-based, and they cannot capture the conflicts between organisations. The example of an employee having to share his time between his main organisation and a virtual one is given in Fig. 6. He can play roles in both organisations independently of each other, and these roles can conflict with each other in terms of workload, IPR or objectives. The decisions of the employee, based on his manager's advice or on his motivation, will then influence his choices. Therefore, when representing Virtual Organisations, there is a need to add details about the involvement of the worker in external activities. Virtual Organisations are mainly necessary when an organisation does not have the full resources necessary to run a successful project. Two configurations are presented in Fig. 7 to illustrate some extremes of inter-silo relationships. In the first one, the Virtual Organisation is formed by the combination of skills silos that still work for their main organisations (Fig. 7-a). As explained before, the information exchanges between the stakeholders are controlled and limited to the minimum. Each organisation occasionally allocates some working hours for the workers, who mainly work for their own organisation. As a consequence, innovation is also limited by the lack of understanding between silos, and it is usually derived from the collaboration objectives. This is typical of supply chains and short-term collaborations (partnering). In the second configuration, the virtual organisation tends to become autonomous from its parent organisations (Fig. 7-b). It creates its own working culture, and the communications between its skills silos are less constrained. The workers tend to be fully allocated to the Virtual Organisation, and IPR protection becomes less important. As a consequence, innovation is facilitated by the shared understanding between the silos. Since the links with the original organisations are weaker and less controlled, the Virtual Organisation ultimately becomes independent and works as a new organisation (merger or new business).
[Figure: (a) Organisation A and Organisation B contributing silos to a joint Virtual Organisation; (b) an autonomous Virtual Organisation A+B]
Fig. 7. Examples of Virtual Organisation
Partnering intends to limit the possible drawbacks of collaborations due to the compartmentalisation of knowledge and the development of a blame culture, by building a dependency of faith between organisations. This approach promotes similar values as inter-personal collaboration, such as equality, transparency and mutual benefits [19], and it also aims at generating win-win co-operations based on each organisation's strengths.
7 Information Flows

When considering the information flows, Virtual Organisations are where collaboration has the most impact, because this is where the shared information is the most carefully selected and protected for commercial or privacy reasons. The competency representatives, due to their managerial roles, usually have the responsibility for defining what can be shared with other silos. Consequently, any sharing of new information must be validated by the relevant competency representative. However, as discussed before, the need for inter-silo collaboration often involves the participation of employees at the bottom levels. Consequently, data needs to be exchanged in the shared data environments via the demilitarised zones of the parent organisations, which must keep control of it through their competency representatives, even in emergency situations. The IT collaborative workspaces must allow communication across the organisation, but the IT tools developed on top of them often support the silos' own competences and their culture-based organisations. As a consequence, the silos store their data in a way adapted to these competences, taking into consideration the processes during which the data has to be accessed, used and manipulated by the corresponding disciplines. APIs are then used to extract the required view of the information and to translate the stored data into standardised or contractually agreed information, so that it can be shared with other silos (Fig. 8). However, translations are often accompanied by a loss of information, and inter-silo interoperability is then not fully achieved. Data incompatibility remains a major
problem, even between silos sharing similar applications, because the software version, the platform used or the configuration can be responsible for incompatibilities at the data level [1].
[Figure: three applications, each with an API over its own information store, exchanging translated data]
Fig. 8. Distribution of the product data between the skills silos
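The view-extraction-and-translation step just described can be illustrated with a small sketch. The record fields, the agreed exchange format and the mapping rules are invented for illustration, and the mapping is deliberately lossy, to mirror the information loss mentioned above.

# Sketch: a silo-local API that extracts an agreed view of a product record
# and translates it into a contractually agreed exchange format.
local_record = {
    "part_no": "A380-DOOR-12",
    "mass_kg": 84.2,
    "cad_layer": "L7",          # silo-internal detail
    "supplier_notes": "rev B",  # protected, must not be shared
}

# Hypothetical contract: only these fields may leave the silo, under these names.
CONTRACT_MAPPING = {
    "part_no": "PartNumber",
    "mass_kg": "MassKg",
}

def extract_shared_view(record, mapping=CONTRACT_MAPPING):
    """Translate a local record into the agreed format; unmapped fields are lost."""
    return {shared: record[local] for local, shared in mapping.items() if local in record}

print(extract_shared_view(local_record))
# {'PartNumber': 'A380-DOOR-12', 'MassKg': 84.2}  -- cad_layer and notes are dropped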
A solution to skills-silo interoperability issues proposed by IT is to specify the processes more carefully and completely, in order to make sure that every situation has been considered, and to limit human initiative, which is unpredictable but innovative. However, this theory rests on the strong assumption that any task can be detailed as a set of processes, because the workers involved in these processes would trust them and would lose the ability to identify problems. As a consequence, any issue would be discovered very late and have huge financial consequences, as was the case in the production of the Airbus A380. Besides, people would lose their ability to innovate, because the structure of the organisation would be too rigid to integrate any development. Finally, the workers would not need to communicate with each other in order to achieve their objectives, because of the efficiency of the processes. Their isolation would not allow them to build a shared understanding of their project and would compromise any attempt to implement collaboration. Consequently, the implementation of tools to support processes should be modified in order to allow for the development of informal relationships between the workers. The support tools should focus on intra-silo work, where communications are already easy and efficient because people share similar backgrounds and working cultures.
8 Conclusion

The above decomposition of organisations' structures during projects highlights that the real challenge in supporting collaborative organisations is to generate social interactions between skills silos. Not only would this increase the motivation of the workers, but it would also facilitate innovation and interoperability. IT is indeed extremely useful for automating processes and supporting personal work, but its current scientific approach seems quite inefficient at supporting the social aspects of collaboration, and its faults have increasing financial consequences. Limiting the role of IT to that of a knowledge-sharing tool and an enhancer of inter-silo communication would therefore allow for the development of informal interactions between the stakeholders at any level. This could facilitate early problem identification, innovation and the building of trust. However, social computing must still progress and prove that supporting the workers through IT is not only a dream but can become true. This will be an objective of the CoSpaces project, which plans to develop user-centric technological workspaces based on the user's working context.
Acknowledgments

The results of this paper are partly funded by the European Commission under contract IST-5-034245 through the project CoSpaces. We would also like to thank Sylvia Kubaski and Nathalie Gautier for their useful resources concerning the psycho-sociological area, as well as all the industrial partners of the CoSpaces project for their information about current industrial practices and challenges, and the research partners for their enlightening knowledge sharing.
References

[1] Prawel, D.: Interoperability isn't (yet), and what's being done about it. In: 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping, pp. 47-49. Leiria, Portugal (2007)
[2] Reid, D.: Fayol: from experience to theory. Journal of Management History, vol. 1, pp. 21-36 (1995)
[3] Schrage, M.: Shared minds: The new technologies of collaboration. Random House, New York (1990)
[4] Montiel-Overall, P.: Toward a theory of collaboration for teachers and librarians. School Library Media Research, vol. 8 (2005)
[5] Klüver, J., Stoica, C., Schmidt, J.: Formal Models, Social Theory and Computer Simulations: Some Methodical Reflections. Journal of Artificial Societies and Social Simulation, vol. 6, no. 2 (2003)
[6] Tuckman, B.: Developmental sequence in small groups. Psychological Bulletin, vol. 63, pp. 384-399 (1965)
[7] Belbin, M.: Management Teams, Why they Succeed or Fail. Heinemann, London (1981)
[8] Luft, J., Ingham, H.: The Johari Window: a graphic model for interpersonal relations. University of California Western Training Lab (1955)
[9] Brown, R.: Group Processes. Dynamics within and between groups. Blackwell, Oxford (1988)
[10] Vroom, V.: Work and Motivation. John Wiley & Sons, New York (1964)
[11] Mayo, E.: The Social Problems of an Industrial Civilization. Ayer, New Hampshire (1945)
[12] Lu, S., Sexton, M.: Innovation in Small Construction Knowledge-Intensive Professional Service Firms: A Case Study of an Architectural Practice. Construction Management and Economics, vol. 24, pp. 1269-1282 (2006)
[13] Gautier, G., Fernando, T., Piddington, C., Hinrichs, E., Buchholz, H., Cros, P.H., Milhac, S., Vincent, D.: Collaborative Workspace For Aircraft Maintenance. In: 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping, pp. 689-693. Leiria, Portugal (2007)
[14] Ward, Y., Graves, A.: Through-life management: the provision of integrated customer solutions by Aerospace Manufacturers. Report available at: http://www.bath.ac.uk/management/research/pdf/2005-14.pdf (2005)
[15] Taylor, F.W.: Scientific Management. Harper & Row (1911)
[16] Maslow, A.: A theory of human motivation. Psychological Review, vol. 50, pp. 370-396 (1943)
[17] Herzberg, F., Mausner, B., Snyderman, B.B.: The motivation to work. Wiley, New York (1959)
[18] TrustCom: D14 Report on Socio-Economic Issues. TrustCom European project, FP6, IST-2002-2.3.1.9 (2005)
[19] Tennyson, R.: The Partnering Toolbook. International Business Leaders Forum and Global Alliance for Improved Nutrition (2004)
Models and Methods for Web-support of a Multi-disciplinary B2(B2B) Network

Heiko Weinaug, Markus Rabe

FhG IPK, Pascalstrasse 8-9, 10587 Berlin, Germany
{heiko.weinaug, markus.rabe}@ipk.fraunhofer.de
Abstract. B2B manufacturing networks offer significant potential for service providers, which can offer and operate services with the complete network instead of connecting to each single company involved. The term multi-disciplinary B2(B2B) characterizes this business-to-network relationship. The FLUID-WIN project is developing and using new business process models and methods as a basis for the development of the related tools for the smooth integration of logistic and financial services into a B2B manufacturing network.

Keywords: Business Aspects of Interoperability, Business Process Reengineering in interoperable scenarios, Enterprise modeling for interoperability, Modeling cross-enterprise business processes, Modelling methods, tools and frameworks for (networked) enterprises
1 Introduction

European enterprises have steadily increased their degree of outsourcing in the last decade, leading to many more partners along the supply chain. In combination with smaller batch sizes and delivery requests at shorter notice, the coordination requirements are tremendous for all the companies involved, and the number of processes to be synchronised has increased significantly. Nevertheless, investigations demonstrate that up to 30% of the administrative effort can be saved by establishing electronic interchange mechanisms in the supply chain [10]. A very efficient additional effect can be achieved by connecting the B2B network with service providers. This approach has been described by Rabe and Mussini and introduced as the business-to-B2B or B2(B2B) mechanism [6, 12, 13]. Obviously, financial service providers can offer services based on the electronic invoices circulating in the B2B network. Also, logistic service providers can provide services based on the electronic transportation documents available through the B2B network, taking into account current order information that includes announced or expected delays as well as forecast information.
However, a systematic approach and an easily applicable instrument are prerequisites for establishing multi-disciplinary B2(B2B) services and for developing supporting tools for multi-enterprise B2B networks. In particular, the approach has to handle the large set of enterprises included in B2B manufacturing networks, where each has its own wishes and constraints and might be embedded in various other B2B networks. Moreover, the processes of the service providers have to be synchronized and integrated with the network processes. Besides the organizational aspects, the approach has to manage the different local and network-bound IT solutions, as well as their relation to each other, to the processes and to the actors during service use. The FLUID-WIN project, partially funded in the Sixth European Framework Programme, aims at the development of a service-integrating B2(B2B) model and a prototype of the related e-commerce application. The FLUID-WIN consortium has decided to use business process modelling (BPM) as the main instrument, related to certain steps in the development sequence, to achieve these objectives. For this purpose, the Integrated Enterprise Modelling (IEM) method [3] was used to set up the B2(B2B) business process model. For example, during the field study phase a "Template Model" and various "As-is Models" were created to document the results of the interviews conducted with the companies, thus helping to identify the user requirements, potentials and restrictions. All processes, constraints and potentials were merged into one "General Model", which in turn was the basis for the definition of the "FLUID-WIN B2(B2B) Business Process Model". The FLUID-WIN B2(B2B) Business Process Model represents the definition of a "To-be Model" that describes the new processes and methods as a basis for the process and platform implementation in upcoming software development tasks. Based on the BPM models, the "Interdisciplinary Service Model" and the "Network Model" were developed as instruments to set up and configure the service workflows and their related business actors in the FLUID-WIN platform.
2 The B2(B2B) Background and its Interoperability Challenges

The FLUID-WIN consortium members found that the integration of the business processes among companies belonging to a manufacturing network and companies providing logistic or financial services has major potential, especially for manufacturing networks that are geographically distributed across state borders, where transportation durations are significant (days to weeks) and where cross-border payment and trade finance are far from smooth. This is facilitated by connecting these service providers to the manufacturers' B2B platform. This B2(B2B) approach is in contrast to the "traditional" way of connecting each single company with each related service provider. The effort saving through the new approach is significant, as the interfaces with customers, suppliers, financial service providers and logistic service providers (figure 1) are replaced by one single interface to the new B2(B2B) platform. In this context, "multi-disciplinary" indicates that the services to be integrated into the network are from other domains,
such as finance or logistics. Moreover, the new approach has to tackle cross-discipline processes. This, of course, induces a strong focus on the interoperability topic at its various levels. Generally, interoperability is defined as "the ability of two or more systems or components to exchange information and to use the information that has been exchanged" [2]. In this context, "system" can mean IT systems, but of higher importance are enterprises and other organizational units interacting with each other. To explain the challenges related to interoperability, a framework for modelling cross-organisational business processes (CBP) from the ATHENA research project is used. The CBP framework was developed to provide modelling support for the business and technical levels, as well as for the transformation to executable models [1]. With respect to the modelling dimension, this framework incorporates three levels:
Fig. 1. Network relations addressed by the B2(B2B) Approach
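The effort saving claimed above can be made concrete with a simple count (our illustration; the paper states the saving only qualitatively): with m network companies and s service providers, point-to-point integration requires

I_{p2p} = m \cdot s \quad \text{interfaces, whereas} \quad I_{B2(B2B)} = m + s.

For example, 10 manufacturers and 4 service providers would need 40 point-to-point interfaces, but only 14 connections to the platform.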
The Business level represents the business view of the cooperation and describes the interaction of the partners in their cross-organizational business process, as a basis for the analysis of business aspects such as the involved partners and their responsibilities. At this level, the FLUID-WIN process models cover the processes of 1) the B2B collaboration within a supply chain (manufacturing), 2) the business interaction with logistic service providers (B2B to LSP), 3) the business interaction with financial service providers (B2B to FSP), and 4) the support of new integrative services by functionalities of the FLUID-WIN Platform.
The Technical level represents the complete CBP control, including the message exchange. Thereby, different task types can be distinguished: those which are executable by IT systems and those that are executed manually. The challenge for the B2(B2B) Business Process Model is to use, as far as possible, standardised descriptions for the information exchange, like the XML Common Business Library [11], which can later be reused for platform programming and implementation. Furthermore, the model has to serve as 1) a basis for the technical definition of services, 2) supporting documentation for the development of the FLUID-WIN Platform concepts and software, and 3) a basis for the generation of cross-domain classes and for the customization and orchestration of services.

On the Execution level, the CBP is modelled in the modelling language of a concrete business process engine, e.g. a model based on the Business Process Execution Language (BPEL). It is extended with platform-specific interaction information, e.g. the concrete message formats sent or received during process execution. For example, the B2(B2B) Business Process Model has to deliver on the Execution level 1) support for FLUID-WIN Platform execution (e.g. help functionalities) and 2) integration of model concepts and platform architecture.

Besides the results from the ATHENA project, existing reference models and available standards were investigated for use in FLUID-WIN. There are few commonly accepted approaches for models in the manufacturing supply network area. The Supply Chain Operations Reference (SCOR) model was designed for effective communication among supply chain partners [8]. SCOR is a reference model, as it incorporates elements like standard descriptions of processes, standard metrics and best-in-class practices. However, SCOR still has no broad and common acceptance in industry, and the authors have experienced in their studies that most of the companies under consideration, especially the smaller ones, did not have any skills with respect to SCOR. The Value Reference Model (VRM) version 3.0, provided by the Value Chain Group [9], follows a broader and more integrative approach than SCOR and supports the seamless and efficient management of distributed research, development, sales, marketing, sourcing, manufacturing, distribution, finance and other processes. Summarizing, VRM has a more substantial approach than SCOR; however, it has not even reached the industrial awareness of SCOR.
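To make the idea of standardised message descriptions mentioned for the Technical level concrete, here is a minimal sketch of translating a silo-local order record into a standardised, xCBL-style document. The element names are illustrative only and are not taken from the actual xCBL schema or from the FLUID-WIN platform.

# Sketch: wrap a local ERP order record in a standardised, schema-like exchange message.
import json

def to_standard_order(local):
    """Translate a local ERP order record into an agreed cross-domain structure."""
    return {
        "OrderHeader": {
            "OrderNumber": local["order_id"],
            "IssueDate": local["date"],
        },
        "OrderDetail": [
            {"PartID": item["part"], "Quantity": item["qty"]}
            for item in local["items"]
        ],
    }

erp_order = {
    "order_id": "4711",
    "date": "2008-01-15",
    "items": [{"part": "P-100", "qty": 250}, {"part": "P-200", "qty": 30}],
}
print(json.dumps(to_standard_order(erp_order), indent=2))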
3 Template Model, As-is Models and the General Model

The authors have developed an approach for the analysis of supply chain networks, based on reference models and a guideline with several components [5]. This approach has been successfully applied for the analysis of requirements and potentials in regional supply networks [4]. For a number of field studies that were performed in the first months of the FLUID-WIN project, the approach was adapted to the specific needs of the service workflow investigation. In four different cross-European networks, the field studies involved 6 manufacturers, 2 Logistic Service Providers (LSP) and 5 Financial Service Providers (FSP) [6]. Different types of models have been used as an efficient instrument to reach the
project aims. During the field studies, these were the "Template Model", the various "As-is Models" and the merged "General Model", respectively. The Template Model provided orientation during the interviews as a "guide" to relevant processes and topics. It also served as a basis for the modelling of the As-is Models. The As-is Models describe and document the results of the interviews conducted with the companies and thus helped to identify the user requirements, potentials and restrictions. Finally, the General Model includes the outcome of all the As-is Models and forms a unique interdisciplinary model, enabling an overview of the processes in all three domains (manufacturing, finance and logistics) and their relations to each other.
4 B2(B2B) Business Process Model

The FLUID-WIN B2(B2B) Business Process Model is based on the requirements that have been derived from the field studies. It represents the definition of a "To-be Model" that describes the new processes and methods as a basis for the process and platform implementation in upcoming software development tasks. The model is an integrative framework in which all the workflows, interactions between the domains, services, functionalities and software modules involved are visualized. Hence, the description of the model follows a "service-functionality-module" approach. In order to clearly understand the model concept, it is necessary to have a clear impression and definition of the terms "service", "functionality" and "module" as they are used in this context. Services define the typical and ideal workflow provided by business service providers to other business partners, namely the interactions with their clients. Services are under the responsibility of service providers such as LSPs or FSPs and have their own workflows. Functionalities are the supporting elements which enable the users of the services to carry out electronically the activities supported by the B2(B2B) platform. The FLUID-WIN project develops a prototype of such a B2(B2B) platform, which is called the FLUID-WIN Platform. The definition of the functionalities is, obviously, an important prerequisite for the FLUID-WIN Platform specification, as they define the workflow within the FLUID-WIN Platform. Functionalities describe how the FLUID-WIN Platform supports the service workflow and its participants. For example, the functionality "Logistic Forecast Propagation" supports a certain step within the logistic Transportation Service by gaining, transferring and providing logistical forecast information. Functionalities and services are strongly interrelated and depend on each other. A module is simply a piece of software within the B2(B2B) platform (i.e. user interface, database, logic etc.) made up of the FLUID-WIN supporting functionalities. According to the "service-functionality-module" approach, first the services were defined comprehensively during the development phase. Then the necessary functionalities of the FLUID-WIN Platform were identified and methodologically
specified, depending on the requirements of the services. Finally, based on the generated functionalities, the modules were identified.

Based on these prerequisites, the framework structure of the FLUID-WIN B2(B2B) Business Process Model has been developed. The highest level of the model is shown in figure 2. It contains six main areas, which are highlighted by numbers. Area 1 describes three symbolic enterprises related to different roles of manufacturing companies. Together they form a typical supply chain and represent the B2B network. These manufacturers use the services provided by the service providers. The influence of these services on the physical material flow is displayed in area 6 of figure 2 (e.g. transportation of goods performed either by the LSPs or by the companies themselves). The manufacturing supply chain is assumed to be organised by a supply chain execution (SCE) tool which communicates with the local ERP software (area 3). For this purpose, any SCE supporting tool such as i2, SAP APO, SPIDER-WIN etc. is suitable [7]. The control and information exchange integration through the supply chain is enhanced by the logistic as well as the financial participants using their own techniques, software and methods. Hence, model areas 2 and 4 contain the process descriptions of the services from the providers' point of view, but related to the cross-domain information flow during the complete service. The model contains the LSP-provided services Transportation, Warehouse and Combined Logistics. Related to the financial services provided by the FSPs, the model contains the Invoice Discounting and Factoring services. Based on the field studies, these services have been selected for the prototypical implementation as they promise high market relevance. However, the architecture of the model and the platform allows for the definition of additional services at any time.
Fig. 2. FLUID-WIN B2(B2B) Business Process Model (Level 0)
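To make the "service-functionality-module" approach more tangible, the three terms can be pictured as a simple containment structure. The following Java sketch is purely illustrative; all class and field names are our own invention and not part of the FLUID-WIN specification:

```java
// Illustrative sketch of the "service-functionality-module" hierarchy.
// All names are hypothetical; this is not the FLUID-WIN specification.
import java.util.ArrayList;
import java.util.List;

class Module {                       // a piece of software within the B2(B2B) platform
    String name;                     // e.g. "user interface", "database", "logic"
    Module(String name) { this.name = name; }
}

class Functionality {                // supports one step of a service workflow
    String name;                     // e.g. "Logistic Forecast Propagation"
    List<Module> realizedBy = new ArrayList<Module>();
    Functionality(String name) { this.name = name; }
}

class Service {                      // ideal workflow offered by an LSP or FSP
    String name;                     // e.g. "Transportation"
    List<Functionality> supportedBy = new ArrayList<Functionality>();
    Service(String name) { this.name = name; }
}
```

This mirrors the development order described above: services are defined first, then the functionalities that support them, and finally the modules that realize the functionalities.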
Each service is described and modelled in a separate workflow. However, the modelling of the service workflows in domain-oriented model areas enables the
identification of common activities between the services as well as domain interactions. This is especially important for the common understanding and consensus building among the various involved parties (domain experts, platform users, and software developers). Figure 3 demonstrates the general structure of modelling services with an exemplary excerpt from the transportation service workflow (detailing area 2 of figure 2). The service workflow is clustered into four particular areas:

1. The ideal service workflow describes each activity along the whole business process. This information is mainly used to define the workflow templates in the Interdisciplinary Service Model.
2. The responsible person or organization is identified for each activity of the business process. This is important because the user interface definition of the platform has to support those actions. The information on the service work sequence is also used later to define the Network Model parameters.
3. The identification of local IT and B2B support helps to define the right gateway functions to bridge between the FLUID-WIN Platform and the local IT, and to identify the right place to source required data.
4. The most important part is the alignment of functionalities to certain processes within each service workflow. Each activity in the service workflows is interrelated with certain functionalities managing the service task supported by the FLUID-WIN Platform.

The processes in area 5 (figure 2) explain these functionalities, their related information exchange, their configuration and their dependencies in detail. The structure of the FLUID-WIN B2(B2B) support is indicated in figure 4, which consists of three different parts.
Fig. 3. An excerpt from the LSP's Transport Service Workflow with Activities and Levels
Fig. 4. Areas and described Functionalities in the Functionality Level of the FLUID-WIN B2(B2B) Business Process Model
In the top part of figure 4, the enabler processes for the FLUID-WIN Platform are indicated. There are enabler processes ensuring trust and security mechanisms, and processes which enable the addition of new services to the Platform. The second part of the B2(B2B) support level explains the processes for platform configuration and customization. The responsibility of these processes is to compile, configure and coordinate the whole network, including the services, according to the data entered by the users. Hence, these processes mainly relate to the Network Model and the Interdisciplinary Service Model. Once a service is ready for use, the third part of the B2(B2B) support level defines all 13 functionalities in detail. The aim of the modelling task within this area was the specification of the content and operation of each functionality as well as the information exchange and the interdependencies between different functionalities. The methods to be developed (calculations, logics etc.) and the related software modules were identified.
5 Interdisciplinary Service Model and Network Model

The FLUID-WIN B2(B2B) model aims to flexibly support service processes attached to a B2B network with low customization effort. For this purpose, generalized models as well as company- and service-specific models are necessary. In order to establish a general, re-usable basis for the services in the FLUID-WIN Platform, an Interdisciplinary Service Model is defined that specifies
workflow templates which implement the functionalities and the parameters that can be adjusted with these templates.
Fig. 5. Relation between the B2(B2B) Model, the Interdisciplinary Service Model and the Network Model
The instantiation, parameterization, and connection of the workflow templates is conducted with the support of the Interdisciplinary Service Modeller. This in turn results in the definition of operating services, which are used by a Service Engine to drive the B2(B2B) business processes in the FLUID-WIN Platform (figure 5). The service workflows are implemented without any relationship to the companies; they are therefore templates which form the Interdisciplinary Service Model and need to be customized before they can be used in operation. In order to establish concrete B2(B2B) services, the Network Model with information about the network members is required (e.g. company name, location, users' roles and access rights, interoperability parameters, etc.). This information is static, i.e. it does not depend upon the services that the company uses or will use in the future. In order to operate on the platform, workflow templates are instantiated and configured with respect to the involved companies as well as with data specific to the service. The Interdisciplinary Service Modeller is used to enter this information.
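To make the instantiation step concrete, the following Java sketch shows how a workflow template might be combined with static Network Model data and service-specific parameters; all class and method names are hypothetical, not the actual FLUID-WIN interfaces:

```java
import java.util.Map;

// Hypothetical illustration: a workflow template bound to concrete
// network members yields an operating service for the Service Engine.
class NetworkMember {
    String companyName, location, role;       // static Network Model data
    NetworkMember(String name, String loc, String role) {
        this.companyName = name; this.location = loc; this.role = role;
    }
}

class WorkflowTemplate {
    String serviceType;                       // e.g. "Transportation"
    WorkflowTemplate(String type) { this.serviceType = type; }

    // Bind the company-independent template to concrete companies
    // and to data specific for the service.
    OperatingService instantiate(NetworkMember provider, NetworkMember client,
                                 Map<String, String> serviceParameters) {
        return new OperatingService(serviceType, provider, client, serviceParameters);
    }
}

class OperatingService {                      // what the Service Engine executes
    OperatingService(String type, NetworkMember provider, NetworkMember client,
                     Map<String, String> parameters) { /* configured workflow */ }
}
```

The point of the split is visible here: the template itself never mentions a company, while the static member data can be entered once and reused for every service the company joins.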
6 Summary and Outlook

The area addressed by FLUID-WIN is one where companies still have significant room for cost improvements. For example, although transport costs are still rather low, logistic processes and the related information processing represent
a considerable and growing factor. Performance indicators of a supply chain should be extended beyond the traditional ones like availability of goods, low inventory, physical transportation cost and IT investment. More emphasis should be put on low coordination and connectivity costs, improved reactivity to unpredicted changes, and transparency of selected information while hiding information from unauthorized partners. The FLUID-WIN B2(B2B) model and the supporting platform constitute a significant step forward in optimizing the synchronisation of money flow and delivery.

This paper highlights that supply chain control platforms for multi-disciplinary B2(B2B) networks are a good option to increase flexibility and reduce cost, and that the most efficient way to create such a coordination service is Business Process Modelling, which can be used very efficiently in distributed environments such as supply networks. Differently styled and structured models can be used in different development phases for specific purposes (e.g. gaining information from field studies; structuring service workflows and their relation to the supporting B2(B2B) platform; setting up templates for service configuration). Within modelling, SCOR is a good basis for defining the terminology and the basic supply network model elements, and compatibility with SCOR is advantageous for more widely accepted templates. Methods to combine models enable Europe-wide and even transcontinental collaboration while designing and optimizing supply chains.
Acknowledgements

This work has been partly funded by the European Commission through the IST Project FLUID-WIN (IST-2004-027083). The work reported here has involved all project partners: Joinet (Italy), Régens Information Technology (Hungary), AcrossLimits (Malta), Lombardini (Italy), TS Motory (Slovakia), Fundación Labein (Spain), Technical University of Kosice (Slovak Republic), mb air systems (UK), Tecnicas de calentamiento (Spain) and ITW Metalflex (Slovenia).
References

[1] Greiner, U.; Lippe, S.; Kahl, T.; Ziemann, J.; Jäkel, F.-W. (2007) Designing and Implementing Cross-Organizational Business Processes – Description and Application of a Modelling Framework. In: Doumeingts, G.; Müller, J.; Morel, G.; Vallespir, B. (eds.): Enterprise Interoperability – New Challenges and Approaches. London: Springer, pp. 137-147
[2] IEEE (2007) IEEE Standard computer dictionary: A compilation of IEEE standard computer glossaries. Institute of Electrical and Electronics Engineers, New York
[3] Mertins, K.; Jochem, R. (1999) Quality-oriented design of business processes. Kluwer Academic Publishers, Boston
[4] Rabe, M.; Mussini, B. (2005) Analysis and comparison of supply chain business processes in European SMEs. In: European Commission (ed.): Strengthening competitiveness through production networks – A perspective from European ICT research projects in the field 'Enterprise Networking'. Luxembourg: Office for Official Publications of the European Communities, pp. 14-25
[5] Rabe, M.; Weinaug, H. (2005) Methods for Analysis and Comparison of Supply Chain Processes in European SMEs. 11th International Conference on Concurrent Enterprising (ICE), München, pp. 69-76
[6] Rabe, M.; Weinaug, H. (2007) Distributed analysis of financial and logistic services for manufacturing supply chains. In: Pawar, K.S.; Thoben, K.-D.; Pallot, M. (eds.): Concurrent innovation: An emerging paradigm for collaboration & competitiveness in the extended enterprise. Proceedings of the 13th International Conference on Concurrent Enterprising (ICE'07), Sophia Antipolis (France), pp. 245-252
[7] Rabe, M.; Gocev, P.; Weinaug, H. (2006) ASP supported execution of multi-tier manufacturing supply chains. In: Proceedings of the International Conference on Information Systems, Logistics and Supply Chain ILS'06, Lyon (France), CD-ROM Publication
[8] SCC (2006) Supply-Chain Operations Reference Model. Supply-Chain Council, http://www.supply-chain.org, visited 10.11.2006
[9] VCG (2007) Value Reference Model. Value Chain Group, http://www.value-chain.org/, visited 08.11.2007
[10] VDI (2005) Datenaustausch per EDV läuft oft mangelhaft. VDI Nachrichten No. 24, 17th June 2005, p. 24
[11] XCBL (2007) XCBL, XML Common Business Library. www.xcbl.org/, visited 30.05.2007
[12] Rabe, M.; Mussini, B. (2007) ASP-based integration of manufacturing, logistic and financial processes. In: XII. Internationales Produktionstechnisches Kolloquium, Berlin, 11.-12. October 2007, pp. 473-487
[13] Rabe, M.; Mussini, B.; Weinaug, H. (2007) Investigations on Web-based integrated services for manufacturing networks. In: Rabe, M.; Mihok, P. (eds.): New Technologies for the Intelligent Design and Operation of Manufacturing Networks. IRB, Stuttgart
Platform Design for the B2(B2B) Approach

Michele Zanet, Stefano Sinatti
Abstract. Business-to-business or "B2B" is a term commonly used to describe the transaction of goods or services between businesses, enabled through a platform that allows for the interaction between customers and suppliers. Today, the new challenge is the integration of financial and logistics services into the business relationship without having to install thousands of peer-to-peer interfaces. The authors follow an approach that introduces a new level of business, which has been called B2(B2B). The purpose of this paper is to briefly describe the design of the FLUID-WIN project that targets this new approach, and to sketch the components to be developed in order to integrate the activities between significantly different business entities.

Keywords: Decentralized and evolutionary approaches to interoperability, Design methodologies for interoperable systems, Interoperability of E-Business solutions, Interoperable inter-enterprise workflow systems, Tools for interoperability
1 Introduction

The FLUID-WIN project aims to provide a web platform offering financial and logistic services to a manufacturing network that already uses a B2B platform for supply chain management [1]. This paper presents a particular aspect of this project, namely the design of the FLUID-WIN platform. From a technological point of view, the platform offers the management of an information flow connected to a material flow and the associated financial and logistic flows. Each flow is related to a series of activities that, without the support of a platform such as that offered by FLUID-WIN, would require the installation of a series of one-to-one interfaces for a single report. Every member of the network would require as many interfaces as it has partners in the network, and this level of cost is unacceptable.
The significant progress that FLUID-WIN implies involves switching from a one-to-one network to a new model where the platform is the only channel of communication, through which it will be possible to efficiently implement the exchange of information between the three main domains: manufacturing, logistics and finance.
Fig. 1. Information categories exchanged between the platform and the sectors
Therefore, the main objective of FLUID-WIN is to enable interaction on a single platform among manufacturing B2B network providers and logistical and financial service providers (Fig. 1), applying the new B2(B2B) approach [2]. This paper was prepared before the project was completed, while the detailed platform design was still in progress. Nevertheless, the software architecture has been defined, and the components that implement the platform are the Network Modeler, User Interface, Interdisciplinary Modeler, and Service Engine. In the following, the architecture of the platform is described, the base technologies used are characterized, and the software components of the platform are explained.
2 Architecture Overview

A logical scheme of the platform is depicted in Fig. 2. The general architecture is composed of the following modules:

- Network Modeler
- Interdisciplinary Modeler
- User Interface
- Service Engine
The first two components are designed to model the entities involved in the communication and the rules by which they interact. The User Interface and the Service Engine components are designed to manage certain documents and events within the network.
Fig. 2. The FLUID-WIN platform
In addition to the platform, another set of software components called "gateways" is part of the FLUID-WIN architecture. The gateways implement domain-dependent communication. Thus, the three gateways relate to the FLUID-WIN domains manufacturing (B2B), logistics and finance. Gateways have to face and solve the interoperability challenges of the FLUID-WIN approach. There are two major levels of interoperability:

- Internal interoperability concerns the communication among the FLUID-WIN modules (for instance, communication between the FLUID-WIN platform and the logistic gateway).
- External interoperability concerns the communication between the different FLUID-WIN modules and the legacy systems of the FLUID-WIN users.

The FLUID-WIN interoperability is realized by the implementation of gateways and adapters that work between the FLUID-WIN Platform and the external application domains.
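The adapter idea can be sketched as a minimal Java interface. The names below are hypothetical and only illustrate the separation between platform-internal messages and legacy formats; the real gateway contracts are project-internal:

```java
// Hypothetical adapter bridging the FLUID-WIN Platform and a local legacy system.
// Names are invented for illustration; this is not the actual gateway API.
public interface LegacySystemAdapter {

    /** Translate a platform message into the legacy system's format and deliver it. */
    void deliverToLegacySystem(String platformMessageXml) throws AdapterException;

    /** Poll the legacy system and return pending messages, converted to platform format. */
    java.util.List<String> fetchFromLegacySystem() throws AdapterException;
}

class AdapterException extends Exception {
    AdapterException(String message, Throwable cause) { super(message, cause); }
}
```

One adapter implementation per legacy system keeps the domain gateway itself independent of any particular ERP or banking back end.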
3 Technologies for Development and Technologies for Design

The choice of technology for the platform development has a strong impact on interoperability. Especially for the internal interfaces, we must therefore consider:

- Abstracted: the service is abstracted from the implementation
- Published: a precise, published specification of the functionality of the service (not of the implementation of the service)
- Formal: a formal contract between endpoints places obligations on provider and consumer
- Relevant: functionality is presented at a granularity recognized by the user as a meaningful service
The modelling of these properties is possible through the technology offered by Web services. Thus, the platform itself was developed with the following instruments:

- Eclipse 3.2, with the Web Tools Platform 1.5 development environment [3]
- J2EE 1.4, which fully supports the use of Web services [4]

For the deployment, the following tools have been selected:

- The FLUID-WIN Platform will be implemented in Java, Enterprise Edition 1.4.
- The J2EE application server will be BEA WebLogic Server 9.2 [5].
- The application will run on a server with Linux or Sun Solaris as OS.
- The B2B gateway and the FLUID-WIN Platform have a DBMS layer (IBM Informix).
Thus, the platform is a Web service which exposes a number of methods available through gateways that are mandated to route messages to the outside world. Fig. 3 shows a conceptual scheme of the protocols and tools used for communication among network entities.
Fig. 3. Platform infrastructure
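Since the platform exposes its methods as a Web service under J2EE 1.4, an interface would typically be declared in the JAX-RPC style, i.e. extending java.rmi.Remote with every operation throwing RemoteException. The operation names below are invented for illustration and are not the actual platform API:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// JAX-RPC style service endpoint interface (J2EE 1.4).
// Operation names are hypothetical; the real platform methods are project-specific.
public interface FluidWinPlatformEndpoint extends Remote {

    /** Submit a domain document (e.g. a logistic order) as XML; returns a document id. */
    String submitDocument(String documentXml) throws RemoteException;

    /** Query the current state of a document handled by the Service Engine. */
    String getDocumentState(String documentId) throws RemoteException;
}
```

Declaring the contract this way keeps the service abstracted, published and formal in the sense listed above, independently of the implementation behind the gateway.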
For the design tasks at the FLUID-WIN consortium partners, the Integrated Enterprise Modelling (IEM) method [6] was used to define and exchange the business processes. The Unified Modelling Language (UML) [7] is used to define and describe workflow protocols for the exchange of messages between the gateways, the platform and the outside world. The development environment for UML is Enterprise Architect, an advanced software modelling tool for UML 2.1 [8].
4 Components for Modelling

The software components for modelling are the Network Modeler and the Interdisciplinary Service Modeller. These two components have the main objective of defining the parties involved in the B2(B2B) network, the parties involved in specific workflows, and the "rules of the game" applied when operating the platform's functionalities. In particular, the Network Modeler is used to model the players in the B2(B2B) process, assigning roles and defining constraints that will drive the collaborative activities. The Network Modeler is expected to have structural similarities with the modelling engines of the SPIDER-WIN project, which facilitated the exchange of relevant manufacturing execution information along the supply chain [9, 10]. Therefore, the partners expect that they can re-use the structural parts of the specification from the SPIDER-WIN project, while the content parts will need to be specified anew, leading to the development of a completely new modeller.
The Interdisciplinary Service Modeller has been conceived for the modelling of the domain concepts to be handled in a given network context, where this context is defined through the Network Modeller. The Interdisciplinary Service Modeller will make it possible to define the interdisciplinary process activities and to map them to the process elements and events that trigger the actions managed by the Service Engine.

Another fundamental aspect to be considered is the modelling of the processes through which certain documents can be obtained. This is possible through the so-called Workflow and Report Templates that are included within the Interdisciplinary Service Modeler. The above-mentioned processes are defined by a number of states and methods implemented in the platform, and used by workflows whose final output will be a document (e.g. logistic order, request for quotation, quality measurement document, etc.). A Workflow Template is composed of a set of states, transitions, rights, outgoing and incoming events, functions, notifications, constraints and changes to the database. Through customizable external input it is possible to obtain a workflow as shown in Fig. 4, which is an example of a Workflow Template for a Logistic Order, specifying the possible states and the transitions between these states.
Fig. 4. Logistic Order – Template Workflow (transitions annotated with functions, incoming events, rights and outgoing events; diagram not reproduced)
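The states and transitions of such a template can be pictured as a small state machine. Since the original diagram is not reproduced here, the state names in the following Java sketch are invented for illustration:

```java
// Illustrative state machine for a Logistic Order workflow template.
// State names are hypothetical; in the full template each transition would
// also check rights, fire notifications and update the database.
enum OrderState { CREATED, SENT, ACCEPTED, IN_TRANSPORT, DELIVERED, CLOSED }

class LogisticOrderWorkflow {
    private OrderState state = OrderState.CREATED;

    void fire(OrderState target) {
        if (!isAllowed(state, target))
            throw new IllegalStateException(state + " -> " + target);
        state = target;            // only the state change is sketched here
    }

    private boolean isAllowed(OrderState from, OrderState to) {
        switch (from) {
            case CREATED:      return to == OrderState.SENT;
            case SENT:         return to == OrderState.ACCEPTED;
            case ACCEPTED:     return to == OrderState.IN_TRANSPORT;
            case IN_TRANSPORT: return to == OrderState.DELIVERED;
            case DELIVERED:    return to == OrderState.CLOSED;
            default:           return false;
        }
    }
}
```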
5 Components for Managing

The components for management are the Service Engine and the User Interface. The main task of these components is the management of documents and events within the network. The Service Engine manages all messages that are exchanged with B2B and legacy applications, storing and updating the central repositories, performing the required elaboration to propagate message data through the single-discipline domains and towards interested network players. Moreover, the Service Engine collaborates with software agents in charge of detecting events and transporting messages, and it defines the basic routing elements for control flow semantics, based on XML/XSL.
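A minimal sketch of such XML/XSL-based message processing is possible with the standard javax.xml.transform API, which is part of J2EE 1.4; the stylesheet name below is hypothetical:

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;
import java.io.StringReader;
import java.io.StringWriter;

// Minimal sketch: transform an incoming message with an XSL stylesheet
// before routing it onwards. "routing-rules.xsl" is a hypothetical name.
public class MessageRouter {

    public String transform(String messageXml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("routing-rules.xsl")));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(messageXml)),
                    new StreamResult(out));
        return out.toString();
    }
}
```

Keeping the routing logic in a stylesheet rather than in compiled code is what allows the control flow semantics to be reconfigured without redeploying the engine.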
The User Interface allows authorized users to interact through a series of actions granted to them by their partners. An example of the first draft User Interface is shown in Fig. 5.
Fig. 5. Example User Interface
6 Summary

The step from the "classical" B2B approach to the new B2(B2B) concept requires the development of a platform that centralizes services and therefore communications between the various entities. Therefore, a technology is mandatory that offers a single language-independent development environment. The project has selected Web services for this purpose. The successful exploitation of the B2(B2B) concept requires:

- a business process model of the B2(B2B) concept, which forms the base of the implementation and supports the customization in a concrete network,
- an Interdisciplinary Service Model that defines the Workflow Templates implementing the states and transitions on the platform,
- a Network Model that defines the players in a concrete platform,
- a Service Engine that operates the workflow,
- a B2B gateway that connects the platform to a B2B platform (which then connects to a multiplicity of manufacturers),
- finance and logistic gateways that allow for the attachment of specific IT systems from the service domains, and
- user interfaces to directly access the platform, taking into account that the majority of services will be used without direct access to the platform, as information is exchanged through the gateways and platforms among the existing legacy systems.
References

[1] FLUID-WIN: Finance, Logistic and Production Integration Domain by Web-based Interaction Network. European Project FP6-IST-027083, www.fluid-win.de
[2] Rabe, M.; Mussini, B.: ASP-based integration of manufacturing, logistic and financial processes. In: XII. Internationales Produktionstechnisches Kolloquium, Berlin, 11.-12. October 2007, pp. 473-487
[3] Eclipse Foundation, Eclipse 3.2: http://www.eclipse.org/, visited 05.11.2007; documentation at http://help.eclipse.org/help32/index.jsp
[4] Sun Microsystems, J2EE 1.4, 2004: http://java.sun.com/j2ee/1.4/docs/tutorial/doc/
[5] BEA WebLogic Server 9.2: http://edocs.bea.com/wls/docs92/index.html, visited 05.11.2007
[6] Mertins, K.; Jochem, R.: Quality-oriented design of business processes. Kluwer Academic Publishers, Boston, 1999
[7] UML 2.1: http://www.uml.org/, visited 05.11.2007
[8] Enterprise Architect: http://www.sparxsystems.com.au/, visited 05.11.2007
[9] Rabe, M.; Mussini, B.; Gocev, P.; Weinaug, H.: Requirements and potentials of new supply chain business processes in SME networks – Drivers for solution developments. In: Cunningham, P.; Cunningham, M. (eds.): Innovation and the Knowledge Economy: Issues, Applications, Case Studies. Amsterdam: IOS Press 2005, Vol. 2, pp. 1645-1652
[10] Rabe, M.; Gocev, P.; Weinaug, H.: ASP Supported Execution of Multi-tier Manufacturing Supply Chains. In: Proceedings of the International Conference on Information Systems, Logistics and Supply Chain ILS'06, Lyon (France), 14.-17. May 2006, CD-ROM Publication
[11] FLUID-WIN consortium: New Business Processes Specifications. Deliverable D13, 28.09.2007, www.fluid-win.de
Trust and Security in Collaborative Environments

Peter Mihók, Jozef Bucko, Radoslav Delina, Dana Palová

Faculty of Economics, Technical University Košice, Němcovej 32, 040 01 Košice, Slovakia
{peter.mihok, jozef.bucko, radoslav.delina, dana.palova}@tuke.sk
Abstract. It is often stated in the literature that trust is of critical importance in creating and maintaining information sharing systems. The rapid development of collaborative environments over the Internet has highlighted new open questions and problems related to trust in web-based platforms. The purpose of this article is to summarize how trust can be considered in collaborative environments. Partial results of the field studies of two European IST projects, FLUID-WIN and SEAMLESS, are presented. Identity management problems and trusted operational scenarios are treated.

Key words: trust, security, information sharing, collaboration, web-based platform, identity management, trusted scenario
1 Introduction

Trust is considered a basic success factor for collaboration. Modern ICT-based collaboration environments allow companies to realize a number of competitive advantages by using their existing computer and network infrastructure for the collaboration of persons and groups. The collaborating actors (manufacturers, suppliers, customers, service providers) must feel confident that their transaction processes are available, secure and legal. Trust building mechanisms vary according to their complexity and acceptability, especially among companies with low IT skills. Appropriate selection and user-friendly implementation can enhance trust and the efficient use of web-based business platforms. In this contribution we examine trust and trust building mechanisms in different contexts.

Organizations and projects are looking for ways to optimize their supply chains in order to create a competitive advantage. Consequently, the same organizations are modifying their business processes to accommodate the demands that information sharing imposes. Information sharing can reduce the cost of failure and operational cost. Furthermore, it can improve scheduling quality and the efficiency of current resources. It also provides intangible benefits such as
improved quality with increased customer and shareholder satisfaction. However, integrating and sharing information in inter-organizational settings involves a set of complex interactions. The organizations and institutions involved must establish and maintain collaborative relationships in which information and data of a sensitive and critical nature are transferred outside the direct control of the organization. The sharing processes can involve significant organizational adaptation and maintenance. Trust and security mechanisms are often stated in the literature as being of critical importance in the creation and maintenance of an information sharing system.

In the past decades there has been a rapid increase in information sharing systems based on different electronic services (e-services) offered through web-based platforms. Trust and security aspects in the development of such platforms are at the center of the research activities of European FP6 and FP7 projects, e.g. the networks of Living Labs and Digital Ecosystems and the projects SECURE, SERENITY, SWAMI, HYDRA, etc. In this paper we restrict our attention to two types of collaborative environments: electronic marketplaces and manufacturing networks. Our research is based on our results and experiences in the FP6 IST projects SEAMLESS and FLUID-WIN.

The SEAMLESS project studies, develops and tests an embryo of the Single European Electronic Market (SEEM) network, where a number of e-registries are started in different countries and sectors. The SEEM vision is of a web-based marketplace where companies can dynamically collaborate without cultural, fiscal and technological constraints. The FLUID-WIN project targets business-to-business (B2B) manufacturing networks and their interactions with logistics and financial service providers. This cross-discipline service integration concept is called business-to-(B2B), or B2(B2B) for short [19]. Within that context the project aims to develop an innovative platform which can integrate data and transfer them among all the various partners, in order to improve competitiveness and to make the business processes of the integrated network as efficient as possible. After introducing the basic concepts related to trust and security, we deal with the problem of secure access to the platforms and the additional trust mechanisms considered within the projects.
2 Basic Concepts

In the context of collaboration it is important to differentiate between trust and security. The basic concepts and terms are defined as a basis for the further discussion. Trust is a seemingly very abstract factor and, as a complex notion synonymous with confidence, it has a lot of meanings depending on the context in which it is considered. According to WordNet [28], the word trust relates to:

- reliance, certainty based on past experience
- allowing without fear
- believing, being confident about something
- the trait of believing in the honesty and reliability of others
- confidence, a trustful relationship
Another definition describes trust as confident reliance: "We may have confidence in events, people, or circumstances, or at least in our beliefs and predictions about them, but if we do not in some way rely on them, our confidence alone does not amount to trust. Reliance is a source of risk, and risk differentiates trusting in something from merely being confident about it. When one is in full control of an outcome or otherwise immune from disappointment, trust is not necessary" [27].

Trust is usually specified in terms of a relationship between a trustor, the subject that trusts a target entity, and a trustee, i.e. the entity that is trusted. Trust forms the basis for allowing a trustee to use or manipulate resources owned by a trustor, or may influence a trustor's decision to use a service provided by a trustee. Based on the survey of Grandison and Sloman [13], trust in the e-services context is defined as "the quantified belief by a trustor with respect to the competence, honesty, security and dependability of a trustee within a specified context". Likewise, distrust is defined as "the quantified belief by a trustor that a trustee is incompetent, dishonest, not secure or not dependable within a specified context". The level of trust has an approximately inverse relationship to the degree of risk with respect to a service or a transaction. In many current business relationships, trust is based on a combination of judgment or opinion based on face-to-face meetings or recommendations of colleagues, friends and business partners. However, there is a need for a more formalized approach to trust establishment, evaluation and analysis to support e-services, which generally do not involve human interaction. Comprehensive surveys on the meaning of trust can be found e.g. in [17] and [13] as well as in the book on Trust in E-Services [24].

Security, in contrast, is the wish to be free from danger; the goal is that "bad things don't happen". Computer security is the effort to create a secure computing platform, designed in such a way that users or programs can only perform actions that have been allowed. This involves specifying and implementing a security policy. The actions in question are often reduced to operations of access, modification and deletion. Schneier [21] describes security as being "like a chain; the weakest link will break it. You are not going to attack a system by running right into the strongest part. You can attack it by going around that strength, by going after the weak part, i.e., the people, the failure modes, the automation, the engineering, the software, the networks, etc."

In the context of information-sharing computer systems, everything reduces to access to appropriate information. Provision (or disclosure) of information is the key element. A simple transfer of data takes place between two parties, a sender and a receiver, and includes the following key steps: preparation of the data; transfer of a copy of the prepared data; use of the received copy. More complex transactions can be composed from such simple data transfers.

Dependability is the ability to avoid failures that are more frequent or more severe than is acceptable (to avoid wrong results, results that are too late, no results at all, or results that cause catastrophes). The attributes of dependability are:
- Availability – readiness for correct service
- Reliability – continuity of correct services
- Safety – absence of catastrophes
- Integrity – absence of improper results
- Maintainability – ability to undergo modifications and repairs
Security can be defined [14] as the combined characteristics of confidentiality (i.e., the absence of unauthorized disclosure of information), availability to conduct authorized actions, and integrity (i.e., the absence of unauthorized system alterations). Security and dependability overlap, and both are required as a basis for trust. Unfortunately and most confusingly, the terms dependability and security are sometimes used interchangeably or, else, either term is used to imply their combination. In fact, because security and dependability are distinct but related and partially overlapping concepts, the term trustworthiness is being increasingly used to denote their combination. The main technical mechanisms that have a strong influence on trust in network-based systems include:

- identity management
- access control
- data encryption and security
Identity management systems provide tools for managing partial identities in the digital world. Partial identities are subsets of attributes representing a user or company, depending on the situation and the context. Identity management systems must support and integrate techniques for both anonymity and authenticity in order to control the liability of users. Access control is the enforcement of specified rules that require the positive identification of the users, the system and the data that can be accessed. It is the process of limiting access to resources or services provided by the platform to authorized users, programs, processes or other systems according to their identity authentication and associated privilege authorization. Finally, data encryption and security are related to cryptographic algorithms, which are commonly used to ensure that no unauthorized user can read and understand the original information.

The concept of asymmetric cryptography (also called public key cryptography) was described for the first time by Diffie and Hellman [6]. In contrast to symmetric cryptography, in which the same secret key is used for encryption and decryption, we now have one public key eP (encryption key) and one private key dP (decryption key) for each person P. While the public key eP can be published to the whole world, the private key dP is to be treated as a secret that only person P knows. An important characteristic of such a cryptographic system is that it is computationally infeasible to determine the private key given the corresponding public key. The advantage of asymmetric cryptography is the enormously reduced effort for key management; a disadvantage is its speed. Asymmetric cryptography can serve as the basis for the digital signature. The Public Key Infrastructure (PKI) provides the identification of a public key with a concrete person via a certificate. The PKI is the system of technical and administrative arrangements associated with issuing, administering, using and
revoking public key certificates. The PKI supports reliable authentication of users across networks and can be extended to distributed systems that are contained within a single administrative domain or within a few closely collaborating domains.
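The signing and verification principle described above can be demonstrated with the standard java.security API. This is a generic sketch, not the concrete mechanism used by any of the platforms discussed:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Generic demonstration of asymmetric signing with the standard Java API.
public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();      // private key dP, public key eP

        byte[] document = "order #4711".getBytes("UTF-8");  // sample data

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());        // only the owner knows dP
        signer.update(document);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());     // eP may be published freely
        verifier.update(document);
        System.out.println("signature valid: " + verifier.verify(sig));
    }
}
```

The verification step needs only the public key, which is exactly what makes a PKI-distributed certificate sufficient for checking signatures without ever exposing the private key.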
3 Trust and Security on Web-based Platforms

In an open and unknown marketplace with a high number of unknown participants, assurance and trust are difficult but very important. There is a growing body of research literature dealing with online trust, in which e-commerce is one prominent application. Several studies contend that e-commerce cannot fulfil its potential without trust (e.g. [8], [11], [15], [20]). Lee and Turban [16] highlight lack of trust as the most commonly cited reason in market surveys why consumers do not shop online. In an open consultation on "Trust barriers for B2B e-marketplaces" [7] conducted by the Enterprise DG Expert Group in 2002, several important barriers were identified. According to the report, the most important trust barriers are issues regarding technology (security and protection), the absence of trust marks and dispute resolution, online payment support, the lack of relevant information about partners and products, and contract and standardization issues. A trust building process must be set up to resolve these issues.

Trust is usually conceptualised as a cumulative process that builds on several successful interactions [18]. Each type of process increases the perceived trustworthiness of the trustee, raising the trustor's level of trust in the trustee [3]. It is not known exactly which trust-building processes are relevant in an e-commerce context. It is suggested that, in this setting, trust building is based on the processes of prediction, attribution, bonding, reputation and identification [3]. Reputation has a very high relevance in a trust-building process on e-commerce markets [1]. According to the classification of Chopra and Wallace, identification-based trust refers to one party identifying with the other, for example in terms of shared ethical values. Identification builds trust when the parties share common goals, values or identities. In e-commerce, these attributes may perhaps relate to corporate image [2] or codes of conduct. These results are more focused on the impact of trust than on the factors which build trust. According to several research activities, research on the significance and acceptance of trust building mechanisms is still missing and is necessary for future development in this field. This absence has been examined in the SEAMLESS project [22]. The results are presented in Deliverable D1.2 "Trusted Operational Scenarios" of the project, see also [5].

Though operating within a closed supply chain system, locally spread information technology destinations (users at manufacturers, suppliers and financial service institutions) need to be linked, which brings up the need for trust, privacy and security. It is to be expected that security is of at least equal importance as in an open system, as limitation of access plays a vital role. There are several trust and security best practices scattered throughout the Internet, and the material is constantly updated daily, if not hourly, based on the latest threats and
vulnerabilities. Security standards are not "one size fits all". Responsible, commercially reasonable standards vary, depending on factors such as company size and complexity, industry sector, sensitivity of data collected, number of customers served, and use of outside vendors. Security standards exist for several types of transactions conducted, and new ones are on the way all the time. A further checklist item for meeting trust and security requirements is to meet local legislation in terms of data protection and privacy regulation. Financial transactions need to meet local and also foreign standards if they are to be accepted by a provider. With respect to the FLUID-WIN project and its platform, mainly two security and trust building mechanisms can be differentiated:

1. mechanisms based on workflow design, policies and contractual issues, and
2. technical solutions ensuring a safe login and data exchange.
Both mechanisms are strongly related and build upon each other. Based upon the results of a mail-based survey, the following ten key success factors for an established information sharing system were determined [10]:

1. Centralized Information Sharing Control
2. Maintain and Update Information Sharing Rules
3. Significant Exchange of Information
4. Defined Use of Information
5. Collaboration with Suppliers
6. Cooperative Competition
7. End-to-End Connectivity
8. Formed Supply Alliances
9. Replace Traditional Communication with IT
10. Share Frequently with Suppliers
Trust, which did not occur as a factor among them, is replaced by contractual agreements defining the limitations on the usage of the transferred information. However, the challenge within the FLUID-WIN idea is the high number of actors from different domains as well as their technical connection within the B2(B2B) concept. For secure access to the FLUID-WIN Platform it could be convenient to use digital signatures (private keys), which are stored on a secure device (chip card, USB key) and protected by other safety elements (PIN and static password). The existence of a pair of digital signature keys is taken for granted, the first one for access and encryption and the second one for signing. The strength of this form of security is the fact that the digital signature method cannot be broken by brute force at the present time. Its weakness is insufficient knowledge of the method and the infringement of the safety rules related to the physical security of the digital signature storage site and of the safety elements which allow its operation. Therefore, it is necessary to work out a security policy in which the method of usage, the security principles and the risks of improper use of this method are exactly specified. The digital world uses the same principles for identifying the electronic signature data (in the case of a digital signature, the public key):
- Uniqueness of the link – based on an agreement about a public key between the key owner and the verifier of the signature. This binding is unequivocal in the opinion of both parties. A duly concluded agreement protects both parties and is one of the arguments in case of dispute. This application of digital signatures is possible in so-called closed systems. This method is currently used in electronic banking services.
- Triangle of trust – in open systems, the owner of the key often does not have the possibility to meet the verifier of the signature to make an agreement concerning the relevant public key. In this case it is suitable to use the third-party principle.
- Certification – a dedicated authority assures the unequivocal identification of the public key with a concrete person (its owner), on the basis of a certified application by the owner. In this application the basic identification data and the relevant public key are listed.
Therefore, a necessary condition for the active employment of electronic signature technology, which allows the transition to electronic document interchange in open systems, is the existence of certification authorities. The basic services defined by the e-signature mechanisms include:

- Registration services – contact with the certificate applicant, verification of data conformity (data in the application form for the issue of a certificate and data concerning the identity of the applicant).
- Issuing of certificates – on the basis of an agreement with an applicant and the verification of all necessary data, the certificate for the public key is issued.
- Revocation of certificates and publication of the list of cancelled certificates – in case an unauthorized person obtains the private key, the certificate must be cancelled before its expiry date. The certification authority is obliged to keep and publish lists of valid and cancelled certificates.
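Handling a certificate issued by such an authority can be partly illustrated with standard Java classes. The following minimal sketch only parses a certificate and checks its validity period; a full check would also verify the CA signature and consult revocation lists. The file name is hypothetical:

```java
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Minimal check of an X.509 certificate: parse it and verify its validity period.
public class CertificateCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert = (X509Certificate) cf.generateCertificate(
                new FileInputStream("partner-cert.pem"));   // hypothetical file name
        cert.checkValidity();    // throws if the certificate is expired or not yet valid
        System.out.println("Subject: " + cert.getSubjectDN());
    }
}
```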
However, financial service providers have their own security policies. Larger financial institutions, especially, are hard to convince to adapt their policies as a precondition for using the FLUID-WIN Platform. Rather, it is likely that the FLUID-WIN Platform has to accept the policies of the individual financial service providers, even if this means that FLUID-WIN has to implement a set of different security mechanisms depending on the requirements of each financial service provider.
4 Conclusion

E-services like web-banking, web-shopping, web-auctions, e-government, e-health, e-manufacturing and e-learning are becoming part of everyday life for citizens everywhere. Since trust is a basis for deciding to use a service, lack of trust is becoming a major impediment. Filling the gap between identities and their level of trust is one of the eight major issues in developing identity management for the next generation of distributed applications and use of e-services [24]. A lot of interesting questions
and problems are considered in the recent publications [4], [5], [10], [12] and can be found in the public deliverables of the projects [9], [22], [23], [25] and [26].
Acknowledgements

This work has been partly funded by the European Commission through the IST Project SEAMLESS: Small Enterprises Accessing the Electronic Market of the Enlarged Europe by a Smart Service Infrastructure (No. IST-FP6-026476) and the IST Project FLUID-WIN: Finance, logistics and production integration domain by web-based interaction network (No. IST-FP6-027083). The authors wish to acknowledge the Commission for its support. We also wish to express our gratitude and appreciation to all the SEAMLESS and FLUID-WIN project partners for their contributions during the development of the research presented in this paper.
References

[1] Atif, Y.: Building Trust in E-Commerce. IEEE Internet Computing, Jan-Feb (2002) 18-24
[2] Ba, S., Pavlou, P.A.: Evidence of the effect of trust building technology in electronic markets: Price premiums and buyer behaviour. MIS Quarterly, Vol. 26, No. 3 (2002) 243-268
[3] Chopra, K., Wallace, W.: Trust in Electronic Environments. Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS) (2003)
[4] Delina, R., Azzopardi, J., Bucko, J., Frank, T., Mihok, P.: Financial Services in Web-based Platforms. In: Managing Worldwide Operations and Communications with Information Technology, Proc. IRMA Conference Vancouver (Canada), ed. Koshrow M. (2007) 1273-1274
[5] Delina, R., Mihok, P.: Trust building processes on web-based information-sharing platforms. In: Proceedings of the 13th International Conference on Concurrent Enterprising, ICE'2007, eds. K.S. Pawar, K.-D. Thoben and M. Pallot, Sophia Antipolis, France (2007) 179-186
[6] Diffie, W., Hellman, M.: New directions in cryptography. IEEE Transactions on Information Theory (1976). Available at: http://crypto.csail.mit.edu/classes/6.857/papers/diffie-hellman.pdf (visited 11.10.2007)
[7] EU Commission: Open consultation on "Trust barriers for B2B e-marketplaces" – Presentation of the main results (2002). Available at: http://europa.eu.int/comm/enterprise/ict/policy/b2bconsultation/consultation_en.htm (visited 25.05.2007)
[8] Farhoomand, A., Lovelock, P.: Global e-Commerce – Texts and Cases. Prentice Hall (2001)
[9] FLUID-WIN: Finance, logistics and production integration domain by web-based interaction network. FP6 IST STREP 27083 funded by the European Commission. Available at: www.fluid-win.de
[10] Frank, T.G., Mihók, P.: Trust within the Established Inter-Organizational Information Sharing System. In: Managing Worldwide Operations and Communications with Information Technology, Proc. IRMA Conference Vancouver (Canada), ed. Koshrow M. (2007) 132-135
[11] Friedman, B., Kahn, P., Howe, D.: Trust Online. Communications of the ACM, Vol. 43, No. 12 (2000) 34-40
[12] Giuliano, A., Azzopardi, J., Mihók, P., Bucko, J., Ramke, Ch.: Integration of financial services into multidisciplinary Web platforms. To appear in: Ambient Intelligence Technologies for the Product Lifecycle: Results and Perspectives from European Research. IRB, Stuttgart (2007)
[13] Grandison, T., Sloman, M.: A survey of trust in Internet applications. IEEE Communications Surveys and Tutorials, 4(4) (2000) 2-16
[14] IEEE Standard computer dictionary: A compilation of IEEE standard computer glossaries. Institute of Electrical and Electronics Engineers, New York (2007)
[15] Jones, S., Wilikens, M., Morris, P., Masera, M.: Trust requirements in e-business: A conceptual framework for understanding the needs and concerns of different stakeholders. Communications of the ACM, Vol. 43, No. 12 (2000) 81-87
[16] Lee, M., Turban, E.: A Trust Model for Consumer Internet Shopping. International Journal of Electronic Commerce, Vol. 6, No. 1 (2001)
[17] McKnight, D.H., Chervany, N.L.: The Meanings of Trust. MISRC Working Paper 96-04, Management Information Systems Research Center, University of Minnesota (1996)
[18] Nicholson, C., Compeau, L., Sethi, R.: The Role of Interpersonal Liking in Building Trust in Long-Term Channel Relationships. Journal of the Academy of Marketing Sciences, Vol. 29, No. 1 (2001) 3-15
[19] Rabe, M., Mussini, B.: ASP-based integration of manufacturing, logistic and financial processes. In: XII. Internationales Produktionstechnisches Kolloquium, Berlin, 11.-12. October 2007, pp. 473-487
[20] Raisch, W.: The E-Marketplace – Strategies for Success in B2B Ecommerce. McGraw-Hill (2001)
[21] Schneier, B.: Security in the real world: How to evaluate security technology. Computer Security 15/4 (1999) 1-14
[22] SEAMLESS: Small enterprises accessing the electronic market of the enlarged Europe by a smart service infrastructure. FP6 IST STREP 26476 funded by the European Commission. Available at: www.seamless-eu.org
[23] SECURE: Secure environments for collaboration among ubiquitous roaming entities. Available at: http://www.dsg.cs.tcd.ie/dynamic/?category_id=-30 (visited 01.11.2007)
[24] Song, R., Korba, L., Yee, G.: Trust in E-Services: Technologies, Practices and Challenges. Idea Group Publishing (2007)
[25] SWAMI: Safeguards in a world of ambient intelligence, final report (2006). Available at: http://swami.jrc.es (visited 01.06.2007)
[26] TRUSTe: Security guidelines 2.0 (2005). Available at: http://www.truste.org/pdf/SecurityGuidelines.pdf (visited 23.05.2007)
[27] UNMC: Glossary (2007). www.unmc.edu/ethics/words.html (visited 31.05.2007)
[28] WordNet: WordNet search: Trust (2007). Available at: http://wordnet.princeton.edu/perl/webwn?s=trust&sub=Search+WordNet&o2=&o0=1&o7=&o5=&o1=1&o6=&o4=&o3=&h (visited 21.05.2007)
Prototype to Support Morphism between BPMN Collaborative Process Model and Collaborative SOA Architecture Model

Jihed Touzi, Frederick Bénaben, Hervé Pingaud

Centre de Génie Industriel, Ecole des Mines d'Albi-Carmaux, Route de Teillet, 81013 Albi Cedex 9
{Jihed.touzi, Frederick.benaben, Herve.pingaud}@enstimac.fr
Abstract. In a collaborative context, the integration of industrial partners deeply depends on their ability to use a collaborative architecture to interact efficiently. In this paper, we propose to tackle this point under the assumption that the partners of the collaboration respect SOA (Service-Oriented Architecture) concepts. We propose to design such a collaborative SOA architecture according to MDA (Model-Driven Architecture) principles. We aim at using a business model (the needs) to design a logical model of a solution (the logical architecture). The business model is a collaborative business model (in BPMN, at the CIM level), while the logical model is a collaborative architecture model (in UML, at the PIM level). This paper presents the theoretical aspects of this subject, the mechanisms of morphism and the dedicated transformation rules. Finally, we show a prototype of a demonstration tool embedding the transformation rules and applying those principles.

Keywords: transformation, ATL prototype, collaborative process, meta-model, morphism.
1 Introduction

The application of model-driven development facilitates faster and more flexible integration by separating the system description into different levels of abstraction. The global MDA approach shows that it is possible to separate concerns by splitting implementation choices from specifications of business needs. Specifically, the Model Driven Interoperability (MDI) paradigm [1][2][3] proposes to start from an enterprise modelling level, i.e. the Computation Independent Model (CIM) level, defining the collaboration needs of a set of partners, and to reach a Platform Independent Model (PIM) level defining a logical architecture of a collaborative solution. Finally, the Platform Specific Model (PSM) can be generated. The three models are closely connected, and passing from one layer to another must
be facilitated by vertical transformation rules. Previous research works have shown the benefits of this new paradigm. The PIM4SOA project [4] defines a PIM meta-model based on SOA. In this work, it is possible to generate a PIM model from a Processes, Organization, Product, * (POP*) model. The weak point of this work is that there is no description of the needed morphism between the CIM model and the PIM model. Other research works like [5] and [6] focus principally on the identification of the two meta-models (CIM and PIM); the morphism (which contains the definition of the transformation rules) is missing or only briefly described.

In this paper we intend to identify a morphism between a collaborative process model and a collaborative SOA architecture. In our PhD work [3] we are currently developing a prototype that transforms a BPMN collaborative process into a collaborative SOA architecture. The generated UML model can be enriched with additional knowledge about the collaboration (service descriptions, details of exchanged messages, etc.).

This paper is organized as follows. Section 2 defines the morphism between two models, while section 3 defines the meta-models needed for the BPMN-to-collaborative-SOA-architecture transformation. The transformation rules are illustrated in section 4 and, finally, we present in section 5 the architecture of the prototype developed to support the defined morphism.
2 Definition of a Morphism

We aim to establish a morphism between a BPMN collaborative process model and a collaborative SOA architecture model. Fig. 1 shows how the notion of morphism can be represented schematically: A and B represent the source and target models, respectively, and M is the morphism.
Fig. 1. Morphism between A and B models
A morphism allows obtaining a model B from a model A. It is based on the concepts of mapping and transformation [7]. If we consider that a model is composed of a set of elements:

- Mapping is a relation that aims to establish correspondences between the elements of two models, without modification. The definition of a mapping requires the availability of the two models; establishing a mapping first requires defining the meta-models that define the models.
- Transformation is a function that transforms (or modifies) a source model into a target model. The source model is unchanged, and a new model is generated as the result of the transformation.
Mappings can be of different types [7]: a 1-to-1 mapping puts one element of a model in correspondence with exactly one element of the other model. However, there are other cases in which a single element of the source model is mapped to a sub-graph of the second model (1-to-n) or, even, a sub-graph to a sub-graph (m-to-n).
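To make these notions concrete, the following minimal sketch (our own illustration, not part of the original proposal; all type and field names are hypothetical, and it assumes a Java version supporting records) shows one possible way to represent such mappings, where each mapping relates a set of source elements to a set of target elements so that the 1-to-1, 1-to-n and m-to-n cases are covered uniformly:

import java.util.List;
import java.util.Set;

// A model element is identified by the meta-model class it instantiates
// and by its own name (e.g. metaClass = "PartnerTask", name = "Select car").
record ModelElement(String metaClass, String name) {}

// A mapping relates m source elements to n target elements without
// modifying either model; 1-to-1 is the special case m = n = 1.
record Mapping(Set<ModelElement> source, Set<ModelElement> target) {
    boolean isOneToOne() {
        return source.size() == 1 && target.size() == 1;
    }
}

// A morphism bundles all mappings between a source and a target model.
record Morphism(String sourceModel, String targetModel, List<Mapping> mappings) {}

Under this representation, each transformation rule of Section 4 induces one or more such mappings between source and target meta-model elements.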
3 Definition of the Meta-Models
In this section, we define the meta-models needed for the BPMN-to-collaborative-architecture transformation.
3.1 Definition of the BPMN Collaborative Process Meta-Model
The first meta-model is that of the collaborative process. The meta-model of Fig. 2 regroups basic BPMN elements (like gateways, events, message flows…) and other specialized components (like pools or tasks that explicitly refer to collaboration entities). The BPMN formalism aims to support process management for technical and business stakeholders by providing a graphical notation that is rather intuitive and able to represent complex process semantics. As a specialized element, the Collaborative Information System (CIS) pool refers to a mediation actor of the collaboration that offers a set of added-value services, for example: choosing a supplier from a list of suppliers concerned with a customer order, checking payment transactions, etc. The defined meta-model reflects our vision of the collaboration, based on the use of a mediator that helps resolve interoperability issues between partners.
Fig. 2. Collaborative process meta-model (source meta model)
3.2 Definition of the Collaborative SOA Architecture Meta-Model
The collaborative SOA architecture meta-model described in Fig. 3 is close to, and inspired by, the PIM4SOA meta-model [4]. Three packages are proposed, corresponding to three views:
- Services view: the services that are used in the collaboration are described; they are business-reachable computing functionalities with a known location on the communication network.
- Information view: data are exchanged by messages between services; they are defined here in their structure by a data model, and also as a communication utility by identification of the emitting and receiving services.
- Process view: interaction amongst services and coordination aspects are specified by the control of the processes described here.
Fig. 3. Meta model of the SOA collaborative architecture (target meta model)
4 Definition of the Transformation Rules
The described source and target meta-models of the morphism are further detailed and justified in [3]. Due to space constraints, it is not possible to explain here the objects and the relations that link them in each meta-model. However, we hope that the following discussion, dedicated to the mappings between those meta-models that lead to the transformation rules, will shed some light on the equivalences that are used. Transformation rules are classified in two categories:
1. Basic generation rules are applied first, to create the elements of the target model. Most of these rules are defined by a direct mapping between meta-model elements.
2. Binding rules are applied second, to draw the links between the elements resulting from the previous phase. Existing relations in the source model are transformed into relations in the target model.
4.1 Basic Generation Rules
Fig. 4, Fig. 5 and Fig. 6 summarise the set of rules (also called derivation laws) that are applied during the transformation. The rules are represented by circles located between two class diagrams. The class diagrams are subgraphs of the original meta-models: on the left part of each figure is the subgraph of the source meta-model, and on the right part is the subgraph of the target meta-model. The rules have to be interpreted in the following manner: when an object identified in the collaborative process model belongs to a meta-model class of the left-side subgraph linked to the rule, it will be transformed into an object instantiated from the class on the right side of the figure, i.e. it will become an object of the SOA collaborative architecture. The service view of the SOA collaborative architecture is represented in Fig. 4. On the left part, the pool and lane classes are mapped onto the different entity services of the right part (partner or CIS services). Rule Rs1 gives the links from tasks in the collaborative process model to services listed in the registries, either specific to the collaboration or generic ones. Rules Rs2 to Rs4 provide solutions for the structure and organisation of services. Rs5 shows the need for additional knowledge to fill in the service descriptions. With the same logic, Fig. 5 introduces two transformation rules applied for the information view. As indicated before, the transformation is not sufficiently developed in this domain (a business process is not the right means to model the information of the collaboration). The transformation provides syntactic indications that help to create business objects (rule Ri1 and part of Ri2). However, the problem of translation refers to semantic interpretation, which we do not include in this part of the study (the remaining part of Ri2 is probably not a robust solution).
Fig. 4. Localisation of transformation rules for basic generation of the service view
Fig. 5. Localisation of transformation rules for basic generation of the information view
In contrast, Fig. 6 shows the most developed part of the transformation procedure, with nine rules. The "process view" package has been designed using specifications of the BPEL meta-model language. BPEL (Business Process Execution Language) is one of the most popular candidates for the specification of web service process execution. Some of the rules in Fig. 6 are adaptations of recommendations provided by BPMI for the problem of converting BPMN graphs into well-defined BPEL XML [8]. This concerns rules Rp3 to Rp6 and rules Rp8 to Rp9. Rules Rp1, Rp2 and Rp7 participate in the definition of coordination activities.
Fig. 6. Localisation of transformation rules for basic generation of the process view
4.2 Binding Rules
The binding rules are used to build the interactions between the elements of the target model coming from the application of the basic generation rules. The links can be inside one target model package or between two different packages (dependence). The goal is to define in the target model the necessary relations that exist in the source model. The relations may be of different types, like inheritance, composition, aggregation or simple association. Three binding rules, Rb1 to Rb3, are given in the following as examples:
Rb1 (sequence ordering): a sequence element issued from rule Rp3 is associated with two basic activities within the same process package.
Rb2 (information processing): a service from the service package is related to a business object of the information package.
Rb3 (service identification): a basic activity of the process package is linked to a service of the service package.
5 Prototype Development
Fig. 7. Technical architecture of the developed prototype
Fig. 7 shows the technical architecture of the prototype developed to implement our proposition. It is based on three open source tools that run on the Eclipse platform. Intalio Designer is a BPM tool that helps the user specify the BPMN model. The Atlas Transformation Language (ATL) takes as input a process model in XML format coming from the BPM tool and produces as output the UML model of the collaborative architecture; it is the heart of our transformation prototype. The TOPCASED tool is a computer-aided software environment that provides graphical editing of the UML model. The ATL tool allows generating a UML model from an XML file that represents the process model. The rules presented previously are coded in ATL. The following gives examples of the ATL code.
5.1 Examples of ATL Code
The following ATL code generates the structure of the collaborative architecture (three packages: services, information and processes):

rule generatePackages {
  from
    a : BPMN!Collaborativeprocess
  to
    out : UML2!Model (
      name <- 'Collaborative architecture',
      packagedElement <- OrderedSet {services, information, processes}
    ),
    --generation of services package
    services : UML2!Package (
      name <- 'services view'
    ),
    --generation of information package
    information : UML2!Package (
      name <- 'Information view'
    ),
    --generation of processes package
    processes : UML2!Package (
      name <- 'Process view',
      packagedElement <- OrderedSet {basic, structured}
    )
  …
The following ATL code implements the identified rule Rs3 (Fig. 4). It generates a partner service (in the collaborative architecture) from a partner task (of the BPMN process):

rule generatePartnerServices {
  from
    a : BPMN!PartnerTask
  to
    task : UML2!Class (
      name <- a.name
    ),
    service : UML2!Class (
      name <- a.name
    )
}
The following ATL code shows how to bind the generated 'partner service' with the class 'registry' of the collaborative architecture. We use the helper (defined function) CreateAssociation:

for (m in BPMN!PartnerTask.allInstances()->select(a | true)) {
  --binding registry with partner service
  thisModule.CreateAssociation(registry,
    thisModule.resolveTemp(m, 'service'),
    partners, 'appartient a', 1, 0, 'contient', 'appartient à');
}
5.2 Example of Transformation
A simple example of a collaborative process is proposed in Fig. 8. The collaboration takes place between a customer and a supplier for a trading transaction. Fig. 9 shows the result of the transformation of the collaborative process of Fig. 8 into a collaborative architecture, as generated by our prototype. We can see the three packages (services, information and processes) and the different links.
Fig. 8. A BPMN collaborative process model
Fig. 9. Result of the transformation using the developed prototype
6 Conclusion and Perspectives
Our work is linked to research works in the Model Driven Interoperability field. In this paper we have defined a morphism between a collaborative process model (CIM) and a collaborative architecture model (PIM). We have shown how we can derive from a CIM model the necessary information to automatically create the PIM level. The generated SOA model can later be mapped to different ICT architectures. We are aware that it is relatively infrequent to have networks of organizations that are able to draw a collaborative process of their predicted common activities. In [9], we study the contribution of a knowledge-based methodology to help in the design of the process model. The PIM solution that we have produced has been one of the components selected in the JonES project (French project ANR/RNTL 2005). The main objective of JonES is to test a complete MDA approach in the frame of an
Enterprise Service Bus technology (Target Platform). The solution developed is open source and has been designed by the ObjectWeb community (Petals ESB).
References
[1] Grangel Seguer R., Ben Salem, J.P. Bourey, N. Daclin, Y. Ducq: Transforming GRAI Extended Actigrams into UML Activity Diagrams: a First Step to Model Driven Interoperability. In: Enterprise Interoperability: New Challenges and Approaches II, Springer. ISBN 978-1-84628-857-9 (2007)
[2] Bourey J.-P., R. Grangel, A. Berre, G. Doumeingts, K. Kalampoukas, M. Bertoni, L. Pondrelli, N. Daclin: DTG2.1: Report on model establishment. Interoperability Research for Networked Enterprises Applications and Software Network of Excellence, n° IST 508-011 (2005)
[3] Touzi J.: A model transformation for mediation information system design. PhD thesis, Ecole des Mines d'Albi-Carmaux, 9 Nov. (2007)
[4] Benguria G., X. Larrucea, B. Elveseater, T. Neple, A. Beardsmore, M. Friess: A Platform Independent Model for Service Oriented Architectures. I-ESA'06 Conference, Bordeaux, France (2006)
[5] Bauer B., J.P. Müller, S. Roser: A Decentralized Broker Architecture for Collaborative Business Process Modelling and Enactment. In: Enterprise Interoperability: New Challenges and Approaches, Springer. ISBN-10 1846287138 (2006)
[6] Darras F.: Proposition d'un cadre de référence pour la conception et l'exploitation d'un progiciel de gestion intégré. PhD thesis, Ecole des Mines d'Albi-Carmaux (2004)
[7] D'Antonio F.: InterOp NoE Network of Excellence Report, Task Group 2.2 (MoMo), IST 508011. Available at http://www.interop-noe.org (2005)
[8] Ouyang C., W. Van Der Aalst, M. Dumas, A. Hofstede: Translating BPMN to BPEL. Technical report, BPM group, Queensland University of Technology, Brisbane (2006)
[9] Rajsiri V., Lorré J.P., Bénaben F., Pingaud H.: Cartography based methodology for collaborative process definition. Accepted paper, PRO-VE'07, 8th IFIP Working Conference on Virtual Enterprises, Portugal (2007)
Heterogeneous Domains' e-Business Transactions Interoperability with the use of Generic Process Models

Sotirios Koussouris1, George Gionis1, Aikaterini Maria Sourouni1, Dimitrios Askounis1 and Kostas Kalaboukas2

1 National Technical University of Athens, 9 Iroon Polytechniou, Athens, Greece
{skoussouris, gionis, ksour, askous}@epu.ntua.gr
2 SingularLogic Software, Al. Panagouli & Siniosoglou strs, 142 34 N. Ionia, Greece
[email protected]
Abstract. Interoperability is the key factor that will drive e-Business to the next level by offering fully automated transactions among enterprise applications, such as Enterprise Resource Planning or Supply Chain Management systems. Nowadays, research seems to have dealt with the problem of interoperability in various business domains; however, the issue of interoperability in heterogeneous business domains – enterprises, governmental and banking institutions of different countries (cross-border) or enterprises of different interests (cross-sector) – still remains a big challenge. This paper presents generic models of the most common business transactions carried out mainly by Small and Medium Enterprises. These models are constructed using state-of-the-art notations and methodologies which facilitate Application-to-Application interconnection and automated business document exchange between enterprises, governmental and banking institutions, covering not only national or sector-specific business domain transactions but also cross-border and cross-sector processes, which imply different requirements since, apart from differences in the way of execution, different legal rules and data entities are also present. Keywords: Modeling cross-enterprise business processes, Enterprise modeling for interoperability, Meta-data and meta-models for interoperability
1 Introduction
During the last years there has been substantial technological progress in the area of e-Business. However, despite this evolution, the adoption of new internet-based technologies in the business environment is still limited, especially in the sector of small and medium enterprises (SMEs) or very small enterprises (VSEs) [1]. In parallel, the efforts to date for developing and adopting e-Business solutions have been targeted more towards the Business-to-Consumer (B2C) and the
Business-to-Business (B2B) area of same interests – same business sector – and not so much towards the area which this paper addresses. This area comprises the Business-to-Business (B2B), Business-to-Government (B2G) and Business-to-Intermediaries (B2I) – such as banks and public insurance institutions – transactions between business, governmental and banking organizations of different countries (cross-border) or of different interests/operation domains (cross-sector). Interoperability is defined in [2] as the ability of two or more systems or components to exchange information and to use the information that has been exchanged. Thus, achieving interoperability is considered the key factor which will drive e-Business to the next level by offering fully automated transactions that will be carried out without the need of any further actions; it will mark the final adoption of e-Business in heterogeneous business domains (cross-border / cross-sector business domains). The European Commission considers the development of interoperability of enterprise applications as a strategic issue for the European business environment, so that enterprises can increase their collaboration and gain competitiveness in the global market [3]. Towards facilitating such issues and proposing interoperability solutions that involve enterprise application integration and interconnection [4], a number of research projects funded by the European Commission are already under way, aiming at providing solutions in the key area of electronic transactions. Such projects are: Interop-NoE [5], which aims to create the conditions for innovative and competitive research in the domain of interoperability for enterprise applications and software; ATHENA-IP [6], which deals with interoperability by identifying and meeting a set of inter-related business, scientific & technical, and strategic objectives; FUSION [7], which aims at business collaboration and interconnection between enterprises by developing a framework and innovative technologies for the semantic fusion of heterogeneous service-oriented business applications; en-VISION [8], which will develop and validate an innovative e-business platform; the ABILITIES project, etc. The project GENESIS [9] (Enterprise Application Interoperability – Integration for SMEs, Governmental Organizations and Intermediaries in the New European Union) is also funded in the context of the EU Framework Programme 6 (FP6) [10], and its main goal is the research, development and pilot application of the methodologies, infrastructure and software components needed to allow the typical, usually small or medium, European enterprise to conduct its business transactions over the Internet, by interconnecting its main transactional software applications and systems with those of collaborating enterprises, governmental bodies, banking and insurance institutions, with respect to the current EC legal and regulatory status and the existing one in the new EU, candidate and associate countries [11]. The present paper derives from thorough research in the European business environment regarding the most common business transactions carried out mainly by European SMEs. A list of the common B2B, B2G and B2I transactions used by enterprises of different sectors and countries has been formed, in accordance with an evaluation framework constructed for the identification of the most important processes, which can be modeled and further automated. For each one of these
transactions, a generic process model facilitating interoperability was designed using state-of-the-art modeling notations and methodologies [12], [13], [14]. This paper presents three generic process models of transactions, covering the core of the above-mentioned transaction list. Regarding the structure of this paper, Section 2 identifies and briefly discusses the process modeling methodology that has been followed; Section 3 defines the list of common SME transactions and presents the generic process models of the core transactions; Section 4 discusses issues that arose during modeling, compares the models for different sectors with respect to legal rules and data issues, and concludes; finally, Section 5 presents the potential future work that needs to be done in this direction.
2 Process Modeling Methodology Developed and Applied
The Process Modeling Methodology, given the heterogeneous domain interoperability requirements, has to incorporate the following issues:
- "Cross-Enterprise, Cross-Sector" processes: the ability to support "cross-enterprise, cross-sector" processes and transactions between enterprises and organisations that belong to the private sector, to the public administration and to the banking sector. Those transactions have different parameters, depending on each transaction, and those processes are identified as "cross-enterprise, cross-sector" ones.
- "Cross-Border" transactions: the ability to manage models representing international transactions. Such transactions have their own characteristics and their own parameters, which vary among the same transactions carried out between different countries.
- "Legal Issues": the ability to deal with and to model the various legal aspects which are present in cross-border, cross-enterprise transactions.
During the definition of the Process Modeling Methodology, three different levels of process modeling are used: the private, the public and the generic (collaboration) process modeling view [12], [15]. The need for this discrimination is to build structured models which can fully describe a transaction, from the internal enterprise level up to the collaboration level between different transaction parties. The Private Process View incorporates the "private processes" of one transaction party, which are inner-organizational processes from the business point of view, i.e. internal processes of an organization or an enterprise. Private processes are used to identify the context of how and when certain documents for collaborating with other parties are produced or consumed. These documents are the interfaces for the public process. The Public Process View is a coarse description of process steps which represent the interface of an organization for collaborating with other parties. Only those activities that are used to communicate outside the private business process, plus the appropriate flow control mechanisms, are modeled in the public process
view. A public process, as seen from the transaction point of view, presents the sequence of messages that are required in order to interact with other parties. Both of the above process modeling views are defined as national and sector specific. In the Collaboration Process View, abstract, generic process models are built. They derive from the appropriate consolidation of the public processes of the collaborating parties, without any country specifics. These generic process models are designed at the highest abstraction level possible, so as to be able to fit easily to different countries without interfering with the internal private processes of the parties involved. Advanced, state-of-the-art modeling notations and methodologies have been selected for the process modeling phase in the three different view levels of modeling. Namely, the Business Process Modeling Notation (BPMN) [16] has been used, in order to extract executable code from the designed models using the Business Process Execution Language (BPEL) [17].
3 Generic Business Process Models
3.1 Transaction List
In order to identify the most common and important transactions carried out by SMEs which can and are worth being fully automated, an evaluation framework has been used. This framework consists of the assessment of the following criteria:
- Frequency of use.
- Time for the process execution.
- Cost of the process.
- Level of support of the process by the existing Enterprise Applications.
- Legal and statutory framework supporting the execution of the process.
- Value added in the Enterprise (e.g. is it core business or supportive?).
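As a purely illustrative sketch of how such a multi-criteria assessment could be operationalised (the paper does not prescribe a scoring formula; the weights, criterion keys and the 0-5 rating scale below are our own hypothetical assumptions), each candidate transaction could be given a weighted score over these criteria:

import java.util.Map;

public class TransactionEvaluator {

    // One entry per criterion of the framework above; the weights are
    // hypothetical and would have to be calibrated by the evaluators.
    private static final Map<String, Double> WEIGHTS = Map.of(
        "frequencyOfUse", 0.25,
        "executionTime", 0.15,
        "processCost", 0.15,
        "applicationSupport", 0.15,
        "legalFramework", 0.10,
        "valueAdded", 0.20);

    // The ratings are normalised scores per criterion, e.g. on a 0..5 scale.
    public static double score(Map<String, Double> ratings) {
        return WEIGHTS.entrySet().stream()
            .mapToDouble(e -> e.getValue() * ratings.getOrDefault(e.getKey(), 0.0))
            .sum();
    }

    public static void main(String[] args) {
        // Example: rating the "Ordering" B2B transaction.
        double ordering = score(Map.of(
            "frequencyOfUse", 5.0, "executionTime", 4.0, "processCost", 3.0,
            "applicationSupport", 4.0, "legalFramework", 2.0, "valueAdded", 5.0));
        System.out.println("Ordering score: " + ordering);
    }
}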
This framework was used on an initial transaction list which contained various transactions for each sector. B2B transactions were identified with the use of the UBL 2.0 standards [18] for B2B processes, B2G transactions were identified by studying the eEurope 2005 [19] and IDABC [20] initiatives, and banking transactions were identified by studying financial exchange standards like OFX [21]. All those transactions were evaluated in eight (8) different countries – Greece (GR), Cyprus (CY), Italy (IT), Turkey (TR), Romania (RO), Bulgaria (BG), Lithuania (LT) and Czech Republic (CZ) – and the final evaluation was condensed into the set of automatable transactions presented in Table 1. This list includes the most common transactions which should be considered when talking about enterprise application interoperability. Figure 1 presents a representative diagram of the evaluation results for the Ordering (B2B transaction) processes.
Table 1. Automatable transactions (extract)
Various VAT sub-statements and reports
Declaration of hiring new employee
B2Bank:
Account Status (OFX)
List of Account Transactions (OFX)
Fund Transfer (intra-bank, inter-bank) (OFX)
Specific Payment (VAT, tax, other) (OFX)
Payment Check (Credit Note) Issuing (OFX)
Payment Check (Credit Note) Status (OFX)
Loan Status Inquiry (OFX)
All the above transactions have been modeled up to the generic process view, by taking into consideration the different processes from the different countries. More specifically, as regards the B2B transactions, 6 countries were selected for generating the generic process models (GR, TR, RO, BG, LT and CZ), whereas the B2G and banking generic models were based on the public view models of 4 countries (GR, CY, IT and TR). Three transactions have been selected and their process models are presented below, with some basic meta-data information which derives from the complete meta-data models that were created for each process. These processes have been selected as the representative core of the transaction list.
3.2 Business to Business Models
The generic model for the Ordering process is presented in Figure 2. All required documents which are exchanged between the collaborating parties during the process flow are present, from the Order document to the Order Cancellation document. Rules and time events are also present. The process models which are presented in this paper are modeled using the BPMN notation. Table 2 presents the model's meta-data.
Table 2. Ordering – Generic Model Meta-Data (Parties Involved: A. Buyer, …)
3.3 Business to Government Models
The periodic VAT Statement process between an enterprise, which declares and pays its VAT, and the VAT Service, which is the recipient of the declaration and of the payment, is shown in Figure 3. This process includes two subprocesses, namely "Specific Payment", which resides under the "Payment Settlement" activity shown in the figure, and "Account Status", which is performed by the VAT Service under the activity "Check VAT Statement and potential Payment". Table 3 presents the model's meta-data.

Table 3. VAT Statement (periodic) – Generic Process Meta-Data
Parties Involved: A. Enterprise, B. VAT Service, C. Bank ("as sub process")
Process Flow Pattern: A-B-B
No. of Exchanged Documents: 3
No. of Decision Points (complexity): 2
No. of Activities: 6
Country fit: 4 (GR, CY, TR, IT)
Sub processes present (Decomposition): 1. Specific Payment, 2. Account Status
Legal Framework Interference: High
Fig. 3. VAT Statement Process – Generic Model
3.4 Banking Models
The Specific Payment process of an enterprise issuing a payment order to the bank is shown in Figure 4. This process includes the subprocesses "Account Status", which is performed by the Bank to determine whether the enterprise possesses the required balance in its account to carry out the order, and "Fund Transfer", which is an intra- or inter-bank process that deals with the actual money transfer. Table 4 presents the model's meta-data.

Table 4. Specific Payment – Generic Process Meta-Data
Parties Involved: A. Enterprise, B. Bank
Pattern: A-B
No. of Exchanged Documents: 2
No. of Decision Points (complexity): 1
No. of Activities: 4
Country fit: 4 (GR, CY, TR, IT)
Sub processes present (Decomposition): 1. Account Status, 2. Fund transfer
Legal Framework Interference: Low
Fig. 4. Specific Payment Process – Generic Model
4 Conclusions
During modeling, many issues arose, which mainly involved the legal rules and the data entities that accompany each transaction in each country. Moreover,
the way each transaction is carried out differs from country to country, as the business logic is not the same. Those issues are de facto not taken into consideration when designing interoperable systems for conducting transactions in country-specific domains, as all enterprises operating in the same country follow the same legal rules and have the same data requirements. However, when trying to extend the environment of e-business to comprise cross-border and cross-sector transactions, all the above issues come to the surface. Therefore, we have created several private and public processes; each one of these represents a specific transaction with different requirements than the corresponding models of other countries. A consolidated generic process model has to respect all the underlying exceptions and has to aggregate all the underlying business logic, legal rules and data requirements into a unique model, so as to satisfy all the needs which spring out of the public processes. In order to meet these demands, and to finally reach the ultimate goal, which is none other than proposing an approach for cross-border and cross-sector interoperability, those generic process models have to be designed at the highest abstraction level that can be reached. This way, the generic models represent the service orchestration which has to be established between the different parties in order to carry out their transactions successfully. The chosen abstraction level defines the obligatory business document exchange which must take place, but at the same time does not interfere with the different internal processes of each party. However, several rules regarding the legal issues or the data entities have to be applied, which may or may not affect the internals of the parties' processes, depending on the architecture that will be selected in the system implementation process. As far as the way of execution or the process flow of a transaction itself is concerned, based on the examined countries there seem to be only small differences in the business logic of the transactions. Therefore, the generic models can easily fit all the examined countries, as the chosen abstraction level is able to cover the core process flow of the examined transactions. Business-to-Business (B2B) transactions have almost similar business logic and follow the same business rules, with small differences. The way of conducting business seems to follow a globally accepted process flow, which is present from the smallest to the largest enterprises. Therefore, the service orchestration between two business parties can be easily designed. However, the legal rules and the data entities that accompany these transactions present a high grade of differentiation. This fact springs from the national laws and from the historical and social conditions which have shaped the national business domain throughout the years, based upon the domestic needs and requirements. Business-to-Government (B2G) transactions do have a higher differentiation grade than B2B transactions in terms of process flow and legal rules. However, as far as the data entities included in those transactions are concerned, they differ only slightly between different countries, as the low-level information required by those transactions remains the same for each country (e.g. person details, address details, specific transaction details).
Business-to-Bank Institution transactions seem to possess the smallest grade of differentiation, not only in the process flow but also in the legal rules and in the data requirements. This situation derives from the fact that banks respect and follow an
internationally and globally agreed way of conducting business [22]. This behaviour evolved from the need to form a unified banking environment for interconnecting the global markets. These efforts produced commonly agreed banking processes with the same data entities and with almost identical legal frameworks. Therefore, banking transactions can be easily modeled and implemented in interoperable business environments. Figure 5 describes the differentiation between the process categories regarding the process flow, the data entities and the legal rules in a 3D diagram. Banking transactions have low differentiation in all three dimensions. B2G transactions have low to medium data differentiation and high differentiation regarding the legal rules, whereas they also have medium to high process flow differentiation. B2B transactions have medium to high data differentiation but medium process flow and legal rules differentiation.
Fig. 5. Process flow, data and legal rules differentiation between different transaction types
5 Future Work
This paper presents generic Business-to-Business, Business-to-Government and Business-to-Bank transaction models that are constructed by a methodological approach for enabling interoperability of processes in heterogeneous business domains by defining service flow orchestrations. These models can be used by enterprises in different countries and heterogeneous business domains in order to model and revise their business transactions, so that they can implement interoperable interfaces for expanding their business environments. Moreover, this approach can assist the creation of national and, furthermore, regional or pan-
European process models as standards, in order to achieve European e-Business integration between the different enterprises that are willing to get the most out of IT technologies. However, there are still significant issues, such as the integration of legal rules and business document standards for Business-to-Government and Banking Institution transactions, which should be tackled in the future in order to provide a fully interoperable environment.
References
[1] Androutsellis-Theotokis S., Karakoidas V., Gousios G., Spinellis D., Charalabidis Y.: Building an e-Business Platform: An Experience Report. e-Challenges 2005 Conference, European Commission (2005)
[2] Institute of Electrical and Electronics Engineers: IEEE Standard Computer Dictionary. A Compilation of IEEE Standard Computer Glossaries (1990)
[3] Charalabidis Y., Gionis G., Askounis D., Koussouris S.: Enterprise Application Interoperability via Internet Integration for SMEs, Governmental Organisations and Intermediaries in the New European Union. eChallenges 2006 Conference (2006)
[4] Charalabidis Y., Karakoidas V., Theotokis S., Spinelis D.: Enabling B2B Transactions over the Internet through Application Interconnection. In: eAdoption and the Knowledge Economy: Issues, Applications, Case Studies, IOS Publishing (2004)
[5] The INTEROP Network of Excellence: http://www.interop-noe.org
[6] The ATHENA project website: http://www.athena-ip.org
[7] Research Project FUSION, Business process fusion based on Semantically-enabled Service-oriented Business Applications: http://www.fusionweb.org/fusion
[8] En-Vision, A New Vision for the participation of European SMEs in the future e-Business scenario: http://www.e-nvision.org
[9] Genesis IST Project: http://www.genesis-ist.eu
[10] Sixth Framework Programme: http://cordis.europa.eu/fp6/
[11] GENESIS Annex I – "Description of Work", Version 7.0 (December 2005)
[12] Bussler C.: B2B Protocol standards and their role in semantic B2B integration engines. Bull Tech Comm Data Eng 24(1):3–11 (2001)
[13] Athena Project Deliverable A1.1.2: Second Version of State of the Art in Enterprise Modelling Techniques and Technologies to Support Enterprise Interoperability
[14] IDEAS Project Deliverable D11, Part A: Enterprise Modeling State-of-the-Art
[15] Dayal U., Hsu M., Ladin R.: Business process coordination: state of the art, trends, and open issues. In: VLDB Conference, pp. 3–11 (2001)
[16] Business Process Modeling Notation: http://www.bpmn.org
[17] Business Process Execution Language (BPEL): http://www.oasis-open.org
[18] UBL 2.0: http://docs.oasis-open.org/ubl/prd-UBL-2.0/
[19] eEurope 2005: http://europa.eu.int/information_society/eeurope/2005/index_en.htm
[20] IDABC: http://europa.eu.int/idabc/
[21] OFX: http://www.ofx.net/
[22] Nikolaidou M., Anagnostopoulos D., Tsalgatidou A.: Business Process Modelling and Automation in the Banking Sector: A case study. IJSSST, Special Issue on "Business Process Modelling", vol. 22, pp. 65–76 (2001)
Matching of Process Data and Operational Data for a Deep Business Analysis

Sylvia Radeschütz1, Bernhard Mitschang1 and Frank Leymann2

Universität Stuttgart, Universitätsstr. 38, 70569 Stuttgart, Germany
firstname.lastname@{ipvs1—iaas2}.uni-stuttgart.de
Abstract. Efficient adaptation to new situations of a company's business and its business processes plays an important role in achieving advantages in competition with other companies. For an optimization of processes, a profound analysis of all relevant information in the company is necessary. Analyses typically specialize either in process analysis or in data warehousing of operational data. A consolidation of the major business data sources is needed to analyze and optimize processes in a much more comprehensive scope. This paper introduces a framework that offers various alternatives for matching process data and operational data to obtain a consolidated data description. Keywords: Interoperability for Enterprise Application Integration, Architectures and platforms for interoperability, Interoperability for integrated product and process modeling, Ontology based methods and tools for interoperability, Design methodologies for interoperable systems
1 Introduction
Most companies nowadays focus on business processes instead of separate business management functions. Business processes are chains of thematically connected business activities, which usually contribute to the value of a company. Global competition and fast-developing product cycles put continuous cost and efficiency pressure on a company, because customers demand immediate support and response to any request concerning their orders. Since demands change very quickly, business processes have to be optimized and adapted relatively fast and cheaply to deal with new situations [22]. Hence, we need an efficient analysis to assess precisely the current situation and the adaptation needs of the processes. Today's analyses are limited to single perspectives of a business: the history perspective with history logs of process or database executions, the organization
perspective of people and resources that execute processes and other business applications, or the security perspective of business applications. In another perspective, called the information perspective, the data objects are handled that are processed within one business component or transferred between different systems. The perspectives are not standardized and depend on the process language or data management system used, e.g. on whether it is able to record history data at all. These separate perspectives complicate a quick optimization [23]. A combined approach will enable sophisticated evaluations of processes by matching different perspectives. Hence, for a deep business analysis, all perspectives have to be taken into account. As a first step, however, we concentrate in this paper on the history perspective of processes and on the information perspective within processes as well as within operational systems, and refer to our future work for considering the remaining perspectives. In the remainder of the paper, we first sketch our view of deep business analysis and discuss the state of the art in business analysis, especially how to analyze processes and operational data sources. Sections 3 and 4 introduce our matching framework that aims to efficiently achieve a profound global analysis. Section 5 shows related work, before the summary section concludes the paper.
2 Deep Business Analysis
Business analysis aims to improve the processes of an organization by discovering and removing unnecessary activities and replacing them by more efficient ones. Fig. 1 shows how business applications are typically analyzed. Usually, process execution data is recorded and can then be analyzed by process analysis techniques like monitoring or workflow mining (left side of Fig. 1). Further operational data of the business, managed by different application systems like ERP (Enterprise Resource Planning) systems, is examined separately by data analysis (right side of Fig. 1). For this, operational data is loaded into a business DWH (Data Warehouse) by ETL (Extraction, Transformation, Load) in order to apply OLAP (Online Analytical Processing) or data mining techniques. Some of this operational data refers to the same real-world objects as the operational business process execution data. Because the relation between the data often exists only implicitly, it is pictured by dashed arrows in Fig. 1. However, both process and operational data have to be matched for a global analysis, i.e. their relation must be made explicit for analysis purposes. This enables a deep business analysis on even more relevant attributes, in order to get an overall view of the business and to allow more profound analyses and optimizations. In order to avoid laborious manual data consolidation, our framework offers an efficient and effective matching approach. As mentioned before, a deep business analysis should not be restricted to the presented business data, but should also include further perspectives. In the following, an example illustrates the purpose of some business analysis methods and demonstrates the need for a multi-perspective, deep business analysis. Currently used business analysis approaches are outlined afterwards.
Fig. 1. Deep Business Analysis
2.1 Example Scenario
Fig. 2 shows schematically which steps are needed in a car rental scenario modeled by means of BPMN (Business Process Modelling Notation) [4] and gives an overview of the process flows related to car rental employees and customers. The processes are designed as executable workflows controlled by a workflow management system. The workflow is started by a customer's request for a car rental. After receiving the customer input (login ID, name, car class, car rental station, pick-up time, leasing period), an employee selects a car, enters a contract and hands the car over to the customer. When the customer has returned the car, an employee inspects the car for damages and sends a bill to the customer. The transferred messages between the flows are indicated by dashed arrows. Most workflow systems record workflow execution data in a log file or database, called an audit trail. Table 1 shows extracts of the audit trail of the operation "Receive Car Request" in instance "1" of the employee flow within the process model PID "1111" of version "1.0". For brevity, only two parts, "ID" and "name", of the customer input message are exemplified. During process analysis, an analyst discovers that the execution of the car return activity lasts too long on certain dates, like summer holidays, because there is a shortage of car rental employees. To optimize the activity length, more people must be employed or the process rules adapted. However, the analyst might not detect every source of the problem of the long activity duration only by
workflow execution data. Considering operational data sources with additional attributes may result in further insights and thus may enable further optimizations of the process. In our example, one operational data source, a database table, is shown partly in Table 2; it refers to the same real-world object "customer" as the customer input message of the workflow.
Fig. 2. Schematic Car Rental Process in BPMN
Table 1. Audit Trail ((1) in Fig. 1)
PID  | version | instanceID | operation      | message | part | value
1111 | 1.0     | 1          | ReceiveCarReq. | input   | ID   | 11
1111 | 1.0     | 1          | ReceiveCarReq. | input   | Name | Smith
Table 2. Table Customer in Database "Rental" ((2) in Fig. 1)
customerID | last name | first name | licenseID | age | occupation
11         | Smith     | John       | ABC-345   | 44  | lawyer
12         | Walker    | Linda      | XYZ-987   | 22  | teacher
Table 3. Match Result Table (MRT) ((3) in Fig. 1)
PID  | version | operation      | message | part | DBConnection    | table    | column
1111 | 1.0     | ReceiveCarReq. | input   | ID   | jdbc:db2:Rental | customer | customerID
1111 | 1.0     | ReceiveCarReq. | input   | Name | jdbc:db2:Rental | customer | last name
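As a minimal illustration of how an analysis tool could act on one MRT row (our own sketch: the paper defines no such API, and the helper below, its parameters and the quoting of the "last name" column are hypothetical assumptions that merely mirror Tables 1-3), the referenced operational value can be fetched from the original data source via JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MrtLookup {

    // Given one MRT row (dbConnection, table, column) and a key value taken
    // from the audit trail, retrieve the matched operational value from the
    // original data source.
    public static String fetchOperationalValue(String dbConnection, String table,
            String column, String keyColumn, String keyValue) throws SQLException {
        // Table and column names come from the trusted MRT, not from user
        // input; only the key value is passed as a bind parameter.
        String sql = "SELECT " + column + " FROM " + table
                   + " WHERE " + keyColumn + " = ?";
        try (Connection con = DriverManager.getConnection(dbConnection);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, keyValue);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Second MRT row of Table 3: message part "Name" maps to column
        // "last name" of table "customer" in database "Rental".
        String lastName = fetchOperationalValue("jdbc:db2:Rental",
                "customer", "\"last name\"", "customerID", "11");
        System.out.println(lastName); // expected: Smith
    }
}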
The table contains, amongst others, the ID of the customer and his first and last name. In fact, analyses on this data alone may also lead to important conclusions; e.g. an analyst may detect the relevant age spectrum of customers to be targeted by the next advertising campaign. However, by matching data, we are able to initiate deeper evaluations. In general, it is hard manual work to combine this operational data with the audit data. The analyst has to find out which workflow message matches which column of the
operational data tables. It is not clear at first sight whether the message part "ID" in Table 1 describes the customer ID or something else. Furthermore, the workflow data in the audit trail might only be available as binary large objects. The Match Result Table (MRT) in Table 3 explicitly displays the analyst's result of matching the internal operational data of the car rental process in Fig. 2 and Table 1 with the operational data in the "customer" table in Table 2. It connects the customer's message parts "ID" and "name" in the operation "Receive Car Request" with the column "customerID" of table "customer" (in database "Rental", specified by a JDBC (Java Database Connectivity) string) and the column "last name", respectively. If the associated values are needed for the analysis, the MRT is used to retrieve these values from the original data sources. Using an integrated data warehouse with an MRT, an analyst is able to discover that, e.g., some customers under the age of 25 often cause severe damage to the rented cars. Therefore, long-lasting and labor-intensive process parts become necessary, like examination of the damage and repair of the car. These process parts could be avoided by considering this customer group in an extra business rule allowing persons under 25 to rent only particular car classes. This results in a significant reduction of the cost and length of the Car Rental Process since, due to the restriction of the customer group, the operation "Check For Damage" will now work faster.
2.2 Business Analysis Approaches
Process Analysis. Fig. 1 displays, at the left side, audit trails from different workflows or workflow management systems. An audit trail is needed for analyses like Business Activity Monitoring (BAM) [8]. Employing real-time monitoring information with tools like Oracle BAM [17], people in charge of processes are able to react to problems that arise during process enactment. Audit trails can be integrated into one audit data warehouse by ETL. On the audit warehouse, analyses for business performance management [3, 20] allow workflows to be optimized. The goal of workflow mining [1, 19] is to discover new or optimized workflow models out of the audit logs. These techniques refer especially to the actual flow logic; operational data sources are typically neglected.
Data Analysis. Operational data comprises all data processed within the business that is not stored in a workflow management system but somewhere else, e.g. in files or in data managed by other systems, such as a database management system. It contains information that is ingested by ETL processes into a data warehouse for analyses such as Online Analytical Processing (OLAP) or data mining [10]. OLAP systems allow complex analytical multidimensional queries, whereas data mining is the process of automatically searching large volumes of data for patterns using methods such as classification or clustering (see e.g. Oracle Data Mining [18]).
Global Analysis. After matching workflow data with operational data, a global analysis – we call it deep business analysis – can be started. There is not much related work in this area so far. The PISA tool in [23] and the Process Data Warehouse in [6] both provide evaluation methods on three perspectives: the history
and organization perspectives within processes, and the process information perspective. In [6], however, only operational data sources that are directly accessed within the process are considered. PISA offers only relatively simple analyses. Furthermore, it requires manual a-priori matching of the relevant workflow and operational data.
3 Towards a Matching Framework
For an effective optimization of business processes, it is necessary that all relevant data sources be matched, so that processes can be deeply analyzed and subsequently optimized on an encompassing scale. In this section we introduce the concept of a deep Business Data Description (dBDD) as the result of this matching task. Relevant data sources and basic methods are introduced.
3.1 Relevant Information Sources of Business Data
Relevant operational data is provided by various sources in different formats: files, databases, or other applications. Workflow data can be matched either with the original operational source or with the business warehouse. All audit trails and the meta-data of processes are needed for a global analysis. Relevant process data for the matching is stored in process variables like the input and output messages of workflow activities. These variables are either of simple data types or complex, with many parts per variable, which in turn can be of complex type. They contain input and output of users, invoked services or applications, but also intermediate values for further processing. The following describes how this data can be categorized in terms of its matching to operational data:
i. Process Internal Data: relevant data for analysis is located within the process without referencing an operational source explicitly. However, the workflow data and this operational source may refer to the same real-world object. This is the case in our example in Section 2.1.
ii. Embedded Database Statements: a Data Definition Language (DDL) or Data Manipulation Language (DML) statement with an exact database, table and column indication might be part of a service and embedded in the process (e.g. the IBM BPEL/SQL plugin [12]). Thus, all relevant operational data for the matching is already stored explicitly in the process messages (see the illustrative sketch below).
iii. Database Adapter: a DDL or DML statement is encapsulated in an external application that is called by the process.
3.2 Deep Business Data Description
In order to enable the matching task, the heterogeneity and incompleteness of the information sources of Section 3.1 have to be overcome. General techniques for data integration are introduced in [5, 16] and are adapted for our matching framework. We emphasize two major ones: manual and semantic approaches.
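Before detailing these two approaches, the following toy sketch illustrates category ii. above (our own simplification: a real scanner would need a full SQL parser, and the class, regex and example statement are hypothetical). It extracts the table and column references that are already embedded in a process message, which can then be recorded as dBDD entries:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EmbeddedStatementScanner {

    // Very simplified: recognises "INSERT INTO <table> (<columns>) ..."
    // inside a message payload.
    private static final Pattern INSERT = Pattern.compile(
        "INSERT\\s+INTO\\s+(\\w+)\\s*\\(([^)]*)\\)", Pattern.CASE_INSENSITIVE);

    public static void scan(String messagePayload) {
        Matcher m = INSERT.matcher(messagePayload);
        while (m.find()) {
            String table = m.group(1);
            for (String column : m.group(2).split(",")) {
                // Each (table, column) pair becomes a candidate dBDD entry.
                System.out.println("dBDD candidate: " + table + "." + column.trim());
            }
        }
    }

    public static void main(String[] args) {
        scan("INSERT INTO customer (customerID, lastname) VALUES (?, ?)");
        // prints: dBDD candidate: customer.customerID
        //         dBDD candidate: customer.lastname
    }
}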
Based on these two matching methods, our framework offers various options for data description. They describe business data in two "deep" ways: either operational data sources and workflow data are augmented manually by matching data, or both are supplemented with semantic information for (semi-)automatic matching. We describe the result of the matching task by means of a deep Business Data Description (dBDD). The matching alternatives and the dBDD issues involved are presented in Fig. 3.
Fig. 3. Matching Alternatives
1. Manual Matching. Today, the simplest way to achieve matching is by manually linking workflow and operational data sources. However, this causes a lot of cost and effort, as an analyst has to know exactly about the data that is used in a matching task. This might be laborious if no documentation is available. Furthermore, manual matching is very subjective and cumbersome, because one has to take every element into consideration. Since there are different players in the game, our framework offers different manual approaches (Fig. 3(1)):
(a) Direct BA: the first method is the one found in present-day warehousing systems. Here, the business analyst (BA) has to link the appropriate data directly – therefore called Direct BA. Although our goal is to improve this error-prone situation, there are certain situations where other methods do not work. Thus, this approach is part of our framework.
(b) Direct PD: in this matching alternative, the messages in the process and the operational data they refer to must be indicated explicitly and manually by the process designers (PD) during the design phase of a process – therefore called Direct PD. This approach is only as good as the process designer's knowledge of the meaning of the operational business data.
(c) Extension DB: this matching can be used if an explicit relationship to the operational data source is already part of the workflow description and thus reflected in the audit trail. Paragraphs ii. and iii. in Section 3.1 provide prominent examples of how workflows can be extended by database (DB) references – therefore called Extension DB.
(d) Extension Link: in this approach, relevant workflow elements are extended by an attribute that indicates the corresponding operational data
source. In comparison to "Direct PD", this alternative needs a workflow engine that recognizes the link attribute extension and stores the source reference in the audit trail. A parser within our framework derives the dBDD out of the audit data. As part of a workflow language extension, this approach defines a reliable and efficient way to gain the dBDD, because the user does not have to think about the storage of the link.
2. Semantic Semi-automatic Matching. According to [5], semantics enables a next generation of information integration via the use of ontological concepts combined with reasoning. The user has to invest some effort to create an ontology and to find an appropriate reasoning tool that exploits that information and discovers interesting relationships used for the mapping. But this pays off, because both can be reused for further matching tasks. After all, data matching works more precisely by reasoning on concepts and may even find new relationships between data that would not be found by manual search alone. Furthermore, the user does not need to know the exact data connection, but he must indicate the semantic concepts of the involved data. Our approach offers a semantically derived dBDD. In Fig. 3(2), each message of the workflow as well as all operational data has to be annotated semantically by the designers, in order to classify both the workflow and the operational data. The annotation happens by adding a URI (Uniform Resource Identifier) reference to a concept in an ontology, similar to the annotation mechanism suggested by SAWSDL [21]. For workflows, we could also add the annotation within a semantic workflow description by adding descriptions to the appropriate syntactic workflow specification, called "grounding" in the literature [14].
3. Semantic Automatic Matching. Because the semi-automatic approach still means a lot of extra effort for a workflow designer and a business warehouse designer, the "automatic" matching in Fig. 3(3) might be preferable. Automatic matching does not require a semantic annotation by the user at all. Instead, an NLP (Natural Language Processing) tool creates the semantic mapping automatically in a restricted domain. It uses a predefined ontology and maps the parameters onto the right category on the basis of their names.
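A minimal sketch of the idea behind such name-based automatic matching (our own simplification: a real NLP tool would apply linguistic normalisation against a full ontology, and the concept table, URIs and names below are hypothetical):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NameBasedMatcher {

    // Toy "ontology": maps normalised element names to concept URIs.
    private static final Map<String, String> CONCEPTS = Map.of(
        "id", "http://example.org/onto#CustomerId",
        "customerid", "http://example.org/onto#CustomerId",
        "name", "http://example.org/onto#LastName",
        "lastname", "http://example.org/onto#LastName");

    private static String normalise(String name) {
        return name.toLowerCase().replaceAll("[^a-z0-9]", "");
    }

    // Annotate each element with a concept, then match workflow message
    // parts and operational columns that share the same concept.
    public static Map<String, String> match(List<String> messageParts,
                                            List<String> columns) {
        Map<String, String> conceptToColumn = new HashMap<>();
        for (String col : columns) {
            String c = CONCEPTS.get(normalise(col));
            if (c != null) conceptToColumn.put(c, col);
        }
        Map<String, String> result = new HashMap<>();
        for (String part : messageParts) {
            String c = CONCEPTS.get(normalise(part));
            if (c != null && conceptToColumn.containsKey(c)) {
                result.put(part, conceptToColumn.get(c)); // one dBDD entry
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Message parts from Table 1 against columns from Table 2:
        System.out.println(match(List.of("ID", "name"),
                                 List.of("customerID", "last name")));
        // prints the two matches, e.g. {ID=customerID, name=last name}
    }
}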
4 Architecture of the Matching Framework
At the moment, we are implementing the Matching Framework as a generic prototype that determines a dBDD and thus enables a global analysis based on an integrated view of both process-related data and operational data, for various workflow management systems, operational data sources and semantic description languages. The choice of the best match method depends on several criteria: for some data the designer might know the related operational data source, while for other data he is only sure about its semantic concept. Our framework provides various alternatives in order to support the most efficient and effective match alternative in each particular situation.
Fig. 4. Matching Framework Architecture
Fig. 4 shows an overview of the framework architecture. The goal of our framework is to come up with a deep Business Data Description (dBDD) that relates process data to its operational data and vice versa; the framework assists in this matching task, the details of which were introduced in Section 3.2. The figure displays the different options and the relevant data to build up the dBDD. The upper-level boxes illustrate the input of the persons involved in a manual or semi-automatic match, i.e. the process and warehouse designers as well as the analyst. Each box also shows the name of the corresponding matching method as introduced in Section 3.2. Depending on which alternative is chosen, the dBDD is constructed either directly or indirectly by applying various tools; these tools are illustrated by boxes with dotted lines. We distinguish between manual and semantic matching. The manual approach comprises the possibility to match the data directly by the business analyst (Direct BA) or by the process designer (Direct PD), as well as to derive the match from an extension of the workflow language via embedded database statements (Extension DB) or embedded links (Extension Link). The semantic approach needs an ontological description of the relevant data, which is used by the reasoner tool for an automatic matching; the description is gained either from the involved persons (semi-automatically) or automatically by NLP. Parts I., II. and III. of Fig. 4 display typical usages of the framework. We illustrate these scenarios referring to the example data of Section 2.1.

I. Direct BA: Both the message parts in the audit trails (Table 1) and the operational customer data (Table 2) have to be provided together with a dBDD manually by the business analyst, i.e. they have to be integrated manually in Table 3. Since this is very laborious, this scenario should be avoided. But it is still used as a backup, e.g.
if the designer has forgotten to describe some relevant data or if an ontological concept is missing for automatic matching.
Fig. 5. Semi-automatic Matching Usage Scenario
II. Direct PD: While "Direct BA" works at analysis time, "Direct PD" works at process design time. We assume that the process designer knows the semantics of the business process in detail and perhaps the semantics of the operational data as well. Therefore "Direct PD" is preferable to "Direct BA", i.e. the quality of Table 3 might be improved.

III. Semi-automatic: Both the business process of Fig. 2 and its audit data in Table 1, as well as the operational data in Table 2, are changed to a semantically annotated version. This is exemplified in Fig. 5: the process designer references an ontological concept for the message parts "ID" and "name", whereas the data warehouse designer describes the operational database columns "customerID" and "last name". Using this ontology, a reasoner tool finds the matching operational data concepts for each process part, which results in a dBDD, as externalized in Table 3 (see the sketch after this list).
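To illustrate the semi-automatic scenario, the sketch below annotates the message part "ID" in the style of SAWSDL [21] and maps the warehouse column "customerID" to the same concept. The ontology URI, the concept name and the notation for the database side are invented for this example; SAWSDL itself only covers WSDL and XML Schema components.

  <!-- Process side: the process designer annotates the schema element
       behind the message part "ID". -->
  <xs:element name="ID" type="xs:int"
      xmlns:sawsdl="http://www.w3.org/ns/sawsdl"
      sawsdl:modelReference="http://example.org/ontology#CustomerIdentifier"/>

  <!-- Warehouse side (hypothetical notation): the data warehouse designer
       maps the column to the same ontological concept. -->
  <column table="CUSTOMER" name="customerID"
      concept="http://example.org/ontology#CustomerIdentifier"/>

Because both annotations reference the same concept, the reasoner can derive the match between "ID" and "customerID" without either designer knowing the other's schema.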
5 Related Work

Enterprise Information Integration systems like IBM Information Server [13] support manual matching to integrate enterprise data; however, this is still very laborious. In the PISA tool [23], the designer of the workflow has to manually insert an additional activity which matches workflow and business data; this distorts the semantics of the process. In another approach described in [23], matching relationships are discovered manually, which is comparable to our approach "Direct BA" (cf. Fig. 4 part I.). In the process warehouse of [6], the matching relationships with operational data must be detected manually by the business analyst. This is cumbersome because there is one operational table for each process model connecting the data in each process instance, and further tables have to be joined by hand. This approach also compares to "Direct BA" (cf. Fig. 4 part I.). Unlike manual matching, which can work on any data, semantic approaches concentrate on particular matching scenarios in a restricted domain. Typical semantic matching scenarios are used in data integration [7] to provide a uniform query interface for distinct data sources; our focus, however, is on matching both operational data and workflow data. In the Web service area, annotations are used to enable automatic Web service composition (e.g. the Adaptive Service Grid project [15]). Although this is similar to our semi-automatic approach (cf. Fig. 4 part III.), the scope is different. Other approaches, e.g. "Explicit Semantic Analysis" and "Natural Language Requests", do not require user annotations at all, but map the items onto a thesaurus [5, 9] or an ontological description [2] by NLP tools. In contrast to our approach (cf. "Semantic Automatic Matching", Section 3.2), they work at the instance level and compose new mappings for each user request. The benefit of linking semantic technology and business process technology has been sketched in [11], but there the goal is to ease modeling and monitoring of business processes by supporting domain ontologies in those tasks. Relating business process artefacts and operational data based on semantics, as we describe, has not been suggested. The fact that a standard for annotating messages and services with semantics is available [21], and that semantics is considered an important aspect of business process modelling [11], underlines the practicality of our approach.
6 Summary

This paper demonstrates the need for a deep business analysis. A joint analysis of both business process data and operational data promises more and better improvements to a company's operation than a single-perspective analysis alone. For performing a joint analysis, we have to match operational data and process data to obtain a consolidated data description called the deep business data description (dBDD). In order to achieve a proper and efficient matching, we developed a framework that offers a dBDD to describe the data that is relevant for a global analysis. Depending on the situation (e.g. whether the user already knows the matching relationships, or whether an ontology is available that covers the business domain), a user can choose the best way to derive the dBDD. Manually derived dBDDs require a manual matching and comprise the "Direct BA", "Direct PD", "Extension DB" and "Extension Link" matching methods. Using semantic descriptions of the data, one can employ semi-automatic or automatic matching. As indicated by the names of the matching methods, the degree of match automation rises from manual to automatic. Currently, we are pursuing the "Direct PD" as well as the semi-automatic approach. For "Direct PD", an editor prototype supports process designers in the matching task by displaying process data and operational data together and offering features to link them. This editor can also be used for the semantic annotation in the semi-automatic approach, assisting users in connecting relevant business data with a selected ontology. Appropriate matches are then detected by a reasoning tool within the framework.
References

[1] R. Agrawal et al.: Mining Process Models from Workflow Logs. EDBT 1998.
[2] A. Bosca et al.: Composing web services on the basis of natural language requests. ICWS 2005.
[3] R. Bruckner et al.: Striving Towards Near Real-Time Data Integration for Data Warehouses. DaWaK 2002.
[4] Object Management Group: Business Process Modeling Notation. Final Adopted Specification. www.omg.org/cgi-bin/doc?dtc/2006-02-01, 2006.
[5] J. Cardoso; A.P. Sheth: Semantic Web Services, Processes and Applications. Springer, New York 2006.
[6] F. Casati et al.: A Generic Solution for Warehousing Business Process Data. VLDB 2007.
[7] A. Doan; A. Halevy: Semantic Integration Research in the Database Community: A Brief Survey. AI Magazine 2005.
[8] H. Dresner: Business Activity Monitoring: BAM Architecture. Gartner Symposium ITXPO 2003.
[9] E. Gabrilovich; S. Markovitch: Computing Semantic Relatedness Using Wikipedia-based Explicit Semantic Analysis. IJCAI 2007.
[10] J. Han et al.: Data Mining: Concepts and Techniques. Morgan Kaufmann 2006.
[11] M. Hepp et al.: Semantic Business Process Management: Using Semantic Web Services for Business Process Management. ICEBE 2005.
[12] IBM: IBM WebSphere Process Server V6.0.2. http://www.ibm.com/software/integration/wps/
[13] IBM: IBM Information Server. http://www-306.ibm.com/software/data/integration/info_server
[14] J. Kopecky et al.: Semantic Web Services Grounding. AICT/ICIW 2006.
[15] D. Kuropka; M. Weske: Die Adaptive Services Grid Plattform: Motivation, Potential, Funktionsweise und Anwendungsszenarien. EMISA Forum, 2006.
[16] U. Leser; F. Naumann: Informationsintegration. Dpunkt, Heidelberg 2007.
[17] Oracle: Oracle Business Activity Monitoring. http://www.oracle.com/technology/products/integration/bam/
[18] Oracle: Oracle Data Mining. www.oracle.com/technology/products/bi/odm
[19] V. Rubin et al.: Process Mining Framework for Software Processes. ICSP 2007.
[20] M. Sayal; F. Casati; U. Dayal; M. Shan: Business Process Cockpit. VLDB 2002.
[21] SAWSDL: Semantically Annotated WSDL. http://www.w3.org/TR/sawsdl
[22] S. Weerawarana et al.: Web Services Platform Architecture. Prentice Hall 2005.
[23] M. zur Mühlen: Workflow-based Process Controlling. Foundation, Design, and Implementation of Workflow-driven Process Information Systems. Logos 2004.

URLs last accessed on 2008-01-04.
Methods for Design of Semantic Message-Based B2B Interaction Standards

Erwin Folmer, Joris Bastiaans

TNO Information and Communication Technology, Colosseum 27, 7521 PV Enschede, The Netherlands
{erwin.folmer, joris.bastiaans}@tno.nl
Abstract. The business interest in interoperability is growing rapidly. However, interoperability is not an easy target and in many cases is not easily achieved. One of the means to reach interoperability, the standardization of B2B interactions, often lacks quality, which unfortunately impacts interoperability. It is well known that methods can improve the quality of standards. This paper supports the search for adequate methods by comparing several methods that can be used for the design of B2B standards. Keywords: design methodologies, interoperable systems, model-driven architecture, cross-enterprise business processes, collaborative business processes, tools, standards, messages, interaction.
1 Introduction

In order to achieve interoperability, business partners standardize their collaborative business processes. The result of such a standardization effort is the delivery of a specification that describes how the participating business partners will interoperate with each other. Besides specifying an interworking [19], the specification also serves as a contract [1]. Designing and specifying collaborative business processes is not a straightforward task. For instance, the resulting specification needs to be of acceptable quality in order to allow for a successful implementation of the collaborative business processes. When the specification is incomplete or ambiguous, it may be impossible to realize such collaborative business processes. Thus, a specification's quality directly affects the interoperability that is intended [1]. In order to be able to produce high-quality specifications repeatedly, a design methodology is essential. In [1], we propose a quality framework for specifications. Specifications are considered to be of sufficient quality when they can be unambiguously implemented. From that perspective, we motivate that model-driven design approaches provide the means to realize this implementability constraint. A model-driven specification then comprises the models that can be transformed into deployable artifacts. After all, when a specification can be transformed into the required deployable artifacts, the specification is implementable. It is clear that in this way the MDA [2] directly contributes to realizing interoperability, and it is no surprise that the MDA has gained much attention. However, interoperability can only be achieved when the design of the interoperations is correct. Design is therefore the most important factor in standardizing collaborative business processes; a model-driven approach can merely aid in the delivery of deployable artifacts through transformations. Using a design methodology to guide the design process is the wise thing to do, as this leads to consistent quality of designs. This paper identifies and compares state-of-the-art design methodologies for collaborative business processes.
2 Overview of UML-Based Design Methodologies

Within this research, the comparison and selection of modeling languages is out of scope; this research is limited to UML-based modeling languages. The search for UML-based methods for the design of message-based inter-organizational collaborative processes yielded the following methodologies, many of them discussed in [3] and [4]:

- UN/CEFACT's Modeling Methodology (UMM) [5];
- Villarreal's MDA-based development process for collaborative business processes [6];
- Kim's modeling and specification method for ebXML-based B2B business processes [7];
- Kramler's UML 2 based approach for modeling Web Service collaboration protocols [8];
- Bauer's model-driven approach to designing cross-enterprise business processes [9];
- Koehler's model-driven transformation method [10].
2.1 Design Methodologies Elaborated

In the following paragraphs, the state-of-the-art design methodologies are briefly elaborated.

UN/CEFACT's Modeling Methodology

UN/CEFACT's Modeling Methodology (UMM) is a UML modeling approach to design the business services that each business partner must provide in order to collaborate. It is a top-down and iterative approach that makes use of worksheets to capture the requirements and understand the domain [5]. The idea of standard business scenarios and the necessary services to support them was first created by the Open-EDI reference model [12], which became an ISO standard in 1997. Open-EDI separates the what, in the Business Operational View (BOV), from the how, in the Functional Service View (FSV). The BOV covers business aspects such as business information, business conventions, agreements and rules among organizations; the FSV deals with the information technology aspects supporting the execution of business transactions. UN/CEFACT's Modeling Methodology is considered a BOV-centric methodology [11]. It starts off with a clear understanding of the specific domain of business activities within which the entire model exists [5][11]. It de-emphasizes the use of business documents and transactions to model this view, as that approach may capture only one part of the required model. Instead, an emphasis is placed on the definition of business entities, their state management and state lifecycle identification, to produce a model that encompasses all instances and can evolve as new business requirements emerge. The goal of UN/CEFACT's Modeling Methodology is to understand and formalize the dependencies between partner processes for a problem domain. Historically, partner communication methodologies (such as EDI) have focused on modeling the business documents being exchanged, while UN/CEFACT's Modeling Methodology instead focuses on modeling the business actions and objects that create and consume business information. Modeling with UN/CEFACT's Modeling Methodology yields business collaboration models. These business collaboration models comprise four main views (all supported by worksheets and methodological guidance) [5]:
- The business domain view (BDV);
- The business requirements view (BRV);
- The business transaction view (BTV); and
- The business service view (BSV).
The business domain view is used to gather existing knowledge from stakeholders and business domain experts. In interviews, the business process analyst tries to get a basic understanding of the business processes in the domain. Use case descriptions of business processes are used to define the high-level collaborative processes. Furthermore, it is investigated which partner types participate in which processes and which stakeholders have an interest in these processes. The business processes from the BDV that provide a chance for collaboration are further detailed and refined by business process analysts in the business requirements view. This view consists of a number of different sub-views, namely:

- The business process view gives an overview of the business processes, their activities and resulting effects, and the business partners executing them. The activity graph of a business process may describe a single partner's process, but may also detail a multi-party choreography. The business process analyst tries to discover interface tasks creating or changing business entities that are shared between business partners and thus require communication with a business partner to realize state alignment.
- The business entity view describes the shared business entities. This is done by modeling the shared business entity's lifecycle in a state chart.
- The transaction requirements view describes business transaction use cases between roles. Assigning roles allows for flexibility; the same use cases need not be assigned to each business partner.
- The collaboration requirements view includes a business collaboration use case which aggregates business transaction use cases and/or nested business collaboration use cases.
- The collaboration realization view defines which business partners play which role in which collaboration use case.
The business transaction view builds on the business requirements view and defines a global choreography of the information exchanges and the document structure of these exchanges. The choreographies are realized by refining all business activities (activity nodes in the activity diagram) to activity diagrams specifying one transaction each (the sending and receiving of business objects and an optional response). The information envelopes used to transfer these business objects are defined in class diagrams. In the business service view, the services for each participant's interface are specified. Each transaction is refined into send methods that instantiate the information envelope modeled in the BTV. With UN/CEFACT's Modeling Methodology, it is possible to model the exchanged business information with the use of Core Components [13]. The core components provide easy-to-use building blocks for the construction of complex data structures. Considerable support is available for UN/CEFACT's Modeling Methodology. As it uses UML solely for its specifications, it is specified as a UML profile [14]. Furthermore, several mappings and transformations exist: UMM models can be transformed to BPSS [14] and BPEL stubs [15][16][17], and a mapping to the new WS-CDL standard has been proposed [18]. A free add-in [15] is available for Sparx Systems' Enterprise Architect which provides useful tool support for the UMM. The add-in provides a UMM UML profile, allows for validation, and offers transformations to choreography languages (BPEL, BPSS). Furthermore, it features a built-in worksheet editor to facilitate the requirements and information gathering process.

Villarreal

In [6], Villarreal identifies the MDA as the key enabler to assure consistency between partners' interface specifications and collaborative processes. A model-driven approach for modeling collaborative processes is presented that is independent of the idiosyncrasies of particular B2B standards. To support the design of collaborative processes, the use of the UML Profile for Collaborative Business Processes based on Interaction Protocols (UP-ColBPIP) is proposed. The development approach consists of three steps:
- Analysis and design of collaborative processes;
- Verification of collaborative processes; and
- Implementation of collaborative processes.
The analysis and design phase concerns the modeling of collaborative processes from a business perspective of the B2B collaboration; it focuses on the analysis and design of collaborative business processes. In order to design the way the involved parties collaborate with each other, UML models (using the UP-ColBPIP UML 2.0 profile) are created for four different views:
x x
The B2B Collaboration View captures the participants and their communication relationships. These are modeled in extended collaboration diagrams. The Collaborative Process View identifies collaborative processes. To define this, UP-ColBPIP extends the semantics of use cases to represent collaborative processes as informal specifications of a set of actions performed by participants. The Interaction Protocol View defines the explicit behavior of the collaborative processes through the use of interaction protocols. UPColBPIP extends the semantics of UML 2 interactions to model this. The Business Interface View offers a static view of the collaboration through the definition of the business interfaces of the participants. This is modeled in extended composite structure and interfaces
In the verification phase, the collaborative processes are converted (based on a mapping) to a formal language. These formal models - such as Petri nets - can be validated for consistency and checked for common problems such as deadlocks and livelocks. In the implementation phase, specifications are generated for the business processes and the participating partners' interfaces. It is argued that transformations exist from UP-ColBPIP models to ebXML, BPEL and WS-CDL specifications, yet no proof of this could be found. Furthermore, it is also unclear to what extent message content is modeled and into what kind of specifications it can be transformed.

Kim

Kim acknowledges in [7] that in order to support the dynamic setup of business processes among independent organizations, a formal standard schema for describing the business processes is required. In other words, for Kim, standardizing interchangeable objects is not enough; the inter-organizational processes need to be standardized as well. The ebXML framework appears to provide such specification schemas (BPSS). Modeling inter-organizational processes can be done through UN/CEFACT's Modeling Methodology. As noted earlier, this method uses worksheets to capture the domain knowledge. Kim argues that UML diagrams can also be used for this purpose and proposes guidelines on how such models are to be produced. Furthermore, a prototype modeling tool ("ebDesigner") with BPSS generation features is presented. Kim's modeling approach is therefore not a complete methodology on its own; rather, it builds on the foundations of UN/CEFACT's Modeling Methodology.

Kramler

In [8], Kramler acknowledges that UN/CEFACT's Modeling Methodology tackles the problems of:
- the lack of graphical modeling;
- support for modeling collaborations (choreography) instead of individual interfaces; and
- the lack of abstraction for modeling transactions.
However, it is argued that UMM's approach does not support the specifics of Web Service technology. In order to cope with this, Kramler proposes a UML 2.0 based modeling technique that supports platform-independent modeling of Web Service collaboration protocols. Modeling is done in a top-down fashion across three layers. At the collaboration level, an overview of the collaboration is provided, expressed in collaboration diagrams. The transaction level considers transactions and transactional processes. Each transaction is performed by a set of participants from the collaboration. This level abstracts from the distribution of state and control in a collaboration to provide a convenient high-level model. Activity diagrams define transactional processes which refine the defined collaborations; class diagrams are used to model objects, and associated state charts define the behavioral aspects. At the interaction level, the actual asynchronous exchange of messages is modeled. The allowed sequences are captured in sequence diagrams. Furthermore, class diagrams are used to model the content of the messages that are exchanged. The strong side of this approach is the formal connection of the object models to the messages exchanged at the interaction level. Thereby, it creates an aggregated form of data flow control that is argued to be missing in most modeling approaches. It is assumed that this connection between objects and object exchange allows for a mapping to the implementation level; however, such a mapping is not yet provided by Kramler et al. Although this methodology does not provide a full model-driven approach for developing messaging standards, it is focused on the delivery of models (CIMs and PIMs) which - in the end - could be transformed to deployable artifacts. Therefore, this methodology could easily be fitted into a model-driven approach.

Bauer

Bauer et al. propose a model-driven design approach [9] for modeling cross-enterprise business processes. The methodology is a top-down approach, in line with the MDA principles. It proposes to model the existing business processes at CIM level; the proposed modeling languages are BPDM for process modeling and UML activity diagrams for visual representation. At PIM level, the business processes are refined
with data-flows and are also modeled in activity diagrams. The PIMs are modeled from an IT implementation point of view. This means that specific send and receive activities are modeled in the activity diagrams for each process. The distinction between business processes at CIM level and implementation-oriented designs at PIM level makes this approach clearly business-oriented. However, as the business processes are refined to more implementation-oriented models, information modeling is not mentioned in the methodology; no refinement of the data-flows is proposed. Furthermore, this method is merely a design method. No mappings are yet presented; in future work, mappings to BPEL4WS are promised. The methodology, however, does not restrict designers to modeling inter-organizational business processes that can be implemented with BPEL4WS only. The methodology is therefore non-restrictive with regard to target realization platforms.

Koehler

Koehler presents a methodology [10] that implements model-driven transformations between particular platform-independent (business view) and platform-specific (IT architectural) models. It aims at deriving Web Services and executable business processes. On the PIM side, business process models (UML 2 activity diagrams) are used to describe existing business processes. On the PSM side, the IT architectural models focus on WSDL and BPEL4WS models. Koehler acknowledges that the IT architecture should be aligned with the business processes: it should be the business process model that determines the IT architecture, and not the other way around. The methodology is therefore clearly top-down and business-oriented. It considers the PIMs to be specifications and the PSMs to be the solutions that satisfy the specification. Koehler proposes a mapping of UML 2.0 activity diagrams to so-called process graphs, which reflect a value-network's static information flows, and describes how BPEL4WS documents can be derived from these process graphs. Yet, a formal mapping is not presented. The methodology only focuses on supporting the process side of inter-organizational collaborations; information modeling (defining the messages) is not part of the methodology.

2.2 Comparison

In [1], a quality framework for design processes is proposed. Furthermore, it is argued that a specification should minimally specify (an illustrative fragment follows the list):
- the format of valid messages;
- the vocabulary of valid messages that can be exchanged;
- the intentions and semantics of these messages;
- which business interactions (services) to support;
- the procedure rules for these interactions (the choreography);
- the assumptions about the environment in which the interoperations take place.
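To make the first two ingredients concrete, the following sketch shows a schema fragment that fixes both the format and the vocabulary of one valid message type. The element names and the target namespace are invented for this illustration and are not taken from any of the surveyed methodologies.

  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             targetNamespace="http://example.org/b2b/ordering">
    <!-- One valid message type: its structure is the format,
         the typed elements it may contain are the vocabulary. -->
    <xs:element name="OrderRequest">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="OrderId" type="xs:string"/>
          <xs:element name="RequestedDeliveryDate" type="xs:date"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>

The remaining ingredients - semantics, supported interactions, choreography and environmental assumptions - cannot be expressed in such a schema alone and require additional parts of the specification.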
3 Selection Criteria

The combination of the specification process' quality aspects, the above-stated ingredients for a specification, and the concepts from the MDA yields the following selection criteria for a suitable design methodology. A modeling methodology is adequate when it:

- is top-down;
- is model-driven;
- is iterative;
- is platform non-restrictive;
- facilitates domain modeling;
- facilitates choreography modeling;
- facilitates information modeling;
- has formal verification methods;
- has tool support.
How these characteristics of design methodologies contribute to the implementability of specifications is elaborated in the following paragraphs.
Top-down

Messaging standards are used to facilitate an interworking. As the messaging mechanism must function in a value-network in the context of this interworking, the messaging mechanism must be designed for that purpose. This implies that a top-down approach is followed, where the interworking is refined with message-based communication until the design is implementable. Modeling should be done from a business-operational viewpoint.

Model-driven approach

As motivated in this paper's introduction, model-driven design contributes to implementability. Furthermore, it is believed that models are easier to maintain than implementations. Alterations to a collaborative process can be made to CIM or PIM models, whilst deployable artifacts can be quickly generated (fully or partially automatically), hence improving maintainability.

Iterative

As it is better to have designs that are almost right than completely wrong, iterative design is suggested. Nowadays, iterative design is a common practice, particularly in software engineering.

Platform non-restrictive

Within the message-based abstract platform, a design methodology should not further restrict designers in their choice of potential target realization platforms. After all, the freedom to select a target realization platform allows designers to select the platform that best fits the needs or situation. This obviously helps the quality of designs.
Facilitates domain modeling

As the exchanged messages in B2B interactions are about the business domain, this domain must be fully comprehended. Therefore, activities related to domain modeling and the involvement of domain experts contribute to the quality of the standard and should be an integral part of the design methodology [3].

Choreography modeling

As the choreography is one of the ingredients of a messaging standard's specification, the design methodology should take it into account. This is closely related to domain modeling, since the choreography must be aligned with the existing business processes in the business domain.

Information modeling

In the end, as valid messages need to be specified, the methodology needs to facilitate information modeling. Naturally, this is also closely related to domain modeling, as the messages are about the business domain.

Formal verification methods

Formal verification methods offer the means to assess a design's correctness, completeness and consistency. Thereby, the standard's and specification's quality can be monitored and controlled.

Tool support

Tool support improves the quality of specifications as it reduces errors or inaccuracies caused by manual (human) activities. Tool support is also essential to support the model-driven approach.

3.1 Comparison

In our search for a design methodology, we do not want to be restricted to specific platforms within the abstract platform of message-based interaction. Therefore, Koehler's methodology, which focuses solely on the Web Service platform, should not be selected. Kim, on the other hand, does not provide a methodology; his work merely explains how UML diagrams could be used to specify collaborative business processes. Kim's approach should therefore not be used as the main design methodology for messaging standards. Furthermore, a design methodology should address all aspects of message-based interaction. As we seek top-down methodologies - methodologies that focus on existing business processes or collaborations - process modeling is often the first step in a design methodology. This, however, does not mean that information modeling deserves no attention. In order to realize a successful interworking, the information aspects need to be modeled as well (be it as a refinement of processes or collaborations). Bauer's methodology omits the information modeling aspects in the design. A more comprehensive design methodology is obviously preferred.
Kramler's, Villarreal's and UN/CEFACT's modeling methodologies appear to be comprehensive enough. Villarreal proposes a fully model-driven approach, and with respect to quality assurance we must acknowledge that the formal verification phase is very appealing. However, (too) much is unclear about this methodology. For instance, uncertainty about the support for information modeling and the lack of proof of the claimed transformations lead us to believe that the methodology is not mature enough. Kramler proposes a methodology that is very similar to UN/CEFACT's. Yet, its strength - directly connecting messages to process models - cannot be fully exploited due to the lack of (proven) transformations.
Table 1. Comparison of design methodologies. The table rates each methodology (UMM, Villarreal, Kim, Kramler, Bauer, Koehler) against the criteria top-down, model-driven, iterative, platform non-restrictive, domain modeling, choreography modeling, information modeling, formal verification and tool support, marking each criterion as either fully satisfied or partially satisfied/not entirely clear.
4 Conclusions

UN/CEFACT's Modeling Methodology has been around for quite some time. It has proven its value, and a lot of support (tool support, mappings and transformations) is available. Furthermore, the worksheets and design guidelines offer valuable methodological support for the entire process of developing standards. UN/CEFACT's Modeling Methodology is therefore the only design methodology that deals with all modeling aspects and is able to model all the required aspects of collaborative business processes.
Although the UMM is not a fully model-driven design methodology, the availability of transformations and tool support provides opportunities. As the UMM is a modeling methodology for CIM and PIM models, the UMM and existing transformation tools could easily be incorporated into a model-driven design process. From that perspective, organizations involved in standards development and setting (such as TNO) will have a good starting point by using the UMM.
References

[1] Bastiaans, G.J.A.: "Standardizing electronic transaction based on state-of-the-art concepts", graduation thesis, TNO, 2007.
[2] Miller, J. & Mukerji, J.: "MDA Guide Version 1.0.1", OMG omg/2003-06-01, http://www.omg.org/docs/omg/03-06-01.pdf
[3] Roser, S. & Bauer, B.: "A Categorization of Collaborative Business Process Modeling Techniques", CECW, pp. 43-54, 2005.
[4] Dorn, Grun, Werthner & Zapletal: "A Survey of B2B Methodologies and Technologies: From Business Models towards Deployment Artifacts", HICSS, p. 143a, 2007.
[5] UN/CEFACT's Modeling Methodology User Guide, 2003-09-22, http://www.unece.org/cefact/umm/UMM_userguide_220606.pdf
[6] Villarreal, P.D., Salomone, E. & Chiotti, O.: "A MDA-based development process for collaborative business processes", in Proceedings of the European Workshop on Milestones, Models and Mappings for Model-Driven Architecture (3M4MDA), Bilbao, Spain, July 11, 2006.
[7] Kim, H.D.: "Conceptual Modeling and Specification Generation for B2B Business Process based on ebXML", ACM SIGMOD Record 31(1), pp. 37-42, 2002.
[8] Kramler, G., Kapsammer, E., Kappel, G. & Retschitzegger, W.: "Towards Using UML 2 for Modelling Web Service Collaboration Protocols", in Proceedings of the First International Conference on Interoperability of Enterprise Software and Applications (INTEROP-ESA'05), Feb. 2005.
[9] Bauer, B., Müller, J.P. & Roser, S.: "A Model-Driven Approach to Designing Cross-Enterprise Business Processes", University of Augsburg, 2004.
[10] Koehler, J., Hauser, R., Kapoor, S., Wu, F.Y. & Kumaran, S.: "A Model-Driven Transformation Method", EDOC, p. 186, 2003.
[11] Huemer, C.: "UN/CEFACT's Modeling Methodology (UMM) in a nutshell", http://unece.org/cefact/umm/UMM_userguide-nutshell.pdf
[12] ISO: Open-EDI Reference Model, ISO/IEC JTC 1/SC30, ISO Standard 14662, 1995.
[13] Huemer, C. & Liegl, P.: "A UML Profile for Core Components and their Transformation to XSD", accepted at the Second International Workshop on Services Engineering, Istanbul, Turkey, 2007.
[14] Hofreiter, B., Huemer, C., Liegl, P., Schuster, R. & Zapletal, M.: "UN/CEFACT's Modeling Methodology (UMM): A UML Profile for B2B e-Commerce", in Advances in Conceptual Modeling - Theory and Practice (ER 2006), Tucson, Arizona, USA, Springer, pp. 19-31, 2006.
[15] Hofreiter, B. & Huemer, C.: "UMM Add-In: A UML Extension for UN/CEFACT's Modeling Methodology", in European Conference on Model Driven Architecture (ECMDA'06), July 2006.
[16] Hofreiter, B., Huemer, C., Liegl, P., Schuster, R. & Zapletal, M.: "Deriving executable BPEL from UMM Business Transactions", in IEEE International Conference on Services Computing (SCC 2007), pp. 178-186, 2007.
[17] Hofreiter, B. & Huemer, C.: "Transforming UMM Business Collaboration Models to BPEL", in Proceedings of the International Workshop on Modeling Inter-Organizational Systems (MIOS), 2004.
[18] Frick, A. & Helger, P.: "UMM BTV nach WS-CDL Transformator", online resource, http://philip.helger.com/gt/get.php?where=gt&file=wscdl.pdf, downloaded at 10-18-2007.
[19] Krämer, B., Papazoglou, M. & Schmidt, H.W.: "Information Systems Interoperability", Research Studio Press, 1998.
Part III
Service Design and Execution
An Adaptive Service-Oriented Architecture

Marcel Hiel, Hans Weigand and Willem-Jan Van Den Heuvel

Tilburg University, PO Box 90153, Tilburg 5000 LE, The Netherlands
{m.hiel,weigand,wjheuvel}@uvt.nl
Abstract. Service Oriented Architectures and Autonomic Computing are heralded as the de-facto solutions for constructing and evolving complex and highly-adaptive enterprise applications. Unfortunately, however, the architectures proposed are visionary; how to design, build and evolve adaptive service-enabled applications remains largely unclear. In this paper, we introduce an extension of the Service Oriented Architecture, called the Adaptive Service Oriented Architecture, leveraging it with concepts and mechanisms from Autonomic Computing and Agent Technology. We illustrate the constituents and implications of the ASOA with a prototypical architecture which deals with interoperability issues. Keywords: Managing challenges and solutions of interoperability, Service oriented architectures for interoperability, Architectures and platforms for interoperability, Agent based approaches to interoperability
1 Introduction

In order to keep up with increasingly fast changing market demands, to accommodate new business rules and governmental regulations, and to cater for the swift introduction of new products in new (international) markets, the business applications that support enterprise processes must be changed continuously rather than sporadically. In addition, business applications are rapidly becoming more complex and difficult to evolve, due to the ubiquity, distribution and heterogeneity of devices (e.g., mobile telephones and handhelds), platforms and applications. Currently, Service Oriented Architectures, or SOAs for short, are heralded as the de-facto distributed computing technology for developing and managing this new breed of highly-adaptive business applications. So far, much progress has been made in development methods and techniques; the management of services, however, has been largely neglected. Some standards have emerged to cater for management requirements, but SOA is not sufficiently equipped to define them in a concise and consistent manner.
At the same time, in response to the rising complexity of business applications, as well as the critical need for tools and techniques to facilitate their evolution, Autonomic Computing (AC) [1] is touted by industry leaders, mainly IBM, as the comprehensive solution to leverage the maintenance of business applications. As such, AC is defined as an implementation-agnostic computing paradigm, and potentially provides SOAs with the concepts and management mechanisms that help to overcome the shortcomings of the existing SOA paradigm. However, AC does not define exactly how business applications in general, and web services and SOAs in particular, may adapt themselves. Clarity can be gained by looking in more detail at the technologies that have been associated with AC to realize this vision. Under the umbrella of AC, agents have been positioned as a viable approach to increase the level of autonomy and adaptability of current SOAs. Agents embody the idea that software can make decisions itself, without human intervention.

The concepts described in this article will be exemplified by a simple running example. This example deals with an order management process that involves four parties: a customer, a retailer, a shipper and a bank. This process (graphically depicted in Fig. 1) is enacted by a composite web service and is structured as follows. After having received a purchase order from a customer, the retailer executes three tasks in parallel. Firstly, the retailer ascertains that sufficient parts are in stock. Secondly, the retailer checks the creditworthiness of the customer; for this purpose, he invokes an external web service offered by a trusted third party, here a bank. At the same time, the retailer inquires of a shipper whether he can deliver the parts to the customer before the requested date. Once these tasks are completed, the order management process is concluded by sending an invoice to the customer, indicating the expected shipping date.

Fig. 1. The Order Management Process

Assume that the bank unilaterally upgrades its Financial service and sends a notification about this to its clients. In particular, the port type CheckCreditWorthiness is extended, allowing its clients not only to check the creditworthiness of a specific bank account, but also that of credit cards issued by the bank. Hence, this port type allows options in its input message: either a bank account or a credit card number may be queried. The problem that would arise in a current SOA is that either a human administrator catches this notification and starts implementing the change manually, or an error occurs at run-time when the upgrade at the bank has
been implemented. In today's business environments, this type of change will happen frequently rather than sporadically, and it is therefore our purpose that these types of changes should be automated. The motivating question is: can systems be designed in a generic way to adapt to changes like this one? In this paper we introduce an architecture that guides the design of such adaptive systems in a structured manner. It should be noted that we are interested in a generic method for dealing with these types of changes, and not in the specific problem described above. The architecture we introduce - the Adaptive Service-Oriented Architecture, ASOA for short - is built on top of the SOA and on recent developments in WS-Management and Agent Technology. This paper is structured as follows: in the next section we briefly describe the Service Oriented Architecture, the concepts that constitute a service, and our motives to extend the SOA. In Section 3, we introduce the ASOA and the concepts that make it adaptive. Using the ASOA as our basis, we discuss in Section 4 a prototypical architecture that entails all concepts and which will be applied to the example above. After this, we discuss related work (Section 5) and conclude the paper (Section 6).
2 Service Oriented Architecture

The Service Oriented Architecture aims to address the requirements of loosely coupled, standards-based, and protocol-independent distributed computing, linking business processes and enterprise information systems isomorphically to each other [2]. Essential characteristics of services in a SOA are [3]:

- All functions are considered services. This holds for business functions as well as system functions.
- Services are autonomous. The actual implementation of the functionality is encapsulated in a service, and is consequently invisible from the outside. Instead, the services are advertised in the interface of the service.
- Interfaces of the services are protocol-, network- and platform-agnostic.
Fig. 2. Service Brokerage
SOAs support two key roles: a service requestor (client) and a service provider, which communicate via service requests. While SOA services are visible to the service client, their underlying realizations remain hidden and inaccessible. For the service provider, however, the design of components, their service exposure and their management reflect key architecture and design decisions. Fig. 2 depicts the standard SOA, where a service broker serves as an intermediary interposed between service requesters and service providers. Under this configuration, the broker typically offers a registry where the service providers advertise the definitions of services and where the service requestors may discover information about the available services. Once a requester has found suitable services, it may directly define bindings to the service realizations at the provider's site and subsequently invoke them.
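As a sketch of what is advertised in the registry and bound to by the requester, consider the following WSDL service element; the service name, binding name and endpoint address are invented for illustration.

  <!-- Advertised by the provider via the broker; the requester binds to
       the port and invokes the service at the given endpoint. -->
  <service name="CreditCheckService">
    <port name="CreditCheckPort" binding="tns:CreditCheckSOAPBinding">
      <soap:address location="http://bank.example.org/services/creditcheck"/>
    </port>
  </service>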
Fig. 3. Ontology of a Service
While the standard SOA in this figure defines the main roles and the ways in which they may interact, it does not represent a conceptual structure of services, the first-class citizens in SOAs. To address this deficiency, we have derived an ontology of services from existing specifications, standards and research papers, in particular the WSA [4]. This ontology, depicted in Fig. 3, is comprised of six essential concepts, which are believed to collectively define the core fabric of services in SOAs:
x
Organization: An organization (or person) denotes the concrete owner of an (abstract) web service. As such, it materializes the linkage between the service, e.g., web-service, and the organizational entity which bears responsibility for it in the real world. Service: Services are self-describing components that are capable of performing a task. A service can be nested, implying that a service can be composed of other services. Services that are assembled from other services are called composite, or aggregate, services, while atomic services
An Adaptive Service-Oriented Architecture
x x
x
x
201
refer to services that can not be further decomposed in finer-grained services. Action: An action constitutes a discrete function that may be executed by the service. Task: Several cohesive actions may be bundled into a task. Several criteria may be used to synthesize actions into a task, e.g., communicative- and functional cohesion. A functionally cohesive task should perform one and only one problem-related function and contain only actions necessary for that purpose, e.g., the services involved in the order management service. A communicatively cohesive task isone whose actions use the same set of input and output messages. The actions that collectively make up a task may be advertised in a service interface. Message: Services collaborate with each other by exchanging messages, each of which conveys one or more typed information elements. Messages may be correlated, e.g., in case of a two-way communication protocol, a send- and receive-message are associated with each other. Interface: A service interface defines the messages that a service can send and receive. Interfaces are the key instrument in SOA to ensure platformand protocol independence, defining a service contract that forms a pair with an associated service implementation, and captures all platformindependent assertions. As such, the interface defines the identity of a service and its invocation logistics.
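In WSDL terms, the message and interface concepts correspond to message and portType definitions. Anticipating the bank example used in Section 4, a minimal sketch could look as follows; the operation name and the response message are assumptions, since the WSDL fragment shown later in this paper is incomplete.

  <message name="CreditCheckRequest">
    <part name="Accountnr" type="xs:int"/>
  </message>
  <portType name="CheckCreditWorthiness">
    <!-- The interface advertises the task; the realization stays hidden. -->
    <operation name="check">
      <input message="tns:CreditCheckRequest"/>
      <output message="tns:CreditCheckResponse"/>
    </operation>
  </portType>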
Although SOA provides a good architecture for creating services, we see three shortcomings concerning adaptivity. The first is that although the essentials of services (and SOA) are captured by the service brokerage triangle and the six concepts described above, they do not capture adaptive behavior explicitly. Because this behavior remains implicit, it is hard to create standards for it. The second concerns service discovery. The dynamics of SOA lie in its capability to discover services dynamically and to bind to their implementations at runtime. However, service brokerage alone is not enough to create an adaptive architecture. Although new services can be found and bound, the assumption that these new services will fit (be compliant) with the interaction protocol and business process is a strong one. To relax this assumption, services can be made to adapt themselves to fit with newly found services. In that case one only needs to find a service with the right functionality, without the extra constraints of the right protocol and a matching business process. The third shortcoming is that although the SOA is advocated to enable highly dynamic systems, in practice its key components are typically static in nature. Current service standards and platforms typically deliver static solutions, which are brittle and resistant to change. Changes need to be carried out manually rather than automatically; for example, service reconfiguration is performed manually. The adaptivity we seek in the Adaptive Service Oriented Architecture is delivered by the adaptiveness of the individual services. We envision that, as in Autonomic Computing, the services will act as autonomous units which are able to adapt themselves to their environment.
3 Adaptive Service-Oriented Architecture

To enable the adaptivity required in future business environments, we have developed the Adaptive Service Oriented Architecture (ASOA). The Adaptive Service-Oriented Architecture we introduce here is based on the SOA; however, there are a number of changes in the design.

3.1 The Basic Adaptive Service-Oriented Architecture
Fig. 4. The Basic Adaptive Service Oriented Architecture
Fig. 4 illustrates the basics of the ASOA. As in SOA, service brokerage is used to find new services. The main difference is that a new role is introduced, that of the manager. At both the side of the provider and that of the requester, a manager controls the services. With the introduction of management in services, we make the distinction between the manageable (adaptable) service and the manager. The main advantage of this architecture is a clean separation of concerns: the service itself solely offers business functionality, while management and adaptation are the responsibility of the manager. Managers can interact with each other, for example to establish a contract or to notify each other of changes. We consider reconfiguration of a service to be a managerial task and not an operational one; therefore, finding and integrating new services should be done by the manager. Fig. 4 illustrates this: the service broker is connected to the manager and not to the service. For the basis of the manager we have looked at Agent Technology. Agents are pieces of software able to make decisions and to motivate these decisions. Characteristics of agents such as pro-activeness make them a suitable candidate to deal with unforeseen changes.
3.2 ASOA Ontology

The concepts we described in the previous section on SOA, together with concepts distilled from WS-Management and agents, have resulted in the ontology illustrated in Fig. 5. In this section, we explain our choices concerning these concepts and describe how they relate to each other.
Fig. 5. Ontology of an Adaptive Service Oriented Architecture
The difference in color (blue and yellow) indicates the two types of concepts that we distinguish, namely specifications and artifacts. The specifications represent descriptions of the service and service components, such as contracts, capabilities, interfaces and goals. The artifacts constitute the parts that are implemented, such as the service and the manager. The notion of a manager refers to a specific type of service that has goals. By defining the manager as a service, we have introduced a managerial dimension, meaning that we have two types of specifications and artifacts. An example of this are the interfaces that a service provides: next to the operational interface, it can possess a managerial interface to allow itself to be adapted. In Fig. 5, an adaptation cycle can be seen, which is formed as follows: the manager listens to the events that are generated by and about the managed service. Based on these events, it will detect changes that, for instance, affect the performance of the service. Depending on its goal(s), the manager will decide whether to react and how to react. The capabilities that the service provides are the means for the manager to execute these adaptations. The concepts introduced or changed (in relation to their definition in SOA) are explained in more detail below.
- Goals: One of the characteristics of an agent is pro-activeness, implying that agents are goal-directed. Goals used for agent programming can be specified in two different ways [5]. The first way is to define them in a declarative manner; in this way, they describe the state of affairs sought by the agent. Declarative goals are needed if agents need to reason about them. The second way is to define goals procedurally, meaning that a goal is defined as a set of procedures which is executed to achieve the goal. The procedural manner is related to planning.
- Contract: The contract stipulates the mutual agreement between two or more services (managers) and defines the prerequisites and results of a particular service interaction. The contract is implicitly included in the WSA through the concept of "service semantics", but in the ASOA we need an explicit notion for restraining the adaptiveness of the service. Although the services should be adaptive and alter themselves to their environment, stability between parties is of critical importance to reduce uncertainty and establish trust relations.
- Capabilities: The manageability capabilities of a service provide the management operators to adapt the service so as to make it comply with the manager's intentions. Concerning manageability, we can distinguish between atomic and composite services. In an atomic service, the manageability capabilities are limited to parameter adaptations and optimization of the process. In a composite service, the manageability capabilities could entail, besides parameter adaptation, compositional adaptation and thus redesigning the process.
- Event: Events are basically messages that contain information about the system's functioning, used for the purposes of logging, alerting and monitoring. Events combined with event-correlation models form the basis on which the manager can detect changes, providing the foundation for reactive change management (a sketch of such a notification follows this list). As can be seen in Fig. 5, we do not have the concept of message in the ASOA; the reason for this is that we assume that in asynchronous communication messages can be equated with events.
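The sketch below shows the kind of notification a manager might receive about the bank's interface change, in the spirit of WS-Eventing; the payload element names and their namespace are illustrative assumptions rather than actual WS-Management schema elements.

  <env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
                xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <env:Header>
      <wsa:Action>http://example.org/events/InterfaceChanged</wsa:Action>
    </env:Header>
    <env:Body>
      <!-- Tells the manager which partner changed and where the new
           interface description can be retrieved. -->
      <InterfaceChanged xmlns="http://example.org/events">
        <Partner>Bank</Partner>
        <NewInterface>http://bank.example.org/creditcheck?wsdl</NewInterface>
      </InterfaceChanged>
    </env:Body>
  </env:Envelope>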
4 A Prototypical Adaptive Architecture In this section, we describe a realization of the conceptual ASOA. This specification of the concepts of ASOA, is aimed at interoperability and applied to the case of an composite service. The figures below illustrate the relation between manager and adaptable system and the specifications/implementations used to realize it. We illustrate its workings with the example from the introduction. 4.1 The Adaptable Service A composite service consists of multiple atomic services that constitute a business process. Here, we consider the composite service to be the adaptable
system, which exposes the process itself to be managed. Typically, in a SOA environment, this process is described in BPEL and the interfaces of the service are published in WSDL. In order for the manager to be able to control the composite service, the service needs to provide manageability capabilities. For this part, we rely on work done in adaptive processes and workflow management, such as ADEPTflex [6] and WIDE [7]. Although BPEL was not designed with flexibility or adaptability in mind, part of the work done on adaptive processes has also been applied to BPEL [8]. The manageability capabilities of the composite service include primitives such as adding (+) or removing (-) partner links (P) and variables (V). Next to these, an activity X can be inserted, +A(Process, X, Preds, Sucs), or removed, -A(Process, X), from the control flow.

4.2 The Manager

The manager is a special type of service that relies on goals. We chose to implement the manager with an agent-oriented programming language; more specifically, we use a BDI-based agent-programming language. These languages encompass both a specification for goals and a control loop (adaptation cycle) to realize adaptive behavior. The most common ones are built on top of Jade, such as Jadex and 2APL. In our case, we have chosen 2APL. An extensive comparison of these agent-programming alternatives would be beyond the scope of this paper. The goal of the manager is set by the organization. Typically, in interoperability this goal can be one of two things. First, the goal can be to stay compliant, meaning that despite changes to the interfaces of other services, the process should still work. Second, the goal could be to support changes. This is a step further than compliance, because it also means incorporating the changes of the interfaces into the process. For instance, if functionality is added then this additional
functionality should be supported. This leads to change propagation. For the remainder of this paper, we will assume that the manager is set with the goal to stay compliant. This can be specified as follows:

    Compliant(Process, Partner1, .., PartnerN)

meaning that the Process (BPEL) should stay compliant in relation to the interfaces of its partners. Typically, the partners are variables that are set at runtime and which can be changed; these correspond to the partner links in BPEL. When the manager receives an event that an interface has changed, it must perform some actions in order to stay compliant. This can be expressed as a planning goal rule (2APL):

    Compliant(Process, Partner1, .., PartnerN) <- Event(PartnerX, NewIntface) |
    { CompareOldAndNewInterfaces(PartnerX, OldIntface, ChangeList);
      MapChangeListToBPELPrimitives(ChangeList, ChangePlan);
      ApplyChangePlan(ChangePlan); }

This rule states that if the agent has the goal to stay compliant and it receives an event concerning a changed interface, then it will construct and execute a change plan.

4.3 Change Example

To illustrate the workings of this setup, we return to our example in the introduction. Assume that the bank has decided to upgrade its interface to support the option of checking the validity of credit cards. The bank sends its customers a link where the new interface can be found, together with an expiration date for the old interface. Part of the WSDL of the bank would look like the following:

    <message name="CreditCheckRequest">
      <part name="Accountnr" type="xs:int"/>
    <portType name="CheckCreditWorthiness">
Once the manager receives the event from the bank, it will compare the WSDL documents and see that the message sent to the bank is different:

    <schema ...>
      <element name="Accountnr" type="xs:int"/>
      <element name="CreditCardnr" type="xs:int"/>
Based on these differences, it creates a plan to stay compliant. Such a change plan will look like the following:

    ( +P(newBankPT);
      +V(CredibilityCheck);
      +A(OrderManagement, InvokeBankNew, receivePO, sendInvoice);
      -A(InvokeBankOld);
      -V(CreditCheckRequest);
      -P(oldBankPT); )
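To make the semantics of these primitives concrete, the following Java sketch shows one hypothetical way such a change plan could be represented and applied to an in-memory process model. The ProcessModel type and its methods are our own assumptions for illustration; they are not part of BPEL or of the prototype described above.

    import java.util.List;
    import java.util.function.Consumer;

    // In-memory model of the BPEL process exposed through the managerial interface.
    class ProcessModel {
        void addPartnerLink(String name)    { /* +P */ }
        void removePartnerLink(String name) { /* -P */ }
        void addVariable(String name)       { /* +V */ }
        void removeVariable(String name)    { /* -V */ }
        void insertActivity(String activity, String pred, String succ) { /* +A */ }
        void removeActivity(String activity) { /* -A */ }
    }

    // A change plan is an ordered list of primitive operations.
    class ChangePlan {
        private final List<Consumer<ProcessModel>> steps;
        ChangePlan(List<Consumer<ProcessModel>> steps) { this.steps = steps; }
        void apply(ProcessModel process) { steps.forEach(step -> step.accept(process)); }
    }

    // The bank-upgrade plan from the example, expressed over this sketch.
    class BankUpgradeExample {
        static ChangePlan plan() {
            return new ChangePlan(List.of(
                p -> p.addPartnerLink("newBankPT"),
                p -> p.addVariable("CredibilityCheck"),
                p -> p.insertActivity("InvokeBankNew", "receivePO", "sendInvoice"),
                p -> p.removeActivity("InvokeBankOld"),
                p -> p.removeVariable("CreditCheckRequest"),
                p -> p.removePartnerLink("oldBankPT")
            ));
        }
    }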
5 Related Work

There are many directions in which one can find work related to our architecture, so we limit ourselves to the closest ones. Papazoglou [9] states that SOA does not address issues such as coordination and management, among others. The Extended SOA (xSOA) aims to remedy this, and a layered model is provided with challenges and opportunities. However, like Autonomic Computing, the architecture remains at a high level of abstraction and does not make design choices on how service management must be realized. Other approaches with the purpose of creating an adaptive architecture can be found in the area of self-adaptive software. Examples of relevant architectures in the area of self-adaptive systems include MADAM [10] and RAINBOW [11]. Similar to our approach, these include adaptation cycles and use models to manage the changes. The main difference is the application domain at which they target their adaptations: they focus on specific technical domains, such as videoconferencing and network management, where decisions regard Quality-of-Service only.
6 Conclusion

Current SOA allows only very constrained levels of adaptability and falls short of the adaptivity requirements of future dynamic business environments. Building on SOA, WS-Management and agents, we have presented an ontological definition of an Adaptive Service-Oriented Architecture, which captures a control loop as well as contract-based collaboration. We have made the high-level generic architecture more concrete by developing a prototypical architecture focused on interoperability. By exposing the BPEL process of the composite service through manageability capabilities to an intelligent agent, the latter is able to reason about it and construct plans to deal with interoperability issues. As such, the results presented in this paper are core research results. Future refinements and extensions are pursued in two directions. First, we
want to develop a fully adaptable version of BPEL. To this end, we have to define an exhaustive list of BPEL elements and integrity constraints on BPEL specifications, so that meaningful adaptations are enabled. Second, we aim to fill a repository of adaptation patterns. Each pattern is composed of basic operators and addresses a particular problem type encountered during service execution, such as the Compliant pattern sketched above.
References
[1] Kephart, J., Chess, D.: The vision of autonomic computing. Computer 36(1) (January 2003) 41-50
[2] Papazoglou, M.P., van den Heuvel, W.: Service Oriented Architectures: Approaches, Technologies and Research Issues. VLDB Journal 16(3) (2007) 389-415
[3] Holley, K., Channabasavaiah, K., Tuggle, J.: Migrating to a Service-Oriented Architecture. IBM DeveloperWorks (December 2003)
[4] Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., Orchard, D.: Web services architecture. Working notes, W3C (February 2004)
[5] Winikoff, M., Padgham, L., Harland, J., Thangarajah, J.: Declarative & procedural goals in intelligent agent systems. In: Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR'02), Toulouse, France, April 22-25, 2002. (2002) 470-481
[6] Reichert, M., Dadam, P.: ADEPTflex - supporting dynamic changes of workflows without losing control. JIIS 10(2) (March 1998) 93-129
[7] Casati, F., Ceri, S., Pernici, B., Pozzi, G.: Workflow evolution. In: International Conference on Conceptual Modeling. (1996) 438-455
[8] Reichert, M., Rinderle, S.: On design principles for realizing adaptive service flows with BPEL. In: EMISA. Volume 95 of LNI, GI (2006) 133-146
[9] Papazoglou, M.: Extending the Service Oriented Architecture. Business Integration Journal (February 2005) 18-21
[10] Floch, J., Hallsteinsen, S., Stav, E., Eliassen, F., Lund, K., Gjorven, E.: Using architecture models for runtime adaptability. IEEE Softw. 23(2) (2006) 62-70
[11] Cheng, S.W., Huang, A.C., Garlan, D., Schmerl, B.R., Steenkiste, P.: Rainbow: Architecture-based self-adaptation with reusable infrastructure. In: ICAC, IEEE Computer Society (2004) 276-277
FUSE: A Framework to Support Services Unified Process

Nicolaos Protogeros1, Dimitrios Tektonidis2, Androklis Mavridis2, Christopher Wills3, Adamantios Koumpis2
1 Macedonia University of Thessaloniki, Department of Information Systems and E-Commerce, Egnatia 156 str., Thessaloniki, Greece
2 ALTEC S.A., 6 M. Kalou, 54629 Thessaloniki, Greece
3 Kingston University, Faculty of Technology, Centre for Applied Research in Information Systems (CARIS), Penrhyn Road, Kingston-upon-Thames, KT1 2EE, U.K.
Abstract. Traditional methodologies for the software development life cycle, such as the Unified Process or the Object-oriented Process, Environment and Notation (OPEN), have proven to be flexible and robust for developing traditional information systems. However, in the new SOC (Service Oriented Computing) paradigm, large systems emerge that are comprised of self-contained building blocks: the services, which can be combined to form complex business processes located on different servers and in different companies, where users may customize and create their services dynamically and negotiate Service Level Agreements electronically. In such systems the traditional software development methodologies are simply inefficient. This paper presents the FUSE approach, which provides a methodology and a framework to be used both by the IT industry and by individuals with little or no IT experience, such as specific domain experts, end users, testers and community members. This will support their common efforts in the efficient development of more reliable service-oriented systems of the future. The FUSE framework is based on and makes use of the Unified Process, OPEN, extended participatory design (PD) and other similar methodologies. Keywords: Service Oriented Computing, Rational Unified Process, Web Services
1 Introduction

Technology has revolutionized the way that companies deliver service, enabling the development of long-term individualized relationships with customers. Advancements in computing have allowed companies to improve both profits and financial accountability by providing high-quality, personalized service more easily and affordably than ever before.
In this direction, the transition of the software industry into a service industry is becoming a reality [11]. Many vendors have come to realize that software can be treated as a value delivery process, a view that brings many new opportunities. Since in the service industry clients mostly buy results rather than applications [3], appropriate business models must be adopted in order to guarantee success and sustain competitiveness. The traditional e-commerce paradigm is giving way to the e-services paradigm [9]. Service-oriented computing (SOC) is a driver for this change. Services hold the promise of moving beyond the simple exchange of information (the dominating mechanism for application integration today) to the concept of accessing, programming and integrating application services that are encapsulated within old and new applications. An important economic benefit of the service-oriented computing paradigm is that it enables application developers to dynamically grow application portfolios more quickly than ever before, by creating compound application solutions that use internally existing organisational software assets, appropriately combined with external components possibly residing in remote networks. Previously isolated Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Supply Chain Management (SCM), Human Resource Management (HRM), financial, and other legacy systems can now be converted to service-enabled architectures and integrated more effectively than when relying on custom, point-to-point coding or proprietary Enterprise Application Integration technology. The end result is that it is then easier to create new composite applications that use pieces of application logic and/or data that reside in the existing systems. This represents a fundamental change to the socio-economic fabric of the software developer community, one that improves effectiveness and productivity in software development activities and enables enterprises to bring new products and services to the market more rapidly [6]. Only through the adoption of a holistic approach to service-oriented computing is it considered likely that new industries and economic growth factors can be provided [8]. Thus, to unleash the full potential of SOC research, a broader vision and perspective is required: one that permeates and transforms the fundamental requirements of complex applications that require the use of the service-oriented computing paradigm. This will further enhance the value proposition of SOC and will lead to more effective and clearly inter-related solutions and better exploitation of research results.
2 Challenges for the Adoption of the Service-Oriented Approach

Among the challenges for industry-wide adoption of the service-oriented approach is the automated composition of distributed business processes, i.e. the development of technology, methods and tools that support an effective, flexible, reliable, easy-to-use, low-cost, dynamic and time-efficient composition of electronic distributed business processes. For several years, CEOs have focused on product innovation as a driver of competitive advantage, recognising that innovation is a means to achieving all their other goals. Just recently, however,
their understanding of what enables innovation has changed, leading to a new focus not just on innovating products and services, but also on innovating the business processes and business models that influence the creation of innovative products [18]. Standards such as BPEL and WS-CDL, which operate at the service composition plane in the Services Research Roadmap, provide the basis for the composition of services and the integration of business processes that are distributed among the most disparate entities, both within an organization (e.g., different departments) and across organizational borders (e.g., consumers interacting with different businesses or government departments providing complementary services). However, so far, the automated composition of distributed business processes is still far from being achieved: no effective, easy-to-use, flexible support is provided that can cope with the life cycle of distributed business processes, with their inevitable evolution and required adaptation to changes in business strategies and markets, customer and provider relationships, interactions, and so on [8]. Service composition is today largely a static affair: all service interactions are anticipated in advance and there is a perfect match between output and input signatures and functionality. More ad hoc and dynamic service compositions are required, very much in the spirit of lightweight and adaptive workflow methodologies. These methodologies must include advanced forms of coordination, end-user participation, instance-based modification of process models, less structured process models, and automated planning techniques as part of the integration/composition process. On the transactional front, although standards like WS-Transaction, WS-Coordination and the Web Service Composite Application Framework (WS-CAF) are a step in the right direction, they fall short of describing the different types of atomicity needed for e-business and e-government applications: they do not distinguish between transaction phases and conversational sequences, e.g., negotiation. Another area lacking research results is advanced methodologies supporting the service composition lifecycle. Some of the major limitations of state-of-the-art technologies that prevent effective automated composition are:
- Lack of tools for supporting the evolution and adaptation of business processes. It is hard to define compositions of distributed business processes that work properly under all circumstances. Misunderstandings in the agreement between different organizations, as well as errors in the specification and implementation of the interaction protocols, occur easily, especially for complex processes and interaction protocols. Typical problems are business processes that wait forever, or for too long, to receive an answer from another process, or that expect a different answer; or business processes that fail to invoke another process as required and do not allow the distributed business to proceed correctly. Moreover, even in cases where business interactions are initially correctly defined and implemented, they frequently stop working when some processes involved in the interactions are autonomously redefined by an external organization; this kind of evolution is very common in distributed, highly dynamic environments.
- Lack of integration of business requirements in the business process life cycle. While BPEL and WS-CDL are adequate for the specification of the detailed message exchanges in orchestrations and choreographies, there is a need for languages that define both the internal business needs of an organization and its requirements on external services, and for a systematic way of linking them to business processes. Indeed, without explicit requirements it is not possible to motivate the choices that lead to the specification of a certain flow of activities within a business process and of its interactions with other processes. Traceability, i.e. determining how a process is related to and affects business requirements and needs, cannot be supported if the two are not linked, which is of utmost importance in supporting legal requirements by IT [1]. Finally, and most importantly, if requirements are not accessible, there is no way to drive the automated composition of distributed business processes so that it could support the evolution and adaptation of the processes.
- Business-driven automated compositions: One of the main ideas of service-oriented applications is to abstract the logic at the business level away from its non-business-related aspects, the "system level", e.g., the implementation of transaction, security and reliability policies. This abstraction should make the composition of distributed business processes easier and more effective. However, the provision of automated composition techniques that make this potential advantage real is still an open problem. Business-driven automated compositions should exploit the separation of business and system levels in service compositions. According to this view, service composition at the business level should pose the requirements and the boundaries for automatic composition at the system level. While service composition at the business level should be supported by user-centred and highly interactive techniques, system-level service compositions should be fully automated and hidden from humans. System-level compositions should be QoS-aware, should be generated and monitored automatically, and should also be based on autonomic computing principles.
Lessons learnt from the traditional ASP era are valuable for the future of SOC-enabled SP. The need for a more precise focus on user needs and more control is the key feature of this new service-oriented outsourcing paradigm. In this respect, clear answers to real problems should replace today's hype.
3 Existing Approaches and Related Projects

The evolution of Service Oriented Architecture frameworks and tools over the last five years has leveraged the capabilities of IT companies for building solutions based on web services. Traditional tools have been enriched or replaced by specific SOA-enabled ones to increase the productivity and quality of SOA solutions. The SOA approach has created distinctive new features, changing the traditional system development lifecycle.
Current research in the field of software engineering aims to evolve existing system development methods in order to include the special characteristics of SOA. Research projects such as SeCSE [12] have created tools and methodologies to facilitate application integration using the SOA approach. The SeCSE project identified four areas of engineering (the design, development, deployment and management of services) where new techniques and tools are required to facilitate and leverage the capabilities of SOA.
Fig. 1. The four stages of the SOA life-cycle (source: SeCSE project)
Research on the adoption of model-driven architecture (MDA) and tools has demonstrated the strong relation between the component-based nature of SOA and MDA. The MODELWARE project [13] provided an open source tool integration platform that facilitates the customization of MDA tool chains for domain-specific needs. Related projects like TRUSTCOM [14] and SODIUM [15] provided SOA frameworks that facilitate the development of information systems based on web services. Apart from integrated systems and platforms for SOA development, there are also toolkits that can be adapted to different software engineering approaches. The INFRAWEBS project [16] provides a set of tools, based on the technologies of the WSMO-WSMX framework, that allow the engineer to select the engineering method. The toolkit approach is preferred over frameworks and platforms because of its flexibility and adaptability to the software engineering methods adopted by IT companies. Commercial solutions [17][18][19] also appear to adopt the toolkit approach in order not to restrict the engineer [20]. Therefore, the existing tools and frameworks use the same software engineering principles as traditional software engineering methods like the Rational Unified Process. IBM [18] recently tried to incorporate the special features of SOA systems with a specialized tool that combines the Rational Unified Process with SOA features. However, the enhancement of the SOA features appears only at the technical level, as a set of tools and functionality that facilitate web services development. Similar to the IBM approach, SAP has included in its products manageability of the unified process, not only at the level of coding and versioning but also at the level of user requirement definition. The latest versions of products like the Netweaver platform
[21] include components that enable and facilitate communication between the architects and the customer. However, these tools are tied to SAP's specific technologies and frameworks, and they are used mainly for the online documentation of requirements. Another key point related to the problem is the management of the development lifecycle, which also needs to be adjusted to the special characteristics of SOA development. Currently there is research on providing additional lifecycle management tools that are included in the development methodology or framework. The COMANCHE project [22] included tools that enable the management of knowledge produced during the development of the system, improving the communication between the domain experts and the engineers. This extends typical project management tools and methods, since it manages the feedback and communication of the different communities not just within the limited scope of user requirement definition but by increasing the involvement of all parties in every phase of the cycle. In addition to this, the notion of requirement-driven software engineering presented in the REDSEEDS project [23] includes tools and methods to manage feedback from the end users. The formation of the development scenario is handled by a toolkit that facilitates communication and increases the involvement of the end user. However, these tools are complementary to the development lifecycle.
4 FUSE Framework Overview

The FUSE vision can be encapsulated in two goals:
- To provide a new methodology and an open development paradigm for how user communities, development communities and the IT industry can coexist and co-work on the definition of new mission-oriented application concepts, based on ongoing integrated change management that makes extensive use of service provisioning.
- To provide the tools to manage the development process life cycle, including the organization of the requirements elicitation processes and the compliance validation and quality checking processes, in a synergetic way, with both user and developer communities forming an essential part of the intellectual service and software engineering processes.
Based on these goals and the above findings, the FUSE project will tackle the problem of developing a framework that copes with the life cycle of distributed business processes, with their inevitable evolution and required adaptation to changes. The framework will include methodologies with advanced forms of coordination, end-user and community participation, instance-based modification of process models, less structured process models, and automated planning techniques as part of the integration/composition process. It will also include tools for supporting the evolution and adaptation of business processes. It is hard to define compositions of distributed business processes that work properly under many circumstances without the intervention of end-user communities. So FUSE
will concentrate on the development of a web-based virtual design portal that allows the integration of business requirements in the business process life cycle. FUSE will support the abstraction of the logic at the business level from its non-business-related aspects, the "system level". Service composition at the business level will be supported by a user-centred and highly interactive web virtual design portal, while system-level service compositions may be fully automated and hidden from humans.
Fig. 2. Aspects taken into account in FUSE
The authors have recognized the need to take a simultaneous view on methodological, organisational, personnel and technical aspects to succeed in the development of FUSE, thus developing a socio-technical system. Moreover, the necessity of involving the end-users actively throughout the development process, in order to arrive at systems that are actually usable, used and appreciated, is today acknowledged by most system developers. Nevertheless, the software engineering approaches that still dominate industry tend to put less explicit emphasis on the end-users and on the organisational and social aspects of information systems. An example is the Unified Process (UP), which has in recent years received much attention as a defined process for the development of software-intensive systems ensuring a high-quality product [5]. On the other hand, socio-technical approaches such as Participatory Design (PD) are often criticised for being imprecise, for lacking a fully specified design process [2], and for putting emphasis only on the early system development phases, with the result that a ready-to-use system is seldom delivered [10]. FUSE will combine the benefits of a socio-technical perspective and active user participation with more formalised processes covering the entire system life cycle, such as the UP. The FUSE framework will be based on and extend proven quality approaches such as the Unified Process, OPEN, extended participatory and socio-technical design, and others. It will research the areas of service-oriented architectures, web services, software development, project management and web-based visual user interfaces. Finally, a functionally complete prototype of the FUSE system will be implemented, validated and evaluated. The project will be open source with the
exception of the adaptations or extensions of commercial solutions of (mostly SME) consortium members. Our vision in FUSE is to develop the methodology and the tools that will support project management, the collection of users' requirements, the mapping to distributed business processes, and the definition of system-level compositions that will allow information federation.
[Figure: a role-based FUSE workplace portal connects the project management, domain expert, end-user and engineer communities, through a methodology rule engine with asset, role, access, version and rule management and tracking, to distributed business process modelling, information federation (management of FUSE system-level compositions) and the Framework for Unified Service Engineering, which sits on top of ERP, SCM, CRM, PPM, parts management and other business services.]

Fig. 3. FUSE general architecture
5 FUSE Framework Implementation

One of the main goals of our approach is to propose a new open development paradigm with increased participation of users and developers. Users will not only provide the requirements and test the final system, but will participate in each step of the service-oriented development process. Based on the schematic representation of the proposed FUSE model (Fig. 4), there is a "community of users" which is in constant collaboration with the
software engineers. The authors identify two categories of users, namely the end-users and the domain experts. The first role of end-users is to provide the set of features they wish the new system to encompass, captured in the form of use cases, for example. Their second role is to give feedback to engineers regarding the quality assessment of the system, through the validations they perform each time a new release is available. The role of domain experts is also valuable. Domain experts define the business logic and a set of business rules and constraints to be taken into account in the services implementation. Domain experts also participate in validation procedures, giving a different view in the feedback they provide.
[Figure: end users define and validate a features list and validate and test releases; domain experts define and revise rules; features and business rules are included and improved in a common business repository; engineers formalize, design and map these to business process definitions, orchestrate them into business processes, and design, deploy and implement them as web services within the Framework for Unified Service Engineering.]

Fig. 4. FUSE core blocks and interrelations
Examining the engineering community, the authors also distinguish two categories of roles, namely the system analysts and the developers. The system analysts' role is to provide the communication gateway to the users' community. They interpret the gathered features and business logic, and transform this information into lower-level requirements delivered to developers. The main pillars of FUSE are these communities, which mark the boundaries of the proposed open development model. The model consists of a hierarchy of three layers. The top layer is where the features and rules are defined and stored. The second layer is where the business logic and features are mapped to the case's business processes, and the third layer concerns the compositions at the system level through the application of web services. The application of the model follows an iterative fashion, through a repetition of the design and development processes involved within these layers. More specifically, the methodology begins with the collection of features and business logic from the users' community. Features are commonly used to specify and distinguish products in product lines. In a similar way, FUSE will offer the
means of communication and collaboration for capturing features and functionalities, while defining the commonalities and variability of a new service for an existing organisation's business processes. Features also come with constraints on their usage: the selection of one feature may preclude or require the selection of others. Authorised users provide the list of desired features and authorised domain experts list the business rules and constraints. This information is stored in a physical storage and is available to all FUSE community members. System analysts then have the ability to view this information and make comments and corrections. This is an iterative process that eventually leads to an agreed set of requirements, which is also stored. This first layer is "solution independent", meaning that stakeholders are concerned not with how but with what. For this collection and communication of information to be realised, a set of techniques and tools must be employed. The outcome of this first step is a set of formulated and commonly agreed requirements to be fed to the second layer. The second layer is where the features and rules identified previously are mapped to business processes. In the same way, there is collaboration between domain experts and system analysts. The outcome of this step is a set of business processes that may be distributed, stored and made available to authorised members. The third layer is concerned with the web services implementation. Developers and system analysts collaborate to implement the business processes as defined in the previous stage. They use their own development tools and they may develop and provide the services from their own servers. The outcome is a list of web services that may be located anywhere, delivered to the FUSE community. This marks the end of the first top-down pass of the conceptual FUSE model. It is worth mentioning here that the mapping and orchestration among the three layers is important for a software engineer developing services that can be part of a distributed business process. An effort will be made by the consortium to extract the requirements for a standardization of these interfaces, so that the IT industry or individual engineers are able to produce web services that fit. What follows is the validation and testing of the deployed services by domain experts and end-users. The results of this validation are stored and made available to the software engineers. As changes and modifications are suggested, all the knowledge gathered in the three layers will be affected. The collected feedback is communicated and a set of improvements and corrections is agreed, thus igniting a new iteration of the model's employment. At a later time, a second pass of the FUSE methodology may take place, initiating modifications and improvements of the first development through new features, rules, business processes and web services, which may produce a second release of the system. Again, this second release will be validated and another pass begins. This iteration is performed until one of the developed releases meets all stakeholders' requirements and approval. The final approved outcomes of each layer will be stored in related databases for further use and improvement. These outcomes will be treated as assets that will increase reusability and effectiveness in later developments. Special focus will be given to the communication of ideas between users and developers.
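The following Java fragment is our own illustrative sketch of the three-layer traceability this model implies; none of these types or names appear in the FUSE specification, and the repository is deliberately minimal.

    import java.util.List;
    import java.util.Map;

    // Layer 1: solution-independent features and rules from the user community.
    record Feature(String id, String description) {}
    record BusinessRule(String id, String constraint) {}

    // Layer 2: business processes mapped from features by system analysts.
    record BusinessProcess(String id, List<String> featureIds) {}

    // Layer 3: web services implementing a process, identified by WSDL locations.
    record ServiceBinding(String processId, List<String> wsdlUrls) {}

    class CommonBusinessRepository {
        Map<String, Feature> features;
        Map<String, BusinessRule> rules;
        Map<String, BusinessProcess> processes;
        Map<String, ServiceBinding> services;   // keyed by process id

        // Trace a feature down to the services realizing it, supporting the
        // validation feedback loop between end-users and engineers.
        List<String> servicesFor(String featureId) {
            return processes.values().stream()
                    .filter(p -> p.featureIds().contains(featureId))
                    .filter(p -> services.containsKey(p.id()))
                    .flatMap(p -> services.get(p.id()).wsdlUrls().stream())
                    .toList();
        }
    }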
The exchange of tacit knowledge between the members of the FUSE communities will eventually lead, through the outcomes of each incremental
development, to more tangible codified knowledge (filtered and tested), directly mapped to the services developed and, through those, to the rules employed and to the features provided, modified or selected. This codified knowledge, together with the associated business processes mapped to it, will provide a template for future developments and improvements. Thus, there is a need to provide the means for effectively capturing this knowledge transformation. Apart from the obvious reason of fast traceability, and hence minimised development time and maximised value, another major benefit gained by managing this knowledge transformation is the impact of the diffused knowledge on the organisation's people, practices, policies and processes. From the description of our approach, the authors identify the requirements that need to be fulfilled. These requirements are associated with the proposed workplan. Initially, resources will be spent on research concerning the state of the art and the adoption of tools and technologies related to what the FUSE concept (methodology and toolkit) is to incorporate. This research will primarily involve project management, user access management, database management, and web services design tools and technologies. This is to ensure that fitting technologies do exist and to eliminate the risk of "reinventing the wheel". Finally, the feasibility and acceptance of the FUSE approach is to be confirmed through the selected pilot applications.
6 Conclusions and Future Work

This paper presented the requirements and the initial specification of the FUSE framework. The main objective is to adapt and specialise already developed methodologies, like the Unified Process, to the notion of Service Oriented Computing. The requirements presented and analysed in this paper are the basis upon which the FUSE framework is built. Our future work is the selection, adaptation and development of tools to support the FUSE methodology. In addition, the authors aim to provide adapters to widely accepted commercial tools in order to allow users to continue working in the same environment. Finally, an important step is the evaluation of the framework by industry as well as by academic institutes in order to improve our work. The evaluation is expected to start after the first prototype toolkit is available, enabling the research team to utilise early results.
References
[1] Agrawal, R., Johnson, Ch., Kiernan, J., Leymann, F. (2006). Taming Compliance with Sarbanes-Oxley Internal Controls Using Database Technology. 22nd Int'l. Conf. on Data Engineering ICDE'2006, Atlanta, GA, USA, April 2006.
[2] Constantine, L.L., Lockwood, L.A.D. (2002). Usage-centered engineering for web applications. IEEE Software, March/April.
[3] Currie, W., Desai, B., Khan, N. (2004). Customer evaluation of application services provisioning in five vertical sectors. Journal of Information Technology 19 (1), 39-58.
[4] IBM (2007). Product Lifecycle Management. Empowering product lifecycle management with service oriented architecture. How SOA makes PLM more flexible and cost-effective to support business imperatives. March 2007.
[5] Kruchten, P. (2004). The Rational Unified Process: An Introduction. Boston, Addison Wesley.
[6] Leymann, F. (2003). Web Services: Distributed Applications without Limits. Proc. BTW'03 (Leipzig, Germany, February 26-28, 2003), Lecture Notes in Informatics, volume P-26, Gesellschaft fuer Informatik (GI), Bonn, Germany, 2003.
[7] Leymann, F. (2005). Combining Web Services and the Grid: Towards Adaptive Enterprise Applications. Proc. CAiSE/ASMEA'05 (Porto, Portugal, June 2005).
[8] Papazoglou, M.P., Traverso, P., Dustdar, S., Leymann, F. (2006). Service-Oriented Computing Research Roadmap. 1 March 2006.
[9] Rust, R.T., Kannan, P.K. (2003). E-service: a new paradigm for business in the electronic environment. Communications of the ACM 46 (6), 36-42.
[10] Tollmar, K. (2001). Towards CSCW design in the Scandinavian tradition. Department of Numerical Analysis and Computer Science, Stockholm University, Doctoral Dissertation.
[11] Walsh, K.R. (2003). Analyzing the application ASP concept: technologies, economies, and strategies. Communications of the ACM 46 (8), 103-107.
[12] SeCSE Project, Service Centric Systems Engineering Project, available at http://secse.eng.it/pls/secse/ecolnet.home as viewed in March 2007.
[13] MODELWARE Project, available at http://www.modelware-ist.org/ as viewed in March 2007.
[14] TRUSTCOM Project, available at http://www.eu-trustcom.com/ as viewed in March 2007.
[15] SODIUM Project, available at http://www.atc.gr/sodium/ as viewed in March 2007.
[16] INFRAWEBS Project, available at http://www.infrawebs.eu/ as viewed in March 2007.
[17] BEA Integration Platform, available at http://www.bea.com/framework.jsp?CNT=index.htm&FP=/content/products/integrate/ as viewed in March 2007.
[18] IBM WebSphere, available at http://www-306.ibm.com/software/websphere/ as viewed in March 2007.
[19] ORACLE SOA Application Server, available at http://www.oracle.com/appserver/bpel_home.html as viewed in March 2007.
[20] Jardim-Goncalves, R., Grilo, A., Steiger-Garcao, A. (2006). Challenging the interoperability between computers in industry with MDA and SOA. Computers in Industry, 57.
[21] SAP Netweaver, available at http://www.sap.com/platform/netweaver/index.epx as viewed in March 2007.
[22] COMANCHE Project, available at http://www.ist-comanche.eu/ as viewed in March 2007.
[23] REDSEEDS Project, available at http://www.redseeds.eu/ as viewed in March 2007.
Adopting Service Oriented Architectures Made Simple

L. Bastida, A. Berreteaga, I. Cañadas
European Software Institute, Parque Tecnológico #204, 48170 Zamudio, Bizkaia, Spain
{leire.bastida, alberto.berreteaga, inigo.canadas}@esi.es
Abstract. Service Oriented Architecture (SOA) is envisioned as providing major business benefits, such as more efficient cost and complexity management, organisational evolution and interoperability, and increased competitiveness. But will the adoption of SOA benefit your organisation? Before adopting SOA, organisations must understand not only what it is, how it can benefit them and what their motivations to adopt SOA are, but also the organisational challenges required to undertake this transition, as well as the changes required to manage the entire service-based systems lifecycle. The adoption of SOA has to be carefully understood from both the organisational and technological perspectives. In this paper we analyse the organisational and technological challenges an organisation adopting SOA faces, and propose a set of best practices that will enable an organisation to adopt SOA efficiently. Keywords: Service oriented Architectures for interoperability, Intelligent infrastructure and automated methods for business system integration, Modelling methods, tools and frameworks for (networked) enterprises
1 Introduction

Service Oriented Architecture (SOA) is an approach to software development that builds loosely coupled distributed applications using a collection of services. An organisation adopting SOA will be able to:
- Implement loosely coupled integration approaches that reduce both the complexity and the cost of resource, communication and information integration and management.
- Increase Information Technology (IT) productivity, agility and business flexibility, since SOA enables component/service-based application development.
Reduce organisation’s business processes management and maintenance costs due to SOA enables defining modular and flexible applications/systems. Manage business evolution since new and/or upgraded services can be introduced with minimum impact on existing systems. The service interfaces provide stability for service users while the underlying systems may evolve.
In summary, SOA provides an adequate solution for all those organisations that need to align their business strategy with IT, enabling effective process evolution and maximizing resource management.
2 Myths Related to SOA

Every organisation should bear in mind that adopting SOA may not be the only valid answer to its needs. An organisation should not expect that merely deploying SOA will solve all its problems. There are several SOA myths that can lead to misconceptions about what SOA is and how it may help organisations:

- SOA provides a complete architecture. SOA is an architectural pattern or a new way of developing software, but it is not the system architecture by itself.
- Legacy systems can be easily integrated within SOA. One of the most attractive promises for an organisation adopting SOA is full support in reusing its old legacy systems' capacities, allowing a significant Return on Investment (ROI). However, the process of migrating legacy systems may not always be easy and automatic, since this migration can imply a great adaptation effort to expose those legacy systems as services.
- Using Web services means having SOA. Although most existing SOA definitions mention Web services technology as a way to implement SOA, Web services do not necessarily mean SOA. SOA can be implemented not only by using Web services but also by using other technologies, like CORBA or DCE (Distributed Computing Environment).
- By using XML and WSDL, interoperability among services developed by different organisations is guaranteed. Web services use XML to structure information, facilitating syntactic interoperability. However, Web services present certain limitations, like the inability to express semantic data. This implies that no relevant information about the service, such as its behaviour or the underlying protocol, is provided. Interoperability requires semantic as well as syntactic agreements in order to succeed.
- SOA is just a technology. SOA is more a concept than a technology; it goes beyond technology since it implies not only a technological but also an organisational change. The evolution towards SOA requires a cultural change that impacts all business areas within an organisation.
- Developing service-based applications and runtime dynamic service composition is simple. Currently existing technologies have not reached the point of making this claim true in organisational contexts.
Service discovery and composition issues are currently under research in several R&D projects (such as SeCSE [1], SODA [2] or AMIGO [3]), and some of the obtained solutions will require a maturation process before being ready to be applied in real-life environments. By presenting these SOA myths we do not intend to discourage organisations from using SOA; on the contrary, we intend to ensure that organisations have a clear understanding of SOA and of what they should expect. There are clear benefits for organisations adopting SOA: reduced costs and delivery times, faster access to services, more flexible services, and greater customer satisfaction.
3 The Four Pillars for a Successful SOA Adoption

SOA understands business processes as a set of linked services. When business processes are established on an SOA basis, an organisation is able to ensure interoperability between data and software applications (previously isolated), not only across the different business units and departments within the organisation but also with third parties. This approach enables organisations to improve productivity, to respond faster to market changes and to take advantage of new business trends.
[Figure: the four pillars (maturity, technologies, governance and change management) resting on a foundation of business-IT alignment.]

Fig. 1. The four pillars in SOA adoption.
Therefore, when adopting SOA, the first step is to identify the organisational business processes that support the business strategy requirements. The four fundamental pillars of SOA adoption are shown in Fig. 1.
3.1 SOA Maturity

Once an organisation has decided to adopt SOA, it is necessary to evaluate the maturity of the organisation in order to determine whether it is prepared to evolve to service-oriented systems, and to establish the SOA adoption path. Maturity models are useful for knowing how to adopt technologies, since they enable the identification of existing needs, reduce risks when adopting technologies and accelerate the adoption of the technology. Maturity models provide a reference model which enables an organisation to evaluate itself, and a set of best practices that facilitate the adoption of the desired technology in an organisation. Nowadays, there are several SOA maturity models available, defined by companies like Sonic [4], HP [5], BEA [6] or IBM [7], which are focused on selling their own commercial products. In other words, these models are based on the capabilities these product vendors are able to offer, not on the maturity of the architecture nor on the adoption process an organisation should follow. As a result, there is a need for a global and practical maturity model against which organisations can evaluate themselves, together with a set of best practices guiding their successful SOA adoption. It is important to point out that most of these maturity models are clearly inspired by CMMI (Capability Maturity Model Integration) [8], a de facto standard within the software industry for the improvement of software processes as well as other disciplines. Due to the wide adoption of CMMI and the recent explosion of the SOA approach, some initiatives have been ongoing to extend the CMMI model in order to provide support to the establishment, provision and management of services. This extension, known as CMMI for Services [9] and expected to be released in 2008, will manage the changes that SOA implies for software development techniques and practices.

3.2 SOA Technologies

To be able to face a specific problem, it is necessary to understand the benefits and disadvantages offered by the different SOA technologies, depending on the business and system requirements. However, how does one know which is the best technology to implement SOA in an organisation? Web services have become the preferred way to realize SOA, being the most widely used technology for implementing service-centric systems in real projects. This is mainly because Web services are based on industry-supported standards. Other available technologies that could be used to implement SOA are CORBA, DCE, JINI/JS, OSGi, MQSERIES, etc. The organisations that are influencing the future of SOA are those involved in the standardisation of the different technologies related to SOA. For instance, there are several standardisation bodies with great industry involvement, such as W3C [10], OASIS [11] or WS-I [12].
3.3 SOA Governance

SOA governance implies the definition and management of policies and guidelines to coordinate SOA infrastructure providers, service providers and application developers. In other words, a well-designed and well-managed SOA governance model helps the organisation to manage and supervise the complete SOA life cycle, and prevents an excessive growth of its SOA infrastructure. An InfoWorld report [13] addressing the problems and barriers organisations find when adopting SOA reveals that the lack of a governance model is an important barrier. Fig. 2 shows that the lack of a SOA governance model in organisations increased in relevance by 4% between 2005 and 2006 as a barrier to SOA adoption.
[Figure: survey results on barriers to SOA adoption: organisational barriers, lack of skills and training, lack of budget, lack of best practices, lack of a governance model, others.]

Fig. 2. Governance as a barrier to adopt SOA
3.4 Change Management

Every organisation has to be aware that the evolution towards SOA requires not only technological changes but also cultural changes, which affect at the same time the organisation, its people, its processes and its technology. Any change in an organization implies risks, so efficient management is required in order to reduce those risks and increase the benefits. Therefore, management should define a policy which establishes new guidelines and practices, set up an implementation plan which establishes roles and responsibilities, and finally track the implementation plan during its execution. However, the most important requirement to support the change management is the creation, from the early stages of the SOA adoption process, of a governance committee composed of representatives from both the business and IT areas.
4 How to Adopt SOA from a Management Perspective

There are many different problems that can be identified when adopting SOA with respect to the required changes in management; probably as many as there are organisations. As a scenario, we will consider a large organisation: a governmental public institution, or else a large organisation with different business units. In this scenario, it is common to find different development teams attending to the specific needs of each business unit, normally working independently and, in some cases, with restricted financing or budget depending on each business unit. Taking into account just these two characteristics, it is possible to highlight some issues where SOA management and governance should put special emphasis. SOA adoption will involve the need to coordinate those development teams and business units. Each team should know, before starting a new project, which are the existing services (already developed and operational) and which services are currently under development, in order to avoid redundancies and possible conflicts. Concerning the currently operational services, the information provided should specify the interface and possible special restrictions, such as security requirements. For services under development, not only the previously mentioned information is necessary, but also when they will be available and their current development state. What has an even higher impact on organisational management is the financial independence among the different business units. Conflicts may arise regarding finance and competence issues among different business units with respect to the development and use of services. These types of issues, as well as many others, will arise at the organisational level when adopting SOA. This is why it is so important to consider SOA management and governance at the early stages of the SOA adoption process. If these are not considered, even though an organisation can succeed technically in the adoption of SOA, failure may occur at the organisational level, and therefore the whole adoption initiative will fail. Managing the four pillars shown in Fig. 1 when adopting SOA will ensure that an organisation avoids these problems, by defining adequate control mechanisms (using roles, spokespersons, specific attributions, etc.) as well as defining working patterns and management systems both for the organisation and for development. This enables the adequate management of this new organisational structure, which integrates and coordinates the workflow and the communication among the different business units. The following are some best practices, based on ground experience, for the management and governance issues that arise when adopting SOA:
- It is not necessary to start from scratch. Any effort that does not add value to the business must be avoided (e.g. redesigning applications that already work correctly). On the other hand, if there are applications that do not work properly, or it is the organisational process which must be improved, then there are good reasons to create a service that provides the required functionality.
- In SOA, the policies define the design, the deployment and the access to the services, the technical protocol implementation, the data protection and the Service Level Agreements (SLA). Therefore, it is necessary to define who has the authority and responsibility to decide and formulate specific policies, and the procedures to be followed in order to communicate, execute and control those policies.
- It is essential to clearly define the roles and responsibilities within the organisation. For doing so, it is useful to create a group composed of representatives of multiple business units, which will be in charge of planning and assisting in the execution at the organisational level. In terms of responsibilities, every person involved in the SOA infrastructure needs to be trained in order to ensure the fulfilment of the defined policies and procedures.
- It is necessary to define metrics and indicators to be able to measure and evaluate whether the established objectives have been reached, and to show the maturity progress of the SOA adoption.
- Automate the management and governance process as much as possible, especially during execution; this helps to offer higher agility to the business when changes occur, both in the policies and in the SOA infrastructure.
- A reasonable way to start a SOA adoption process is to undertake an evolutionary approach, developing and deploying a few services within the organisation in order to reduce risks.
- It is important to have early validation of the approach in real business experience at different steps of the development cycle, and not to limit this to laboratory implementations.
Bearing in mind these best practices, one solution to the proposed scenario could be to structure the development groups so as to support the different business units, and to define policies to control the “purchase-selling-exploitation” of services between the different business units. In any case, the solution to be applied depends on each organisation.
5 SOA vs Traditional Development

SOA involves a change in traditional software development practices. Fig. 3 shows the key differences between software development using traditional approaches and development using the SOA approach. To make it clear, the SOA approach is oriented towards the development of distributed applications, where the applications and services that compose a system are loosely coupled. This makes it easy, in SOA-based systems, to replace services and to add new services and changes without impacting the service interface. Even though this kind of development requires additional effort compared with more traditional development, the resulting applications are flexible and capable of adapting at runtime to different needs, making them optimal and profitable. Moreover, Service based Systems are composed of several applications and distributed services from different organisations; therefore the control and
management of the systems is a critical issue. On the other hand, these systems are characterized by being accessible to multiple, different users.

Traditional Software Development vs SOA-based Systems Development:
- Tightly coupled system components vs loosely coupled system components
- No flexible applications vs flexible and runtime adaptable applications
- Required to know final users and usage patterns vs multiple and different users
- All system components located in the same organisation vs systems composed of applications and services provided by third parties

Fig. 3. Traditional development vs SOA.
Taking into account these differences between SOA and traditional software development, how should a SOA project be approached technically? Which steps or best practices must an organisation follow to develop a new project based on SOA?
6 Adopting SOA at Technological Level

This section defines the SOA ecosystem that encompasses all aspects of service oriented engineering and comprises the main processes that deal with the development, execution and management of services in the operational environment. These main processes map to well-defined topics of interest in the service oriented world, in particular Service Engineering, Service Acquisition and Provisioning, and Service based Systems Engineering. Fig. 4 describes this SOA ecosystem, identifying the main actors and roles, the main processes (as use cases) and the links representing interaction relationships among actors/roles and processes. In particular, the circular arrow in the middle of the figure indicates that the various processes can be executed sequentially, starting with building the atomic services, provisioning them, and composing them into a service centric system, which in turn can potentially lead to a higher level service being offered in the marketplace.
Fig. 4. SOA Ecosystem (actors: Service Developer, Service Provider, Service Broker, Service Consumer, Service Integrator; processes: Service Engineering, Service Acquisition and Provisioning, Service based Systems Engineering, Validation & Verification).
Service engineering is the overall discipline in which the service developer develops services and makes them available for consumption. At some point in this lifecycle, the service provider takes over from the service developer to deliver the service in the marketplace. In some cases the company developing the service will host the service itself, or it may choose to work with a third party as a service provider. This brings us to the service marketplace, which is the functional area where service providers and potential service consumers meet and negotiate, via a service broker, to come to a formal agreement about consuming the services. This aspect gains particular importance in a service centric world. Once the service consumer has chosen which services suit their needs, the focus shifts to the functional area of building and managing the service centric system. Here the service integrator composes the system according to the initial business requirements, and delivers and manages the service centric solution. Next, these main processes are described in more detail.

6.1 Service Engineering

The purpose of the Service Engineering process is to perform the analysis, design, implementation, validation & verification and preparation of atomic services for future deployment. An atomic service is a service whose implementation is self-contained and does not invoke any other services [14]; a minimal sketch of such a service is given after the task list below. The Service Engineering process is focused on service developers. The tasks are similar to those found in traditional software engineering, but naturally with a greater emphasis on aspects related to service-oriented system development. There are five such tasks:
- Business modelling: The objective of this task is to define the business model, taking into account the interaction relationships among the different
organisational units, such as organisation departments, providers or partners.
- Requirements definition: This task is responsible for eliciting and collecting the system requirements.
- Design, development and implementation: After defining the business model and the requirements, this task is in charge of developing and implementing the service. It includes activities related to the integration of existing legacy systems into a service-oriented context.
- Validation & Verification: The objective of this task is to validate and verify that the service works properly and behaves as expected.
- Preparation for future deployment: Once the service is validated and verified, it is necessary to prepare the service for its future deployment in the operational environment. This may require defining training, special requirements or a user manual.
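To make the notion of an atomic service concrete, the following is a minimal sketch (not taken from the chapter itself) of a self-contained service that invokes no other services, written against the standard JAX-WS API; the service name, operation and endpoint URL are illustrative:

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical atomic service: self-contained, invokes no other services.
    @WebService
    public class ExchangeRateService {

        @WebMethod
        public double convert(double amount, double rate) {
            return amount * rate; // purely local computation
        }

        public static void main(String[] args) {
            // Makes the service available at an illustrative local endpoint.
            Endpoint.publish("http://localhost:8080/exchange", new ExchangeRateService());
        }
    }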
6.2 Service based Systems Engineering

The Service based Systems Engineering process represents the core steps of service-based system development, including the analysis, design, development, deployment and operation of a system based on services. These Service based Systems, also called composite services, integrate and compose different existing services. The Service based Systems Engineering process is oriented to service integrators. Most of its tasks are similar to those found in the Service Engineering process, but they focus on the special aspects related to composite services (a minimal sketch of a composite appears after the following list). Those special aspects affect the two following tasks:
- Design, development and implementation: A Service based System has been defined as a composite service, so designing and developing one requires mechanisms for service discovery and composition.
- Preparation for future deployment: Once the service is validated and verified, it is necessary to prepare it for its future deployment in the operational environment. In the case of a Service based System that is context-aware and self-configuring, it is important to define the issues and actions required to manage the system at runtime (e.g. how to re-negotiate, during execution, the previously defined SLA when the system does not behave as expected).
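The following is a minimal sketch of what such a composite service could look like; all types and names here are hypothetical and not part of any concrete framework. The composite exposes one operation and delegates to the services it composes:

    // Hypothetical composite service: one operation, delegating to two
    // atomic services that would be discovered and bound at runtime.
    interface FlightService { String reserveFlight(String from, String to); }
    interface HotelService  { String reserveRoom(String city); }

    final class TravelBookingService {
        private final FlightService flights;
        private final HotelService hotels;

        TravelBookingService(FlightService flights, HotelService hotels) {
            this.flights = flights; // in a real SOA these references would come
            this.hotels = hotels;   // from service discovery, not manual wiring
        }

        String bookTrip(String from, String to) {
            String flightRef = flights.reserveFlight(from, to);
            String hotelRef = hotels.reserveRoom(to);
            return flightRef + "/" + hotelRef; // composed result
        }
    }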
6.3 Service Acquisition and Provisioning

Service Acquisition and Provisioning is concerned with the construction and maintenance of a “service marketplace”. The service marketplace refers to a conceptual market where services are made available, publicised and traded between consumers and providers. This takes the focus away from services as an
implementation strategy and highlights the separation of service providers and consumers. The tasks gathered in this process are focused on enabling both the exposition and provision of services as well as their management and monitoring at runtime (a consumer-side sketch follows this list). These tasks are the following:
- Service deployment: This task makes the service available in a suitable runtime environment in order to receive incoming requests from potential consumers.
- Service publication & exposition: This involves making a deployed service’s specification available, via appropriate registries, to potential service consumers. Once publicised, the specification allows a service to be discovered by potential users when they query the registry according to their system needs.
- Negotiation of the contractual agreement: This task manages the mechanism by which a Service Level Agreement (SLA) is reached between service providers and service consumers, on issues such as cost and Quality of Service (QoS), and it is in charge of SLA management.
- Service Monitoring: Monitoring allows observing and supervising the service behaviour at runtime in order to ensure that the service performs according to the consumer’s expectations. If it does not, then, according to previously defined recovery policies, suitable recovery action may be undertaken.
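On the consumer side, once a service has been discovered, its published specification is enough to bind to it. A minimal sketch using the standard JAX-WS client API follows; the WSDL location, namespace and interface are illustrative and would normally be obtained from a registry query rather than hardcoded:

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hypothetical service endpoint interface matching the published WSDL.
    @WebService
    interface ExchangeRate {
        double convert(double amount, double rate);
    }

    public class RateClient {
        public static void main(String[] args) throws Exception {
            // WSDL location and service name as a registry might return them.
            URL wsdl = new URL("http://localhost:8080/exchange?wsdl");
            QName name = new QName("http://example.org/", "ExchangeRateService");
            Service service = Service.create(wsdl, name);
            ExchangeRate port = service.getPort(ExchangeRate.class);
            System.out.println(port.convert(100.0, 1.08));
        }
    }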
6.4 Validation & Verification

Verification and Validation is a cross-cutting process that highlights the dual-perspective nature of testing in SOA. Developers of services, whether atomic or composite, employ testing activities to establish a level of confidence in the operation of the services they develop. Service consumers employ testing both to clarify the operation of services and to determine a level of performance. Service consumer testing may take place prior to making a decision to employ the service in question – as part of service selection activities – and it may also take place after a decision to employ a service – as a form of monitoring and as regression testing following a service upgrade, for example.
7 Conclusions

The SOA approach is foreseen as the best way to achieve the common goals of interoperability, agility and reusability. However, different aspects must be taken into consideration when adopting SOA:
- Many issues related to SOA are still under research in several R&D projects and initiatives; this is the case for automation, context-aware dynamic service composition, service discovery and others. Some of these solutions will require some time before being ready for industrial use.
- It is important to keep in mind at all times that SOA adoption does not require that every element in the organisation be migrated to services. Doing so may not be in line with the organisational business goals.
As a final consideration, the key to success in the adoption of SOA lies in the four pillars mentioned in Fig. 1; technology by itself is not enough. There is a need to establish an adequate management and governance process.
References

[1] IST-SeCSE project, VI Framework Programme (2004-2008). http://secse.eng.it
[2] SODA ITEA Project, 2006-2008. http://www.soda-itea.org
[3] IST-AMIGO project, VI Framework Programme (2004-2008). http://www.hitechprojects.com/euprojects/amigo/
[4] Sonic, Systinet, AmberPoint, BearingPoint: A New SOA Maturity Model, 2005.
[5] Pugsley, A.: SOA Domain Model & HP SOA Maturity Model, HP, 2006.
[6] Groves, D.: Successfully Planning for SOA: Building Your SOA Roadmap, BEA, 2005. http://dev2dev.bea.com/pub/a/2005/12/soa-roadmap.html
[7] Arsanjani, A.: Increase flexibility with the Service Integration Maturity Model (SIMM), IBM, 2005.
[8] Software Engineering Institute: CMMI® A-Specification v1.6, February 2004.
[9] Software Engineering Institute: CMMI for Services: Initial Draft, September 2006.
[10] World Wide Web Consortium (W3C), www.w3.org
[11] Organisation for the Advancement of Structured Information Standards (OASIS), www.oasis-open.org
[12] Web Services Interoperability (WS-I), www.ws-i.org
[13] InfoWorld Research Report, SOA, 2005-2006.
[14] Woolf, B.: Composite Services, April 2006. www-03.ibm.com/developerworks/blogs/page/woolf?entry=composite_services
Making Service-Oriented Java Applications Interoperable without Compromising Transparency Sven De Labey and Eric Steegmans University of Leuven, Dept of Computer Science 200 A Celestijnenlaan, B-3000 Leuven, Belgium {svendl,eric}@cs.kuleuven.be
Abstract. Object-oriented programming languages lack high-level support for platform-independent service interactions. In Java, for instance, the burden of guaranteeing sustainable interoperability is put entirely on the programmer. Java Remote Method Invocation requires its invocation targets to be remote Java objects, so Web Services and other targets can never be invoked without verbose interactions with specialized library classes. This lack of transparency forces programmers to consider heterogeneity problems over and over again, even though interoperability is ideally a middleware responsibility. Also, the mix of business logic and technical concerns obfuscates the source code, thus decreasing maintainability and code comprehension. In this paper, we show that interoperability in Java applications can be achieved without compromising transparency. We describe a Java extension and focus on how this language enables a precompiler to transparently inject the boilerplate code that realizes interoperable service interactions. Deferring interoperability provisioning to such a precompiler allows programmers to focus on the implementation of the business logic without being distracted by heterogeneity issues occurring in the service architecture in which their application will eventually be deployed.

Keywords: Architectures and platforms for interoperability, Service oriented Architectures for interoperability, Engineering interoperable systems, Design methodologies for interoperable systems
1 Introduction

Web Services expose their functionality using standardized WSDL [1] documents, and they communicate over an agreed-upon protocol (SOAP [2] over HTTP). By advertising their interfaces using WSDL, they realize platform interoperability, as every programming language can define WSDL bindings that translate its method calls back and forth to WSDL. Java service clients, for instance, can
execute such translations by means of Apache Axis [3], a set of specialized library classes for web service interactions. Using this API, programmers are able to overcome the problems of platform heterogeneity by benefiting from the interoperability that web service interactions provide.

While they may effectively hide platform heterogeneity, Web Services cannot cope with protocol diversity: their hardwired reliance on SOAP is a limiting factor for true enterprise application interoperability. Various components executing in a service architecture may publish services that speak other protocols, and these services cannot be contacted using basic APIs such as Apache Axis [3]. Currently, the programmer is forced to decide on the communication protocol at implementation time, leading to interoperability problems in architectures where services may decide to switch to other protocols at runtime. This drawback can be avoided by relying on another framework, the Web Service Invocation Framework [4]. WSIF allows programmers to invoke services in a protocol-independent way by deferring protocol decisions to dynamic protocol providers.

It is clear that such frameworks provide increased interoperability at the cost of decreased transparency. Programmers need to learn APIs such as Axis or WSIF, while, essentially, interoperability has little or nothing to do with an application’s business logic. It is a middleware responsibility, so it should be hidden from the implementors of the business logic. Nonetheless, verbose and technical middleware interactions continue to appear in the source code, and the failure to separate functional concerns from non-functional code leads to poor code reusability and portability, as well as increased maintenance cycles.

In this paper, we investigate how the lack of transparency in object-oriented programming languages can be cured, taking Java as an example. We begin by pointing out some major transparency issues in Axis and WSIF (Section 2). Then, we provide a quick overview of ServiceJ, our language with specialized concepts for programming service interactions (Section 3), and we show how the ServiceJ precompiler is able to inject interoperability-specific technicalities during the transformation from ServiceJ to Java (Section 4). Section 5 provides an example of such an interoperable client-service interaction. Sections 6 and 7 present related work and some final thoughts.
2 The Transparency–Interoperability Tradeoff in Java

In this section, we show how platform-independent, protocol-independent service interactions can be programmed in Java. We survey the functionality found in Axis to invoke web services (platform independence), and we review the Web Service Invocation Framework (platform and protocol independence).

Web Service Interactions with Axis. Listing 1 contains a service invocation using Axis. An echoString operation is invoked and the result is printed on screen. Lines 7–8 contain the business logic; the remaining lines comprise middleware interactions. This example shows that programmers are made responsible for the creation of a Service object (line 3) representing the target, and a Call instance
(line 4) representing the invocation. They must also specify the target address (line 5) and the name of the operation (line 6).

Listing 1. Invoking Web Services using Apache Axis
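A minimal reconstruction of such an Axis invocation, based on the description above, looks roughly as follows; the endpoint URL is hypothetical, and the line numbers from the description are marked in comments:

    import java.net.URL;
    import org.apache.axis.client.Call;
    import org.apache.axis.client.Service;

    public class EchoClient {
        public static void main(String[] args) throws Exception {
            Service service = new Service();                      // line 3: reified service
            Call call = (Call) service.createCall();              // line 4: reified call
            call.setTargetEndpointAddress(                        // line 5: hardwired
                new URL("http://example.org/axis/EchoService"));  //         endpoint URL
            call.setOperationName("echoString");                  // line 6: operation name
            String result = (String) call.invoke(new Object[] { "Hello" }); // line 7
            System.out.println(result);                           // line 8: business logic
        }
    }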
Evaluation. Driven by Axis, Java clients can now cooperate with any application that exposes its functionality as a web service. This gain in interoperability, however, comes at a high cost. The functional code in Listing 1 contains verbose, non-functional interactions with the Axis platform, leading to a number of transparency problems:
- Reified service calls. Contrary to method invocations via local object references, a web service call must be built from the ground up based on reification. The web service is reified as a Service instance, whereas the service call is represented by a Call object, of which the target and the operation must be set by the programmer. Consequently, seven lines of code are needed where local Java applications would need only one or two.
- Endpoint URL. The business transaction executed here is independent of the location of the service, but the service endpoint URL necessarily appears in the source code, as it is needed to specify the endpoint address of the Call object. Such hardwired URLs not only jeopardize transparency, they also break the client if that remote service is migrated or deleted.
But there is more to be sacrificed before programmers can integrate the benefits of interoperability into their applications. In Listing 1, it is easy to see that no compiler can guarantee that echoString is an existing method, or that it accepts exactly one String as a formal argument. This is in strong contrast with local method invocations, which can easily be checked for syntax violations and typing errors. Another problem, already mentioned above, is that SOAP, the underlying communication protocol, is hardwired. This decreases interoperability, since services not speaking SOAP can never be invoked. Even worse, SOAP-speaking services dynamically switching to another protocol (e.g. REST) are bound to break clients that statically depend on SOAP as the messaging protocol.

In summary, interoperability in Axis comes at a high cost: it compromises transparency, disables important compiler guarantees, and hardwires protocol
decisions at compile-time. Another framework, WSIF, tries to overcome these problems, and it is discussed next.

Service Interactions with WSIF. Compared with Axis, the Web Service Invocation Framework improves interoperability in two ways. First, it allows programmers to invoke any service that exposes its functionality in a WSDL document, so services not following the web services paradigm may be invoked as well. Second, WSIF determines the underlying protocol dynamically, so it can speak protocols other than SOAP, and it can react to protocol changes by switching between protocols at runtime. Listing 2 illustrates how client-service interactions are programmed using WSIF. Both StockQuote and QuoteLog are WSIF stubs, acquired using a WSDL2Java compiler [3]. The code first requests information from a StockQuote service and then sends the result to a QuoteLog service.
Listing 2. Invoking Web Services using WSIF
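A condensed sketch of such a WSIF interaction is given below; the WSDL location, namespaces and stub interface are illustrative, the QuoteLog half is omitted, and the original listing was considerably longer, as the line numbers cited in the evaluation below suggest:

    import org.apache.wsif.WSIFService;
    import org.apache.wsif.WSIFServiceFactory;

    // Stand-in for the stub interface generated beforehand by WSDL2Java.
    interface StockQuote {
        float getQuote(String symbol);
    }

    public class StockClient {
        public static void main(String[] args) throws Exception {
            WSIFServiceFactory factory = WSIFServiceFactory.newInstance();
            WSIFService service = factory.getService(
                "http://example.org/stockquote.wsdl", // WSDL document location
                null, null,                           // service namespace and name
                "http://example.org/stockquote",      // port type namespace
                "StockQuotePT");                      // port type name
            // The untyped stub must be cast to the generated interface.
            StockQuote quote = (StockQuote) service.getStub(StockQuote.class);
            System.out.println(quote.getQuote("IBM"));
        }
    }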
Evaluation. WSIF further improves the interoperability of Java service clients, but this gain is again overshadowed by a tremendous increase in lines of code, which severely decreases transparency. Developers need to create a service factory (line 3), and they need to provide the WSDL location of the service, combined with other technical data (lines 6–20). Then, they must ask for stubs and convert these stubs to the right type (lines 23–24). And when the service client can finally interact with the service reference, programmers must hope that the service still provides the same QoS (service volatility problems are ignored by WSIF) and that it is still reachable (WSIF does not handle distribution problems). In addition, robustness problems remain unsolved: a ClassCastException occurs when the stub is cast to an incompatible type on lines 23–24.

Towards Transparent Interoperability in Java. In this paper, the main research question is whether we can make Java service clients interoperable without sacrificing transparency and compile-time guarantees. The next section presents ServiceJ, our Java language extension that introduces declarative language constructs for programming robust client-service interactions. Section 4 shows how the transformation of this language extension to regular Java code realizes the injection of middleware interactions that foster sustainable interoperability.
3 ServiceJ. A Java Dialect for Client-Service Interactions

ServiceJ is a Java extension that introduces specialized language concepts for client-service interactions in service-oriented architectures. The main objective of this extension is to provide programmers with a higher level of abstraction by shielding them from the technical, often middleware-dependent, details of service-oriented computing. In this section, we provide a conceptual overview of ServiceJ, and we refer to [5] for the technical details of the extension.
Listing 3. Invoking services using ServiceJ language constructs
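The code of the listing is reconstructed here from the description that follows; the ServiceJ syntax shown (qualifier placement, clause order) is a best-effort sketch, and Printer, Document and their methods are purely illustrative:

    // Sketch of Listing 3; pool, where and orderby are the ServiceJ constructs
    // described below, everything else is assumed.
    public class PrintClient {
        pool Printer printer;                                // volatile service reference

        public void printDocument(Document doc) {
            printer where printer.getPagesPerMinute() >= 15  // QoS constraint (line 5)
                    orderby printer.getQueueLength();        // user preference (line 6)
            printer.print(doc);                              // transparent invocation
        }
    }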
Type Qualifiers. The objective of type qualifiers is to distinguish between variables pointing to local objects and variables containing volatile service references. ServiceJ decorates the latter with the pool qualifier. This qualifier triggers the ServiceJ-to-Java transformer to inject middleware interactions for (1) non-deterministic service retrieval, (2) dynamic service binding, and (3) fault-tolerant service invocation. By relying on code injection, we allow programmers to
focus on the business logic, while the injected middleware interactions ensure the robustness of service invocations. ServiceJ also introduces a second qualifier, sequence, to be used when deterministic service selection is required instead of the default non-deterministic selection algorithm (see [5] for a complete discussion of ServiceJ’s type qualifier hierarchy).

Declarative Operations. Increased transparency does not imply that programmers lose control over service interactions: ServiceJ introduces declarative operations that allow programmers to fine-tune service selection at a high level of abstraction. Listing 3 illustrates how ServiceJ’s where operation is used to impose a Quality of Service constraint: the middleware may only inject Printer services that print at least 15 pages per minute (line 5). A second operation, orderby, forces the middleware to take user preferences into account [5]. In Listing 3, for instance, the middleware must bind the printer variable to the Printer service with the shortest job queue (line 6).

In summary, type qualifiers and declarative operations specify what must be done rather than how it must be done. They provide important information to the middleware, but they jeopardize neither transparency nor compile-time guarantees on the correctness of method invocations. The next section shows how middleware instructions for increasing interoperability are injected, driven by ServiceJ’s new language constructs.
Fig. 1. Overview of the ServiceJ-to-Java transformation process
4 Compiling ServiceJ to Java. Injecting Interoperability

ServiceJ source code is transformed to Java before it is compiled by the Java compiler. The advantage of this transformation is that ServiceJ applications become interoperable with existing Java applications; libraries and APIs can thus be reused without requiring source code modification. In this section, we focus on this transformation, which comprises two phases: (1) the construction of a metamodel representation of the ServiceJ application, as described in Section 4.1, and (2) the transformation of that metamodel instance to Java code, as described in Section 4.2. An overview of the ServiceJ-to-Java transformation process is shown in Figure 1. This compilation process is entirely transparent to programmers; they can treat it as a black box consuming ServiceJ source code (on the left) and producing Java class files (on the right).
4.1 Representing ServiceJ Applications in a Metamodel

The first step of the transformation process is the construction of a basic abstract syntax tree (AST) representing a ServiceJ program. This AST is built by the ServiceJ Lexer and Parser; both tools were built by feeding the language definition of ServiceJ to ANTLR [6]. Basic ASTs represent ServiceJ source programs, but they lack important semantic information about the program. This is because ASTs consist of semantically poor nodes, while we need to operate on classes, methods, and other semantically rich language concepts. Without the appropriate semantic information, it is very hard to decide whether or not to decorate source code elements with middleware interactions for increasing the interoperability of the resulting Java program. Therefore, we include another preprocessing step before executing the ServiceJ-to-Java transformation.
Fig. 2. ServiceJ metamodel instances store semantic information of ServiceJ programs
Building a Metamodel from an AST. Once the AST of the ServiceJ program is built by the parser, it is transformed into an instance of the ServiceJ metamodel. This metamodel contains classes representing programming language artifacts, such as variables, classes and method invocations. Thus, the transition from an AST to a metamodel instance injects the semantically rich information that is required by the ServiceJ-to-Java transformer. A simplified example of a metamodel part is shown in Figure 2. The pool variable is modeled by two metamodel classes: (1) a MemberVariable and (2) a class PoolQualifier. The method invocation quote.getStockQuote("IBM") is created as an instance of MethodInvocation connected to an instance of InvocationTarget. The latter is linked to the declaration of the quote variable, represented by MemberVariable. The major advantage of having this semantic link is that it becomes straightforward to check whether or not the operation is invoked on a pool variable: if the InvocationTarget is linked to a MemberVariable with a PoolQualifier instance attached to it, then the ServiceJ-to-Java transformer must inject WSIF interactions for realizing transparent interoperability; if the MemberVariable is not linked to a PoolQualifier instance, then the method invocation is executed on a local object reference, so WSIF injection must be avoided.
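The decision described above boils down to following the semantic link from an invocation to the qualifier of the target’s declaration. A sketch of such a check is shown below; the stand-in types mirror the metamodel classes named above, but the actual API of the Jnome/Chameleon-based metamodel [7] will differ:

    // Minimal stand-in types for the metamodel classes discussed above.
    interface Qualifier {}
    final class PoolQualifier implements Qualifier {}
    interface MemberVariable { Qualifier getQualifier(); }
    interface InvocationTarget { MemberVariable getDeclaration(); }
    interface MethodInvocation { InvocationTarget getTarget(); }

    final class InjectionAnalysis {
        // A method invocation needs WSIF injection only when its target is
        // declared as a pool-qualified (volatile) service reference.
        static boolean needsWsifInjection(MethodInvocation invocation) {
            MemberVariable declaration = invocation.getTarget().getDeclaration();
            return declaration != null
                && declaration.getQualifier() instanceof PoolQualifier;
        }
    }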
4.2 Transforming ServiceJ Metamodel Instances to Java & WSIF

The second step of the ServiceJ compilation process is the transformation of the ServiceJ metamodel instance to a Java metamodel instance. This finishes the compilation process, because a Java metamodel instance can be written out as Java
code, which is eventually fed to the Java compiler so as to obtain the relevant .class files. The transformation is depicted in Fig. 3, which presents a more detailed version of the process depicted on the right part of Fig. 1.
Fig. 3. Metamodel instance transformation and Java code injection
Transformation Inputs. Fig. 3 shows that the ServiceJ transformer requires two inputs. The first one is the ServiceJ metamodel instance (step 1), obtained after semantic information was injected into the abstract syntax tree representing the service client (see Sect. 4.1). The second input comprises a list of precompiled stubs (step 2). For each pool variable of type T, the ServiceJ transformer requires a stub of type T. These stubs are used to typecheck the original ServiceJ code, thus allowing the compiler to provide static guarantees on the correctness of the service client. Other information, such as the location, the platform and the protocol, is ignored by the ServiceJ transformer, because hardwiring that information would dramatically reduce the interoperability of the service client being compiled. Instead, the transformer injects instructions that deal with these challenges at runtime, as discussed next.

Code Transformation. The ServiceJ transformer (step 3) starts by iterating over the ServiceJ metamodel instance, transforming each program element to its corresponding Java metamodel element. The transformation of ServiceJ method invocations to Java method invocations represents the major challenge in this process. Indeed, ServiceJ uses different transformation strategies based on the type qualifier (pool, sequence, or none) used to declare the variable on which the method is invoked. Method invocations on regular fields are left unmodified during the transformation, whereas interactions with pool or sequence fields require the ServiceJ compiler to inject code that handles both interoperability challenges and middleware-specific tasks. Both are depicted as step 4 in Figure 3, and they are discussed below.
1. Interoperability Provisioning.
- Service Discovery. Code is injected for dynamically discovering service candidates before starting the interaction. This increases interoperability compared with WSIF, which relies on service endpoints whose location was hardwired during the implementation phase.
- Service Selection. Programmers may have constrained the candidate set of services to those services that exhibit special characteristics, using a where or orderby operation (see Section 3). The ServiceJ transformer injects code for interacting with these services using a dynamically selected protocol so as to obtain information about their QoS.
- Service Binding. The ServiceJ transformer injects instructions for dynamically binding a pool variable to a service after service selection. To increase interoperability, the ServiceJ transformer also injects instructions for dynamically negotiating the protocol that will be used when invoking operations on the selected service. It is important to note that service binding is done just-in-time. This is because service quality levels may be subject to continuous change, so it is important to use the most recent data when selecting and binding a service.
- Service Invocation. After injecting instructions for selecting and binding the service, the ServiceJ transformer injects instructions for dealing with dynamic protocol changes. If compatibility problems or other protocol errors arise during a client-service interaction, these instructions roll back the conversation and negotiate the new protocol dynamically before restarting the interaction.
2. Other Middleware Tasks.
- Service Fail-over. Every client-service interaction may fail due to reachability or availability problems. The ServiceJ transformer therefore decorates service interactions with code for fail-over (a sketch of such logic follows this list). In case of problems, the injected instructions select another service from the constrained service pool and restart negotiations on the protocol that will be used during the interaction with that new service. Then, the operation is transparently reinvoked on that new endpoint.
- Service Transactions. ServiceJ introduces additional language concepts for demarcating service sessions, which comprise multiple interactions between a client and a service so as to complete a complex business transaction (see [5] for more information). Instructions are injected so as to successfully support such transactional behaviour.
- Service Volatility Handling. Newer services may be added to the architecture at runtime, and these are overlooked by WSIF’s hardwired service references. The ServiceJ transformer, on the other hand, injects instructions for dynamic service discovery and binding, and is therefore able to adequately handle volatility issues.
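The following sketch illustrates the shape of the fail-over logic described in the first item above; ServicePool and ServiceRef are hypothetical stand-ins, not the actual ServiceJ runtime API:

    import java.util.List;

    interface ServiceRef {
        Object invoke(String operation, Object[] args) throws Exception;
    }
    interface ServicePool {
        List<ServiceRef> selectAndOrder(); // applies where/orderby just-in-time
    }

    final class FailoverInvoker {
        static Object invoke(ServicePool pool, String operation, Object[] args) {
            for (ServiceRef candidate : pool.selectAndOrder()) {
                try {
                    // protocol is (re)negotiated dynamically before each attempt
                    return candidate.invoke(operation, args);
                } catch (Exception unreachable) {
                    // fall through: rebind to the next-best candidate (fail-over)
                }
            }
            throw new IllegalStateException("no reachable service satisfies the constraints");
        }
    }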
Code Output and Compilation. The transformation process incrementally leads to the construction of an equivalent Java metamodel instance (step 5). To complete the transformation process, a codewriter [7] is used to turn the metamodel instance into a set of Java source files (step 6), which are eventually compiled with the Java compiler to Java bytecode.
Fig. 4. ServiceJ & WSIF cooperate to enable interoperable service interactions
5 Example

In this section, we illustrate how responsibilities are divided between the developers of a service client and the ServiceJ compiler. The top part of Figure 4 shows how a service interaction for printing a file is implemented. A variable of type Printer is decorated with a type qualifier (sequence), denoting that a deterministic, preference-driven service selection and injection strategy is to be followed at runtime. An additional constraint is provided using the declarative where clause, stating that the printer must support color printing with an output of at least 20 pages per minute. Multiple printers may satisfy these constraints, so the file must be printed on the printer with the shortest job queue. The selection strategy enforced by the sequence qualifier will use this preference to sort the Printer services that remain after applying the constraint.
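A sketch of the client code in the top part of Figure 4, under the same syntax assumptions as the reconstruction of Listing 3 above (Printer, File and their methods are illustrative):

    // Assumed ServiceJ syntax; sequence yields a deterministic,
    // preference-ordered selection over the constrained candidate set.
    public class ColorPrintClient {
        sequence Printer printer;

        public void printFile(File file) {
            printer where printer.supportsColor()
                          && printer.getPagesPerMinute() >= 20 // QoS constraint
                    orderby printer.getQueueLength();          // shortest job queue first
            printer.print(file); // selection, binding and fail-over are injected
        }
    }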
The programmer can easily implement this service interaction by using the new language concepts that ServiceJ provides. They are no longer responsible for realizing service interoperability, nor are they forced to insert instructions for
service selection, service binding, and service fail-over. These tasks are now transparently handled by feeding the source file of the service client to the ServiceJ-to-Java transformer. The latter injects the appropriate middleware interactions, and compiles the service client to Java bytecode.

ServiceJ Runtime Behaviour. Under the hood, ServiceJ and WSIF cooperate tightly to enable interoperable service invocation. Before the print operation can be invoked on a remote service, the ServiceJ middleware has to search for services that satisfy all the constraints that were imposed when the pool field was declared. It does so by contacting a ServiceJ registry, requesting the list of URLs that publish the WSDL definitions of those services that conform to the static service type (step 1 in Figure 4). ServiceJ passes these URLs to the WSIF middleware and requests a protocol-independent stub for contacting the services (steps 2–4). All these stubs are returned to the ServiceJ middleware layer, where they are added to a service pool (step 5). Next, the pool must be constrained. Each service is contacted using its own specific protocol in order to check whether it satisfies the QoS requirements of the service client. ServiceJ transparently relies on WSIF and its dynamic providers architecture [4] to communicate with these services. Given this information, the ServiceJ middleware constrains the service pool, retaining only those services that comply with the constraint represented by the where operation (step 6). Finally, the first service from the resulting service sequence (S1 in Figure 4) is injected into the printer variable. The print operation can now be invoked. Assume this invocation fails because S1 has become unreachable (steps 8a–8b). The middleware then automatically tries to invoke the operation on the second-best service from the sequence, S4, which successfully handles the request (steps 9a–9b).
6 Related Work

Due to space restrictions, we cannot provide an extensive discussion of related work; a more complete discussion of how our language extension relates to other frameworks and runtime platforms can be found in [5]. One important related Java technology is Jini [8], which supports late binding and dynamic discovery of services registered in a central services registry. Similar to ServiceJ, developers express service constraints, although preference-driven service optimisation is not supported. The drawback of Jini, however, is that it lacks provisions for ensuring interoperability during service interactions and forces developers to write a considerable amount of boilerplate code for service lookup, service selection and fault tolerance, thus further reducing transparency.

ProActive [9] and JavaGroups [10] are Java-based languages and libraries developed for supporting service fail-over and service communication. They assume remote services to be Java-compliant, making them less appropriate for building interoperable systems. They also require the service provider to install additional software, thus further jeopardizing interoperability in cross-enterprise service architectures. Similar liabilities were detected in Java RMI extensions that
support fault tolerance and constrained method invocation, such as mChaRM [11] and RMI Tactics [12]. ServiceJ is not the first extension that introduces language concepts for service programming. Similar constructs were introduced by Cardelli et al. in [13]. They introduce service combinators such as the sequential execution operator (denoted “S1? S2”) for stating that S2 must be contacted if S1 is unreachable. Service combinators are used in other object-oriented languages such as XL [14] to enable web service failover. The use of such combinators in volatile service environments, however, is limited because they rely on hardwired service references.
7 Conclusion

Java frameworks and APIs increase interoperability at the cost of decreased transparency. They put the burden of guaranteeing sustainable interoperability on the programmer, even though interoperability is essentially a middleware responsibility that has little or nothing to do with the business logic. To solve these problems, ServiceJ introduces special language concepts that allow programmers to focus on the business logic, while at the same time allowing them to fine-tune service selection in a declarative, type-safe way. We have shown how the ServiceJ compiler translates ServiceJ to Java, a process during which location- and protocol-independent service interactions cooperate with ServiceJ’s service selection algorithms so as to dynamically bind the optimal service to the fields that depend on external services.
References

[1] WSDL Specification v1.1: www.w3.org/TR/wsdl (2001)
[2] SOAP Specification: http://www.w3.org/TR/soap/
[3] Apache Axis: http://ws.apache.org/axis/ (2004)
[4] Duftler, M., Mukhi, N., Slominski, A., Weerawarana, S.: Web Service Invocation Framework (WSIF), http://www.research.ibm.com/ (2001)
[5] De Labey, S. et al.: ServiceJ. A Java Extension for Programming Web Service Interactions. In: International Conference on Web Services (ICWS'07) (2007)
[6] Ashley, A.J.: ANTLR: http://supportweb.cs.bham.ac.uk/documentation/tutorials/
[7] van Dooren, M., Vanderkimpen, K., De Labey, S.: The Jnome and Chameleon Metamodels for OOP, http://www.cs.kuleuven.be/~marko (2007)
[8] Jini Architecture Specification: http://www.jini.org
[9] Baduel, L., Baude, F., Caromel, D.: Efficient, Flexible, and Typed Group Communications in Java. In: The Java Grande Conference (2002)
[10] Ban, B.: JavaGroups. Group Communication Patterns in Java. Technical Report, Cornell University (1998)
[11] Cazzola, W.: mChaRM: Reflective Middleware with a Global View of Communications. IEEE DS On-Line 3(2) (2002)
[12] Pereira, F. et al.: Tactics for Remote Method Invocation. Journal of Universal Computer Science 10(7) (2004) 824–842
[13] Cardelli, L., Davies, R.: Service Combinators for Web Computing. In: Trans. on Softw. Engineering 25(3) (1999)
[14] Florescu, D., Gruenhagen, A., Kossmann, D.: XL: A Platform for Web Services. In: Proc. of the First Conference on Innovative Data Systems Research. (2003)
A Service Behavior Model for Description of Co-Production Feature of Services

Tong Mo, Xiaofei Xu and Zhongjie Wang

Research Center of Intelligent Computing for Enterprises and Services (ICES), School of Computer Science and Technology, Harbin Institute of Technology, P.O. Box 315, No. 92, West Dazhi Street, Harbin, China 150001
[email protected], {xiaofei, rainy}@hit.edu.cn
Abstract. In order to ensure better service quality during service design and execution, the requirements raised by service customers need to be fully and correctly gathered, understood and described as models. Co-production between service providers and customers is a key feature of services; however, proper methods to address this issue are still lacking. In this paper, the state of the art of service models in the literature is first summarized, then several key aspects of service models are analyzed in detail, e.g., roles, interactive behaviors, value and risk. Subsequently, a new service behavior model named “Service-Provider-Customer (SPC)” is presented, including its graphical representations and attribute-based semantics descriptions. To validate this model, a case study on ocean logistics services is put forward, and a qualitative comparison between SPC and some other service models from the literature is shown.
Keywords: Business models for interoperable products and services, Interoperability of E-Business solutions, Enterprise modeling for interoperability
1 Introduction

Over the past three decades, services have become the largest part of most industrialized nations’ economies [1]. Especially with the wide and rapid application of IT in numerous service domains, e.g., IT services, healthcare and logistics, the world has entered the era of services [2]. Generally speaking, a service is a provider-to-client interaction that creates and captures value while sharing risks [3]. In order to support the execution of a service, there should be a well-designed service system. Furthermore, developing a service system needs to be based on complete and sound service models, which accurately describe the requirements from customers and the agreements between customers and providers in a formal way.
In fact, service models have long been considered the foundation of service methodology and service theories. A service model should cover at least the following five aspects [4]:
- Organization and roles (who participates in services);
- Location and environment (where the service happens);
- Functions and behaviors (what the service does and how it is accomplished);
- Resources and capabilities (what external resources are required);
- Shared information (what information will be exchanged between participants).
Concerning current research on service models, rich results have been presented in the literature. Typical cases include the molecular model [5], the service blueprint model [6], structured analysis and flow charts [7], design techniques (SADT) [8], the dynamic event process chain (EPC) [9] and the feedback control system model [8], where UML, blueprints, structured models, etc. are adopted as graphical languages to describe the structure and behaviors of services [6], and the Object Constraint Language (OCL), Description Logic (DL), the Web Ontology Language (OWL), etc. are used as formal languages to describe service semantics [10].

Among the five aspects mentioned above, service behavior is the most complicated one and is very difficult to describe, due to the “co-production” feature that distinguishes services from manufacturing processes: service customers and providers should interact with each other to co-produce and share values. A service behavior model must have the ability to depict how both sides collaborate, what responsibilities each side takes on its own, what kinds of value they co-produce and what kinds of risk underlie the service process. Unfortunately, each of the existing service model specifications ignores one or several of these aspects. For example, none of the molecular model, blueprint, flow chart, SADT and dynamic EPC describes service value and risk. Most of them, except the blueprint, do not cover service participants. Though the blueprint uses an interaction line to divide customer and provider behavior, the organization information and structure of the service are not included. The molecular model shows the hierarchy and resources of a service, but it cannot give a whole view of the service process. The flow chart adds branches and other flow control elements to enhance the flow expression of a service, but resource and information semantics are lacking. SADT and dynamic EPC are formalized models that are good at expressing the input/output information of behaviors, but they do not describe service interaction well.

To eliminate such deficiencies, in this paper we present a new service behavior model named Service-Provider-Customer (SPC), which focuses on the description of the co-production feature. The rest of this paper is organized as follows. Section 2 analyzes some key aspects of service behaviors that should be covered in service models. Section 3 puts forward the SPC model specification based on an extended UML activity diagram, including graphical and formal descriptions of each service element and a practical case from the domain of ocean logistics services. Section 4 discusses some
interoperability issues concerning SPC, and gives a brief comparison between SPC and other classical service models. The final section concludes the paper.
2 Key Aspects of Service Behavior Models

2.1 Behavior Participants: Customers and Providers

In a manufacturing process, customers just pay attention to the final products and do not care about how they are produced. In this situation, the manufacturing process is fulfilled solely by providers, and customers need not participate in it at all. In services, however, the situation is quite different. Firstly, there are no tangible products; services are ideas and concepts that are part of a process. Secondly, services are created and consumed at the same time. Thirdly, from the customer’s perspective, there is typically a wide variation in service requirements and the corresponding service behaviors and offerings. From this point of view, customers and their individually personalized requirements should be carefully considered in service behavior models.

In some complex services, there will be more than one customer and provider. Furthermore, the role of a participant may change in different phases of a service. For example, a website provides a weather forecasting service to its cell-phone users via short messages; at the same time, it is also a customer of a meteorological station, from which it obtains weather information, and a customer of a mobile telephone company, through which it exchanges messages with its end users.

2.2 Interactive Co-Production Patterns

Concerning the different numbers of customers and providers participating in services, the interactive co-production process is classified into the following five patterns:
- Single-customer Single-provider (SCSP), which is the atomic, basic interaction pattern from which any other complex service can be composed. For example, in a restaurant service, a customer books and receives a dinner service from a restaurant.
- Multi-customer Single-provider (MCSP), where a provider offers services to multiple customers, and these customers collaborate to receive the services simultaneously or sequentially. For example, when an ERP vendor (the provider) implements a cost management system in a company, managers and employees from the financial, production, procurement and all other related sections (multiple customers) should work together to prepare cost-related business data, learn new cost budgeting and auditing techniques, etc.
- Single-customer Multi-provider (SCMP), where each provider provides part of the service behavior, and all the parts are composed together to accomplish the customer’s requirements. The relationships between multiple providers might be collaboration or competition. This pattern is considered a major source of service innovation because it exhibits the fundamental economic principle of “competitive advantage”, and it has become the most popular and dominant pattern in modern services. For example, mash-up techniques in Web 2.0 try to integrate services from different providers to form a new value-added service for a customer.
- Multi-customer Multi-provider (MCMP), which is the most complicated service pattern and can be decomposed into several MCSP or SCMP patterns. C2C e-business such as eBay is this kind of service.
- No-customer Multi-provider (NCMP), which reflects the traditional manufacturing process, i.e., providers collaborate with each other to fulfil some tasks without the participation of customers.
Fig. 1 shows the five types of co-production patterns in services.

Fig. 1. Five types of co-production patterns
The above five types of co-production patterns can be further composed to describe more complex co-production processes. For example, Fig. 2 shows a chained service where {C1, C2, C3, C4} and {PC1, PC2} form a MCMP pattern, {PC1, PC2} and {PC3} form a MCSP pattern, {PC3} and {P1, P2, P3} form a SCMP pattern, and {PC2} and {P4} form a SCSP pattern.

Fig. 2. A complex service chain composed of four co-production patterns (C: customers; PC: both customers and providers; P: providers)
2.3 Value Creation and Sharing

In manufacturing, the output products or parts are considered the tangible value created by the manufacturing process and activities, whose performance and effects can be measured by a set of parameters and attributes attached to the parts or products. In services, however, since there are no such physical products, it is rather difficult to measure what value, and how much of it, a service behavior creates. Such value is intangible and non-physical. In our opinion, service value can be defined as “the improvement of the states of customers or providers”. The following are some examples of service value:
- After receiving a four-year PhD education service, a student obtains his doctoral degree from the university. He gains value by obtaining domain-specific knowledge, skills and experience, as well as a legal certification to engage in certain professions. The university, as service provider, gains value by receiving the student’s tuition fee and a good reputation for PhD education.
- The goods of a consignor are transported from one city to another by a transportation service provided by a logistics company. The value the consignor gains is that the spatial location of his goods has been changed.
- After an enterprise implements an ERP system, the period for monthly material requirement planning is reduced from one day to ten minutes.
2.4 Risk Sharing

At build-time, we design service behaviors and expect a good result. At run-time, however, there may be a gap between the real results and our former expectations. Such a gap is called “service risk”. The following are some examples of risks:
- The university provides courses to students, but some students cannot pass the examination and get the degree; this is a risk of education services.
- A consultant with ten years of experience in the financial domain is assigned to a consulting service; however, he falls sick during the consulting, which delays the consulting period (a risk for the customer) and makes the customer unwilling to pay the consultant’s expected income (a risk for the provider).
3 Service-Provider-Customer (SPC) Model

As mentioned in Section 1, most current service models in the literature cannot fully cover the aspects listed in Section 2. In order to address this issue, we present our service behavior model.
3.1 Overview

We name our new service behavior model “Service-Provider-Customer (SPC)”; its goal is to describe how multiple service providers and customers fulfil service requirements via “co-production” to create value and share risks. SPC is modeled in the form of an extended swim-lane notation based on the UML activity diagram. Fig. 3 shows a simple example: an IT service project for implementing an Enterprise Software and Application (ESA), including four service tasks, three provider organizations and three customer organizations.

In an SPC model, there are three types of elements: (1) area division elements such as swim lanes and task zones; (2) service tasks and their semantic descriptions; (3) flow elements. An SPC has a horizontal and a vertical dimension. In the horizontal dimension, SPC is partitioned into multiple rows, each of which is named a service zone and represents one service task. In the vertical dimension, SPC is first partitioned into four swim-lanes, in left-to-right order: the providers’ lane PL, the interaction lane IL, the customers’ lane RL, and the quality lane QL. PL, IL and RL are used to represent the organization (or the dominating organization in a service interaction) of a service task. QL is adopted to list the quality parameters that represent the customer’s quality requirements for a task.

Round rectangles located at the intersection of a service zone and a swim lane are service sub-tasks. Their locations in the service zone and swim lane represent father-task and organization semantics. There are three types of service tasks: service sub-tasks in IL, No. n provider sub-tasks in PL and No. n customer sub-tasks in RL. A service sub-task represents the interaction semantics of service customer and provider. The No. n provider and customer sub-tasks represent the duties of the corresponding provider and customer. The broken line between them is a task mapping, which represents the mapping semantics between a service sub-task and a No. n provider (customer) sub-task. Arrows, hollow diamonds, short bold lines, solid rounds and circles with a solid core are flow elements; they are used to represent the temporal dependencies of service sub-tasks.
Fig. 3. SPC model of a service cluster (lanes: PROVIDERS (PL), INTERACTIVE PROCESS (IL), CUSTOMER (RL), QUALITY (QL); tasks: T1 Modeling AS-IS, T2 Modeling TO-BE, T3 IT Design, T4 IT Implementation)
Sometimes, if a service sub-task is too coarse-grained to be clearly described in one SPC, it may be refined into a new, separate SPC model. Fig. 4 shows an example.

Fig. 4. An example of SPC model refinement of a service sub-task
3.2 Model Elements in SPC
3.2.1 Task Zone

In the horizontal dimension, SPC is partitioned into multiple zones, each of which represents one service task Ti. All the round rectangles located in Ti’s row are the sub-tasks of Ti, denoted as {Ti1, Ti2, …, Tik}. If Tij needs to be detailed, it becomes the task zone of an SPC submodel, and the sub-tasks of Tij are denoted as {Tij1, Tij2, …, Tijk}. The relationships between Ti, Ti’s sub-task set {Ti1, Ti2, …, Tik}, and the sub-sub-task set {Tij1, Tij2, …, Tijk} represent the SPC hierarchy semantics.

3.2.2 Swim Lane

Swim lanes are used to describe organization information and quality information. The organization swim lanes express the organization semantics of a service task, including customer, provider and the importance of these organizations. Because there are possibly multiple customers {r1, r2, …, rm} and multiple providers {p1, p2, …, pn} in one service, RL and PL are further divided into sub-lanes, i.e., {RL1, RL2, …, RLm} and {PL1, PL2, …, PLn}. IL is also refined into m+1 sub-lanes, including one provider’s sub-lane ILm+1 and m customers’ sub-lanes {IL1, IL2, …, ILm}. For an arbitrary sub-task t, concerning t’s responsible organization, there exist the following two situations: (1) t is collaboratively the responsibility of multiple provider organizations {Pt1, Pt2, …, Ptj} (0 < j ≤ n) …
Fig. 5. Task zone and swim lane
3.2.3 Service Task and Task Mapping

A service task contains six types of semantically enhanced attributes and is defined as a six-tuple <Task Basic Info., Quality Info., Organization Info., Resource Info., I/O Info., Value&Risk Info.>.

Task Basic Info. includes the basic information of a service task, such as its ID, name, start/end date, budget, cluster, purpose, and the location of its zone and swim lane; it gives a basic introduction to the service task. Quality Info. is a set of quality parameters {Q1, Q2, …, Qj} and their values; this set is a subset of the quality parameters listed in the QL section of the task's zone. Organization Info. is a set of organizations {O1, O2, …, Oi} that interact in the service task; these organizations correspond to the swim lanes in which the task is located. Resource Info. is a set of resources {R1, R2, …, Ri} needed by the service task; a resource can be human, software or hardware. I/O Info. is a set of information objects {I1, I2, …, Ii} that are input or output by the service task. Value&Risk Info. is a set of value objects {VO1, VO2, …, VOi}; a value object is an attribute (or a set of attributes) of an organization, resource or information object, and the change of a value object represents the value and risk of the service task.

Task mapping expresses the mapping semantics between a service sub-task and a No. n provider (customer) sub-task. A service task is drawn as a rounded rectangle and a task mapping as a dashed line, as shown in Fig. 6.
Fig. 6. Service task and task mapping
The attributes of service task and task mapping are shown in Table 1.

Table 1. Attributes of service task and task mapping.

Element: Service task
  Task ID:              ID of the service task
  Task name:            Name of the service task
  Task type:            Service sub-task / No. n provider/customer sub-task
  Task start date:      When the task starts
  Task end date:        When the task finishes
  Task execute place:   Where the task is executed
  Task budget:          How much it costs
  Cluster:              The cluster of the service project
  Purpose:              Explanation of what the task is going to do
  Zone ID:              ID of the task zone
  Zone name:            Name of the task zone
  Swim lane ID:         ID of the swim lane
  Swim lane name:       Name of the swim lane
  Quality info. ID:     Quality evaluation standard of the task
  Organization ID:      ID of an organization interacting in the task
  Organization name:    Name of an organization interacting in the task
  Organization type:    Customer / Provider
  Resource ID:          ID of a resource used in the task
  Resource name:        Name of a resource used in the task
  Resource type:        ResourcePool / Humanware / Software / Hardware
  Number:               How many resources are needed
  Unit:                 Unit of the number
  Information ID:       ID of information input/output by the task
  Information name:     Name of information input/output by the task
  I/O type:             Input / Output
  Value object ID:      ID of a value object changed by the task
  Value object name:    Name of a value object changed by the task
  Value object type:    Qualitative / Quantitative
  Value object init:    The value it changes from
  Value object change:  The value it is planned to change to
  Risk change:          The value it changes to if the risk occurs
  Risk probability:     How often the risk happens

Element: Task mapping
  Sub-task ID:             ID of the service sub-task
  Sub-task name:           Name of the service sub-task
  No. n p/c sub-task ID:   ID of the No. n provider/customer sub-task
  No. n p/c sub-task name: Name of the No. n provider/customer sub-task
  No. n p/c sub-task type: Provider / Customer sub-task
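The six-tuple and the Table 1 attributes map naturally onto plain data structures. The sketch below is a minimal, hypothetical rendering of a service task and a task mapping; all class and field names are our own naming of the attributes above, not part of any SPC toolkit.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ValueObject:
    # Value&Risk Info.: the change of a value object carries the task's
    # value (planned change) and risk (change if the risk occurs)
    id: str
    name: str
    kind: str                  # "Qualitative" or "Quantitative"
    init_value: str            # value it changes from
    planned_value: str         # value it is planned to change to
    risk_value: str            # value it changes to if the risk occurs
    risk_probability: float    # how often the risk happens

@dataclass
class ServiceTask:
    # Task Basic Info.
    id: str
    name: str
    task_type: str             # "service", "provider" or "customer" sub-task
    zone_id: str               # locates the task in a service zone (row)
    lane_id: str               # locates the task in a swim lane (column)
    # Quality / Organization / Resource / I/O / Value&Risk Info.
    quality: dict = field(default_factory=dict)          # parameter -> required value
    organizations: List[str] = field(default_factory=list)
    resources: List[str] = field(default_factory=list)
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    value_objects: List[ValueObject] = field(default_factory=list)

@dataclass
class TaskMapping:
    # the dashed line between a service sub-task (in IL) and a
    # No. n provider/customer sub-task (in PL/RL)
    service_subtask_id: str
    pc_subtask_id: str
    pc_subtask_type: str       # "Provider" or "Customer"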
3.2.4 Task Flow Elements

These elements express the flow of service sub-tasks. A task flow is drawn as an arrow, a branch as a hollow diamond, a synchronization as a short bold line, the start as a solid circle, and the end as a circle with a solid core. They are shown in Fig. 7.
Fig. 7. Task flow, branch, synchronization, start and end
3.3 Case Study

This case study comes from an ocean logistics service. Haidu is a Chinese food company that needs to export its fillet products to South Korea; Huadong is an ocean shipping company that owns several ships; and Huodai is a freight forwarder who helps Haidu export its fillets by using Huadong's ships. The case contains six service tasks: booking cabins, loading goods, applying to the customs, ship loading, arrival, and pick-up. There are three customers, Haidu, Huodai and Huadong, and five providers: Huodai, Huadong, C.Y., Motorcade and Customs. Huodai is both a customer and a provider because its role changes between booking cabins and loading goods.

The loading-goods process runs as follows: after booking cabins, Huadong and C.Y. prepare containers and Huodai engages trucks from the motorcade; the trucks then pick up empty containers from C.Y. to load Haidu's foods. The process can be divided into five sub-tasks: container preparing, tractor distributing, empty container distributing, taking delivery, and container to container yard. In loading goods, container preparing is NCMP; tractor distributing is SCSP; empty container distributing is MCMP; taking delivery from the shipper is MCSP; and container to container yard is SCMP. A value object of these services is the delivery's status: the value of taking delivery from the shipper is that the delivery's status changes from unloaded to loaded, but there is a risk that the delivery may be damaged during loading. The SPC model is shown in Fig. 8.
Fig. 8. SPC model of Haidu’s full container load
4 Comparisons between SPC and other Service Behavior Models

Compared with other service process models, SPC not only shows the dependencies between service tasks, but also visually addresses the co-production relationship, which is the pivotal feature distinguishing services from other types of business processes. Based on the brief introduction of other service behavior models in Section 1, Table 2 compares SPC with five other models (O = supported, X = not supported).

Table 2. Comparisons between SPC and other service behavior models

Semantic          Molecular model   Blueprint   Flow chart   SADT   Dynamic EPC   SPC
Structure         O                 X           O            O      X             O
Organization      X                 X           O            O      O             O
Process           X                 O           O            O      O             O
Interaction       X                 O           X            X      X             O
Resource          O                 O           X            O      X             O
I/O Information   X                 X           X            O      O             O
Formalization     X                 X           O            O      O             O
Value and Risk    X                 X           X            X      X             O
Toolkit           X                 O           O            O      O             O
Concerning the interoperability of SPC, since it is based on the traditional activity diagram of UML, it is convenient to transform an SPC model into a UML
diagram model. Furthermore, it is also easy to transform SPC into any of the service models listed in Table 2 and to interoperate with them. For example, Fig. 9 shows a blueprint model mapped from the SPC model of the Haidu case in Fig. 8.
Fig. 9. Blueprint model transformed from SPC model of Haidu case
5 Conclusions

In this paper, the characteristics of service behavior are analyzed so that the SPC model can express the co-production feature. In SPC, the participants of a service are distinguished and role changes are also considered. On this basis, and by classifying service interaction modes, the interaction and duty of each participant are expressed clearly. Using value objects as attributes of service tasks, the value and risk of a service can be described. The other semantically enhanced elements and attributes give stronger support for gathering, understanding and describing service requirements. Further research includes detailing and refining the relationship between SPC and organization/resource/information models, and completing the modeling tools for SPC.
Acknowledgement

The research in this paper is partially supported by the National High-Tech Research and Development Plan of China (2006AA01Z167, 2006AA04Z165) and the National Natural Science Foundation (NSF) of China (60673025).
An Abstract Interaction Concept for Designing Interaction Behaviour of Service Compositions

Teduh Dirgahayu1, Dick Quartel2 and Marten van Sinderen1

1 Centre for Telematics and Information Technology (CTIT), University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands {t.dirgahayu, m.j.vansinderen}@utwente.nl
2 Telematica Instituut, P.O. Box 589, 7500 AN Enschede, The Netherlands [email protected]
Abstract. In a service composition, interaction behaviour specifies an information exchange protocol that must be complied with in order to guarantee interoperability between services. Interaction behaviour can be designed using a top-down design approach utilising high abstraction levels to control its design complexity. However, current interaction design concepts, which merely represent the interaction mechanisms supported by communication middleware, force designers to design interaction behaviour close to an implementation level. Such design concepts cannot be used for designing interaction behaviour at high abstraction levels. Designers need an interaction design concept that is able to model interactions in an abstract way. In this paper we present such a design concept, called abstract interaction. We show the suitability of our abstract interaction concept for designing interaction behaviour at high abstraction levels by comparing it to the BPMN interaction concept in an example.

Keywords: Enterprise modeling for interoperability; modelling methods, tools and frameworks for (networked) enterprises; service-oriented architectures for interoperability
1 Introduction

To run its business efficiently, an enterprise makes its business processes interact with each other and, if necessary, with the business processes of its partners. Service-oriented computing facilitates the realisation of such interacting business processes by means of service compositions [2][3][9]. Business processes are exposed as services and then linked to make them interact with each other.

(This work is part of the Freeband A-MUSE project (http://www.freeband.nl), which is sponsored by the Dutch government under contract BSIK 03025.)
An interaction between services can be simple, e.g. sending a message from one service to another, or complex, e.g. a negotiation for some procurement. A complex interaction is composed of a number of simpler interactions performing certain behaviour. We call this behaviour interaction behaviour. Interaction behaviour specifies an information exchange protocol that must be complied with in order to guarantee interoperability between services. Choreography and orchestration [2][14] are common ways of specifying and implementing interaction behaviour.

Several design methods have been proposed for designing interaction behaviour [1][4][5][6][7][8][10][19][20][21][22]. We argue that the interaction design concepts adopted in those design methods force designers to design interaction behaviour close to an implementation level. This is because those design concepts are very similar to the interaction mechanisms supported by communication middleware, e.g. message-passing and request/response interactions. Designing close to an implementation level reveals design complexity at once.

A top-down design approach utilising high abstraction levels can give designers some control over the complexity of an interaction behaviour design. Using such an approach, designers transform user requirements gradually into designs at different abstraction levels and thus reveal the complexity in a controlled manner. To be able to do that, designers need an interaction design concept that can model interactions in an abstract way.

The objective of this paper is to present an interaction design concept for designing interaction behaviour at high abstraction levels. In order to define the design concept, we identify problems with current interaction design concepts and define a set of requirements for the design concept. To show its suitability, we compare the interaction design concept to the BPMN interaction concept in an example.

This paper is structured as follows. Section 2 describes problems with current interaction design concepts. Section 3 presents an interaction design concept and its use in the design of interaction behaviour at high abstraction levels. Section 4 compares the interaction design concept to that of BPMN to assess their suitability for designing interaction behaviour at high abstraction levels. Finally, section 5 concludes this paper and identifies future work.
2 Problems with Current Interaction Design Concepts

Current methods for designing interaction behaviour adopt interaction design concepts represented in different design languages, such as UML [1][8][10][19][20], BPMN [5][6][21], Petri Nets [4][7], or others, e.g. Let's Dance [22] and ISDL [16].

UML supports two kinds of interaction, namely sending a signal and calling an operation [13]. They represent message-passing and request/response interactions, respectively. BPMN represents an interaction as a message flow showing that a business process sends a message and another business process receives that message [12]. In Let's Dance, an interaction is made up of two communication actions, namely
a message sending action and a message receipt action [22]. Thus, interactions in BPMN and Let's Dance represent message-passing interactions. Petri Nets [15] do not have any interaction concept. To model an interaction, the design methods in [4][7] use a pair of Petri Net transitions: one transition represents an activity sending a message and the other represents an activity receiving that message. An interaction modelled this way represents a message-passing interaction. The interaction design concept in ISDL [17] can be used to model interactions at a high abstraction level and interaction mechanisms at an implementation level. The design method in [16], however, does not give any hints on how to use the ISDL interaction design concept for modelling interactions at high abstraction levels.

As mentioned earlier, an interaction design concept representing an interaction mechanism forces designers to design interaction behaviour close to an implementation level, which reveals design complexity at once. Examples of such complexity are as follows.

A complex interaction has to appear as a composition of interaction mechanisms. When an interaction behaviour design involves many complex interactions, such compositions increase design complexity. Furthermore, when complex interactions are related to each other, presenting them as compositions potentially makes the relationships between them unclear, i.e. which interaction mechanisms belong to which complex interaction. Some structuring technique has to be applied to make those relationships clear, and the application of such a technique adds more complexity to the design.

The participants of an interaction can be primary or supporting participants. A primary participant is concerned with the result of the interaction. A supporting participant facilitates the interaction between primary participants. For example, the primary participants of a payment are a payer and a payee; the payment may also include a supporting participant, e.g. a bank that provides a money-transfer service. To produce a complete interaction behaviour design at or close to an implementation level, designers have to identify all the participants and take their behaviour into account in the design. Considering the behaviour of the supporting participants might add unnecessary complexity in the early phases of a design process.

Some design methods [6][8][10] introduce multiple abstraction levels. However, when dealing with interaction behaviour, those design methods specify the interactions in terms of interaction mechanisms. To be able to design interaction behaviour at high abstraction levels, designers need an interaction design concept that can model interactions in an abstract way.
3 Interaction Design Concept for High Abstraction Levels

In this section we present an interaction design concept for designing interaction behaviour at high abstraction levels.
3.1 Requirements

We define three requirements for the interaction design concept.

R1. The interaction design concept should be independent of any interaction mechanism. This requirement prevents the design concept from forcing designers to design interaction behaviour close to an implementation level.

R2. The interaction design concept should be able to model a complex interaction abstracting from the interaction's behaviour. This requirement allows designers to include a complex interaction in a design without cluttering the design with details about the interaction's behaviour. Such details shall be decided and included in designs at lower abstraction levels. As a result, a design at a high abstraction level is less complex and easier to understand.

R3. The interaction design concept should be realisable using existing interaction mechanisms. An interaction behaviour design produced by designers is eventually realised by developers by mapping the design onto existing interaction mechanisms. This requirement allows the design concept to model interaction mechanisms expressively. An expressive model avoids misinterpretation between the designers and developers.
3.2 Abstract Interaction Concept

To fulfil requirement R1, we define an interaction as an activity which is shared by multiple participants to establish some common results. An interaction represents a negotiation between participants in order to establish the results. An interaction either occurs for all participants or does not occur. If the interaction occurs, all participants can refer to the interaction results. If the interaction does not occur, none of the participants can refer to any (partial or temporal) result of the interaction. The participation of each participant is represented by an interaction contribution, which defines the constraints it has on the interaction results. An interaction can only occur if the constraints of all participants are satisfied. In this case, common results are established. The results are the same for all participants, but possibly a participant may not be interested in the complete results.

To fulfil requirement R2, we define the notion of abstract interaction by specialising the definition of interaction. An abstract interaction is an interaction at a higher abstraction level that represents a composition of interactions performing certain interaction behaviour at a lower abstraction level. An abstract interaction is concerned only with (i) the results of the interaction behaviour and (ii) the constraints that should be satisfied by the results. An abstract interaction, hence, may abstract from supporting participants, the results and constraints of the composed interactions, and the relation between the composed interactions.
In a top-down design process, an abstract interaction is meant to be refined into an interaction behaviour design. This design consists of a composition of interactions that are more concrete than the abstract interaction they refine. The design may also introduce supporting participants. An abstract interaction does not impose a certain interaction behaviour design. The interaction behaviour design, however, must conform to or be consistent with the abstract interaction it refines: it must establish the results specified by the abstract interaction without violating the constraints of the abstract interaction.

To fulfil requirement R3, we define that an abstract interaction may specify the direction in which its information flows. In an interaction mechanism, such a direction can be identified and gives an indication of the roles of the participants. For example, in a message-passing interaction, the information flows from a sender to a receiver. The participant from whom the information originates plays the role of the sender; the participant to whom the information sinks plays the role of the receiver. The ability to specify such a direction makes abstract interactions expressive enough to model interaction mechanisms.

3.3 Modelling using Abstract Interaction Concept

To support the design of interaction behaviour using abstract interaction, we borrow behavioural design concepts defined in ISDL [16]. We design the interaction behaviour for the following scenario. A buyer service interacts with a seller service in the purchase of an article. The buyer wants to buy a notebook for a maximal price of 900 euro and wants the notebook to be delivered to Enschede within seven days. The seller is willing to sell any article listed in its catalogue at a minimal price that depends on the particular article. The seller can deliver a purchased article within two to five days if its target delivery location is in Europe.

The purchase interaction consists of three phases: selection, payment, and delivery. In the selection phase, the buyer selects a notebook whose price is not higher than 900 euro from the seller's catalogue. In the payment phase, the buyer pays the seller for the selected notebook. Finally, in the delivery phase, the seller delivers the purchased notebook to the buyer.

At a high abstraction level, we design the purchase interaction between the buyer and seller as a single abstract interaction (see Fig. 1). Services are represented as rounded rectangles. An interaction is represented as segmented ellipses linked with a line; a segmented ellipse represents the interaction contribution of a service. Interaction results are represented as information attributes. An information attribute has an information type and is assigned a value when the interaction occurs. Information attributes and constraints are written in boxes attached to their corresponding interaction contributions. If this interaction occurs, it results in the purchase of a notebook that is delivered to Enschede at some price and within some delivery days that meet the constraints defined by the participants. This design abstracts from how the purchase is performed.
Fig. 1. A purchase interaction as an abstract interaction
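To make the occurrence rule behind Fig. 1 concrete (an interaction occurs only if one common result satisfies the constraints of all contributions), the sketch below checks the buyer's and seller's constraints from the scenario against a candidate result. It is a minimal illustration, not ISDL tooling; the names are ours, and the seller's minimum price of 750 euro is an assumed catalogue value.

from dataclasses import dataclass
from typing import Callable, Dict, List

Result = Dict[str, object]   # the common interaction result: article, price, days, ...

@dataclass
class Contribution:
    participant: str
    constraint: Callable[[Result], bool]   # each participant constrains the shared result

def interaction_occurs(contributions: List[Contribution], result: Result) -> bool:
    # An interaction either occurs for all participants or not at all:
    # it occurs only if every contribution's constraint holds for the result.
    return all(c.constraint(result) for c in contributions)

buyer = Contribution("buyer", lambda r: r["article"] == "notebook"
                     and r["price"] <= 900 and r["days"] <= 7
                     and r["location"] == "Enschede")
seller = Contribution("seller", lambda r: r["price"] >= 750   # assumed catalogue minimum
                      and 2 <= r["days"] <= 5 and r["region"] == "Europe")

candidate = {"article": "notebook", "price": 850, "days": 5,
             "location": "Enschede", "region": "Europe"}
print(interaction_occurs([buyer, seller], candidate))   # True: both constraints are met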
At a lower abstraction level, we refine the design to show the behaviour of the purchase interaction (see Fig. 2). The purchase interaction is decomposed into three interactions: selection, payment, and delivery, representing the phases within the purchase interaction. The interaction behaviour design also defines the relations between those interactions; the relations are represented as arrows between the interaction contributions. To be realisable, the interactions in Fig. 2 have to be further refined into their interaction behaviour because they cannot be straightforwardly mapped onto existing interaction mechanisms. We show their refinement in section 4.2.
Fig. 2. The behaviour of the purchase interaction
3.4 Modelling Interaction Mechanisms

Abstract interactions at an implementation level should be realisable. At this level, an abstract interaction should expressively model its target interaction mechanism. We illustrate how to represent a message-passing interaction as an abstract interaction. Fig. 3 models the behaviour of a message-passing interaction between a sender and a receiver. The sender gives a message "Hello" to the communication middleware through a send interaction. The middleware then passes the message to the receiver through a receive interaction.
Fig. 3. The behaviour of a message-passing interaction
The middleware plays the role of a supporting participant. An abstract interaction should be able to abstract the interaction behaviour from the middleware's participation while maintaining the model's expressiveness. To indicate the direction in which the message flows, we use an arrow to link the interaction contributions (see Fig. 4).
Fig. 4. Message-passing interaction abstracting from the middleware
4 Comparison with BPMN

In this section, we show the suitability of our abstract interaction concept by comparing it to the BPMN interaction concept. We choose BPMN for two reasons. First, the BPMN interaction design concept represents a message-passing interaction; hence BPMN can be considered representative of other design languages whose interaction design concepts also represent message-passing interactions. Second, BPMN supports abstraction levels by providing the notations of abstract processes and (collapsed) sub-processes. Therefore, we can compare the use of the concepts at multiple abstraction levels.

We use the purchase scenario described in Section 3.3, with the following additional user requirements. To facilitate payment, the buyer and seller agree to use a money-transfer service provided by a bank. To facilitate delivery, the seller makes use of a delivery service provided by a courier.

4.1 Designs in BPMN

At a high abstraction level, we model the purchase scenario as interacting abstract processes (see Fig. 5). The model shows the message exchange between participants. All participants, i.e. the primary and supporting participants, and all message flows have to appear in the design.
We cannot abstract closely-related message flows into a single message flow because such an abstraction is not supported by the semantics of message flow. Abstracting the design from the supporting participants, i.e. the bank and the courier, would remove the message flows numbered 5, 6, 8, 9, 11, 12, and 14. This would leave the design incomplete and unclear. Questions may arise, e.g. after receiving an invoice (no. 4), should the buyer pay the invoice before notifying the seller about the payment (no. 7)?

(Fig. 5 shows four pools, Buyer, Seller, Bank and Courier, exchanging fourteen message flows: 1) Request Catalogue; 2) Send Catalogue; 3) Send Order; 4) Send Invoice; 5) Order Money Transfer; 6) Confirm Money Transfer; 7) Notify Payment; 8) Check Account; 9) Show Balance; 10) Confirm Payment; 11) Order Delivery; 12) Confirm Order; 13) Notify Delivery; 14) Deliver Product.)
Fig. 5. The purchase scenario as interacting abstract processes
To model the phases in the purchase scenario, we refine the design by adding the phases as collapsed sub-processes within the participants’ processes. The collapsed sub-processes are selection, payment and delivery (see Fig. 6). Since the interactions are already at implementation level, we do not refine them.
Fig. 6. Phases are represented as collapsed sub-processes
To model the complete private business processes of the participants, we further refine the design by expanding the sub-processes with activities (see Fig. 7). We do not refine the interaction.
Fig. 7. The purchase scenario in BPMN
4.2 Designs using Abstract Interaction Concept

At a high abstraction level, we represent the purchase scenario as a single purchase interaction between the primary participants, i.e. the buyer and the seller (see Fig. 1). The interaction models the results intended from the scenario. The design abstracts from the supporting participants, i.e. the bank and courier. To model the phases, we refine the purchase interaction into three interactions: selection, payment, and delivery (see Fig. 2). Information attributes and constraints are refined and distributed over the interactions.
To include the participation of the bank and the courier, we further refine the design by introducing the bank in the payment interaction and the courier in the delivery interaction (see Fig. 8). For brevity, we omit the information attributes and constraints. We then refine the design further to model the behaviour of each interaction (see Fig. 9). The refinement results in a choreography between the participants. We structure the behaviour of the buyer and the seller to indicate the phases. Refinement should continue until all the interactions become realisable.
Fig. 8. The participation of the bank and courier
Fig. 9. The behaviour of the purchase interaction
4.3 Discussion

Ultimately, an interaction is performed to establish some results. The results are more essential than the way they are established. Therefore, we consider an interaction design concept suitable for designs at high abstraction levels if it can
represent an interaction and its results abstracting from the way the results are established.

Abstraction levels in BPMN can only be applied within the behaviour of the individual business processes participating in interaction behaviour. BPMN cannot raise the interaction behaviour itself to a higher abstraction level: it cannot model interactions and their results without specifying how the results should be established. We conclude that the BPMN interaction design concept is not suitable for designing interaction behaviour at high abstraction levels.

Our abstract interaction concept is defined with the intention to model interaction behaviour designs at high abstraction levels. As evidence, we have shown its suitability in the design of the example. We start the design by modelling the results expected from the scenario. Identification and inclusion of the supporting participants and detailed interactions are deferred until they matter to the design. For instance, at a lower abstraction level, we want to show the phases in the scenario; we model the phases as interactions (see Fig. 2) without yet defining the interaction behaviour of the phases. We claim that the abstract interaction concept is suitable for designing interaction behaviour at high abstraction levels.
5 Conclusions

We have presented an interaction design concept called abstract interaction for designing the interaction behaviour of service compositions at high abstraction levels. An abstract interaction is able to represent interaction behaviour as a single interaction at a high abstraction level. It is concerned with the results of the interaction behaviour and the constraints which should be satisfied by the results, abstracting from the behaviour itself. We have shown the suitability of the abstract interaction concept in the design of interaction behaviour at high abstraction levels.

An abstract interaction may have several possible refinements at a lower abstraction level and, hence, multiple realisations. Thus, the abstract interaction concept can be used to extend the approach defined in the Model-Driven Architecture (MDA) [11]. An abstract interaction may have not only multiple realisations on different technology platforms, but also multiple realisations with different interaction behaviour. For example, the design in Fig. 2 can be refined into a different interaction behaviour in which the payment interaction is done using a credit card.

The abstract interaction concept supports as many abstraction levels as designers need. In some cases, designers prefer a limited set of abstraction levels, each with a pre-defined purpose. Design methods defining such a limited set of abstraction levels can be developed as guidelines for designing interaction behaviour using the interaction concept. For example, a framework in [18] defines three generic abstraction levels. At a high abstraction level, a service is modelled as a single interaction between a service user and provider. At a lower abstraction level, this interaction is refined into a choreography of multiple interactions. At another lower abstraction level, the service provider may be
refined into an orchestration of other service compositions. The abstract interaction concept fits within this limited set of abstraction levels.

The abstract interaction concept is applicable at different abstraction levels. This capability allows designers to apply the same refinement patterns and conformance assessment method consistently. The interaction concept does not require designers to master different design concepts and tools for different abstraction levels.

In future work, we will identify patterns of interaction refinement. Such patterns may serve as guidelines for designers in refining an abstract interaction into an interaction behaviour design. We will also develop rules to support conformance assessment for those patterns. The patterns and rules should include time attributes of an interaction; time attributes are useful for specifying the moment and the duration an interaction may occur.
References

[1] Baresi L, Heckel R, Thöne S, Varró D (2003) Modeling and validation of service-oriented architectures: application vs. style. Proc. 9th European Software Engineering Conf.: 68-77
[2] Benatallah B, Dijkman RM, Dumas M, Maamar Z (2005) Service Composition: Concepts, Techniques, Tools and Trends. Service-Oriented Software Engineering: Challenges and Practices. Idea Group, Inc.: 48-66
[3] Curbera F, Khalaf R, Mukhi N, Tai S, Weerawarana S (2003) The Next Step in Web Services. Communications of the ACM 46(10): 24-28
[4] Dijkman R, Dumas M (2004) Service-Oriented Design: A Multi-Viewpoint Approach. International Journal of Cooperative Information Systems 13(4): 337-368
[5] Dijkman RM (2006) Choreography-Based Design of Business Collaborations. BETA Working Paper WP-188, Eindhoven University of Technology
[6] Emig C, Weisser J, Abeck S (2006) Development of SOA-Based Software Systems - an Evolutionary Programming Approach. Proc. Advanced Intl. Conf. on Telecommunications and Intl. Conf. on Internet and Web Applications and Services: 182-187
[7] Hamadi R, Benatallah B (2003) A Petri Net-Based Model for Web Service Composition. Proc. 14th Australasian Database Conf.: 191-200
[8] Kramler G, Kapsammer E, Retschitzegger W, Kappel G (2006) Towards Using UML 2 for Modelling Web Service Collaboration Protocols. Interoperability of Enterprise Software and Applications, Springer: 227-238
[9] Leymann F, Roller D, Schmidt M-T (2002) Web Services and Business Process Management. IBM Systems Journal 41(2): 198-211
[10] Millard DE, Howard Y, Jam E-R, Chennupati S, Davis HC, Gilbert L, Wills GB (2006) FREMA Method for describing Web Services in a Service-Oriented Architecture. Technical Report ECSTR-IAM06-002, University of Southampton
[11] OMG (2001) Model Driven Architecture (MDA). ormsc/2001-07-01
[12] OMG (2006) Business Process Modeling Notation (BPMN) Specification. dtc/06-02-01
[13] OMG (2007) Unified Modeling Language: Superstructure version 2.1.1. formal/2007-02-03
[14] Peltz C (2003) Web Services Orchestration and Choreography. IEEE Computer 36(8): 46-52
[15] Peterson JL (1981) Petri Net Theory and the Modeling of Systems. Prentice-Hall
[16] Quartel D, Dijkman R, van Sinderen M (2004) Methodological support for service-oriented design with ISDL. Proc. 2nd Intl. Conf. on Service Oriented Computing: 1-10
[17] Quartel D, Ferreira Pires L, van Sinderen M (2002) On Architectural Support for Behaviour Refinement in Distributed Systems Design. Journal of Integrated Design and Process Science 6(1): 1-30
[18] Quartel DAC, Steen MWA, Pokraev S, van Sinderen MJ (2007) COSMO: A Conceptual Framework for Service Modelling and Refinement. Information Systems Frontiers 9: 225-244
[19] Skogan D, Grønmo R, Solheim I (2004) Web service composition in UML. Proc. 8th IEEE Intl. Enterprise Distributed Object Computing Conf.: 47-57
[20] Thöne S, Depke R, Engels G (2003) Process-Oriented, Flexible Composition of Web Services with UML. LNCS 2784: 390-401
[21] White SA (2005) Using BPMN to Model a BPEL Process. BPTrends 3(3): 1-18
[22] Zaha JM, Dumas M, ter Hofstede A, Barros A, Decker G (2006) Service Interaction Modeling: Bridging Global and Local View. Proc. 10th IEEE Intl. EDOC Conf.: 45-55
Preference-based Service Level Matchmaking for Composite Service

Ye Shiyang1,2, Wei Jun2, Huang Tao1,2

1 Department of Computer Science, University of Science & Technology of China
2 Institute of Software, Chinese Academy of Sciences
{syye, wj, tao}@otcaix.iscas.ac.cn
Abstract. In advanced service-oriented systems, service providers offer their services at multiple service levels, and service consumers have different preferences for different service levels. Service level matchmaking can be performed to identify the best service level available at runtime. However, current preference models are not suitable for price-sensitive situations and are inefficient when dealing with multiple QoS properties. Moreover, current approaches overlook changes in the service levels of composite services in dynamic environments. In this paper, we present a preference model that handles multiple QoS properties efficiently by using a utility function and suits price-sensitive situations by introducing the notion of acceptable price. An approach for automatically generating service levels is also presented to deal with the changing service levels of composite services. On this basis, a preference-based service level matchmaking model and algorithm are proposed. Experimental results indicate that our service level matchmaking mechanism can effectively match a service level conforming to consumer preference with high performance.

Keywords: Web services, service matchmaking, service level agreement, quality of service.
1 Introduction

With the development of Web Services technology, more and more enterprises deliver their business through Web Services. Some enterprises, as service providers, need to offer their services at multiple service levels because of the diversity of service consumers' requirements, while other enterprises, as service consumers, have different preferences for different service levels. Therefore, it is necessary to establish a mechanism to match service levels automatically between service provider and service consumer. To match service levels automatically, the preference must be represented, and the service levels of composite services must be decided automatically.
Current approaches to preference representation, such as GlueQoS [5] and WS-Agreement [11], enumerate the service levels that the service consumer prefers. However, these approaches are inefficient when dealing with multiple QoS properties and their values. For example, a service level described by 5 QoS properties, each with 10 possible values, leads to 10^5 different service levels [14]. An inefficient representation model makes service level matchmaking inefficient, with a complexity of O(m×n), where m is the number of service levels the service consumer prefers and n is the number of service levels the service provider provides; as discussed above, m and n are generally very large. Moreover, these approaches are not suitable for price-sensitive consumers because service price is treated as an ordinary QoS property.

Current approaches to deciding the service levels of composite services are based on a service repository, such as Universal Description, Discovery, and Integration (UDDI) [13]. However, the registered service levels of composite services become inaccurate when the service levels of the underlying component services change in a dynamic environment.

In this paper, we present a preference model in which preference is represented by a utility function to which multiple QoS properties are mapped. This preference model avoids enumeration and decreases the time complexity. By introducing the notion of acceptable price, the model can describe the preference for service price more flexibly, which makes it well suited to price-sensitive consumers. Moreover, we present an approach for automatically generating the service levels of composite services from the available component services: by executing the service level composition semantics, the service levels of a composite service can be generated automatically and dynamically, so they remain accurate when component services change. Finally, based on the preference model and the service level generating approach, we further present a service level matchmaking model and algorithm.

The key contributions of this paper are 1) an efficient preference model that is well suited to price-sensitive consumers, 2) an automatic service level generating approach to determine the service levels of composite services, and 3) a service level matchmaking model and algorithm. Our approach is implemented within the OnceServiceExpress architecture, a platform which supports the execution of web services [15].

The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 outlines a scenario of service level matchmaking. Section 4 introduces the service level matchmaking mechanism in detail, and Section 5 presents the service level generating approach. After that, Section 6 gives our experiments and evaluation. Finally, Section 7 concludes the paper.
2 Related Works

There are basically two categories of work in the field of service level matchmaking. One is service selection based service level matchmaking techniques
[1, 2, 3]; the other is service negotiation based service level matchmaking techniques [4, 5, 6].

Service selection based service level matchmaking techniques allow a consumer to select, from the services satisfying its functional requirement, the service that best matches its non-functional requirement. The works [1, 2] proposed a QoS-driven service selection approach for service composition, through which a composite service can provide different service levels. The work [6] proposed a QoS-broker-based service selection architecture, which uses a two-phase verification technique to match the most satisfying service. However, service selection based matchmaking techniques are not appropriate for partners with a steady business relationship, because such consumers generally expect to get services from specific business partners. Service negotiation based service level matchmaking techniques fit this situation well.

Service negotiation based service level matchmaking negotiates service levels between service consumer and service provider, so as to match a service level that satisfies both of them. The work [4] presented a trade-off based negotiation mechanism to match a service level satisfying both requirements, but it did not consider the dynamics of the service levels that a service provider can provide. The work [5] introduced a policy-based negotiation mechanism to address this, but the policy can only be specified by a policy engineer; it cannot be generated automatically. Later, [6] suggested a rule and utility function based negotiation method; it can automatically adapt the policies according to the system workload by using a resource function and a risk function. However, this work mainly focuses on atomic services rather than composite services. In fact, in a service-oriented computing environment, a service is generally composed of component services, and the service levels it can provide depend mainly on these component services. Our work studies how the service levels of a composite service depend on its component services.

Whether service selection based or service negotiation based, service consumers need to model the service levels they prefer. A lot of work has been done on modeling a service level. Several works model service levels by extending WSDL [7] with QoS properties, e.g. WSLA [8] and WSML [9]. XQuery [10] and WS-Policy [11] are able to express arbitrary properties by generalizing beyond QoS properties, but these techniques cannot quantitatively specify how the service price influences the service consumer's preference. Works such as WS-Agreement [12] and WSOL [15] attach service price and service consumer's preference to service levels by introducing classes of services, but they do this through enumeration, which is very inefficient as discussed earlier. The work [14] suggested a utility function based approach to describe the preference for a service level, but it did not consider critical factors like price. However, service price is one of the most important factors influencing a service consumer's preference; for example, a consumer may prefer a lower service level with a very low price to a higher one with a very high price. In this paper, we present a service level preference model that considers both service level and service price by introducing the notion of acceptable price.
3 A Service Level Matchmaking Scenario

ONCETravel is an enterprise that provides travel solutions such as a Travel Order Service, which includes a Transportation Ordering Service and a Hotel Ordering Service. The Transportation Ordering Service offers a Self-driving Service or a Bus Ticket Ordering Service, and the Self-driving Service provides a Map Search Service and a Car Rental Service together. ONCETrip is a travel website, which relies on its partner ONCETravel to provide travel order services to website users, as sketched in Fig. 1.

On the one hand, to improve the market competitiveness of its service, ONCETravel needs to provide multiple service levels. On the other hand, ONCETrip shows different preferences for the different service levels provided by ONCETravel, considering both cost and attractiveness to website users. A service level should be matched according to its preference. Therefore, we need a way to match a service level provided by ONCETravel based on ONCETrip's preference. To enable this, three problems must be solved.
Fig. 1. Scenario for service level matchmaking.
First, how do we decide which service levels can be provided by Travel Order Service? Travel Order Service is composed of four component services. Its service levels depend mainly on the service levels of these component services and on the composition structure. Moreover, the available component services and their service levels are usually dynamic in a service-oriented computing environment.

Second, how can ONCETrip describe its preference for different service levels? Since the service levels provided by Travel Order Service are dynamic, it is hard for ONCETrip to enumerate its preference for every service level statically; moreover, enumeration is inefficient. There are also many factors that can influence ONCETrip's preference, e.g. the QoS properties and the service price. As a result, we need a method to model the preference for different service levels.

Third, an approach is needed to match service levels automatically between ONCETrip and ONCETravel. Because the service levels of Travel Order Service and the preference of ONCETrip are dynamic, an automatic service level matchmaking mechanism is necessary.
4 Automatic Service Level Matchmaking Mechanism

In this section, we present an automatic service level matchmaking mechanism, which includes a preference model, a service level providing model and a service level matchmaking model. Consumers describe their preference for a specific service level through the preference model; the service level providing model is composed of several service levels and their prices; and the service level matchmaking model presents the principle of service level matchmaking based on the preference model and the service level providing model. A matchmaking algorithm is also given in this section.

4.1 Preference Model

A consumer has particular requirements on service level because of its business objectives. The satisfaction grade for a service level is one of the most important factors influencing the consumer's preference, but it is not the only one: the price of a service level is another important factor because of cost considerations, especially for price-sensitive consumers. We argue that the satisfaction grade and the service price together decide the consumer's preference. In this subsection, by introducing the notion of acceptable price, we present a preference model in which satisfaction grade and service price are mapped to the service consumer's preference.

Definition 1 (Service Level). A Service Level is a set of attribute expressions {AttrExp1, AttrExp2, …, AttrExpn}, where n is the number of QoS properties and AttrExpi is defined by the following BNF:

AttrExp ::= Attr AttrOP AttrValue;
Attr ::= RespTime | Availability | Throughput | Reputation | Security;
AttrOP ::= "≤" | "≥" | "=";
AttrValue ::= float | true | false;

An example of a service level is {RespTime ≤ 150, Availability ≥ 99.9%, Throughput ≥ 400, Reputation ≥ 80%, Security = true}, which describes the QoS properties response time, availability, throughput, reputation and security.

Definition 2 (Satisfaction Grade). If {AttrExp1, AttrExp2, …, AttrExpn} is a service level, the Satisfaction Grade of this service level is defined as

    SatisfactionGrade = Σ_{i=1}^{n} w_i × f_i(Attr_i),
where Attr_i is the QoS property expressed by AttrExp_i, f_i is the satisfaction function of Attr_i, and w_i is the weight of this QoS property. For example, to fulfill website users' requirements, ONCETrip has different satisfaction functions for different QoS properties, shown as follows:
f_RespTime(Attr_RespTime) =
    1                                  if 0 ≤ RespTime < 100 ms;
    1 - (Attr_RespTime - 100)/200      if 100 ms ≤ RespTime < 300 ms;
    0                                  if 300 ms ≤ RespTime;

f_Throughput(Attr_Throughput) =
    0                                  if 0 ≤ Throughput < 200 times/sec;
    (Attr_Throughput - 200)/300        if 200 times/sec ≤ Throughput < 500 times/sec;
    1                                  if 500 times/sec ≤ Throughput;

f_Availability(Attr_Availability) =
    0                                  if 0 ≤ Availability < 99%;
    0.25                               if 99% ≤ Availability < 99.9%;
    0.50                               if 99.9% ≤ Availability < 99.99%;
    0.75                               if 99.99% ≤ Availability < 99.999%;
    1                                  if 99.999% ≤ Availability;

f_Reputation(Attr_Reputation) =
    0                                                 if 0 ≤ Reputation < 70%;
    ((Attr_Reputation - 70%)/(100% - 70%))^4          if 70% ≤ Reputation ≤ 100%;

f_Security(Attr_Security) =
    1                                  if Security = true;
    0                                  if Security = false.
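These piecewise curves translate directly into code. The sketch below illustrates Definition 2 with ONCETrip's functions and the weights given in the next paragraph; the function bodies follow our reconstruction of the garbled formulas, so the exact breakpoints (the reputation curve in particular) should be treated as assumptions.

def f_resp_time(ms):
    # satisfaction falls linearly from 1 to 0 between 100 ms and 300 ms
    if ms < 100: return 1.0
    if ms < 300: return 1.0 - (ms - 100) / 200.0
    return 0.0

def f_throughput(tps):
    # satisfaction rises linearly from 0 to 1 between 200 and 500 times/sec
    if tps < 200: return 0.0
    if tps < 500: return (tps - 200) / 300.0
    return 1.0

def f_availability(a):
    # step function over the availability classes
    for threshold, grade in ((0.99999, 1.0), (0.9999, 0.75), (0.999, 0.5), (0.99, 0.25)):
        if a >= threshold: return grade
    return 0.0

def f_reputation(r):
    # quartic ramp from 70% to 100% (reconstructed; an assumption)
    return ((r - 0.70) / 0.30) ** 4 if r >= 0.70 else 0.0

def f_security(secure):
    return 1.0 if secure else 0.0

def satisfaction_grade(level, weights):
    # Definition 2: weighted sum of the per-property satisfaction functions
    funcs = {"RespTime": f_resp_time, "Availability": f_availability,
             "Throughput": f_throughput, "Reputation": f_reputation,
             "Security": f_security}
    return sum(weights[p] * funcs[p](v) for p, v in level.items())

# the service level from the running example (percentages as fractions)
level = {"RespTime": 150, "Availability": 0.999, "Throughput": 400,
         "Reputation": 0.80, "Security": True}
weights = {"RespTime": 0.3, "Availability": 0.2, "Throughput": 0.1,
           "Reputation": 0.3, "Security": 0.1}
print(round(satisfaction_grade(level, weights), 3))   # prints the weighted grade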
The weights of these QoS properties are {w_RespTime = 0.3, w_Availability = 0.2, w_Throughput = 0.1, w_Reputation = 0.3, w_Security = 0.1}. As a result, the satisfaction grade is:

Satisfaction grade = 0.3 × f_RespTime(Attr_RespTime) + 0.2 × f_Availability(Attr_Availability) + 0.1 × f_Throughput(Attr_Throughput) + 0.3 × f_Reputation(Attr_Reputation) + 0.1 × f_Security(Attr_Security).

A consumer will pay different prices for different service levels for reasons of cost. We introduce the notion of Acceptable Price to model how the satisfaction grade influences the consumer's preference for a service level.

Definition 3 (Acceptable Price). The Acceptable Price is the highest price that the service consumer can accept for the satisfaction grade of a service level.

Definition 4 (Pricing Function). The Pricing Function is a function of one variable which maps the satisfaction grade to the acceptable price.

In the following example, ONCETrip will pay nothing for a service level whose satisfaction grade is lower than 60%, while as the satisfaction grade approaches 100%, the price ONCETrip will pay increases rapidly up to a maximum of 0.8$. To describe this, ONCETrip defines the pricing function:

AcceptablePrice(SatisfactionGrade) =
    0                                            if 0 ≤ SatisfactionGrade < 0.6;
    0.8 × ((SatisfactionGrade - 0.6)/0.4)^2      if 0.6 ≤ SatisfactionGrade ≤ 1.
Definition 5 (Service Level Alternative). A Service Level Alternative is a pair <Service Price, Service Level>, where Service Price is the price specified by the service provider for this Service Level.

Definition 6 (Preference). Preference is defined as the difference between the Acceptable Price and the Service Price: Preference = Acceptable Price - Service Price. Preference describes the consumer's preference for a service level alternative.

Continuing the previous example, for the service level {RespTime ≤ 150, Availability ≥ 99.9%, Throughput ≥ 400, Reputation ≥ 80%, Security = true}, the service price specified by the provider is 0.05$; the service level alternative is therefore <0.05$,
{RespTime ≤ 150, Availability ≥ 99.9%, Throughput ≥ 400, Reputation ≥ 80%, Security = true}>. Calculated with Definition 2, the Satisfaction Grade is 0.735; through Definition 4, the Acceptable Price is 0.09; as a result, the Preference is 0.04 according to Definition 6.

4.2 Service Level Providing Model

In this subsection, we present a service level providing model, QService, which describes the service levels of a composite service and the price of each service level.

Definition 7 (QService). If a service provides function F with n service levels L1, L2, …, Ln, and the prices of these service levels are ServicePrice1, ServicePrice2, …, ServicePricen respectively, then QService is defined as a pair <F, Q>, where Q is a set with n elements {<ServicePrice1, L1>, <ServicePrice2, L2>, …, <ServicePricen, Ln>}.

In the example of ONCETravel, the service level providing model of the Map Search Service is:

QService_MapSearch = <MapSearch, {<ServicePrice1, L1>, <ServicePrice2, L2>, <ServicePrice3, L3>}>
  = <MapSearch, {<ServicePrice1, L1>,
      <0.12$, {RespTime ≤ 150, Availability ≥ 99.9%, Throughput ≥ 400, Reputation ≥ 80%, Security = true}>,
      <0.15$, {RespTime ≤ 100, Availability ≥ 99.99%, Throughput ≥ 500, Reputation ≥ 90%, Security = true}>}>
As noted above, the service levels of a composite service depend on its component services and its composition structure. In Section 5, we will present an automatic approach to generate the service level providing model of a composite service.

4.3 Service Level Matchmaking Model

Based on the preference model and the service level providing model, we present a service level matchmaking model, through which the service level that best satisfies the consumer's preference can be matched. Definition 8 (Service Level Matchmaking Model). For a service level providing model QService and a preference model, the Service Level Matchmaking Model is defined as max {Preference(<ServicePricei, Li>) | <ServicePricei, Li> ∈ QService}.
According to Definition 8, we present the service level matchmaking algorithm:

Input: Preference model, QService
Output: the service level that best matches the preference
Description:
1. set MaxPreference to 0;
2. for each element <ServicePricei, ServiceLeveli> in QService:
   1) compute SatisfactionGradei from ServiceLeveli;
   2) compute AcceptablePricei := PricingFunction(SatisfactionGradei);
   3) Preferencei := AcceptablePricei − ServicePricei;
   4) if Preferencei > MaxPreference { MaxPreference := Preferencei; service level := ServiceLeveli; }
3. return service level.
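A compact executable rendering of this algorithm is sketched below; the pricing function is ONCETrip's from Section 4.1, and satisfaction_grade stands for any implementation of Definition 2, such as the one sketched earlier.

```python
# Sketch of the service level matchmaking algorithm (Definition 8).

def pricing_function(grade):
    return 0.0 if grade < 0.6 else 0.8 * ((grade - 0.6) / 0.4) ** 2

def match_service_level(qservice, satisfaction_grade):
    """qservice: list of (service_price, service_level) alternatives.
    Returns the alternative with the highest positive Preference."""
    best_level, max_preference = None, 0.0
    for service_price, service_level in qservice:
        grade = satisfaction_grade(service_level)
        acceptable_price = pricing_function(grade)
        preference = acceptable_price - service_price
        if preference > max_preference:
            max_preference, best_level = preference, service_level
    return best_level
```

For the worked example, an alternative priced at 0.05$ with satisfaction grade 0.735 yields an acceptable price of about 0.09 and thus a preference of about 0.04, matching the numbers above.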
5 Service Level Automatic Generating Approach for Composite Service

In this section, we first present a Service Level Enabled Composite Service Description Model, which describes the factors influencing the service level providing model of a composite service. Then we define the semantics of service level composition and the semantics of QoS property composition. By executing these composition semantics, the service level providing model of a composite service can be generated automatically.

5.1 Service Level Enabled Composite Service Description Model

Definition 9 (Available Services). For a composite service with n component services, Available Services is a set {QServiceSet1, QServiceSet2, …, QServiceSetn}, where QServiceSeti is the set of available services for component service i, and QServiceSeti is itself a set {QServicei,1, QServicei,2, …, QServicei,m}. Definition 10 (Composite Service Structure Expression, CSSE). A CSSE describes the structure of a composite service and is formally defined as follows: QServicek ::= QServicei ∘ QServicej (Sequential Structure); QServicek ::= QServicei ∥ QServicej (Parallel Structure); QServicek ::= QServicei ⊕ QServicej (Choice Structure). A Loop Structure can be unfolded into a Sequential Structure as in [2]. CSSE is illustrated in Figure 2.
[Figure: the three composition structures — Sequential (Service k ::= Service i ∘ Service j), Parallel (an "and" split of Service i and Service j) and Choice (an "or" split of Service i and Service j)]
Fig. 2. Composite Service Structure Expression
Definition 11 (Service Level Enabled Composite Service Description Model, QCService). QCService is defined as a pair <Available Services, CSSE>. As an example, the Travel Order Service can be described in this way.
5.2 Service Level Composition Semantics

In this subsection, we give the service level composition semantics for the three different composition structures. The service level composition semantics are based on the composition semantics of the QoS properties, which we give first.

5.2.1 QoS Property Composition Semantics

Different QoS properties have different composition semantics under different composition structures. We categorize the QoS properties and give a QoS property composition semantics table.

Add-able: A QoS property is Add-able if the QoS property value of a composite service is the sum of its component services' corresponding QoS property values. For instance, response time in the sequential composition structure is Add-able.

Multiply-able: A QoS property is Multiply-able if the QoS property value of a composite service is the product of its component services' corresponding QoS property values. Availability under the sequential and parallel structures is Multiply-able, because availability is a probability and the composite service is not available unless both component services are available. For example, the availability of a composite service is 72% if the availabilities of its component services are 80% and 90% respectively.

And-able: A QoS property is And-able if the QoS property value of a composite service is true if and only if both of its component services' corresponding QoS property values are true. Security is an And-able QoS property because the composite service is not secure unless both component services are secure.

Max-able: A QoS property is Max-able if the QoS property value of a composite service is the maximum of its component services' corresponding QoS property values. Response time under the parallel structure is Max-able.
Min-able: A QoS property is Min-able if the QoS property value of a composite service is the minimum of its component services' corresponding QoS property values. Throughput under the parallel and sequential structures is such a property.

Average-able: A QoS property is Average-able if the QoS property value of a composite service is the average of its component services' corresponding QoS property values. An example is reputation under the parallel and sequential structures.

Table 1. QoS property composition semantics.

Category | Composition of AttrVali, AttrValj | Sequential structure | Parallel structure
Add-able | AttrVali + AttrValj | RespTime | –
Multiply-able | AttrVali × AttrValj | Availability | Availability
And-able | AttrVali && AttrValj | Security | Security
Max-able | max(AttrVali, AttrValj) | – | RespTime
Min-able | min(AttrVali, AttrValj) | Throughput | Throughput
Average-able | avg(AttrVali, AttrValj) | Reputation | Reputation
5.2.2 Semantics of Service Level Composition

A composite service provides new service levels by composing component services with different service levels. We analyze the semantics of service level composition as follows:

1) Semantics of Sequential Composition: QServicek ::= QServicei ∘ QServicej. Then <Fi, Qi> ∘ <Fj, Qj> = <Fk, Qi ∘ Qj>, where Qi ∘ Qj is obtained by combining every alternative <ServicePricei,k, Li,k> of Qi (k < n) with every alternative <ServicePricej,l, Lj,l> of Qj (l < m): the component prices are added, and the resulting service level composes each pair of matching QoS constraints, where Attri,k and Attrj,l are the same QoS property and AttrOPi,k and AttrOPj,l are the same operator. The semantics of AttrValuei,k ∘ AttrValuej,l is given in Table 1.

2) Semantics of Parallel Composition: QServicek ::= QServicei ∥ QServicej. The construction is analogous to the sequential case, and the semantics of AttrValuei,k ∥ AttrValuej,l is also given in Table 1.

3) Semantics of Choice Composition: QServicek ::= QServicei ⊕ QServicej. Then <Fi, Qi> ⊕ <Fj, Qj> = <Fk, Qi ∪ Qj> = <Fk, {<ServicePricei,1, Li,1>, …, <ServicePricei,n, Li,n>, <ServicePricej,1, Lj,1>, …, <ServicePricej,m, Lj,m>}>: the composite service offers the union of the alternatives of its components.
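The following sketch illustrates how a composite QService could be generated from two component QServices under these semantics. The per-property composition table mirrors Table 1; the assumption that component prices add up in sequential and parallel composition, and that both levels constrain the same properties, is ours.

```python
# Sketch of service level composition (Section 5.2).
# A service level is a dict mapping a QoS property to its value;
# a QService is a list of (price, level) alternatives.

SEQ_COMPOSE = {                      # per-property semantics, as in Table 1
    "RespTime":     lambda a, b: a + b,          # Add-able
    "Availability": lambda a, b: a * b,          # Multiply-able
    "Security":     lambda a, b: a and b,        # And-able
    "Throughput":   min,                         # Min-able
    "Reputation":   lambda a, b: (a + b) / 2,    # Average-able
}

PAR_COMPOSE = dict(SEQ_COMPOSE, RespTime=max)    # Max-able in parallel

def compose(q_i, q_j, table):
    """Sequential/parallel composition: combine every pair of
    alternatives; prices are assumed to add up."""
    return [(p_i + p_j,
             {attr: table[attr](l_i[attr], l_j[attr]) for attr in l_i})
            for p_i, l_i in q_i for p_j, l_j in q_j]

def choice(q_i, q_j):
    """Choice composition: the union of the component alternatives."""
    return q_i + q_j
```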
6 Experiments and Evaluation

We have implemented this approach in OnceServiceExpress, a Web Service platform developed by the Institute of Software, Chinese Academy of Sciences. In OnceServiceExpress, we study the performance of our preference model and the effectiveness of our service level matchmaking mechanism.

6.1 Experimental Setup

Service consumers and service providers run on two machines, each with a Pentium IV 2.8 GHz CPU, 512 MB of memory, the Windows Professional operating system and JDK 5.0. The number of service consumers ranges from 1 to 200, and each service consumer has different preferences for a particular service level. The composite service is composed of four component services. The number of available services for a component service ranges from 1 to 200. Every available component service has 10 service levels.

6.2 Experiments and Evaluation

First, we study the performance of our preference model compared with WS-Policy. In the first experiment, we fix the number of available services of each component service at 200 and study how performance changes as the number of service consumers increases from 1 to 200. In the second experiment, we fix the number of service consumers at 200 and study how performance changes as the number of available services of each component service increases from 1 to 100. Figures 3(a) and 3(b) show the experimental results respectively.
[Plot: matching time (ms) of Our Preference Model vs. the WS-Policy Model against the number of consumers]
Fig. 3. a) influence of consumer number
[Plot: matching time (ms) of Our Preference Model vs. the WS-Policy Model against the number of available services]
b) influence of available services
Figure 3(a) shows that the matchmaking time using our preference model increases linearly as the number of service consumers increases, while the matchmaking time using the WS-Policy model increases exponentially. Figure 3(b) shows a similar result as the number of available services increases. As illustrated in Figure 3, the preference model presented in this paper is very efficient. Next, we verify the effectiveness of our service level matchmaking approach. We evaluate the service consumer's preference by varying the available services and by varying the service levels of the available services. Each component service has 200 available services and each available service has 10 different service levels. As shown in Figure 4(a), we vary the changing rate of available services from 1 to 100 per second. We notice that, as the changing rate increases, our service level matchmaking model maintains a high level of consumer preference without any reduction, while with the static model the service consumer preference decreases steadily. This is mainly because, when the changing rate of available services increases, the service levels provided by the composite service change rapidly, and the probability that the service level matched by the static model conforms to the service consumer's preference becomes smaller. Figure 4(b) shows the influence of changing the service levels of the available services. The changing rates of service levels are set in the range from 1 to 100 per second. We can see that with our model the consumer preference remains at a high level as the changing rate of service levels increases. Compared with Figure 4(a), the service consumer preference decreases more slowly with the static model in this situation. The reason is that the service levels provided by the composite service change more slowly when only the service levels of the available services change; as a result, the probability that the service level selected by the static model conforms to the service consumer's preference decreases slowly.
[Plot: consumer preference of our matchmaking model vs. the static matchmaking model against the number of available services changing per second]
Fig. 4. a) available services changing
[Plot: consumer preference of our matchmaking model vs. the static matchmaking model against the number of service levels changing per second]
b) levels of available services changing
As shown by the above experiments, our proposed service level matchmaking mechanism can effectively satisfy the service consumer's preference.
7 Conclusion

In this paper, we present a service consumer preference model. The preference model considers both the service level and the price of a service level by introducing the notion of acceptable price. Moreover, the model achieves better efficiency by avoiding service level enumeration. We also propose a service level automatic
generating approach for composite services. Based on the service levels of the component services and the composite service structure expression, this approach can determine the service levels of a composite service automatically and dynamically. A service level matchmaking model is then proposed. In contrast to other work in this area, our approach maintains a high level of consumer preference because it considers the changing of available component services and determines service levels dynamically. Experimental results show that this approach is efficient and effective. In future work, we plan to optimize our service matchmaking mechanism: on the one hand, we plan to study the influence boundary when an available component service or a service level becomes invalid; on the other hand, we plan to analyze the incremental influence of adding a new available service.
Acknowledgement

This work is supported by the National Natural Science Foundation of China under Grant No. 60673112, the National Grand Fundamental Research 973 Program of China under Grant No. 2002CB312005, and the High-Tech Research and Development Program of China under Grant No. 2006AA01Z19B.
References

[1] Zeng L., Benatallah B., Dumas M., Kalagnanam J., Sheng Q.Z.: Quality driven web services composition. Proc. 12th Int'l Conf. on World Wide Web (WWW), 2003
[2] Zeng L., Benatallah B.: QoS-aware middleware for web services composition. IEEE Transactions on Software Engineering, 2004, 30(5): 311-327
[3] Serhani M., Dssouli R., Hafid A.: A QoS Broker Based Architecture for Efficient Web Services Selection. Proc. of the IEEE Int'l Conf. on Web Services (ICWS'05), 2005
[4] Comuzzi M., Pernici B.: An Architecture for Flexible Web Service QoS Negotiation. Proc. of the 9th IEEE EDOC Conference, Enschede, The Netherlands, September 2005
[5] Wohladter E., Tai S., Thomas A., Rouvellou I., Devanbu P.: GlueQoS: Middleware to Sweeten Quality-of-Service Policy Interactions. ICSE 2004, Edinburgh, UK
[6] Gimpel H., Ludwig H., Dan A., Kearney R.: PANDA: Specifying Policies for Automated Negotiations of Service Contracts. Research Report RC22844, IBM T.J. Watson Research Center, Yorktown Heights, N.Y. 10598, 2003
[7] Ariba, IBM, and Microsoft: Web Services Definition Language (WSDL), March 2001
[8] IBM Corporation: WSLA language specification, version 1.0. http://www.research.ibm.com/wsla, 2003
[9] Sahai A., Machiraju V., Sayal M., van Moorsel A., Casati F.: Automated SLA monitoring for web services. Proc. of the 13th Int. Workshop on Distributed Systems, Montreal, Canada, 2002
[10] W3C: XML Query (XQuery 1.0). http://www.w3.org/XML/Query/, January 2007
[11] W3C: Web Services Policy Framework 1.5. http://www.w3.org/2002/ws/policy/, July 2006
[12] Global Grid Forum: Grid Resource Allocation Agreement Protocol, Web Services Specification. Available from http://www.ogf.org/Public_Comment_Docs/Documents/Oct-2006/WS-AgreementSpecificationDraftFinal_sp_tn_jpver_v2.pdf, October 2006
[13] OASIS: Universal Description, Discovery and Integration (UDDI). http://www.uddi.org/
[14] Lamparter S., Ankolekar A., Grimm S., Studer R.: Preference-based Selection of Highly Configurable Web Services. Proc. of the 16th Int. World Wide Web Conference (WWW'07), Banff, Canada, May 2007
[15] ONCEServiceExpress. www.once.com.cn
Ontological Support in eGovernment Interoperability through Service Registries Yannis Charalabidis National Technical University of Athens, 9 Iroon Polytechniou, Athens, Greece [email protected]
Abstract. As electronic Government is increasing its momentum internationally, there is a growing need for the systematic management of the newly defined and constantly transforming services. eGovernment Interoperability Frameworks usually cater for the technical standards of eGovernment systems interconnection, but do not address service composition and use by citizens, businesses or other administrations. An Interoperability Registry is a system devoted to the formal description, composition and publishing of traditional or electronic services, together with the relevant document and process descriptions in an integrated schema. Through such a repository, the discovery of services by users or systems can be automated, resulting in an important tool for managing eGovernment transformation towards achieving interoperability. The paper goes beyond the methodology and tools used for developing such a system for the Greek Government, covering its population with services and documents, its application, and the extraction of useful conclusions for electronic Government transformation at a global level. Keywords: eGovernment Interoperability, Enterprise modeling for interoperability, Metadata for interoperability, eGovernment Ontology
1 Introduction

At the dawn of the 21st century, where system complexity, multiplicity and diversity in the public sector are posing extreme challenges to common interoperability standards, eGovernment Interoperability Frameworks (eGIFs) stand as a cornerstone for the provision of one-stop, fully electronic services to businesses and citizens [16]. Such interoperability frameworks aim at outlining the essential prerequisites for joined-up and web-enabled Pan-European e-Government Services (PEGS), covering their definition and deployment over thousands of front-office and back-office systems in an ever-extending set of public administration organizations.
Embracing central, local and municipal government, eGovernment Interoperability assists Public Sector modernization at the business, semantic and technology layers. As more and more complex information systems are put into operation every day, the lack of interoperability appears as the most long-lasting and challenging problem for governmental organizations — a problem that emerged from proprietary development of applications, unavailability of standards, and heterogeneous hardware and software platforms.
2 Background and Scope

In order to effectively tackle the transformation of Public Administration, the European Union has set key relevant priorities in its "i2010 eGovernment Action Plan" [15]. At national level, most European Union Member States have produced their own National Digital Strategies (e.g. the Greek Digital Strategy 2006-2013 [25] or the Estonian Digital Strategy [11]), which include measures and strategic priorities aimed at developing eGovernment. Within this context, most countries have tried to face the interoperability challenge with the adoption of national eGIFs covering areas such as data integration, metadata, security, confidentiality and delivery channels, which fall into the technical interoperability layer. Such frameworks have issued "sets of documents" guiding system design, but have not to date developed appropriate infrastructures, such as repositories of XML schemas for the exchange of specific-context information throughout the public sector — observed only partially in the United Kingdom's e-GIF Registry [6] and the Danish InfoStructureBase [8]. Furthermore, as shown in recent eGovernment Framework reviews [7, 14], there exists no infrastructure proposal for constructing, publishing, locating, understanding and using electronic services by systems or individual users. In order to take full advantage of the opportunities promised by e-Government, a second-generation interoperability frameworks era, launching "systems talking about systems" and addressing issues related to unified governmental service and data models, needs to commence. As presented in the next sections of this paper, such an Interoperability Registry infrastructure should consist of:
- An eGovernment Ontology, able to capture the core elements and their relations, thus representing services, documents, providing organizations, service users, systems, web services and so on.
- A metadata schema, extending the eGovernment Ontology and providing various categorization facets for the core elements, so as to cover for information insertion, structuring and retrieval.
- Formal means for describing the flow of processes, either still manual or electronic, and the structure and semantics of various electronic documents exchanged among public administrations, citizens and businesses.
- An overall platform integrating data storage, ontology management, enterprise modelling and XML authoring, data input and querying mechanisms as well as access control and presentation means.
- The population of the eGovernment Ontology database with information about administrations, their systems, services and documents is an important step. Since this task usually involves gathering huge amounts of information, an initial set of data should be considered first: this way, population achieves a critical mass while automatic knowledge acquisition tools are being developed.
3 Defining the eGovernment Ontology

The representation means of the proposed system should first capture the core elements of the domain, together with their main relationships. Most existing approaches for eGovernment ontologies cover neighboring domains, such as Public Administration Knowledge [12, 28], Argumentation in Service Provision [22], eGovernment projects [23], or types and actors in national governments [24]. As depicted in Figure 1, the proposed eGovernment Ontology provides for the representation of the following core elements:
- Services, of various types, provided by Administrations towards citizens, businesses or other administrations.
- Documents, in electronic or paper form, acting as inputs or outputs to services.
- Information Systems, being back-office, front-office or web portals providing the electronic services.
- Administrations, nested at infinite hierarchical levels, being ministries, regions, municipalities, organizations or their divisions and departments.
- Web Services, being electronically provided services, either final or intermediate ones (contributing to the provision of final services).
- XML (eXtensible Markup Language) Schema Definitions, for linking the formal representation of data elements and documents.
- BPMN (Business Process Modelling Notation) Models, for linking services with their workflow models.
- WSDL (Web Services Definition Language) Descriptions, linking Web Services with the respective systematic, machine-readable description of their behaviour.
[Figure: the core ontology elements — Administration, Information System, Document, Service, Web Service, XML Schema, BPMN Model and WSDL Description — connected by the relations isControlledBy, isPartOf, operates, issues, provides, isProvidedBy, isSupportedBy, relatesTo and hasDescription]
Fig. 1. Core Elements of the Ontology
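A fragment of this core ontology can be prototyped directly from Figure 1. The sketch below uses rdflib with a made-up namespace URI, since the paper does not publish one, and encodes only some of the relations shown.

```python
# Sketch of the core eGovernment Ontology elements (Figure 1),
# using rdflib; the namespace URI is illustrative, not official.
from rdflib import Graph, Namespace, RDF, RDFS

EGOV = Namespace("http://example.org/egov-ontology#")  # assumed URI
g = Graph()

# Core classes
for cls in ("Administration", "InformationSystem", "Document",
            "Service", "WebService", "XMLSchema", "BPMNModel",
            "WSDLDescription"):
    g.add((EGOV[cls], RDF.type, RDFS.Class))

# A few of the relations from Figure 1
relations = [
    ("operates",       "Administration", "InformationSystem"),
    ("issues",         "Administration", "Document"),
    ("isProvidedBy",   "Service",        "Administration"),
    ("isSupportedBy",  "Service",        "InformationSystem"),
    ("relatesTo",      "Service",        "WebService"),
    ("hasDescription", "Document",       "XMLSchema"),
]
for prop, dom, rng in relations:
    g.add((EGOV[prop], RDF.type, RDF.Property))
    g.add((EGOV[prop], RDFS.domain, EGOV[dom]))
    g.add((EGOV[prop], RDFS.range, EGOV[rng]))

print(g.serialize(format="turtle"))
```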
Additional objects complementing the core ontology elements are Citizens (as various types of citizens requesting services), Enterprises (both as service recipients and as contractors for government projects), Legal Framework Elements (that guide service provision), Life Events and Business Episodes that may trigger a service request, and Technical Standards affecting the provision of electronic services.

3.1 Metadata Standards for Multi-Faceted Classification

The eGovernment Ontology is supported by numerous categorization facets and standardized lists of values for systematically structuring database contents during the population phase, including types of services and documents (according to the Government Category List (GCL) categorization). All the core elements of the eGovernment ontology have predefined metadata, so that their description, search and retrieval can be assisted. The implemented metadata structure is based on, and extends, a number of existing metadata structures in literature and practice, namely:
- The Dublin Core metadata standard [10], which provides a generic set of attributes for any government resource, be it document or system, including various extensions [20].
- The European Interoperability Framework – EIF (Version 1.0) [16], published by the IDABC Programme.
- The United Kingdom e-Government Interoperability Framework [3] and its relevant specifications (e.g. the e-Government Metadata Standard [4], the e-Government Strategy Framework Policy and Guidelines [5], and the relevant Schema Library [6]).
- The German Standards and Architectures for e-Government Applications (SAGA) Version 3.0 (October 2006) [17], which identifies the necessary standards, formats and specifications, sets forth conformity rules and updates them in line with technological progress.
- The Danish Interoperability Framework (Version 1.2.14) [9], released in 2006, which includes recommendations and status assessments for selected standards, specifications and technologies used in e-government solutions, while the collaboration tool InfoStructureBase [8] includes an international standards repository containing business process descriptions, data model descriptions, interface descriptions, complex XML schemas and schema fragments.
- The Belgian Interoperability Framework (BELGIF) [2], which is built on a wiki collaborative environment and has released recommendations on web accessibility and on the realization of XML Schemas, apart from a list of approved standards.
The resulting metadata definitions cover all the important facets for classifying and querying the elements of the ontology, so as to provide answers to important questions about the status of electronic provision of services, the existence and structure of documents, the relation of services to public administrations, the characteristics of the various governmental information systems, and so on. Table 1 shows the metadata definitions for the Service element, indicating which of them are represented as strings, numbers, lists of values or structured elements themselves.

Table 1. Services Metadata

Field | Description, Type
Code | The Service Code, Unique, String
Title | The Service Title, Unique, String
Providing Administration | The Administration (organization, department or office providing the service), Element
Engaged Administration | Other Administrations taking part in the service provision, Multi-Element
Final Service | Yes/No, if it is a final service giving output to the citizens or businesses, List
Beneficiary | Citizens or Businesses or subtypes of them, Element
Type | The Service Type (e.g. Registration, Benefit, Application, Payment, etc.), List
Category | Service Category, according to GCL (e.g. social service, taxation, education), Element
Life Event | The associated Life Event, Element
Business Event | The associated Business Event, Multi-Element
Legal Framework | The applying legal framework for the service, Multi-Element
Ways of Provision | Manual, Internet, SMS, I-TV, etc., Multi-List
Electronic Provision Level | Level of electronic provision (1 to 4), according to the EC standardization
Multilingual Content | Languages in which the content for the service exists, Multi-List
Manual Authentication Type | Type of authentication needed when the service is provided in a manual way (e.g. presence in person, id-card, proxy), List
Electronic Authentication Type | Type of authentication needed when the service is provided electronically (e.g. username/password, token, digital signature), List
Frequency | Frequency with which the service is requested, by means of High-Medium-Low, List
Web Site | The URL of the portal providing the service, String
International Policy | Yes/No, if the service is included in the i2010 20 core-services set, List
National Policy | Yes/No, if the service is included in the National Digital Strategy core-services set, List
Information Source | The source(s) of information for the service, String
Date of last Update | The date of last sampling, Date
Relevant, extensive metadata description fields exist for Documents, Administrations, Information Systems and Web Services, providing an indication of the descriptive power of the Ontology.
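As an illustration of how a Service record could be represented in code, the sketch below maps a subset of the Table 1 fields onto a Python dataclass; the field names and enumerations are transliterations of the table, not an official schema.

```python
# Sketch: a subset of the Table 1 Service metadata as a dataclass.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Frequency(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class Service:
    code: str                           # unique Service Code
    title: str                          # unique Service Title
    providing_administration: str
    engaged_administrations: List[str] = field(default_factory=list)
    final_service: bool = True          # gives output to citizens/businesses
    service_type: str = "Application"   # e.g. Registration, Benefit, Payment
    gcl_category: Optional[str] = None  # Government Category List entry
    ways_of_provision: List[str] = field(default_factory=lambda: ["Manual"])
    electronic_provision_level: int = 1 # 1..4, EC standardization
    frequency: Frequency = Frequency.LOW
    web_site: Optional[str] = None

# Illustrative record, loosely based on the VAT example of Section 4:
vat = Service(code="SRV-001", title="VAT Declaration",
              providing_administration="Tax Authority",
              ways_of_provision=["Internet"], electronic_provision_level=4,
              frequency=Frequency.HIGH)
```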
4 Combining Processes and Data

The description of Services and Documents cannot be complete without a formal representation of the service flows and of the documents' internal structure. The importance of a formal, combined description of services and document schemas has been properly identified in the current literature [22, 13]. Business modeling and analysis of the processes and the public documents that take part in their execution is done using the BPMN notation [18] and the ADONIS modeling tool, provided by BoC International [21]. On top of the ADONIS tool, integration with the eGovernment Ontology (Services, Documents, Administrations) has been implemented, ensuring a complete, interoperable data schema. As shown in Figure 2, eGovernment Processes are modeled using the BPMN notation, resulting in easy identification of the documents to be exchanged, the decisions taken during the service flow by citizens/businesses or administrations, and the specific activities or information systems that take part in the overall process execution — in this case the electronic VAT declaration from an enterprise towards the Tax Authority.
Fig. 2. VAT Declaration Model
Design of the data schemas involved in the execution of the processes under consideration has been performed with the use of the UN/CEFACT CCTS methodology [27], for the creation of common components among the various governmental documents that have been identified through process modeling. Then, following modeling and homogenization of data components, Altova XML authoring tools [1] have been used for defining the final XSD descriptions representing business documents of all types. The final XSD files have been linked with the respective governmental documents of the ontology, resulting in a comprehensive and easy-to-navigate semantic network structure.
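As a small illustration of the last step — linking XSD descriptions to documents — the sketch below validates an XML document instance against one of the produced schemas using lxml; the file names are placeholders, not actual registry artifacts.

```python
# Sketch: validating a governmental document against its XSD (lxml).
from lxml import etree

schema = etree.XMLSchema(etree.parse("VATDeclaration.xsd"))    # placeholder name
document = etree.parse("vat_declaration_instance.xml")         # placeholder name

if schema.validate(document):
    print("document conforms to the schema")
else:
    for error in schema.error_log:
        print(error.message)
```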
5 The Interoperability Registry Platform

The architecture that implements the Interoperability Registry comprises three layers: (a) the Web-based and UDDI (Universal Description, Discovery and Integration) interfaces for various groups of users, (b) the tools layer, including ontology management and process and data modeling, and (c) the information repository for interconnected data elements, process models, XML schemas and Web Services descriptions. These three layers, as shown in Figure 3, are integrated through a Relational Database Management System and the Common Access Control and Application Engine.
[Figure: the Registry Web Site (public access: citizens and businesses), Registry Intranet (limited access: public administrations and portal builders) and Registry UDDI Interface (limited access: systems) sit on the Common Access Control and Application Engine; beneath are the process modeling tools, the ontology management, population and reporting tools, and the XML management tools (incl. COTS software), over a Relational Database Management System holding BPMN process models; services, documents, systems and organisations metadata; XML schemas and core components; and Web Services]
Fig. 3. Platform Architecture
The front-end platform components are the following:
- The Registry Web Site, found within the Greek eGIF Web Site [26], which publishes the various documents of the eGovernment Framework but also gives access to citizens and businesses for publicly available data.
- The Registry Intranet, accessible to pre-selected public administrations and portal builders, which gives access to the Registry Tools (processes, ontology, XML).
- The Registry UDDI interface, where administrations publish their Web Services or find existing, available Web Services to use through their information systems, constructing truly interoperable, one-stop services.
The Tools layer consists of the process modelling facilities, based on the ADONIS engine, the XML management facilities, based on the Altova XML platform, and the custom-developed ontology management, data entry and reporting tools that integrate all representations and models. Finally, the Data Storage layer incorporates connected database schemas for the ontology instances, the Web Service descriptions in WSDL, the process models, and the XML schemas and Core Components. The development and integration of the whole platform has been performed with the Microsoft Visual Studio .NET suite, using the ASP 2.0/AJAX development paradigm. A parallel installation has also been performed using Java/J2EE/MySQL components.
6 Population of the Repository

Initial population of the Interoperability Registry Repository was greatly assisted by the existence of data in electronic form, through the Greek Ministry of Public Administration. As shown in Table 2, even for a country close to the average European Union Member State population (11,000,000 citizens), the size of the domain is significant, involving thousands of governmental points, services and document types. Furthermore, a plethora of information systems are currently under development during the new Greek Digital Strategy plan, aiming to achieve full electronic operation of the State by 2013.

Table 2. Size of the Domain in Greece

Organisational Aspect: 18 ministries, 13 prefectures, 52 districts, 1,024 municipalities, 690 public sector organizations; 2,500 Governmental "Points of Service"
Services and Data Aspect: 3,000 non-interoperable Service Types; 4,500 Document Types exchanged; 1,000 IT companies supporting IT systems
Systems Aspect: 300 Central Government Internet Portals; 1,000 Municipal Government Internet Portals; 2,500 Public Administration Back Office Systems
Users Aspect: 750,000 Enterprises (small, medium and large); 11,000,000 Citizens; 18,000,000 Tourists per year
Population of the repository was achieved through the following automated and semi-automated activities:
- Automated import of more than 1,797 administrations, including ministries, prefectures, districts, municipalities and public sector organisations.
- Automated import of 1,009 public service definitions, with core metadata descriptions and frequency indications, stemming out of 3,000,000 service requests by citizens and businesses during the last year.
- Modelling of the core-100 governmental services (including all i2010 services and the services amounting to 85% of the yearly service requests).
- Modelling of the core XML schemas and WSDL for Web Services to be developed — an activity that is still going on.
The resulting platform is now being maintained and further populated with the assistance of engaged public administrations. The acceptance of the Interoperability Registry by the Public Administration is following a three-stage approach: (a) the core team, including the Ministry of Public Administration and the National eGIF team, (b) the main Public Sector stakeholders, including key ministries, organisations and local administrations and (c) eGovernment project managers and implementation teams, from the public and private sector. Currently, registry users, with various levels of access, exceed 100.
7 Conclusions

The new Greek Interoperability Registry presented in this paper introduces a new system (not a paper-based specification) that will interact with e-Government portals and back-office applications, administration stakeholders, businesses and citizens, guiding eGovernment transformation and ensuring interoperability by design, rework or change. The initial application of the system, as well as the relevant evolutions of other European eGIFs, indicate that new perspectives should from now on be taken into consideration in eGovernment Frameworks, analysed as follows:
x
Importance and adequate effort should be put in defining standard, formally described electronic services for businesses and citizens, thus providing clear examples to administrations and service portal developers. The paper-based specification should give way to system-based presentation of the framework, incorporating service descriptions, data definitions, unified domain representation ontologies and metadata in a common repository. Organisational interoperability issues should be supported by a more concrete methodology of how to transform traditional services to electronic flows, with the use of decision-making tools. In this direction, the Interoperability Registry infrastructure presented can be of great assistance as it contains all the necessary information in a comprehensive, welldefined and connected semantic network. The collaboration among European e-Government Interoperability Frameworks is particularly beneficial for the ongoing efforts of individual
Ontological Support in eGovernment Interoperability through Service Registries
299
countries, since it ensures that lessons from the pioneers’ experience are learnt and that the same mistakes will not be repeated. Future work along the Greek eGIF and the Interoperability Registry includes both organisational and technical tasks, since the proper maintenance and usage of the registry is now the crucial issue. So, efforts will be targeting the following objectives: x x x
Binding with the Central Governmental Portal for citizens and businesses, so that the registry can by used for locating and enrolling to electronic services. Completion and publication of additional XML Schemas based on Core Components methodology. Initial training of key staff within administrations for using and extending the registry.
Finally, is has been identified that no system can work without the engagement of the public servants: more effort is to be put towards encouraging stakeholders to interact with the registry and among themselves, building synergies across the public sector authorities in a truly interdisciplinary way – hopefully extending the eParticipation features of the registry.
References [1] [2]
Altova XML-Spy Authorware Tools, www.altova.com Belgian Interoperability Framework, Belgif, (2007), http://www.belgif.be/index.php/Main_Page [3] Cabinet Office – e-Government Unit: e-Government Interoperability Framework, Version 6.1, Retrieved February 5, 2007 from http://www.govtalk.gov.uk/documents/e-GIF%20v6_1(1).pdf [4] Cabinet Office – Office of the e-Envoy, e-Government Metadata Standard, Version 3.1, Retrieved February 5, 2007 from http://www.govtalk.gov.uk/documents/eGMS%20version%203_1.pdf [5] Cabinet Office – Office of the e-Envoy, Security - e-Government Strategy Framework Policy and Guidelines Version 4.0, Retrieved Feb. 2007 from http://www.govtalk.gov.uk/documents/security_v4.pdf [6] Cabinet Office, UK GovTalk Schema Library, http://www.govtalk.gov.uk/schemasstandards/schemalibrary.asp, (2007) [7] Charalabidis Y., Lampathaki F., Stassis A.: “A Second-Generation e-Government Interoperability Framework” 5th Eastern European e|Gov Days 2007 in Prague, Austrian Computer Society, April 2007 [8] Danish e-Government Project, InfostructureBase (2007), http://isb.oio.dk/info [9] Danish Interoperability Framework, Version 1.2.14, Retrieved February, 2007 from, http://standarder.oio.dk/English/Guidelines [10] Dublin Core Metadata Element Set, Version 1.1, Retrieved January 25, 2007 from http://dublincore.org/documents/dces/ [11] Estonian Information Society Strategy 2013, ec.europa.eu/idabc/en/document/6811/254, 30 November 2006 [12] Fraser J, Adams N, Macintosh A, McKay-Hubbard A, Lobo TP, Pardo PF, Martinez RC, Vallecillo JS: “Knowledge management applied to e-government services: The
300
[13]
[14]
[15] [16]
[17]
[18]
[19]
[20]
[21] [22] [23] [24]
[25] [26] [27]
[28]
Yannis Charalabidis
use of an ontology”, Knowledge Management In Electronic Government Lecture Notes In Artificial Intelligence 2645: 116-126, Springer-Verlag, 2003 Gong R., Li Q., Ning K., Chen Y., O'Sullivan D: “Business process collaboration using semantic interoperability: Review and framework”, Semantic Web - ASWC 2006 Proceedings, Lecture Notes In Computer Science, Springer-Verlag, 2006 Guijarro L.: “Interoperability frameworks and enterprise architectures in egovernment initiatives in Europe and the United States”, Government Information Quarterly 24 (1): 89-101, Elsevier Inc, Jan 2007 The i2010 eGovernment Action Plan, European Commission, http://ec.europa.eu/idabc/servlets /Doc? id =25286, 2007 IDABC, European Interoperability Framework for pan-European e-Government Services, Version 1.0, Retrieved February 5, 2007 from http://europa.eu.int/idabc/en/document/3761 KBSt unit at the Federal Ministry of the Interior, SAGA Standards and Architectures for e-Government Applications Version 3.0, Retrieved February 5, 2007 from http://www.kbst.bund.de/ OMG Business Process Modelling Notation (BPMN) Specification, Final Adopted Specification http://www.bpmn.org/Documents/OMG%20Final%20Adopted%20BPMN%2010%20Spec%2006-02-1.pdf Seng JL, Lin W: “An ontology-assisted analysis in aligning business process with ecommerce standards”, Industrial Management & Data Systems 107 (3-4): 415-437, Emerald Group Publishing, 2007 Tambouris E, Tarabanis K: “Overview of DC-based e- -driven eGovernment service architecture”, Electronic Government, Proceedings Lecture Notes In Computer Science 3591: 237-248, Springer-Verlag, 2005 The ADONIS Modeling Tool, BoC International, http://www.boc-group.com/ The DIP project eGovernment Ontology http://dip.semanticweb.org/documents/D9-3improved-eGovernment.pdf, 2004 The Government R&D Ontology, http://www.daml.org/projects/integration/projects20010811 The Government type Ontology, http://reliant.teknowledge.com/DAML/Government.owl, CIA World Fact Book, 2002 [12] The Greek Digital Strategy 2006-2013 (2007), http://www.infosoc.gr/infosoc/enUK/sthnellada/committee/default1/top.htm [2] The Greek eGIF Website, available at http://www.e-gif.gov.gr [28] UN/CEFACT Core Components Technical Specification, Part 8 of the ebXML Framework, Version 2.01, Retrieved January 25, 2007 from http://www.unece.org/cefact/ebxml/CCTS_V2-01_Final.pdf Wimmer, M.: “Implementing a Knowledge Portal for e-Government Based on Semantic Modeling: The e-Government Intelligent Portal (eip.at)”, Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06) Track 4 p. 82b, 2006.
Towards Secured and Interoperable Business Services A. Esper, L. Sliman, Y. Badr, F. Biennier LIESP, INSA-Lyon, F-69621, Villeurbanne, France {alida.esper, layth.sliman, youakim.badr, frederique.biennier}@insa-lyon.fr
Abstract. Due to structural changes in the market, from mass customisation to an increased interest in product-services management, an exponential growth of a service ecosystem will emerge in the coming years. This shift in the economy creates a need for instant and on-demand collaborative organisations, which involve radical changes in the organizational structure of enterprises, increasing the need for business interoperability. Unfortunately, existing enterprise engineering approaches and information systems technologies lack the intrinsic agility and adaptability features required by these service-based collaborative organisations. To overcome these limits, we introduce a new approach called the Enterprise Urbanism Concept to reorganize enterprises into sets of interoperable industrial services. This new approach relies on the extension of the concept of information system urbanism in order to take industrial constraints into account while reorganising service business units. Nevertheless, despite this intrinsic partner reorganisation, instant and on-demand collaborative organisations can be limited by a lack of trust between partners. To overcome these limits, we reinforce our approach by clearly assessing contextual security policies based on the patrimony of a company and on technological security components. These components can be dynamically added, with respect to the collaboration context, when organising a consistent chain of industrial services. Keywords: Security issues in interoperability, Interoperable enterprise architecture, Service oriented Architectures for interoperability, Business Process Reengineering in interoperable scenarios, Enterprise modeling for interoperability
1 Introduction

The need for increased customisation and service-oriented products has forced enterprises to adapt their organisational strategy. While focusing on their core business competencies, outsourcing and collaborative strategies must respond to market requirements, for example being able to provide a high-quality service level to consumers. These organisational trends enhance the enterprise's agility, i.e. its ability to quickly respond to structural changes, client requests,
technological or activity changes, and supplier management [24, 30], and to reduce waste, leading to a lean manufacturing organisation [42]. These organisations heavily use Information and Communication Technologies (ICT) [34], increasing the call for IT interoperability. Their performance level is related to efficient information sharing systems, so that deviation risks can be reduced [18]. Moreover, the increasing association between products and services creates a need to organize instant collaborations to fulfil a customer's request. Consequently, an exponential growth of a ubiquitous ecosystem of services will emerge in the coming years. The ecosystem of services in the economy will rely on software services and support which span multiple organizations and service providers. The dynamic interaction between services creates chains of organizations that will provide agile support for business applications, governments or, simply, the consumer. This global service vision leads us to identify two major statements:
1. As such service ecosystems rely on a collaborative industrial organisation, service models must be designed to take industrial constraints into account. Furthermore, firms must be re-organised to provide consistent and interoperable industrial services.
2. Building a dynamic service chain depends on the capability to link services together in arbitrary ways in order to meet customer needs. Nevertheless, the potential lack of trust between partners can be an obstacle to the global adoption of such distributed architectures. To overcome this limit, particular attention must be paid to both security requirements and management while specifying the context of the service and user preferences. The propagation of these constraints must be included in service compositions and orchestrations to respond to a particular service need.

The paper is organized as follows: after defining the extended context of services in Section 2, we introduce a new service model in Section 3 to specify business interoperability constraints. Lastly, in Section 4, we define how services can be dynamically orchestrated through the combination of adapted technological components that fulfil context requirements.
2 Challenges in the Ecosystem of Services

The integration of services when selling a product increases the need for inter-firm collaborations and leads to the organization of industrial activities according to dynamic added-value networks instead of Michael Porter's traditional value chain model [40]. Such a vision involves being able to dynamically define the way that global services can be linked together in industry. Yet, the definition of a global service is in contrast with the traditional enterprise organisation in terms of business sectors (i.e. sales and production) supported by underlying corporate information systems. This approach imposes an organization of the enterprise according to generic models of business sectors that can be customised. Unfortunately, this approach often leads to monolithic top-down engineering
practices, gathering different engineering tools and methodologies in a common reference architecture [27]. In addition, it creates a vertical organisation of the enterprise in terms of customer relationship, supply management and production systems. This vertical organization, fully impacted when organising a dynamic collaboration, exhibits a poor agility level.

2.1 Interoperable Business and Production Processes

Moreover, corporate information systems depend on a wide variety of technology-based support systems (i.e. DBMS, programming languages, etc.) and various business sectors. Examples include Enterprise Resource Planning, Product Lifecycle Management, Manufacturing Execution Systems and Supply Chain Management systems. Such corporate information systems exhibit a high level of technological complexity and lack system interoperability. They involve information redundancy, which causes inconsistencies and decreases the flexibility and scalability of their architectures. Although leaner approaches such as workflow management systems can be fruitfully used to support both business and production processes [35], these on-demand tools may adversely increase Information System complexity and inconsistency. We attempt to solve these problems by recalling the Information System Urbanization approach [33], which organises the Information System into different levels. The urbanization approach separates activities from the Information System and the technical infrastructure [12]. Inversely, the introduction of Service-Oriented Architecture (SOA) enables a flexible design which produces a reactive environment of applications interconnected through an applicative bus that orchestrates processes [38]. Coupling the urbanization paradigm and Service-Oriented Architecture in the design of Information Systems will improve the consistency and the agility of the firm in reorganizing its activities. It is the first step towards organisational business interoperability between collaborative firms. This coupling approach focuses on activities as fine-grained components of information and production systems with respect to the enterprise functional organisational chart, without taking into account the logic of production processes or organisational constraints.

2.2 Propagation of Security Requirements

A lack of trust between partners can limit the emergence of collaborative organizations and weaken collaboration strategies in the way that partners share information. As information represents an important part of the enterprise patrimony [31, 15], security requirements must be taken into account while designing both business processes and the physical implementation [7]. The definition of a security policy is often limited to the implementation of a secure infrastructure. For this purpose, methodologies and industrial standards have been defined since the 1980s to propose international levels of certification, such as the DoD Rainbow series [19], ITSEC [21] and Common Criteria [16]. Nevertheless, this reduced point of view provides technical solutions to threats and vulnerabilities caused by the technology. Early risk analysis approaches, such as
EBIOS [17] and MEHARI [14], allow the design and implementation of safe architectures of consistent and reliable systems based solely on technological solutions, as proposed in SNA [36] and OCTAVE [2]. Organisational vulnerability must also be considered in defining a consistent security policy, as proposed in the ISO 17799 standard [28]. Few approaches, such as UML-SEC [29] and web services, deal with the integration of security requirements when designing processes. Security requirements in web services can be achieved at the technical level with respect to a reference architecture [25] that integrates different standards such as WS-Security [37] and WS-Federation [26] for securing SOAP messages and federating different security realms. The security policies of infrastructures and processes suffer from being locally limited to corporate information systems [20]. Policy requirements and implementation are generally bounded by the boundary of each firm. The dominant logic of the ecosystem of services implies global security roadmaps among chains of collaborative services. Security requirements and constraints should be propagated when orchestrating chains of distributed services. We attempt to deal with the problem of security propagation by examining Digital Rights Management architectures [23]. These architectures provide long life-cycle protection of audio and video contents based on cryptographic techniques and associated players. Adapting such an approach to a dynamic ecosystem of services involves designing services to support contextual dynamic binding. The technology of Hippocratic databases [1], which respects the privacy of the data it manages, could ensure that disclosure concerns (i.e. privacy policy, security policy, legislation, etc.) are respected. Digital Rights Management architectures and Hippocratic Databases support security policies and access rights management [32], including user preferences with respect to the industrial standard Platform for Privacy Policies (P3P) [13]. In conclusion, the ecosystem of services urges solutions to build on-demand, secured and interoperable service systems. In the next section, we introduce an extended service model to embed both business interoperability and security constraints. We then pay special attention to defining how contexts can be used to implement such a service framework. We emphasize the contextual binding and orchestration monitoring functions.
3 Secure Interoperable Business Organisation

To organise interoperable business organisations, our work relies on the Enterprise Urbanism concept [5], which splits the enterprise organisation into several "business areas" coupled with the production organisation. In this organisation, the evolution of the corporate information system is guaranteed by the flexible and incremental design of processes [4]. In the enterprise urbanisation approach, the specification of business areas integrates industrial flows, competencies, decisions and information viewpoints. Because of the correlation between business areas and the production organisation, a given full process, including customer relationship management, production and purchase activities, belongs to the same autonomous
area [39]. This process can easily be integrated in dynamic collaborative organizations by binding and orchestrating industrial services.

3.1 Interoperability Constraints

While traditional information systems focus on management and business aspects, we pay particular attention to the industrial aspect, to efficiently support co-production constraints and build an interoperable business organisation. This leads us to take several interoperability constraints into account:
- Organisational Interoperability: means that enterprises must share the same goal and have compatible management strategies. This interoperability level is also related to the security policy and security context management, so that a consistent end-to-end secured organisation can be built.
- Industrial Interoperability: means that enterprises must share information regarding products and production processes, such as the process maturity level and the required real time of execution.
- Technical Interoperability: means that the different applications of the information system can exchange information.
3.2 Interoperable Business Services

Based on these different constraints, we propose to extend the service model proposed in our previous work [8] to design a multi-level architecture for building interoperable business services (Fig. 1):
x
The enterprise level: consists of the organisation of traditional business areas. The business service level: consists of sets of shared services that take into account both organisational and industrial interoperability constraints. These two levels are interconnected by means of urbanisation composition of services which gather different groups of technology tools and the routing system. Roughly speaking, the group of technology tools is used to identify consistent industrial units. The routing system connects each industrial business area to the traditional “vertical” business area. The collaborative level: defines different collaboration policies and trust rules in order to simplify both the contractual enactment phase and the dynamic building of adapted security policies. Policy requirements depend on partners engaged in the chain of services. The technological level: denotes shared and accessible interfaces to connect the industrial services to the implemented services. This level includes the contextual composition and orchestration engine, the Enterprise Service Bus (ESB) as well as connectors to components of the information system.
306
A. Esper, L. Sliman, Y. Badr, F. Biennier
[Figure: the traditional enterprise organisation (supply, customer, accounting, production and design business areas) is linked through group-technology tools, business routing and urbanisation to the industrial services (services A-E). These are composed and orchestrated under policies/preferences, a context manager and security components. A B2B gateway and the service bus (mediation, transformation, routing), together with application access and data access services, a business service choreographer and the service routing and business service directories, connect external service providers’ SOAP service requests to the IT system (ERP, SCM, CRM, mainframe, Web).]
Fig. 1. Multi-Level Architecture of Interoperable Industrial Services
3.3 The Service Extended Model

The organisation of interoperable industrial services relies on a multi-interface extended service model (see Fig. 2):

• The industrial service: describes the competencies of the service in terms of what the service can achieve (i.e. a “virtual” service product, a “real” manufactured product or a “product-service” offer connecting a manufactured product to the related value-added services). This conceptual service includes a globally accessible interface defining exactly what the service will achieve (i.e. a stand-alone product or a service-included product).
• The manufacturing interface: includes two views describing how the manufacturing service will be achieved:
  - Product view: gathers information on materials and products (i.e. bills of material, management policy, quality indicators, etc.) as well as production management strategies. As far as value-added services are concerned, the required resources, including human, software and hardware resources, are described in a bill of resources together with the management strategies which define the way resources will be scheduled.
  - Process view: describes the routing specification, the process qualification according to the CMMI classification [41] and potential resource constraints used when allocating the manufacturing service to resources.
• The control interface of the Service Level Agreement: real-time constraints are used to describe the global QoS constraints to be included in SLAs, whereas safety constraints are used to define pre-emptive or non-pre-emptive services. This interface is coupled with a dynamic monitoring system to dynamically define adapted QoS monitoring features [9].
• The security control interface: describes the global protection that should be applied to both data and processes, depending on perceived risks and according to partner trust levels. In order to take into account threats related to the pervasive infrastructure of services, a particular facet devoted to exchange protection is added.
• The implementation interface: indicates the semantic service to be executed by the implementation level. Adding this “virtual service layer” before composing and orchestrating “concrete services” allows us to remain independent from the IT system organisation, and protects it as well, since only the “embedded” service interface is published [10]. It consists of the conceptual service description and contextual policy information (i.e. security, QoS, etc.) defined by WS-Policy based ontologies [11]. Then, technological services, which are devoted to information access, monitoring and security, are added before orchestrating industrial services.

[Fig. 2, not reproduced: the multi-interface extended service model, relating the service competencies (“what will be achieved”) to the manufacturing interface, the SLA interface (pattern, indicator, acquisition), the exchange protection facet and the management data and process views.]
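To make the extended service model more tangible, the following sketch (not part of the original proposal; all class and field names are illustrative assumptions) renders the interfaces described above as plain data structures:

```python
# Illustrative rendering of the multi-interface extended service model;
# all class and field names are assumptions, not the normative model [8].
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ProductView:
    bill_of_material: List[str]          # materials and products
    management_policy: str
    quality_indicators: Dict[str, float]
    bill_of_resources: List[str]         # human, software and hardware resources

@dataclass
class ProcessView:
    routing_specification: List[str]
    cmmi_level: int                      # process qualification, cf. [41]
    resource_constraints: List[str]

@dataclass
class ExtendedService:
    competencies: str                                 # what will be achieved
    manufacturing: Tuple[ProductView, ProcessView]    # how it will be achieved
    sla_control: Dict[str, float]                     # QoS / real-time constraints
    security_control: Dict[str, str]                  # protection per facet
    implementation: str                               # semantic service exposed
```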
3.4 The Binding Mechanism

Binding industrial services is achieved according to the following steps (a sketch of this staged selection is given after the list):

1. The “semantic” selection of services first defines potential candidates. The service competencies description is associated with the public process interface without specifying exactly what will be processed or the granularity level of the returned information.
2. Industrial interoperability is then taken into account by refining the selection according to process maturity and management rules, so that a consistent service chain can be built.
3. The contextual information related to the service consumer is used to select the convenient “core process” associated with the private part of the process, adapting the granularity level of the returned information to the exact context. The pairing of process and service description is only used to publish a visible part of the business service.
4. Lastly, Quality of Service (QoS) requirements are confronted with the current context to define which candidates can be selected and which monitoring services must be set up to control the orchestration of services.

After these industrial service composition phases, the “conceptual service” and the contextual information are transmitted to the implementation level, so that technological services can be composed to support technological interoperability.
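The following sketch illustrates, under the assumption of duck-typed candidate objects with hypothetical helper methods (matches_competencies, select_core_process, etc.), how the four selection steps could be chained:

```python
# Hypothetical sketch of the four binding steps; candidate objects are
# duck-typed and the helper methods are invented for the illustration.
def bind_industrial_services(candidates, request, consumer_ctx,
                             min_maturity, qos_requirements):
    # Step 1: semantic selection on the public competencies description
    selected = [s for s in candidates if s.matches_competencies(request)]
    # Step 2: industrial interoperability, i.e. process maturity and
    # management rules, so that a consistent service chain can be built
    selected = [s for s in selected
                if s.cmmi_level >= min_maturity
                and s.management_rules_compatible(request)]
    # Step 3: the consumer context selects the convenient "core process"
    # and adapts the granularity of the returned information
    for s in selected:
        s.core_process = s.select_core_process(consumer_ctx)
    # Step 4: QoS requirements filter candidates and attach the
    # monitoring services that will control the orchestration
    return [(s, s.required_monitors(qos_requirements))
            for s in selected if s.satisfies(qos_requirements)]
```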
[Figure: at the business abstraction level, the conceptual service is exposed through a single “published” business instance interface; at the technical abstraction level, collaboration-strategy-based implementation instances stand behind their own interfaces.]
Fig. 3. Embedded Conceptual Service and Core Process Relationships
4 Implementation Architecture

To fit the technological interoperability requirement, we propose an on-demand dynamic service binding process. Organised in a bottom-up way, the binding process consists of assembling “technological” sub-components so that the orchestrated services can be tuned to fit environmental needs exactly, such as implementing a convenient security policy and supporting semantic and syntactic interoperability. Security policies are defined according to the following contexts (a simple combination sketch follows the list):

• Provider preferences: define the different security requirements of the information system according to the patrimonial values of data and processes, which describe the importance of the potentially shared element. Depending on the trust level of the service customer and on the underlying infrastructure, different security policies can be applied, including authentication and authorisation processes and cryptographic requirements on information [6].
• Customer preferences: are described with respect to the P3P specification and define exactly which customer information can be processed (i.e. authentication, logged accesses, etc.) as well as mediation constraints (i.e. data interchange format).
• Collaborative context: describes the contractual relationships between the service provider and the service customer. These contractual relationships are used to specify what the service customer is allowed to process directly, as well as Quality of Service requirements through industrial SLAs [9].
• Infrastructure context: in a pervasive environment, different threats are caused by the communication infrastructure, so particular security components must be added to protect it.
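The sketch below shows one possible way to combine the four contexts into a concrete policy decision; the threshold values and key names are purely illustrative assumptions, not part of the proposed architecture:

```python
# Illustrative combination of the four contexts into a policy decision;
# thresholds and key names are invented for the example.
def derive_security_policy(provider_prefs, customer_prefs,
                           collaborative_ctx, infrastructure_ctx):
    policy = {}
    value = provider_prefs["patrimonial_value"]     # importance of the element
    trust = collaborative_ctx["trust_level"]        # contractual trust in partner
    # Provider preferences: stronger protection for valuable assets or
    # weakly trusted customers, cf. [6]
    policy["authentication"] = value >= 2 or trust <= 1
    policy["encrypt_exchange"] = value >= 3 or infrastructure_ctx["pervasive"]
    # Customer preferences (P3P-style): processing and mediation constraints
    policy["log_accesses"] = customer_prefs.get("allow_logging", False)
    policy["data_format"] = customer_prefs.get("interchange_format", "XML")
    # Collaborative context: what the customer may process directly,
    # plus QoS commitments through industrial SLAs [9]
    policy["allowed_operations"] = collaborative_ctx["contract_scope"]
    return policy
```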
To provide dynamic contextual composition and orchestration features, we consider the orchestrated services as autonomous components. Each contextual view is associated with generic patterns implemented as classes, in a similar way to the PIM4SOA framework [3]. The dynamic binding process then consists of a multi-model instantiation process conditioned by the different contexts and standard components (i.e. pattern components). The instantiation process selects and binds the technological component patterns and generates the convenient BPEL script with respect to the technological interoperability constraints. These autonomous services include:

• Information system access services: technological components that implement connectors to the different software of the corporate information system.
• Mediation and transformation services: technological components used to support syntactic and semantic interoperability by translating information into formats understandable by applications. They are based on B2MML and FDML [22] in order to implement information interface services towards business service providers and manufacturing resources, respectively.
• Access services: technological components that implement access controls, namely authentication, access filtering and reporting functions, according to the partners’ collaboration scenario and based on the security policy and contextual information.
• Exchange security services: cryptographic components ensuring that SOAP messages can be secured according to the infrastructure context. They provide encrypted content and its associated “playing” service, which is invoked each time the message content is accessed. The exchange security service checks whether the access request is covered by the “information licensing conditions” base of the service provider.
• Monitoring services: dynamically generate monitoring components based on QoS requirements and monitoring patterns.

A simplified sketch of this pattern-based instantiation is given below.
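As an illustration only, the following sketch shows how context information might drive the selection of component patterns and the generation of a heavily simplified BPEL skeleton; the pattern names and context keys are invented for the example:

```python
# Illustration only: context-driven selection of component patterns and
# generation of a simplified BPEL skeleton; names are invented.
PATTERNS = {
    "is_access":  "InformationSystemAccess",
    "mediation":  "MediationTransformation",   # B2MML / FDML based [22]
    "access":     "AccessControl",
    "exchange":   "ExchangeSecurity",
    "monitoring": "Monitoring",
}

def instantiate_composition(contexts):
    parts = ["is_access", "mediation", "access"]        # always bound
    if contexts["infrastructure"].get("untrusted_network"):
        parts.append("exchange")                        # message-level crypto
    if contexts["collaborative"].get("sla"):
        parts.append("monitoring")                      # QoS monitors
    invokes = "\n".join('  <invoke name="%s"/>' % PATTERNS[p] for p in parts)
    return '<process name="boundService">\n%s\n</process>' % invokes
```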
5 Conclusion and Further Work

In this paper, we have proposed an extended industrial service-based architecture to support secured interoperable businesses. This architecture is implemented in the Service-Oriented Enterprise Architecture. As trust and perceived risks are key points taken into account when defining a collaborative strategy, we pay particular attention to security policies and requirements when composing chains of services, with regard to service consumer and infrastructure characteristics. The next steps will focus on the way a Lean Service Bus can be implemented in order to reduce IT structuring effects. Such a Lean Service Bus will adapt the orchestration process to a “pull flow” logic. Coupled with a trigger system, this service bus will improve the way the industrial service architecture fits lean enterprise requirements.
Acknowledgements

This work was supported by grants from the INSA BQR project “Enterprise urbanism” and the Rhône-Alpes Region Council through the GOSPI Cluster project INTERPROD.
References

[1] Agrawal R., Kiernan J., Xu Y., Srikant R., 2002. Hippocratic Databases. 28th VLDB Conference, 10 pages, 2002.
[2] Alberts C., Dorofee A., 2001. An Introduction to the OCTAVESM Method. CERT White Paper. Available online at http://www.cert.org/octave/methodintro.html [Last visited September 30, 2007]
[3] Benguria G., Larruceat X., Elvesaeter B., Neple T., Beardsmore A., Friess M., 2007. A platform independent model for service oriented architecture. In: Enterprise Interoperability: New Challenges and Approaches. Doumeingts G., Müller J., Morel G., Vallespir B. (Eds.), Springer, pp. 23-32.
[4] Biennier F., Favrel J., 2003. Collaborative Engineering in Alliances of SMEs. Proceedings of PRO-VE'03, Lugano (Switzerland), October 2003. In: Processes and Foundations for Virtual Organizations. Camarinha-Matos L., Afsarmanesh H. (Eds.), Kluwer Academic Publishers, pp. 441-448.
[5] Biennier F., Buckard S., 2005. Organising Dynamic Virtual Organisation: Towards Enterprise Urbanism. APMS 2005.
[6] Biennier F., Favrel J., 2005. Collaborative Business and Data Privacy: Toward a Cyber-Control. Computers in Industry, V. 56, n° 4, pp. 361-370 (May 2005).
[7] Biennier F., Mathieu H., 2005. Security Management: Technical Solutions vs. Global BPR Investment. Schedae Informatica, vol. 14, pp. 13-34.
[8] Biennier F., Mathieu H., 2006. Organisational Inter-Operability: Towards Enterprise Urbanism. In: Enterprise Interoperability – New Challenges and Approaches. Doumeingts G., Müller J., Morel G., Vallespir B. (Eds.), Springer, pp. 377-386.
[9] Biennier F., Ali L., Legait A., 2007. Extended Service Integration: Towards Manufacturing SLA. IFIP International Federation for Information Processing, Volume 246, Advances in Production Management Systems. Olhager J., Persson F. (Eds.), pp. 87-94.
[10] Chaari S., Benamar C., Biennier F., Favrel J., 2006. Towards service oriented enterprise. IFIP International Conference on PROgraming LAnguages for MAchine Tools, PROLAMAT 2006, 15-17 June, Shanghai, China, pp. 920-925. (ISBN: 978-0-387-34402-7)
[11] Chaari S., Badr Y., Biennier F., 2008. Enhancing Web Service Selection by QoS-Based Ontology and WS-Policy. Accepted at the 23rd ACM Symposium on Applied Computing, Ceará, Brazil, 16-20 March 2008.
[12] CIGREF, 2003. Accroître l'agilité du système d'information. Livre blanc du CIGREF, September 2003.
[13] Cranor L., 2001. Privacy with P3P. O'Reilly, 239 pages.
[14] CLUSIF, 2000. Mehari. Rapport technique, 91 pp. Available online at https://www.clusif.asso.fr/fr/production/ouvrages/pdf/MEHARI.pdf [Last visited September 30, 2007]
[15] CLUSIF, 2005. Enquête sur les politiques de sécurité de l'information et la sinistralité informatique en France en 2005. Available online at http://www.clusif.asso.fr/fr/production/sinistralite/docs/etude2005.pdf [Last visited September 30, 2007]
[16] Common Criteria Organisation, 1999. Common Criteria for Information Technology Security Evaluation – Part I: Introduction and General Model, version 2.1 – CCIMB 99-031. Available online at http://www.commoncriteria.org/docs/PDF/CCPART1V21.PDF, 61 p. [Last visited September 30, 2007]
[17] Direction Centrale de la Sécurité des Systèmes d'Information (DCSSI), 2004. Expression des Besoins et Identification des Objectifs de Sécurité : EBIOS. Rapport technique. Available online at http://www.ssi.gouv.fr/fr/confiance/ebios.html [Last visited September 30, 2007]
[18] DeVor R., Graves R., Mills J.J., 1997. Agile Manufacturing Research: Accomplishments and Opportunities. IIE Transactions, n° 29, pp. 813-823.
[19] Department of Defense (DoD), 1985. Trusted Computer Security Evaluation Criteria – Orange Book. DOD 5200.28-STD report.
[20] Djodjevic I., Dimitrakos T., Romano N., Mac Randal D., Ritrovato P., 2007. Dynamic Security Perimeters for Inter-enterprise Service Integration. Future Generation Computer Systems (23), pp. 633-657.
[21] EEC, 1991. Information Technology Security Evaluation Criteria. Available online at http://www.cordis.lu/infosec/src/crit.htm [Last visited September 30, 2007]
[22] Emerson D., Brandl D., 2002. Business to Manufacturing Markup Language (B2MML), version 01, 60 p.
[23] Erickson J.S., 2003. Fair Use, DRM and Trusted Computing. Communications of the ACM, vol. 46, n° 4, pp. 34-39.
[24] Goldman S., Nagel R., Preiss K., 1995. Agile Competitors and Virtual Organisations. New York: Van Nostrand Reinhold.
[25] IBM and Microsoft Corp., 2002. Security in a Web Services World: A Proposed Architecture and Roadmap. White paper, 28 pp. Available online at ftp://www6.software.ibm.com/software/developer/library/ws-secmap.pdf [Last visited September 30, 2007]
[26] IBM, Microsoft, BEA, Layer 7 Technology, Verisign, Novell Inc., 2006. Web Services Federation Language, Version 1.1. Available online at http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-fed/WS-Federation-V1-1B.pdf [Last visited September 30, 2007]
[27] IFAC-IFIP, 1999. GERAM: Generalized Enterprise Reference Architecture and Methodology, Version 1.6.3. IFAC-IFIP Task Force on Architecture and Methodology.
[28] ISO, 2000. ISO/IEC 17799:2000 standard – Information Technology. Code of Practice for Information Security Management.
[29] Jürjens J., 2002. UMLsec: Extending UML for Secure Systems Development. Lecture Notes in Computer Science 2460, UML 2002 Proceedings, pp. 412-425.
[30] Lee H.L., 2004. The Triple-A Supply Chain. Harvard Business Review, October 2004, pp. 102-112.
[31] Levitin A.V., Redman T.C., 1998. Data as a Resource: Properties, Implications and Prescriptions. Sloan Management Review, Fall 1998, pp. 89-101.
[32] Lin A., Brown R., 2000. The Application of Security Policy to Role-based Access Control and the Common Data Security Architecture. Communications (23), pp. 1584-1593.
[33] Longépé C., 2003. The Enterprise Architecture IT Project – The Urbanisation Paradigm. Elsevier, 320 p.
[34] Mahoué F., 2001. The E-World as an Enabler to Lean. MSc Thesis, MIT.
[35] Martin J., 1992. Rapid Application Development. Prentice Hall, Englewood Cliffs.
[36] Moore A.P., Ellison R.J., 2001. Architectural Refinement for the Design of Survivable Systems. Technical Note (CMU/SEI-2001-TN-008), Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, October 2001. Available online at http://www.sei.cmu.edu/publications/documents/01.reports/01tn008.html [Last visited September 30, 2007]
[37] OASIS, 2004. Web Services Security: SOAP Message Security 1.0 (WS-SECURITY 2004). 56 pages. Available online at http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf [Last visited September 30, 2007]
[38] Schmidt M.T., Hutchinson B., Lambros P., Phippen R., 2005. The Enterprise Service Bus: Making Service-Oriented Architecture Real. IBM Systems Journal, vol. 44, n° 4, pp. 781-797.
[39] Sliman L., Biennier F., Servigne S., 2006. Urbanisation conjointe de l'entreprise et de son système d'information. Colloque IPI 2006 proceedings: "Comprendre et piloter la mutation des systèmes de production", pp. 169-180.
[40] Tekes, 2006. Sara – Value Networks in Construction 2003-2007. Sara Technology Programme. Available online at http://www.tekes.fi/english/programmes/sara [Last visited September 30, 2007]
[41] Williams R., Wegerson P., 2002. MINI CMMI(SM), SE/SW/IPPD/SS Ver 1.1, Staged Representation. Cooliemon.
[42] Womack J.P., Jones D.T., 2003. Lean Thinking, 2nd edition. Simon & Schuster, 404 p.
Part IV
Ontologies and Semantics for Interoperability
Semantic Web Services based Data Exchange for Distributed and Heterogeneous Systems

Qingqing Wang1,2, Xiaoping Li1,2 and Qian Wang1,2

1 School of Computer Science and Engineering, Southeast University, Nanjing 210096, P.R. China
2 Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 210096, P.R. China
[email protected], [email protected], [email protected]
Abstract. Data exchange is the process of exchanging data between systems online, in which heterogeneity, distribution and different semantic descriptions are the main difficulties. In this paper, a data exchange framework constructed on the architecture of Web services is proposed, in which the data provider deploys Web services for data exchange and publishes descriptions of the service function and of the exchange data on a register center, while the data requester searches for Web services on the register center according to its requirements on function and data. To improve the precision of data discovery, a semantic matching mechanism for matching provided and requested Web services, based on OWL-S (Web Ontology Language for Services), is presented. The matching mechanism takes both service function and exchange data into account and calculates the semantic similarity of two Web services as the matching result. A prototype system is implemented to verify the proposed framework and matching mechanism.

Keywords: Interoperability for knowledge sharing, Ontologies and Semantic Web for interoperability, Interoperability for knowledge creation, transfer, and management, Design methodologies for interoperable systems, Enterprise Application Integration for interoperability
1 Introduction

Sharing information is difficult between systems with different operating systems, heterogeneous database management systems and distributed data sources with various semantic description abilities and isolation levels. Data exchange, which is one of the ways to share information between such systems, is the process of exchanging data dynamically between systems [1] [2] and of transforming data under a source schema into data under a target schema [3].
Data exchange is a traditional problem, for which there are many different solutions according to different application scenarios, user requirements and technical environments. Here we present a brief survey of some methods for data exchange:

(I) EDI (Electronic Data Interchange). This is the electronic transfer, between separate computer applications, of commercial or administrative transactions, using an agreed standard to structure the transaction or message data. These methods [4] focus on the issues of data transfer and data format; the other parts of the data exchange process are not taken into account.

(II) Traditional approaches to integrating heterogeneous data sources. These methods [5] mostly use wrapper components to shield the differences among data sources and focus on transforming data formats. Other issues, such as data discovery and data transferring, are not addressed. These methods suit scenarios in which the data sources that exchange data are few and do not change frequently.

(III) XML based methods. These methods use XML as the medium of data exchange: XML files, whose schemas can be matched and whose formats can be transformed, are the common data format in the process. These methods [6] [7] focus on schema matching or format transforming; however, data discovery on the Internet is not emphasized. Moreover, some standards such as RosettaNet [8] and ebXML [9] also address data exchange and describe data with XML, but people in different industries define different XML tags according to their own customs, which makes these standards complex, mutually separate and poorly compatible.

Web services technology, which defines how Web applications interoperate, involves three parties: the service provider, the service requester and the service broker. Web services can solve the problem of distribution in data exchange and improve the efficiency and precision of data discovery, in which Web services matching is a key issue. Existing Web services matching mechanisms can be divided into two categories:

(1) Syntax level. These mechanisms usually publish Web services according to industry classification standards, describe Web services interfaces by WSDL and match Web services based on keywords. They can realize Web services discovery on the Internet, but lack function descriptions for Web services as well as semantic information. These mechanisms are usually terse but offer low discovery precision. Typical applications are the UDDI systems developed by IBM, Microsoft, HP and others [10].

(2) Semantic level. These mechanisms usually describe Web services based on ontology theory, which solves the semantic heterogeneity of syntax level mechanisms and adds a semantic description of the Web services' functions. Service matching usually takes the IOPE (Input, Output, Precondition and Effects) of Web services into account. Generally, the discovery precision of the semantic level is higher than that of the syntax level. Research on semantic level matching mechanisms and some related matching algorithms is introduced in [11] [12] [13].

The weaknesses of existing data exchange methods are: (1) exchange data is not easy to discover on the Internet, and (2) few methods are capable of completing the whole process of data exchange on the Internet. To overcome these shortcomings, a semantic Web services based data exchange method is proposed in this paper. Our focus is realizing the whole process of data exchange and easy data discovery on the Internet.
In the method, a data exchange framework is constructed on the
architecture of Web services, in which information about exchange data can be published and discovered, data discovery can be realized based on semantic information, data can be obtained and transferred remotely, and data schema matching as well as data format transforming can be realized. In order to improve the precision of data discovery, a semantic-level Web services matching mechanism for data exchange is also presented.
2 Terms and Definitions

Data exchange on the Internet is the process of discovering needed data on the Internet, exchanging data between information systems and transforming data under a source schema into data under a target schema. The process consists of data publishing, data discovery, data getting, data transferring, schema matching and format transforming.

Data schema: the description of the structures, compositions and properties of a data set. For a database, it is the description of the structures of tables, the relations between tables and the properties of fields.
Data publishing: the process of publishing information about exchange data on the Internet so that it can be requested.
Data discovery: the process of requesting needed data on the Internet.
Data getting: the process of extracting the needed data from the database of the data provider's system.
Data transferring: the process of transferring exchange data from the data provider's system to the data requester's system.
Schema matching: the process of matching or mapping the data schemas of the data provider and the data requester according to certain rules.
Format transforming: the process of transforming exchange data under the provider's schema into data under the requester's schema according to the result of schema matching.
Service matching: the process of matching published Web services and requested Web services, the result of which should reflect the requester's requirements to a certain extent.
Web services for data exchange: a special kind of Web services which can provide exchange data.
3 Data Exchange Method

3.1 Architecture

Figure 1 shows the architecture of WSDE (the Web Services based Data Exchange method) proposed in this paper, which is constructed on the architecture of Web services. There are three participants: DPRO (Data Provider), DREQ (Data Requester) and the SUDDI (Semantic Universal Description Discovery and Integration) center.
[Figure: the SUDDI Center comprises a preceding API module (service publishing, service requesting and result displaying APIs), a service matching module (OWL-S service and data descriptions, a service matching engine, a service ontology base, a UDDI mapping engine and a domain ontology base) and a UDDI module based on the UDDI criterion. DPRO publishes Web services through its service publishing, service deploying and data encapsulating modules on top of a web server and database. DREQ requests and invokes Web services through its service requesting, service invoking, schema matching (schema matching engine) and format transforming (XSLT matching-result engine, domain ontology base) modules on top of its database and application software.]
Fig. 1. Architecture of WSDE
DPRO is the system that can provide exchange data on the Internet. It deploys a local Web service and publishes the service description as well as information about the exchange data on the SUDDI center. When the service is invoked, DPRO extracts data from its database dynamically, transforms it into an XML file and transfers the file to DREQ. DREQ is the system that needs exchange data. It requests Web services for data exchange on the SUDDI center. If a suitable Web service has been found, DREQ invokes the service, gets the exchange data from the data provider, matches the provider's data schema against its own and transforms the exchange data from the provider's format into its own data format.
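A minimal sketch of the provider-side behaviour is given below, assuming a SQLite database for simplicity; the table layout, condition handling and XML vocabulary are illustrative assumptions:

```python
# Minimal provider-side sketch: extract the requested rows and wrap them
# as XML for the SOAP reply. SQLite and the XML vocabulary are assumptions.
import sqlite3
import xml.etree.ElementTree as ET

def extract_as_xml(db_path, table, condition="1=1"):
    conn = sqlite3.connect(db_path)
    # NB: a real service must build the query safely, not by string pasting
    cur = conn.execute("SELECT * FROM %s WHERE %s" % (table, condition))
    columns = [d[0] for d in cur.description]
    root = ET.Element("exchangeData", {"table": table})
    for row in cur:
        record = ET.SubElement(root, "record")
        for name, value in zip(columns, row):
            ET.SubElement(record, name).text = str(value)
    conn.close()
    return ET.tostring(root, encoding="unicode")
```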
The SUDDI center accepts and matches the descriptions of published services and requested services. It extends common UDDI systems founded on the UDDI 2.0 criterion by adding a semantic module. A semantic Web services matching mechanism for data exchange is provided in the semantic module, which is introduced in Section 4.

3.2 DPRO (Data Provider)

The modules of DPRO are described as follows:

(1) Service Publishing Module. This module describes the functions of published Web services and the provided exchange data according to the service description model and the data description model respectively (introduced in Section 4), and publishes the function description as well as the data description on the SUDDI center.

(2) Service Deploying Module. This module deploys Web services that have been designed for data exchange to the local web server. The WSDL files of these Web services are uploaded to the web server too. When invoked, DPRO extracts the requested data through the data encapsulating module and transfers the data to DREQ by SOAP.

(3) Data Encapsulating Module. This module extracts the data schema of the exchange data when the Web service is published, and extracts exchange data dynamically from the provider's database according to the data requester's needs when the Web service is invoked.

3.3 DREQ (Data Requester)

The modules of DREQ are described as follows:

(1) Service Requesting Module. This module describes DREQ's requirements according to the service description model and the data description model, and requests Web services that could provide the needed exchange data on the SUDDI center. This module realizes data discovery by discovering Web services.

(2) Service Invoking Module. After discovering a Web service, the service invoking module obtains the provider's data schema, sets the query condition for extracting the needed exchange data, reads the WSDL file and invokes the chosen Web service. The exchange data and its schema are then downloaded to the local machine. This module realizes data getting and transferring by invoking Web services.

(3) Schema Matching Module. This module matches the data schemas of DPRO and DREQ. Firstly, it extracts the local data schema and transforms it into an XML Schema file; then it matches the two schemas according to a schema matching algorithm. In this paper, the hybrid matching algorithm for XML Schemas introduced in [14] is adopted. Matching results are stored in an XSLT file.

(4) Format Transforming Module.
This module transforms the format of the exchange data according to the schema matching result, so that the data can be stored in the local database or used in local applications.

3.4 SUDDI (Semantic Universal Description Discovery and Integration) Center

The modules of the SUDDI center are described as follows:

(1) Preceding API Module. This module consists of three kinds of APIs:
• Service Publishing APIs: accept descriptions of the functions of published Web services as well as of the exchange data, and send them to the service matching module.
• Service Requesting APIs: accept descriptions of the functions of requested Web services as well as of the demanded data, and send them to the service matching module.
• Result Displaying APIs: accept the Web services matching results from the service matching module and display them on the front-end interface.
(2) Service Matching Module. This module matches the published Web services against the requested service. Its functions are creating the OWL-S profile (namely the service description) for Web services, parsing OWL-S profiles and domain ontologies, and producing matching results according to the service matching mechanism introduced in Section 4.

(3) UDDI Module. This module, founded on the UDDI 2.0 criterion, stores the WSDL file paths and other common descriptions of published Web services. It also maintains mapping relations between the OWL-S profile base and the UDDI module, so that the full information of a Web service can be published and requested.

3.5 Flow of Data Exchange

Figure 2 illustrates the flow of data exchange in WSDE.
DPRO first deploys a Web service on its local web server, publishes the descriptions of the service function and the exchange data on the SUDDI center and waits for the Web service to be invoked. When DREQ needs data, it requests Web services and exchange data according to its own needs on the SUDDI center. The SUDDI center matches the requested service against all services published in the center, produces matching results according to the service matching mechanism, sorts the results and displays them. DREQ then looks over the Web services for data exchange in the matching result list, chooses the most appropriate service, sets the invoking condition for extracting the needed exchange data, reads the WSDL file and invokes the chosen Web service. DPRO responds to the invocation, executes the Web service and returns the exchange data as well as its schema. After obtaining the exchange data and its schema, DREQ extracts the local data schema, matches or maps the two data schemas and uses an XSLT engine to transform the data format according to the schema matching result. Finally, DREQ stores the transformed exchange data in its local database.
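The following sketch condenses this flow into a single requester-side routine; all objects and callables are placeholders for the WSDE modules, and no real UDDI or SOAP API is implied:

```python
# Requester-side flow in one routine; every argument is a placeholder for
# the corresponding WSDE module (no concrete UDDI/SOAP API is implied).
def data_exchange(suddi, request_description, local_schema,
                  match_schemas, apply_xslt, store):
    matches = suddi.match(request_description)   # ranked by Sim(Spro, Sreq)
    best = matches[0]                            # requester picks a service
    data_xml, provider_schema = best.invoke(condition="*")
    xslt = match_schemas(provider_schema, local_schema)  # QMatch-style [14]
    store(apply_xslt(data_xml, xslt))            # format transforming + storage
```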
4 Web Services Matching Mechanism

The precision of Web services discovery depends on the services matching mechanism. In the proposed data exchange method, data discovery is realized by Web services discovery, so the services matching mechanism adopted in WSDE directly determines the precision of data discovery. Data discovery is the first and key step in the process of data exchange, and its result affects all following steps. It is therefore very important to choose an effective service matching mechanism for a data exchange method. The key task of Web services for data exchange, which differ in some respects from common Web services, is realizing data discovery. So services matching
between Web services for data exchange should cover not only the IOPE that matching between common Web services takes into account, but also information about the exchange data. As information about the exchange data is so important to data discovery, the proposed method extends existing semantic matching mechanisms and presents SWSM-DE (a Semantic Web Services Matching mechanism for Data Exchange). SWSM-DE constructs both a function description model and a data description model, and integrates the results of function matching and data matching into the final services matching result. The following describes SWSM-DE in detail.

4.1 Services Description

Web services description is the basis of services publishing, services discovery and services matching. SWSM-DE uses OWL-S, published by W3C for describing semantic Web services, to describe services. However, the OWL-S profile only contains function items such as IOPE, so SWSM-DE extends the profile and adds a description of the exchange data to it.

Definition 1. SDM (Service Description Model) is the semantic description of the function and exchange data of a Web service in SWSM-DE, defined as SDM = <FM, DM>. FM is the function description model and DM is the data description model.

Definition 2. FM (Function Model) is the semantic description of the name, text description, inputs and outputs of a Web service, defined as FM = <Sn, Sd, Sin, Sout>. Sn is the service name, Sd is the text description of the service, Sin is the inputs of the service and Sout is the outputs of the service.

Definition 3. DM (Data Model) is the semantic description of the domain, source, application scene and content of the exchange data, defined as DM = <Dd, Ds, As, Dc>
. Dd is the domain of the exchange data, Ds is the source of the exchange data, As is the application scene of the exchange data and Dc is the content of the exchange data. Figure 3 shows the structure of SDM.
[Figure: SDM (Service Description Model) is composed of FM (Function Model: Sn service name, Sd text description, Sin service inputs, Sout service outputs) and DM (Data Model: Dd data domain, Ds data source, As application scene, Dc data content).]

Fig. 3. Structure of SDM
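The SDM of Definitions 1-3 can be transcribed almost literally into code; the sketch below is such a transcription, with illustrative field types:

```python
# Near-literal transcription of Definitions 1-3; field types are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class FM:           # Function Model
    sn: str         # service name
    sd: str         # text description
    sin: List[str]  # service inputs
    sout: List[str] # service outputs

@dataclass
class DM:           # Data Model
    dd: str         # data domain
    ds: str         # data source
    as_: str        # application scene ("as" is reserved in Python)
    dc: str         # data content

@dataclass
class SDM:          # Service Description Model, SDM = <FM, DM>
    fm: FM
    dm: DM
```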
4.2 Matching Approach

Different people or systems have different semantic description abilities, even for the same service or concept, so requiring descriptions to be "completely the same" for service matching is impossible and not meaningful. We consider services matching successful when the descriptions of two services are "similar to a certain extent"; the key issue is how to judge this. A matching mechanism introduced in [15] calculates the total similarity of the OWL-S service profile together with the other two parts of the OWL-S description, the OWL-S service model and the OWL-S service grounding. The matching result is more precise, but also more complex to obtain, because the similarities of the service model and service grounding must be calculated additionally. A matching mechanism introduced in [16] matches services based on the OWL-S service profile, and obtains the matching result by combining the results of inputs matching, outputs matching, service category matching and user-defined matching. This mechanism is terser and easier to realize, but the matching result is distinguished by only four ranks, so it is less flexible and its ability to discriminate between matching results is limited. Moreover, for data exchange Web services, both mechanisms lack description and matching of the exchange data. SWSM-DE, which combines the merits of the above mechanisms, describes services by OWL-S and measures the similarity of Web services by a numerical value. Besides, it takes both service function and exchange data into account. SWSM-DE has two stages: the first stage is function matching and the second is data matching. Services that do not satisfy certain conditions are filtered out after each stage. The first stage calculates the similarity of the service functions and sets a threshold to filter out useless services. Services that pass the first stage must be Web services for data exchange which are able to provide exchange data and which satisfy some conditions on inputs and outputs. The second stage calculates the similarity of the exchange data and sets a threshold to filter services again. Services that pass the second stage must satisfy some conditions on data domain, data source, application scene and data content. For each service that passes both stages, SWSM-DE combines the similarities of the service function and the exchange data into the final services matching result, which is a numerical value. The two stages can use different matching algorithms. Figure 4 illustrates the matching process of SWSM-DE.
Fig. 4. Matching Process of SWSM-DE
According to SDM, the similarity of the service Spro, which represents the published Web service, and the service Sreq, which represents the requested Web service, is defined as:
\[ Sim(S_{pro}, S_{req}) = \theta_1\, Sim_{func}(S_{pro}, S_{req}) + \theta_2\, Sim_{data}(S_{pro}, S_{req}), \quad \theta_1 + \theta_2 = 1,\ 0 \le \theta_1, \theta_2 \le 1 \tag{1} \]
Simfunc(Spro, Sreq) is the similarity of the service functions. Simdata(Spro, Sreq) is the similarity of the exchange data. θ1 is the weight of function matching in service matching and θ2 is the weight of data matching. The two values can be adjusted to the actual situation; in SWSM-DE, they are both initialized to 0.5. According to FM, the similarity of the service functions is defined as:
\[ Sim_{func}(S_{pro}, S_{req}) = \lambda_1\, Sim_{name}(S_{pro}, S_{req}) + \lambda_2\, Sim_{description}(S_{pro}, S_{req}) + \lambda_3\, Sim_{input}(S_{pro}, S_{req}) + \lambda_4\, Sim_{output}(S_{pro}, S_{req}), \quad \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1,\ 0 \le \lambda_1, \lambda_2, \lambda_3, \lambda_4 \le 1 \tag{2} \]
Simname(Spro, Sreq), Simdescription(Spro, Sreq), Siminput(Spro, Sreq) and Simoutput(Spro, Sreq) are the similarities of the service names, the service text descriptions, the service inputs and the service outputs respectively. Simname(Spro, Sreq) and Simdescription(Spro, Sreq) can be calculated by MCMA (Multi-Concept Matching Algorithm) introduced in [14]. Siminput(Spro, Sreq) and Simoutput(Spro, Sreq) can be measured by MIMA (Multi-Input Matching Algorithm) and MOMA (Multi-Output Matching Algorithm) introduced in [10] respectively. λ1, λ2, λ3 and λ4 are the weights of the above items in function matching, which can be adjusted to the actual situation. In SWSM-DE, they are all initialized to 0.25. According to DM, the similarity of the exchange data is defined as:
\[ Sim_{data}(S_{pro}, S_{req}) = \mu_1\, Sim_{domain}(S_{pro}, S_{req}) + \mu_2\, Sim_{source}(S_{pro}, S_{req}) + \mu_3\, Sim_{scene}(S_{pro}, S_{req}) + \mu_4\, Sim_{content}(S_{pro}, S_{req}), \quad \mu_1 + \mu_2 + \mu_3 + \mu_4 = 1,\ 0 \le \mu_1, \mu_2, \mu_3, \mu_4 \le 1 \tag{3} \]
Simdomain(Spro, Sreq), Simsource(Spro, Sreq), Simscene(Spro, Sreq) and Simcontent(Spro, Sreq) are the similarities of the data domain, the data source, the application scene and the data content respectively, which can be measured by MCMA. μ1, μ2, μ3 and μ4 are the weights of the above items in data matching, which can be adjusted to the actual situation. In SWSM-DE, they are initialized to 0.15, 0.15, 0.15 and 0.55 respectively. Let Cfunc be the threshold of the first stage and Cdata the threshold of the second stage. According to the matching process of SWSM-DE, if Simfunc(Spro, Sreq) ≥ Cfunc, the second stage is executed and Simdata(Spro, Sreq) is calculated. In the same way, if Simdata(Spro, Sreq) ≥ Cdata, the final matching result Sim(Spro, Sreq) is calculated by formula (1). The exact formula for calculating Sim(Spro, Sreq) is therefore:
\[ Sim(S_{pro}, S_{req}) = \begin{cases} 0, & Sim_{func}(S_{pro}, S_{req}) < C_{func} \text{ or } Sim_{data}(S_{pro}, S_{req}) < C_{data} \\ \text{formula (1)}, & Sim_{func}(S_{pro}, S_{req}) \ge C_{func} \text{ and } Sim_{data}(S_{pro}, S_{req}) \ge C_{data} \end{cases} \tag{4} \]
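A compact sketch of the complete two-stage computation, using the default weights given above, follows; the threshold defaults of 0.5 are our assumption, since the paper does not fix Cfunc and Cdata, and the elementary similarity measures (MCMA, MIMA, MOMA) are injected as callables:

```python
# Sketch of formulas (1)-(4) with the default SWSM-DE weights. The 0.5
# thresholds are an assumption (the paper leaves Cfunc and Cdata open).
def swsm_de(pro, req, sims, c_func=0.5, c_data=0.5,
            theta=(0.5, 0.5),
            lam=(0.25, 0.25, 0.25, 0.25),
            mu=(0.15, 0.15, 0.15, 0.55)):
    # Stage 1: function matching, formula (2)
    sim_func = sum(w * sims[k](pro, req) for w, k in
                   zip(lam, ("name", "description", "input", "output")))
    if sim_func < c_func:
        return 0.0                       # filtered out, formula (4)
    # Stage 2: exchange data matching, formula (3)
    sim_data = sum(w * sims[k](pro, req) for w, k in
                   zip(mu, ("domain", "source", "scene", "content")))
    if sim_data < c_data:
        return 0.0                       # filtered out, formula (4)
    return theta[0] * sim_func + theta[1] * sim_data   # formula (1)
```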
5 Prototype System

The prototype system, developed with Borland JBuilder 9.0, uses three computers to simulate DPRO, DREQ and SUDDI respectively. DPRO runs on Windows NT and uses a SQL Server 2000 database and Tomcat 4.1 as the local web server. DREQ runs on Windows 2000 and uses a Microsoft Access 2000 database. SUDDI runs on Windows XP and uses the OWL-S API to create and parse OWL-S profiles, juddi0_9RC4 to realize the UDDI 2.0 APIs, MySQL 5.0 as the registration database, Tomcat 5.0 as the web server and JSP pages as the user interface. The initial weight values of formulas (1)-(3) are adopted. The system is based on the following application scenario: a machine factory in Beijing needs a special type of axletree, and there are enterprises which can manufacture this very type in New York, London, Sydney and Tokyo. However, the machine factory in Beijing does not know which enterprises can manufacture them, nor how to inquire about them. If WSDE is adopted, a SUDDI center should be constructed:
• Enterprises that manufacture axletrees act as DPROs. They provide information about axletrees through Web services and publish the service information on the SUDDI center (Figure 5(a)).
• The factory in Beijing, which acts as DREQ, requests Web services from the SUDDI center based on its requirements on service function and exchange data.
• The SUDDI center executes services matching and returns the list of enterprises that can manufacture the very type of axletree (Figure 5(b)).
• The factory in Beijing chooses a proper item associated with a certain enterprise, remotely invokes the Web service that the enterprise has deployed and gets the product information, namely the exchange data, to the local machine. Then data schema matching and format transforming are executed. Finally, the transformed exchange data can be stored in the local database or used in applications.
Fig. 5. a. Interface of publishing Web services; b. Interface of displaying matching results.
6 Conclusions

In this paper, a semantic Web services based data exchange method is proposed, which can realize the whole process of data exchange on the Internet. In the proposed method, data publishing, data discovery and data getting are realized by publishing, discovering and invoking Web services respectively. Data schema matching and format transforming are realized by XML technology. In order to improve the precision of data discovery, a semantic-level Web services matching mechanism for data exchange is presented, which takes both service function and exchange data into account and calculates the semantic similarity of published and requested Web services based on OWL-S. Compared to existing data exchange methods, the proposed method can complete the whole process of data exchange, realize easy data discovery and provide or obtain exchange data dynamically on the Internet. The prototype system takes an industrial use case as an example, by which the presented framework and mechanism are verified. The proposed data exchange method can also be applied to various other applications.
References

[1] ISO/TC184/SC5/WG4, (2002) ISO 16100-1 Industrial automation systems and integration – Manufacturing software capability profiling for interoperability – Part 1: Framework
[2] ISO/TC184/SC5/WG4, (2005) ISO 16100-3 Industrial automation systems and integration – Manufacturing capability profiling for interoperability – Part 3: Interface services, protocol and capability templates
[3] Fagin R, Kolaitis PG, Popa L, (2005) Data Exchange: Getting to the Core. ACM Transactions on Database Systems, 174–210
[4] Meadors K, (2005) Secure Electronic Data Interchange over the Internet. Internet Computing, IEEE, 82–89
[5] Wang N, Chen Y, Yu BQ, Wang NB, (1997) Versatile: A scaleable CORBA 2-based system for integrating distributed data. In Proceedings of the 1997 IEEE International Conference on Intelligent Processing Systems, 1589–1593
[6] Pendyala VS, Shim SSY, Gao JZ, (2003) An XML Based Framework for Enterprise Application Integration. In IEEE International Conference on E-commerce, 128–135
[7] Zhang M, Xu QS, Shen XC, (2006) Data exchange and sharing platform model based on XML. Journal of Tsinghua University (Science and Technology), 105–107, 119
[8] ROSETTANET, (2007-10) About RosettaNet Standards. http://portal.rosettanet.org/cms/site/RosettaNet/
[9] EbXML Technical Architecture Project Team, (2001) EbXML Technical Architecture Specification v1.0.4. http://www.ebxml.org/specs/ebTA.pdf, 1–39
[10] Hu JQ, Zou P, Wang HM, Zhou B, (2005) Research on Web Service Description Language QWSDL and Service Matching Model. Chinese Journal of Computers, 505–513
[11] Gao S, Omer FR, Nick JA, Chen DF, (2007) Ontology-based semantic matchmaking approach. Advances in Engineering Software, 59–67
[12] Paolucci M, Kawamura T, Payne TR, Sycara K, (2002) Semantic matching of web service capabilities. In Proceedings of the 1st International Semantic Web Conference, Springer-Verlag, 333–347
[13] Cui Y, (2005) Research on Service Matchmaking Model Based on Semantic Web. Master Dissertation, Dalian University of Technology, China
[14] Kajal TC, Vaishali H, Naiyana T, (2005) QMatch – A Hybrid Match Algorithm for XML Schemas. In Proceedings of the 21st International Conference on Data Engineering, IEEE Computer Society, 1281–1290
[15] Hau J, Lee W, Darlington J, (2005) A Semantic Similarity Measure for Semantic Web Services. http://www.ai.sri.com/WSS2005/final-versions/WSS2005-Hau-Final.pdf
[16] Jaeger MC, Rojec-Goldmann G, Liebetruth C, Muhl G, Geihs K, (2005) Ranked Matching for Service Descriptions using OWL-S. In Proceedings of Communication in Distributed Systems, 91–102
Ontology-driven Semantic Mapping

Domenico Beneventano1, Nikolai Dahlem2, Sabina El Haoum2, Axel Hahn2, Daniele Montanari1,3, Matthias Reinelt2

1 University of Modena and Reggio Emilia, Italy {domenico.beneventano, daniele.montanari}@unimore.it
2 University of Oldenburg, Germany {dahlem, elhaoum, hahn, reinelt}@wi-ol.de
3 Eni SpA, Italy [email protected]
Abstract. When facilitating interoperability at the data level one faces the problem that different data models are used as the basis for business formats. For example, relational databases are based on the relational model, while XML Schema is basically a hierarchical model (with some extensions, like references). Our goal is to provide a syntax- and data-model-neutral format for the representation of business schemata. We have developed a unified description of data models which is called the Logical Data Model (LDM) Ontology. It is a superset of the relational, hierarchical, network and object-oriented data models, and is represented as a graph consisting of nodes with labeled edges. For the representation of the different relationships between the nodes in the data model we introduce different types of edges, for example: is_a for the representation of the subclass relationship, identifies for the representation of unique key values, contains for the containment relationship, etc. In this paper we discuss the mapping process as it is proposed by the EU project STASIS (FP6-2005-IST-5-034980). Then we describe the Logical Data Model in detail and demonstrate its use by giving an example. Finally we discuss future research planned in this context in the STASIS project.

Keywords: business schema representation, business interoperability, meta-model
1 Introduction

Today’s enterprises, no matter how big or small, have to meet the challenge of bringing together disparate systems and making their mission-critical applications collaborate seamlessly. One of the most difficult problems in any integration effort is the missing interoperability at the data level. Frequently, the same concepts are embedded in different data models and represented differently. One difficulty is identifying and
mapping differences in naming conventions, whilst coping with the problems of polysemy (the existence of several meanings for a single word or phrase) and synonymy (the equivalence of meaning). A connected problem is identifying and mapping differences stemming from the use of different data models. For example, information expressed in a relational schema is based on the relational data model, while XML Schema is basically a hierarchical model (with some extensions, like references). Therefore we propose an ontology to describe a unified data model, called the Logical Data Model. The purpose of the Logical Data Model ontology is to provide a common representation able to encapsulate the substantial information coming from different sources and various schema formats. Data models represented by such an ontology can be considered as a neutral specification which allows common processing in an interoperability framework. In the remainder we first discuss related work (Section 2). Then we describe the mapping process to provide the context for this work in Section 3. Section 4 presents the Logical Data Model Ontology and gives an example. Section 5 discusses ontology-driven semantic mapping, and Section 6 concludes with a discussion and an outlook on future research.
2 Related Work

The integration costs of enterprise application cooperation are still extremely high, because of different business processes, data organization and application interfaces that need to be reconciled, typically with considerable manual (and therefore error-prone) intervention. This problem has been addressed independently by MDA and by ontology-based approaches. The Model Driven Architecture (MDA) proposed by the Object Management Group (OMG) uses platform-independent models (PIMs) [1] as the context for identifying relations between different applications. Transformation is a central concept in MDA, addressing how to convert one model into another model of the same system, and further into executable code. MDA provides technologies to handle meta-models, constraints etc., which can be used for semantic enrichment and model transformation. In the model-based approach, the Unified Modelling Language [2] is used to express conceptual models. The meta-language Meta Object Facility (MOF) is defined as part of the solution in order to capture relationships between data elements. Transformation languages are used to create executable rules, and transformation techniques can be used in the process of detailing the information needed, converting from abstract MOF-compliant languages to more formal ones [3]. Today, ontology technologies have reached a good level of maturity and their applications to industrially relevant problems are proliferating. Ontologies are the key elements of the Semantic Web. The notion of the Semantic Web is led by the W3C and defined as a “common framework allowing data to be shared and reused across application, enterprise and community boundaries” [4]. Ontologies support semantic mapping by providing an explicitly defined meaning of the information to be exchanged. The development of the LDM Ontology was
particularly inspired by related work on relational schema modeling and by the general goal of establishing mappings among (fragments of) domain ontologies. The latter has been an active field of research over the last ten years, exploring a number of approaches. The basic expression of mappings for ontologies modeled with description logic formalisms and the associated languages (like OWL) involves the use of basic language constructs or evolved frameworks to express the existence and properties of similarities and then mappings [5] [6] [7]. One significant result in this area is the MAFRA framework [8]. Research in the area of database schema integration has been carried out since the beginning of the 1980s, and schema comparison techniques are often well suited for translation into mapping techniques. A survey of such techniques is offered in [9]. One system extensively using these techniques is MOMIS (Mediator Environment for Multiple Information Sources); MOMIS creates a global virtual view of information sources, independent of their location and heterogeneity [10]. The discovery of mappings has been studied by means of general methods often derived from other fields. One such approach is graph comparison, which comprises a class of techniques that represent the source and target ontologies (or schemas) as graphs and try to exploit graph structure properties to establish correspondences. Similarity flooding [11] and AnchorPrompt [12] are examples of such approaches. Machine learning techniques have also been used. One example is GLUE [13], where multiple learners look for correspondences among the taxonomies of two given ontologies, based on the joint probability distribution of the concepts involved and a probabilistic model for combining the results of the different learners. Another example is OMEN (Ontology Mapping Enhancer) [14], a probabilistic mapping tool using a Bayesian net to enhance the quality of the mappings. Linguistic analysis is also quite relevant, as linguistic approaches exploit the names of the concepts and other natural language features to derive information about potential mates for a mapping definition. For example, in [15] a weighted combination of similarities of features in OWL concept definitions is used to define a metric between concepts. Other studies in this area include ONION [16] and Prompt [17], which use a combination of interactive specifications and heuristics to propose potential mappings. Similarly, [18] uses a Bayesian approach to find mappings between classes based on text documents classified as exemplars of these classes. In Athena (ST-507849), a large European IST project, two different technologies have been applied to support model mapping. Semantic mapping involves the application of an ontology; however, the current literature does not provide a detailed description of how this is to be done, as pointed out by [19] and [20]. In Athena, a solution has been proposed based on semantic annotation (the A* tool), reconciliation rule generation (the Argos tool) and a reconciliation execution engine (Ares). In parallel, Athena also proposed a model-based approach, based on a graphic tool (Semaphore) aimed at supporting the user in specifying the mappings, and on XSLT-based transformation rules.
Other European projects addressing mapping issues include the IST FP6 projects SWAP (IST-2001-34103) [21], SEKT (IST-2003-506826) [22], and DotKom (IST-2001-34038) [23].
3 Mapping of Business Schemata

When analyzing semantic relations between business schemata, we follow the approach of A* [24] to obtain a neutral representation of the schemata first. In a subsequent step this neutral representation is processed to identify mappings. These steps are discussed in the following two sections.

3.1 Obtaining a Neutral Schema Representation
The proposed mapping process works on a neutral representation, which abstracts from the specific syntax and data model of a particular business schema definition. Therefore, all incoming business schemata first need to be expressed in this neutral format. Fig. 1 shows the steps of this acquisition process.
Fig. 1. Schema acquisition process
Firstly, the incoming schema is expressed in terms of a corresponding structural ontology. Several parseable and non-parseable schema formats are already analyzed and supported, namely relational databases, XML Schema, EDIFACT-like EDI environments and flat-file representations. For each of these formats a specific structural ontology is defined [25]. Then, in a second step, the model-specific structural ontology representation is transformed into a neutral representation based on the Logical Data Model. This transformation can be automated by applying a set of predefined rules.
3.2 Identification of Mappings
Once the schema information has been acquired and expressed in the unified model, further analysis and/or processing can be performed to identify a set of mappings between semantic entities being used in different business schemata. The goal is to provide such sets of mappings as input to translator tools to achieve interoperability between dispersed systems without modifying the involved schemata. The definition of the mappings is done through the acquisition of the crucial features in the schemata of the source and target, giving them a conceptual representation, enriching this representation with semantic annotations and then using the system functionalities to synthesize and refine the mappings. An overview of this process is given in Fig. 2.
Fig. 2. Mapping process
As shown in Fig. 2, the neutral representation of incoming schemata provides the basis for the identification of the relevant semantic entities on which the mapping process operates. This step is labeled "extraction", and the resulting semantic entities form the A-Box of the LDM Ontology. Beyond the schema element it identifies, a semantic entity holds metadata such as annotations, example values, keywords, owner and access information, etc. The analysis of the information encapsulated in semantic entities can support the identification of mapping candidates. A more advanced way to identify mappings between semantic entities is to derive them through reasoning on aligned ontologies. For this purpose the semantic entities need to be annotated with respect to some ontology, as proposed in A*. Based on the annotations made with respect to the ontologies and on the logical relations identified between these ontologies, reasoning can identify correspondences on the semantic entity level and support the mapping process. Beyond the capability of A*, this reasoning can also benefit from the conceptual information derived from the LDM Ontology, because all semantic entities carry this extra information by being instances of the concepts of the LDM Ontology.
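As an illustration of the semantic entity notion, the following sketch shows one possible in-memory representation; all field names beyond those listed in the text, and the example instance, are illustrative assumptions, not the actual STASIS data structures:

# Minimal sketch of a semantic entity as described above (illustrative only).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SemanticEntity:
    name: str                       # element identified in the business schema
    ldm_concept: str                # LDM Ontology concept it instantiates
    annotations: List[str] = field(default_factory=list)   # ontology concepts
    example_values: List[str] = field(default_factory=list)
    metadata: Dict[str, str] = field(default_factory=dict) # owner, access, ...

# A semantic entity that could be extracted from the "Order" example of Section 4.2:
order_number = SemanticEntity(
    name="Order.number",
    ldm_concept="SimpleNode",
    annotations=["po:OrderIdentifier"],          # hypothetical annotation
    example_values=["2008-0001"],
    metadata={"owner": "sales", "access": "internal"},
)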
4 Logical Data Model Ontology

This section contains a general description of the Logical Data Model Ontology followed by an example to demonstrate its main characteristics.

4.1 General Description of the Model
The LDM Ontology contains generic concepts abstracting from syntactical aspects and different data models. As an intuitive example, in the relational model a foreign key expresses a reference between two tables; at a more abstract level we can consider the two tables as nodes of a graph and the foreign key as an edge from one table to the other; more precisely, this is a directed (since a foreign key has a "direction") and labeled (since we want to distinguish two foreign keys between the same pair of tables) edge. In this way, the LDM Ontology corresponds to a graph with directed labeled edges, and it has the following types of concepts:

1. The Nodes of the graph, which are partitioned into SimpleNodes and ComplexNodes.
2. The edges of the graph, which represent Relationships between Nodes. The following types of Relationships can exist:
   - Reference: a Reference is a directed labeled edge between ComplexNodes.
   - Identification: a ComplexNode can be identified by a SimpleNode or a set of SimpleNodes.
   - Containment: a ComplexNode can contain other Nodes, SimpleNodes and/or ComplexNodes.
   - Qualification: a Node can be qualified by a SimpleNode.
   - Inheritance: Inheritance can exist between ComplexNodes.
The LDM Ontology has been represented as an OWL ontology. An overview of the concepts and their relations in the ontology is shown in Fig. 3. A detailed description of the LDM Ontology is provided by [25].
Fig. 3. Overview of the concepts in the LDM Ontology
4.2 Demonstration Example
In this section an example is introduced to show how a relational database schema is first represented in terms of a structural ontology and then transformed into an LDM Ontology representation by means of the respective transformation rules. For the relational case the structural ontology has to provide concepts for the terms Database, Relation and Attribute, and a property consistsOf to create a hierarchy involving them. For this purpose the structural ontology contains the concepts Catalogue, Table and Column and the object property hasColumn. Consider the relational schema in Fig. 4. Expressed in terms of the structural ontology for the relational case (hereafter referred to as the relational structural schema) there are two Tables: Table "Order" and Table "OrderLine", with their Columns "number", "date", "customerID" and "articleNumber", "quantity", "lineNumber", "orderNumber" respectively. Additionally, the Column "number" is declared the PrimaryKey of the "Order" Table and the Column "lineNumber" the PrimaryKey of the Table "OrderLine". Further, an "OrderLine" is connected to one specific "Order" via the ForeignKey reference "FK_OrderLine_Order".
Fig. 4. Part of an exemplary relational database schema
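As an illustration, the example of Fig. 4 can be captured with the structural-ontology vocabulary introduced above (Table, Column, hasColumn, PrimaryKey, ForeignKey); the following plain-Python sketch is only illustrative and is not the STASIS implementation:

# Sketch: relational structural ontology instances for the Order/OrderLine
# example (toy structures; names follow the text, everything else assumed).
class Column:
    def __init__(self, name):
        self.name = name

class Table:
    def __init__(self, name, columns, primary_key):
        self.name = name
        self.columns = [Column(c) for c in columns]   # hasColumn property
        self.primary_key = primary_key                # KeyConstraint (PrimaryKey)

class ForeignKey:
    def __init__(self, name, source, target):
        self.name, self.source, self.target = name, source, target

order = Table("Order", ["number", "date", "customerID"], "number")
order_line = Table("OrderLine",
                   ["articleNumber", "quantity", "lineNumber", "orderNumber"],
                   "lineNumber")
fk = ForeignKey("FK_OrderLine_Order", order_line, order)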
The next step towards an LDM Ontology representation is to apply transformation rules to the structural ontology representation. A brief overview of the transformation rules is presented in Table 1. Due to space limitations the table only gives an intuition of the rules; their detailed explanation is given in [25]. In general, in the LDM Ontology all Tables are represented as ComplexNodes, Columns as SimpleNodes, and so on.

Table 1. Transformation rules from the relational structural ontology into an LDM Ontology representation

Entity in the relational structural ontology | Entity in the Logical Data Model | Comments
Table         | ComplexNode    | All Tables are represented as ComplexNodes
Column        | SimpleNode     | All Columns are represented as SimpleNodes
KeyConstraint | Identification | All KeyConstraints (i.e. PrimaryKeys and AlternativeKeys) are represented as Identifications
ForeignKey    | Reference      | All ForeignKeys are represented as References
hasColumn     | Containment    | The relationship between a Table and its Columns is represented as a Containment relationship
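Following Table 1, the transformation can be applied mechanically. The sketch below, which assumes the toy Table/ForeignKey structures from the previous listing, illustrates how such a rule set might be automated; it is not the actual rule engine of [25]:

# Applying the transformation rules of Table 1 to the structures above.
def to_ldm(tables, foreign_keys):
    facts = []
    for t in tables:
        facts.append(("ComplexNode", t.name))                    # rule: Table
        for c in t.columns:
            facts.append(("SimpleNode", c.name))                 # rule: Column
            facts.append(("Containment", t.name, c.name))        # rule: hasColumn
        facts.append(("Identification", t.name, t.primary_key))  # rule: KeyConstraint
    for f in foreign_keys:
        facts.append(("Reference", f.source.name, f.target.name, f.name))
    return facts

for fact in to_ldm([order, order_line], [fk]):
    print(fact)   # e.g. ('Reference', 'OrderLine', 'Order', 'FK_OrderLine_Order')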
The application of the transformation rules leads to an LDM Ontology based representation of the example as shown in Fig. 5. The notation used in this figure is described by [25].
Fig. 5. LDM Ontology representation of the exemplary schema
According to the graphical representation in Fig. 5 the example schema contains two ComplexNodes “Order” and “OrderLine”. For each Column a SimpleNode is introduced and connected with its Table/ComplexNode via the Containment relation. Identification relations are defined for the PrimaryKeys “number” and “lineNumber”. The ForeignKey “FK_OrderLine_Order” is transformed to a Reference “belongsTo”.
5 Ontology-driven Semantic Mapping

As discussed in Section 3, mappings between semantic entities can be derived from annotations linking the semantic entities to concepts of an ontology. Annotating semantic entities with respect to an external ontology means that additional machine-processable knowledge is associated with them. As in A*, the ontology-driven process of deriving correspondences between semantic entities belonging to different schemata makes use of this additional knowledge. Our approach additionally benefits from structural knowledge about the data model, represented by linking the semantic entities to the concepts of the LDM Ontology. When the annotation of semantic entities belonging to different schemata is based on one common ontology and the LDM Ontology (see Fig. 6), the annotations can directly facilitate the discovery of semantic relations between the semantic entities. The definition of a semantic link specification (SLS) is based on [26]. The following semantic relations between semantic entities of two business formats are defined: equivalence (EQUIV); more general (SUP); less general (SUB); disjointness (DISJ). As in [26], when none of the relations holds, the special IDK (I do not know) relation is returned. Notice that IDK is an explicit statement that the system is unable to compute any of the declared (four) relations. This should be interpreted as meaning either that there is not enough background knowledge, and therefore the system cannot
explicitly compute any of the declared relations, or that, indeed, none of those relations holds in the given application. The semantics of the above relations are the obvious set-theoretic semantics.
Fig. 6. Ontology-based schema mapping with a single common ontology
More formally, an SLS is a 4-tuple ⟨ID, semantic_entity1, R, semantic_entity2⟩, where ID is a unique identifier of the given mapping element; semantic_entity1 is an entity of the first format; R specifies the semantic relation which may hold between semantic_entity1 and semantic_entity2; and semantic_entity2 is an entity of the second format. Our discussion is based on examples. To this end we consider the following two business formats: semantic_entity1 from a business format 1 (bf1), whose graphical representation is shown in Fig. 7, and semantic_entity2 from another business format 2 (bf2), shown in Fig. 8. We consider the annotation of the above business formats with respect to the Purchase_order ontology (see Fig. 9).
Fig. 7. Business format specification (bf1) (derived from relational schema)
Fig. 8. Business format specification (bf2) (derived from an XML schema)
Fig. 9. The ontology of Purchase_order
The proposed "Ontology-based schema mapping with a single common ontology" is based on the annotation of a business format with respect to this single common ontology. Here we will use the following proposal. An annotation element is a 4-tuple ⟨ID, SE, R, concept⟩, where ID is a unique identifier of the given annotation element; SE is a semantic entity of the business format; R specifies the semantic relation which may hold between SE and concept; and concept is a concept of the ontology. The proposal is to use the following semantic relations between semantic entities of the business format and the concepts of the ontology: equivalence (AR_EQUIV); more general (AR_SUP); less general (AR_SUB); disjointness (AR_DISJ). Let us give some examples of annotation; in the examples, the unique identifier ID is omitted:

- (bf2:Address, AR_EQUIV, O:Address) may be considered as the output of automatic annotation
- (bf2:Address, AR_SUB, O:Address) may be considered as the output of a ranked automatic annotation/search: the AR_SUB relation is used instead of AR_EQUIV since the rank is less than a given threshold
- (bf2:Address, AR_EQUIV, O:Address and Billing-1.Purchase_Order) may be considered as a refinement by the user of (bf2:Address, AR_EQUIV, O:Address) to state that the address in the business format is exactly the "address of the Billing in a Purchase_Order"
Let us also consider the following possible annotations of bf1:
- (bf1:Address, AR_EQUIV, O:Address)
- (bf1:Address, AR_SUB, O:Address)
- (bf1:Address, AR_EQUIV, O:Address and Shipping-1.Purchase_Order)
- (bf1:Address, AR_DISJ, O:Address and Billing-1.Purchase_Order)
Now, some examples of the SLS derived from annotations will be discussed. To this end, let us suppose that bf2 contains the following annotation for address: (bf2:Address, AR_EQUIV, O:Address). We want to discuss which SLS is derived between bf2:Address and bf1:Address, by considering the following cases for the Address annotation in bf1.

Case 1) (bf1:Address, AR_EQUIV, O:Address). The following SLS can be derived: (bf1:Address, EQUIV, bf2:Address).

Case 2) (bf1:Address, AR_SUB, O:Address). The following SLS can be derived: (bf1:Address, SUB, bf2:Address).

Case 3) (bf1:Address, AR_EQUIV, O:Address and InverseOf(Shipping).Purchase_Order). The following SLS can be derived: (bf1:Address, SUB, bf2:Address), since Address and InverseOf(Shipping).Purchase_Order (the annotation of bf1:Address) is subsumed by Address (the annotation of bf2:Address).

This shows how the semantic mapping can be derived from the semantic entity specification. The information of the linkage to the LDM Ontology is used in the same way. One topic is still open: a possible extension [to be evaluated] with respect to the framework of [26] is the addition of the overlapping (OVERLAP) semantic relation. Formally, we need to evaluate whether with OVERLAP we can decide IDK relations; moreover, we need to prove that with OVERLAP "relations are ordered according to decreasing binding strength".
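The case analysis can be mechanised. The following sketch illustrates the derivation of an SLS relation from two annotation elements that refer to the same ontology concept; the composition table is a simplified illustration of the underlying DL reasoning, not the actual STASIS implementation:

# Sketch: deriving an SLS relation from two annotations on one concept.
# AR_* relate an entity to the concept; the result relates entity to entity.
COMPOSE = {
    ("AR_EQUIV", "AR_EQUIV"): "EQUIV",
    ("AR_SUB",   "AR_EQUIV"): "SUB",    # e1 below the concept, e2 equal to it
    ("AR_EQUIV", "AR_SUB"):   "SUP",
    ("AR_SUB",   "AR_SUB"):   "IDK",    # both below: nothing can be concluded
    ("AR_DISJ",  "AR_EQUIV"): "DISJ",
    ("AR_EQUIV", "AR_DISJ"):  "DISJ",
}

def derive_sls(entity1, rel1, entity2, rel2):
    # Both annotations are assumed to refer to the same ontology concept.
    return (entity1, COMPOSE.get((rel1, rel2), "IDK"), entity2)

# Case 2 of the text: bf1 annotated AR_SUB, bf2 annotated AR_EQUIV to O:Address
print(derive_sls("bf1:Address", "AR_SUB", "bf2:Address", "AR_EQUIV"))
# -> ('bf1:Address', 'SUB', 'bf2:Address')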
6 Discussion and Future Research

We provide a joint approach that integrates the benefits of the MOF- and ontology-based semantic mapping methods. Model entities of business formats/standards are described by a generic meta model which is made explicit by an ontology, called the Logical Data Model Ontology. By annotating these semantic entities with respect to business ontologies, an enriched knowledge base is available for reasoning on semantic links to align the entities of business formats. These technologies are going to be integrated in an interoperability framework to share the semantic information in
peer groups, to enrich the semantic basis for cooperation. This enhances the common ontology, providing an even better basis for the mapping process. This is accompanied by further approaches to simplify the definition of ontologies, their linkage to semantic entities (annotation), and the verification of the jointly generated semantic net.
Acknowledgments

The STASIS IST project (FP6-2005-IST-5-034980) is sponsored under the EC 6th Framework Programme.
References

[1] Kleppe, A., Warmer, J., Bast, W.: MDA Explained: The Model Driven Architecture - Practice and Promise. Addison-Wesley, Boston (2003)
[2] Rumbaugh, J., Jacobson, I., Booch, G.: The Unified Modeling Language Reference Manual. Second Edition, Addison-Wesley, Boston (2005)
[3] Karagiannis, D., Kühn, H.: Metamodelling Platforms. In Bauknecht, K., Min Tjoa, A., Quirchmayr, G. (eds.): Third Int. Conference EC-Web 2002, p. 182. Springer, Berlin (2002)
[4] W3C Semantic Web Activity, http://www.w3.org/2001/sw/ last accessed 2007-10-24
[5] Ehrig, M., Haase, P., Hefke, M., Stojanovic, N.: Similarity for Ontologies - a Comprehensive Framework. In Workshop Enterprise Modelling and Ontology: Ingredients for Interoperability, PAKM 2004 (2004)
[6] Weinstein, P., Birmingham, W.P.: Comparing Concepts in Differentiated Ontologies. In 12th Workshop on Knowledge Acquisition, Modelling, and Management (KAW99), Banff, Alberta, Canada (1999)
[7] Choi, N., Song, I-Y., Han, H.: A Survey of Ontology Mapping. In SIGMOD Record 35, Nr. 3 (2006)
[8] Mädche, A., Motik, B., Silva, N., Volz, R.: MAFRA - A Mapping Framework for Distributed Ontologies. In 13th Int. Conf. on Knowledge Engineering and Knowledge Management, vol. 2473 in LNCS, pp. 235--250 (2002)
[9] Rahm, E., Bernstein, P.: Survey of Approaches to Automatic Schema Matching. VLDB Journal 10 (2001)
[10] Beneventano, D., Bergamaschi, S., Guerra, F., Vincini, M.: Synthesizing an Integrated Ontology. IEEE Internet Computing, September-October (2003)
[11] Melnik, S., Garcia-Molina, H., Rahm, E.: Similarity Flooding: A Versatile Graph Matching Algorithm and its Applications to Schema Matching. In 18th International Conference on Data Engineering (ICDE-2002), San Jose, California (2002)
[12] Noy, N.F., Musen, M.A.: Anchor-PROMPT: Using non-local context for semantic matching. In Workshop on Ontologies and Information Sharing at the 17th International Joint Conference on Artificial Intelligence (IJCAI-2001), Seattle, WA, US (2001)
[13] Doan, A., Madhavan, J., Domingos, P., Halevy, A.: Learning to map between ontologies on the semantic web. In The 11th Int. WWW Conference, Hawaii, USA (2002)
[14] Mitra, P., Noy, N.F., Jawal, A.R.: OMEN: A Probabilistic Ontology Mapping Tool. LNCS, Vol. 3729/2005, Springer, Berlin (2005)
[15] Euzenat, J., Valtchev, P.: Similarity-based ontology alignment in OWL-Lite. In The 16th European Conference on Artificial Intelligence (ECAI-04), Valencia, Spain (2004)
[16] Mitra, P., Wiederhold, G., Decker, S.: A scalable framework for interoperation of information sources. In SWWS01, Stanford University, Stanford, CA, US (2001)
[17] Noy, N.F., Musen, M.A.: The PROMPT suite: Interactive tools for ontology merging and mapping. International Journal of Human-Computer Studies, 59, pp. 983--1024 (2003)
[18] Prasad, S., Peng, Y., Finin, T.: A tool for mapping between two ontologies using explicit information. In AAMAS 2002 Ws on Ontologies and Agent Systems, Bologna, Italy (2002)
[19] Wache, H., Vögele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., Hübner, S.: Ontology-Based Integration of Information - A Survey of Existing Approaches. In: IJCAI-01 Workshop Ontologies and Information Sharing (IJCAI01), pp. 108--118 (2001)
[20] Gómez-Pérez, A., Fernández-López, M., Corcho, O.: Ontological Engineering with examples from the areas of Knowledge Management, e-Commerce and the Semantic Web. Springer (2004)
[21] SWAP Project Web Site, http://swap.semanticweb.org/ last accessed 2007-10-24
[22] SEKT Project Web Site, http://www.sekt-project.com/ last accessed 2007-10-24
[23] DotKom Project Web Site, http://nlp.shef.ac.uk/dot.kom/ last accessed 2007-10-24
[24] Callegari, G., Missikoff, M., Osimi, N., Taglino, F.: Semantic Annotation Language and Tool for Information and Business Processes, Appendix F: User Manual. ATHENA Project Deliverable D.A3.3 (2006), available at http://lekspub.iasi.cnr.it/Astar/AstarUserManual1.0 last accessed 2007-10-24
[25] STASIS Project Web Site, http://www.stasis-project.net/ last accessed 2007-10-24
[26] Giunchiglia, F., Yatskevich, M., Shvaiko, P.: Semantic Matching: Algorithms and Implementation. Journal on Data Semantics (JoDS), IX, LNCS 4601, pp. 1--38 (2007)
Self-Organising Service Networks for Semantic Interoperability Support in Virtual Enterprises

Alexander Smirnov, Mikhail Pashkin, Nikolay Shilov, Tatiana Levashova

St.Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, 39, 14 Line, 199178 St.Petersburg, Russia
{smir, michael, nick, oleg}@iias.spb.su
Abstract. Virtual enterprises consist of a number of independent distributed members, which have to collaborate in order to succeed. The paper proposes an approach to the creation of self-organising service networks to support semantic interoperability between virtual enterprise members. Since centralized control is not always possible, the presented approach relies on decentralized communication and on ad-hoc decision making based on the current situation state and its possible future development, using self-organising networks of knowledge sources and problem solvers. The paper is devoted to questions of semantic interoperability in such agent-based service networks. Ontologies are used for the description of knowledge domains, and the application of object-oriented constraint networks is proposed for ontology modelling. The approach uses such technologies as knowledge and ontology management, profiling, intelligent agents, Web-services, etc.

Keywords: Self-organisation, semantic interoperability, agent, service, ontology, context
1 Introduction

Nowadays, complex decision making faces problems of managing and sharing huge amounts of information and knowledge from distributed and heterogeneous sources (experts, electronic documents, real-time sensors, etc.) belonging to virtual enterprise members, of personalization, and of the availability of up-to-date and accurate information provided by the dynamic environment. The problems include searching for the right sources, extracting content, presenting results in a personalized way, and others. As a rule, the content of several sources has to be extracted and processed (e.g., fused, converted, checked) to produce the required information. Due to such factors as different data formats, interaction protocols and others, this leads to a problem of semantic interoperability.
Ontologies are widely used for problem domain description in modern information systems to support semantic interoperability. An ontology is an explicit specification of a structure of a certain domain. It includes a vocabulary for referring to notions of the subject area, and a set of logical statements expressing the constraints existing in the domain and restricting the interpretation of the vocabulary [1]. Ontologies support the integration of resources that were developed using different vocabularies and different perspectives on the data. To achieve semantic interoperability, systems must be able to exchange data so that the precise meaning of the data is readily accessible and the data itself can be translated by any system into a form that it understands [2].
Centralized control in complex distributed systems is not always possible: for example, virtual enterprises consist of independent companies and do not have a central decision making unit. Thus, decentralized self-organisation of distributed independent components is a promising architecture for such systems [3], [4], [5]. However, in order for the self-organisation to operate it is necessary to solve a number of problems including: (i) registration and cancelling of registration of network elements, (ii) preparation of the initial state, (iii) self-configuration: finding appropriate network elements [6], negotiation of conditions and assignment of links, and preparation of alternative configurations. Different research projects are devoted to the self-management of such networks: self-contextualization, -optimization, -organization, -configuration, -adaptation, -healing, -protection [7].
The following major requirements for the approach to virtual enterprise interoperability support have been selected (some of the decision making processes in virtual enterprises have been identified in [8]): (i) intensive information exchange, (ii) distributed architecture, (iii) decentralised control, (iv) semantic-based information processing, (v) ad-hoc decision making support based on the current situation state and its possible future development. Self-configuration of heterogeneous sources providing knowledge, together with tools for processing this knowledge, is the basic idea of the presented approach. The developed methodology proposes the integration of environmental information in a certain context. The context is any information that can be used to characterize the situation of an entity, where an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves [9]. The context is intended to represent only the relevant part of the large amount of available information; relevance is evaluated on the basis of how the information relates to the modelling of the ad-hoc problem. The problems already solved and still to be solved include interoperability at both the technological and the semantic level, situation understanding by the members via information exchange, and protocols of ad-hoc decision making for self-organization. The proposed technological framework incorporates such technologies as situation management, knowledge and ontology management, profiling, Web-services, decision support, and negotiation protocols. The proposed methodology is based on the earlier developed concept of knowledge logistics [10] and includes such technologies as situation management, ontology management, profiling and intelligent agents [8].
Standards of information exchange (e.g., Web-service standards), negotiation protocols, decision making rules, etc. are used for information exchange and the rapid
establishment of ad-hoc partnerships and agreements between the operation members. In the second section of the paper the developed methodology is presented. The technological framework is described in the third section. Some results are summarised in the Conclusion.
2 Proposed Approach

The main idea of the approach is to represent virtual enterprise members by sets of services provided by them (Fig. 1). This makes it possible to replace the interoperability between virtual enterprise members with that between their services.
Fig. 1. Representation of virtual enterprise members by sets of services.
At the first stage of the research, the lifecycle phases of the self-configuring network and the major requirements for them were defined (Table 1). Based on these requirements, the main ideas the approach builds on were formulated:

1. A common shared top-level ontology (application ontology) serves for terminology unification. Each service holds a fragment of this ontology corresponding to its capabilities / responsibilities. This fragment is synchronized automatically when necessary (not during the operation).
2. Each service has a profile describing its capabilities and the appropriate ontological model.
3. Each service is assigned an intelligent agent representing it (together they will be called an "agent-based service"). The agent collects information required for situational understanding by the service and negotiates with other agents to create ad-hoc action plans. The agent has predefined rules to be followed during negotiation processes; these rules depend on the role of the appropriate member.
4. Web-service standards are used for interactions. External sources (e.g., medical databases, transport availability, weather forecasts) should also support these standards and the terminology defined by the application ontology. This is achieved by developing services for each particular source.
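As an illustration of ideas 2-4 above, the following sketch shows the kind of profile an agent-based service might publish and the registration message it would send; the message format and all field names are assumptions for illustration only, since the paper does not fix a concrete data format:

# Sketch of an agent-based service profile and its registration message.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceProfile:
    service_id: str
    kind: str                     # "KSS" (knowledge source) or "PSS" (problem solver)
    ontology_slice: List[str] = field(default_factory=list)  # application ontology elements
    capabilities: List[str] = field(default_factory=list)
    negotiation_role: str = "default"   # determines the predefined negotiation rules

def registration_message(profile):
    # Message the agent would send to the Service Directory Facilitator (SDF)
    return {"performative": "register", "sender": profile.service_id,
            "content": profile}

gis = ServiceProfile("gis-01", kind="KSS",
                     ontology_slice=["RoadNetwork"],
                     capabilities=["provide-local-road-network"])
print(registration_message(gis)["performative"])   # register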
Table 1. Lifecycle phases for the self-configuring network, its needs and services to fulfil them

Life cycle phase | Needs | Services
Community building (once; new members are added on a continuous basis) | Common infrastructure; common communication standards and protocols; common knowledge representation | Modelling goals and objectives; identification, qualification, registration of members; common modelling for community members
Formation (continuous, initiated by the situation, or a task as a part of the situation) | Task definition model (context); rules of partner selection | Task modelling; partner selection
Operation (continuous) | Rules of re-negotiation and solution modification if necessary | Coordination and synchronization
Discontinuation (continuous, initiated by members) | Termination of the established agreements | Update of the current solution
The developed methodology proposes a two-level framework of context-driven information integration for decision making. The first level addresses the activities of the pre-starting procedure of the system, such as the creation of semantic models for its components (Fig. 2), accumulating domain knowledge, linking domain knowledge with the information sources, creating an application ontology describing a macro-situation, and indexing the set of available e-documents against the application ontology. This level is supported, if required, by subject experts and by knowledge and ontology engineers. The second level focuses on decision making supported by the system. This level addresses the recognition of the problem presented by a user request, context creation, identification of relevant knowledge sources, generation of a set of problem solutions, and the making of a decision by the user.
Fig. 2. Models for system components.
The internal knowledge representation is supported by the formalism of object-oriented constraint networks (OOCN) [11]. All the system components and contexts are represented by means of this formalism. According to the formalism, the application ontology is represented as sets of classes, class attributes, attribute domains, and constraints. The set of constraints comprises constraints describing the "class, attribute, domain" relation; constraints representing structural relations such as the hierarchical relationships "part-of" and "is-a", classes compatibility, associative relationships, and class cardinality restrictions; and constraints describing functional dependencies. Below, examples of some constraints are given:

- "class, attribute, domain" relation: the attribute cost belongs to the class component and takes positive values;
- hierarchical relationship "is-a": the body production facility is a resource;
- hierarchical relationship "part-of": an instance of the class component can be a part of an instance of the class product;
- associative relationship: an instance of the class body can be connected to an instance of the class body production facility;
- classes compatibility: the class body is compatible with the class body production facility;
- functional dependence: the value of the attribute cost of an instance of the class body production facility depends on the values of the attribute cost of the instances of the class component connected to it and on the number of such instances.
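To make the formalism concrete, the following sketch encodes the example constraints above in a plain data structure; the encoding and the concrete dependency formula are illustrative assumptions (see [11] for the actual formalism):

# Minimal sketch of an object-oriented constraint network for the examples above.
ontology = {
    "classes": ["resource", "component", "product", "body",
                "body production facility"],
    "attributes": {("component", "cost"): "positive float"},  # class-attribute-domain
    "is_a": [("body production facility", "resource")],       # hierarchical "is-a"
    "part_of": [("component", "product")],                    # hierarchical "part-of"
    "compatible": [("body", "body production facility")],     # classes compatibility
}

def facility_cost(connected_component_costs):
    # Functional dependence: the facility cost depends on the costs and the
    # number of connected component instances (the formula itself is assumed).
    return sum(connected_component_costs) * len(connected_component_costs)

print(facility_cost([10.0, 12.5]))   # 45.0 under this assumed dependency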
Fig. 3 represents the macro-level taxonomy of the application ontology for a virtual enterprise built by domain experts. The represented classes are the main concepts for the production network configuration problem.
Fig. 3. Taxonomy of the application ontology for a virtual enterprise.
3 Technological Framework

The generic scheme of a self-organizing service network is presented in Fig. 4. Each enterprise member is represented by an intelligent agent acting in the system. The architecture of the agent is presented in Fig. 5. Each agent has its own knowledge stored in its knowledge base and is described by a portion of the common shared application ontology. The part of this knowledge related to the agent's (and member's) current tasks and capabilities is called the "context" and is stored separately to provide for faster task performance (only relevant information is processed). Capabilities, preferences and other information about the agent are stored in its profile, which is available for viewing by other agents of the community. This facilitates communication, which is performed via the communication module responsible for meeting the protocols and standards used within the community.
Fig. 4. Generic scheme of a self-organising network.
Fig. 5. Agent architecture.
The agents communicate with other agents for two main purposes: (1) they establish links and exchange information for better situation awareness; and (2) they negotiate and make agreements to coordinate their activities during the operation. The agents may also get information from various information sources; for example, the local road network for transportation can be acquired from a geographical information system (GIS). To make agent-based services independent, a component called the Service Directory Facilitator (SDF) has been proposed. It could be hosted by independent or governmental organisations (for example, CLEPA [12] or the regional associations existing in some European countries). The SDF is responsible for the registration and update of autonomous services (Fig. 6). Initially an agent representing a service does not have any information about neighbouring services, and its slice of the application ontology is empty. An automatic tool or an expert assigns it a set of the application ontology elements related to the service. For knowledge source services these could be classes which can be reified, or attributes whose values can be defined using the content of the knowledge sources. For problem solving services these could be tasks and methods existing in the problem domain.
PSS – Problem Solving Service; KSS – Knowledge Source Service
Fig. 6. Registration and update of services.
Agents inform the SDF about their appearance, modifications, and intention to leave the community, and send scheduling messages to update the internal repository. The task of the SDF is to build a slice of the application ontology for each service, update references to the neighbouring services, and support the list of services (type, location, schedule, and notification of services about changes in the network). The organization of references between services is a complex task and is out of the scope of this paper. Well-organized references result in a list of services which are used as initiators of the self-organisation process. Knowledge source services are responsible for (i) representing the information provided by knowledge sources by means of the OOCN formalism, (ii) querying and extracting content from knowledge sources, (iii) transferring the information, (iv) information integration, and (v) data conversion. Two types of strategies for interaction with knowledge sources have been proposed:

- Pull – knowledge sources deliver content when it is requested. A temperature sensor can be mentioned as an example of such an information source: it measures temperature and supplies this information when needed (Fig. 7, left part).
- Push – knowledge sources send content to a certain Web-service. For example, a fire alarm sensor can, in case of fire, send this information to a special agent that would activate a corresponding scenario (Fig. 7, right part).

Fig. 7. Pull (left part) and push (right part) strategies.
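A minimal sketch of the two strategies follows; the classes below are illustrative stand-ins, since real sources would interact through SOAP-based Web-services as shown in Fig. 4:

# Sketch: pull vs. push interaction with a knowledge source.
class PullSensor:
    """Pull: content is delivered only when requested."""
    def __init__(self):
        self._temperature = 21.5
    def request(self):
        return {"reply": self._temperature}

class PushSensor:
    """Push: content is sent to a subscriber when an event occurs."""
    def __init__(self, subscriber):
        self.subscriber = subscriber
    def fire_detected(self):
        self.subscriber({"inform": "fire-alarm"})   # would activate a scenario

print(PullSensor().request())                        # {'reply': 21.5}
PushSensor(lambda msg: print(msg)).fire_detected()   # {'inform': 'fire-alarm'}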
When a situation that requires some action occurs, an appropriate context is built. The initiators of the self-organisation process receive a notification. Using references to neighbouring services and the developed rules / protocols, an appropriate
network organization is built to produce a solution. In Fig. 8 a simplified example of a self-organizing network is presented. The initiators are the agents representing the services KSS1, KSS2, and PSS1. The agents of KSS2 and PSS1 delegate the task of participation in problem solving to the agents of KSS3 and PSS2. The content of the knowledge sources presented by services KSS1 and KSS3 is sufficient, and the solvers presented by services PSS1 and PSS2 process this content. When the configuration is finished, peer-to-peer interactions between the members of the network take place.
4 Conclusion

The paper presents an approach and its technological framework for semantic interoperability in virtual enterprises. It is proposed that self-organization can resolve problems arising from failures of centralized control.
PSS – Problem Solving Service; KSS – Knowledge Source Service
Fig. 8. Self-organizing of service network.
Ontologies have been proposed for the description of problem domain knowledge. In accordance with the selected model of interoperability, services directly connect and exchange information with one another, but service descriptions (in this paper, descriptions of the knowledge source content and of the tasks that can be solved) are mapped to the common shared application ontology. Further research activities will address the refinement of the algorithms for (i) definition of neighbouring services, (ii) selection of the services that initialize the self-organization of the network, (iii) traversal of the network, and (iv) generation and estimation of alternative network configurations. The authors believe that, once completed, the proposed architecture could efficiently work for a range of real-world problems.
Acknowledgments

The paper is due to research projects supported by grants # 05-01-00151, # 06-07-89242, and # 07-01-00334 of the Russian Foundation for Basic Research, projects
# 16.2.35 of the research program "Mathematical Modelling and Intelligent Systems", and # 1.9 of the research program “Fundamental Basics of Information Technologies and Computer Systems” of the Russian Academy of Sciences (RAS).
References

[1] Foundation for Intelligent Physical Agents (FIPA) Documentation, http://www.fipa.org
[2] Heflin, J., Hendler, J.: Semantic Interoperability on the Web. In: Proceedings of Extreme Markup Languages 2000, pp. 111--120. Graphic Communications Association (2000)
[3] Viana, A.C., Amorim, M.D., Fdida, S., Rezende, J.F.: Self-organization in spontaneous networks: the approach of DHT-based routing protocols. Ad Hoc Networks J., special issue on Data Communications and Topology Control in Ad Hoc Networks, vol. 3, no. 5, 589--606 (2005)
[4] Hammer, B., Micheli, A., Sperduti, A., Strickert, M.: Recursive self-organizing network models. Neural Networks, vol. 17, no. 8-9, 1061--1085 (2004)
[5] Nakano, T., Suda, T.: Self-Organizing Network Services with Evolutionary Adaptation. IEEE Transactions on Neural Networks, vol. 16, no. 5 (2005)
[6] Chandran, R., Hexmoor, H.: Delegation Protocols Founded on Trust. In: KIMAS'07: Modeling, Exploration, and Engineering (proceedings of the 2007 International Conference on Integration of Knowledge Intensive Multi-Agent Systems), ISBN 1-4244-0945-4, pp. 328--335, IEEE (2007)
[7] Baumgarten, M., Bicocchi, N., Curran, K., Mamei, M., Mulvenna, M., Nugent, C., Zambonelli, F.: Towards Self-Organizing Knowledge Networks for Smart World Infrastructures. In: Teanfield, H. (ed.) International Transactions on Systems Science and Applications, ISSN 1751-1461, vol. 2, no. 2, 123--133 (2006)
[8] Smirnov, A., Shilov, N., Kashevnik, A.: Constraint-Driven Negotiation Based on Semantic Interoperability in BTO Production Networks. In: Panetto, H., Boudjlida, N. (eds.) Interoperability for Enterprise Software and Applications (proceedings of the Workshops and the Doctoral Symposium of the Second IFAC/IFIP I-ESA International Conference: EI2N, WSI, IS-TSPQ 2006), ISBN-13 978-1-905209-61-3, ISBN-10 1-905209-61-4, pp. 175--186, ISTE Ltd. (2006)
[9] Dey, A.K., Salber, D., Abowd, G.D.: A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. In: Moran, T.P., Dourish, P. (eds.) Context-Aware Computing, A Special Triple Issue of Human-Computer Interaction, 16, Lawrence-Erlbaum (2001)
[10] Smirnov, A., Pashkin, M., Levashova, T., Chilov, N.: Fusion-Based Knowledge Logistics for Intelligent Decision Support in Network-Centric Environment. In: Klir, G.J. (ed.) Int. J. of General Systems, ISSN 0308-1079, vol. 34, no. 6, pp. 673--690, Taylor & Francis (2005)
[11] Smirnov, A., Sheremetov, L., Chilov, N., Sanchez-Sanchez, C.: Agent-Based Technological Framework for Dynamic Configuration of a Cooperative Supply Chain. In: Chaib-draa, B., Müller, J.P. (eds.) Multiagent-Based Supply Chain Management, Series on Studies in Computational Intelligence, vol. 28, ISBN 3540338756, pp. 217--246, Springer (2006)
[12] CLEPA: Comité de liaison de la construction d'équipements et de pièces automobiles (the European Association of Automotive Suppliers), http://www.clepa.be
Semantic Service Matching in the Context of ODSOI Project

S. Izza and L. Vincent

Ecole des Mines de Saint-Étienne, Industrial Engineering and Computer Science Laboratory, OMSI Division, 158 cours Fauriel, 42023 Saint-Etienne Cedex 2, France.
{izza, vincent}@emse.fr
Abstract. Matching services still constitutes a big challenge for most enterprises, and notably for large and dynamic ones. This paper delineates a service similarity approach in the context of the ODSOI (Ontology-Driven Service-Oriented Integration) project, which concerns intra-enterprise integration issues in the field of the manufacturing industry. Our approach is based on an extension of OWL-S service similarity. It proposes a rigorous quantitative ranking method based on some novel semantic similarity degrees. An implementation of this ranking method is provided in the form of a prototype coded on the Java platform, exploiting some existing APIs, mainly the Racer OWL API and the OWL-S API.

Keywords: Information System; Integration; Ontology; Semantics; Service; Similarity; Matching; OWL-S.
1 Introduction

In the last few years, matching of services has been a very active research field in the context of the semantic web in general and of enterprise information systems in particular. Service matching is generally considered as the process by which similarities between a service source (ideally, candidate services) and a service target (ideally, the client's objectives presented in the form of a service template) are calculated. From an architectural point of view, service matching involves three types of stakeholders: (i) service providers, which have the goal of providing particular types of services and publish or advertise their services to enable potential requestors to discover and utilize them; (ii) service requestors, which have the goal of finding services that can accomplish some internal objectives; and (iii) matchmakers, which are middle agents that can be used to find the service providers that match the stated requirements of a requestor [1].
Today, service matching is particularly challenging in the context of intra-enterprise integration because of the large number of available web sources and services, and also because of the different user profiles that may be involved. An efficient matching approach based on a pertinent ranking method becomes necessary in order to correctly facilitate and achieve integration. This paper deals with these issues, and precisely with the service matching approach supporting intra-enterprise integration in the context of the ODSOI (Ontology-Driven Service-Oriented Integration) project [11]. It is organized as follows: Section 2 introduces some previous work in the domain of service similarity and matching. Section 3 presents the main principles of our service matching approach. Section 4 presents our semantic similarity approach. Section 5 presents some preliminary experimental results and lessons learned. Finally, Section 6 presents some conclusions and outlines some future work.
2 Related Work

Semantic matching constitutes an important field that has been widely investigated in the last few years in several areas of research. We present a brief survey of some approaches to semantic matching that are related to the context of matching services. These works are presented following four categories of approaches: (i) concept similarity approaches, (ii) resource matching approaches, (iii) service matching approaches, and (iv) OWL-S service matching approaches.
The first category of approaches concerns concept similarity metrics; the main representative works with respect to ours are those proposed by [20], [19] and [7]. [20] proposed, in the context of semantic nets, a metric measuring the conceptual distance between concepts in hierarchical semantic nets as the minimum number of edges separating the involved concepts. [19] proposed, in the context of measuring the information content of a concept, a metric that associates a probability with the concepts in a hierarchy to denote the likelihood of encountering an instance of a concept. [7] proposed, in the context of a similarity framework for ontologies, a metric measuring the similarity of ontology concepts as an amalgamation function that combines the similarity measures of three complementary layers: (i) the data layer, which measures the similarity of entities by considering the data values of simple or complex data types such as integers and strings; (ii) the ontology layer, which considers and measures the semantic relations between the entities; and (iii) the context layer, which considers how the entities of the ontology are used in some external context, most importantly the application context. Although these approaches are pertinent, within the ODSOI project we exploit a concept similarity approach based on OWL constructs. Another important work related to ours is [4], which proposed a taxonomy-based similarity measure for ontology concepts; this measure, which does not provide an efficient asymptotic behaviour, is improved by our similarity approach.
The second category of approaches is closely related to resource matching, such as text document, schema, and software component matching. Text document matching constitutes a long-standing problem in information retrieval, where most solutions are based on term frequency analysis. Schema matching is based on
methods that try to capture clues about the semantics of the schemas and suggest matches based on them. Such methods include linguistic analysis, structural analysis, and the use of domain knowledge and previous matching experience. Software component matching is an important activity for software reuse and is generally based on the examination of signature matching and specification matching. However, most of the approaches of this category are insufficient in the web service context because of differences in information structure, granularity and coverage.
The third category of works close to ours concerns those that exploit the notion of service similarity in general. [5], [10], [2], [3], [22], [15] and [24] constitute some important works that propose techniques performing service matching by exploiting the notion of service similarity. [5] proposed a metric for similarity search for web services in the context of the Woogle search engine that exploits the structure of the web services and employs a novel clustering mechanism that groups parameter names into meaningful concepts. [10] proposed a metric for measuring the similarity of semantic services annotated with an OWL ontology; similarity is calculated by defining the intrinsic information value of a service description based on the inferencibility of each of the OWL Lite constructs. [2] proposed, in the context of METEOR-S, a service similarity approach that is based on syntactic, operational and semantic information as a way to increase the precision of the match of web services: syntactic similarity is based on the similarity of service names and service descriptions, operational similarity on the functionality of services, and semantic similarity on the similarity of the concepts and properties that define the involved services. [2] proposed a matching algorithm that extended the work presented in [18]; precisely, they extended the subsumption-based matching mechanism by adding information retrieval techniques to find similarities between the concepts when they are not explicitly stated in the ontologies. [22] uses, in the context of the LARKS framework, five different types of matching: context matching, profile comparison, similarity matching, signature matching and constraint matching. [24] proposed an approach to rank services by their relative semantic order by defining three different degrees of matching: (i) exact match (all requests in a demand are available in supply); (ii) potential match (some requests in the demand are not specified in the supply); (iii) partial match (some requests in the demand are in conflict with the supply). Although these approaches present important principles, they do not concern OWL-S services.
Finally, the fourth category of works, the closest to ours, concerns those that discussed semantic discovery using the OWL-S ontology, such as [18], [3], [13] and [8], when the advertisement and the request use terms from the same ontology. Most of these works proposed similar approaches for service discovery using a semantic UDDI registry. They mainly enhanced the semantic search mechanism by enabling users to specify semantic inquiries based on web service capabilities. [18] proposes, in the context of OWL-S services, a service similarity approach based on the use of Service Profile similarity, mainly the similarity of inputs and outputs.
To achieve flexible matching, they define four degrees of service matches: (i) exact, when there is an equivalence or a subsumption relationship between the two concepts; (ii) plug-in, when there is a subsumption relationship
between the two concepts; (iii) subsumes, when there is an inverse subsumption relationship between the two concepts; and (iv) fail, when no subsumption relation can be identified between the two concepts. [3] proposed, in the context of the MAIS project, a service similarity approach that extends the one proposed by [18] and that allows calculating a similarity between a published service and the user request, taking into account the semantics contained in the corresponding OWL-S ontologies. [13] refines the work of [18] and proposed a ranking method for OWL-S services. [8] introduced a complex semantic similarity measure determining the similarity of OWL-S services as a linear combination of the functional similarity of the services and their textual similarity; the functional similarity of services is determined by measuring the semantic similarity between their sets of inputs and outputs, for which they introduced techniques for finding the similarity of the OWL concepts used to annotate the inputs and outputs of the services. Our work is very close to this category of approaches and aims to propose a more rigorous ranking method that may be used for the matching of OWL-S services.
3 Main Principles of the Semantic Service Matching

The ODSOI (Ontology-Driven Service-Oriented Integration) project concerns intra-enterprise integration issues in the context of large and dynamic enterprises, with the aim of defining agile integration architectures. Within this project, service matching constitutes an important point that allows discovering pertinent services in order to compose them into more coarse-grained services and/or useful processes. In our approach [11][12], services are mainly characterized by four elements that are defined using ontology concepts (OWL-DL constructs): context, signature, constraint, and non-functional attributes. The service context is defined using one or more of the following properties: the service classification, which is the service category the considered service belongs to; the service cluster, which is the cluster, in terms of service area and domain, the considered service belongs to; and the enterprise view, which is the enterprise component exposed by the considered service. The service signature includes the inputs and the outputs of the service. The service constraint mainly includes constraints on the preconditions and postconditions of the service. The non-functional service parameters include quality of service and service visibility. In our approach, the defined service matching method improves the traditional matching of services [18], which is mainly based on signature matching. Our matching approach is based on the matching of all four of the above elements that describe enterprise services:
- Service context matching: the source service and the target service must share the same context (i.e. $context_{source} \equiv context_{target}$, where $context_{source}$ and $context_{target}$ are respectively the contexts of the source and the target service).
- Service signature matching: the source service and the target service must present comparable signatures (i.e. $(In_{target} \sqsubseteq In_{source}) \wedge (Out_{source} \sqsubseteq Out_{target})$, where $\sqsubseteq$ is the subsumption operator, $In_{source}$ and $Out_{source}$ are respectively the inputs and the outputs of the source service, and $In_{target}$ and $Out_{target}$ are respectively the inputs and the outputs of the target service).
- Service constraint matching: the source service and the target service must have compatible constraints on their preconditions and postconditions. This means that the following expression must hold: $(precond_{source} \sqsubseteq precond_{target}) \wedge (postcond_{target} \sqsubseteq postcond_{source})$, where $precond_{source}$ and $postcond_{source}$ are the preconditions and postconditions of the source service, and $precond_{target}$ and $postcond_{target}$ are the preconditions and postconditions of the target service.
- Non-functional parameter matching: the source service and the target service must have comparable non-functional parameters, namely quality of service and service visibility. Precisely, the source and the target service must present comparable levels of quality (i.e. $QoS_{source} \sqsubseteq QoS_{target}$, where $QoS_{source}$ and $QoS_{target}$ are respectively the quality of the source and the target service) and compatible visibilities (i.e. $visibility_{source} \sqsubseteq visibility_{target}$, where $visibility_{source}$ and $visibility_{target}$ are respectively the visibility of the source and the target service).
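Taken together, the four criteria act as a conjunctive pre-filter applied before ranking. The following sketch illustrates the idea; the subsumed function is a stub over an explicit table (a real implementation would query a DL reasoner such as Racer), and all service descriptions and names are illustrative assumptions:

# Sketch: conjunctive pre-filter over the four matching elements.
SUBSUMPTIONS = {("PurchaseOrder", "Order")}   # toy subsumption table

def subsumed(a, b):
    # "a is subsumed by b"; stands in for a DL reasoner call.
    return a == b or (a, b) in SUBSUMPTIONS

def matches(source, target):
    return (source["context"] == target["context"]
            and all(subsumed(i, j) for i, j in zip(target["inputs"], source["inputs"]))
            and all(subsumed(o, p) for o, p in zip(source["outputs"], target["outputs"]))
            and subsumed(source["precond"], target["precond"])
            and subsumed(target["postcond"], source["postcond"])
            and source["qos"] >= target["qos"])

src = {"context": "procurement", "inputs": ["Order"], "outputs": ["PurchaseOrder"],
       "precond": "Order", "postcond": "Order", "qos": 3}
tgt = {"context": "procurement", "inputs": ["PurchaseOrder"], "outputs": ["Order"],
       "precond": "Order", "postcond": "Order", "qos": 2}
print(matches(src, tgt))   # True under the toy subsumption table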
4 Semantic Service Similarity

Our approach is based on the calculation of a similarity degree that measures the degree of satisfaction, evaluated using the similarity of ontology concepts. This similarity degree is built from more elementary degrees that quantitatively refine the traditional inference relations between two concepts (equivalence, subsumption, inverse subsumption, intersection, and disjunction) such as those generally proposed in the literature [18].
4.1 Global Semantic Service Similarity

Formally, we propose to calculate the global similarity degree $sim_{global}(S, S')$ between two given services (a source service and a target service) $S$ and $S'$ as a weighted product according to the following formula:

$$sim_{global}(S, S') = \prod_{x \in Filter} sim_x(S, S')^{\omega_x} \qquad (1)$$

where $Filter = \{context, signature, constraint, non\text{-}functional\}$ and $\omega_x$ is a positive real weight, and where $sim_{context}(S, S')$ denotes the similarity degree of the service contexts, $sim_{signature}(S, S')$ the similarity degree of the service signatures, $sim_{constraint}(S, S')$ the similarity degree of the service constraints, and $sim_{non\text{-}functional}(S, S')$ the similarity degree of the non-functional attributes. As shown, this formula may be interpreted as a probability of similarity of the two services $S$ and $S'$. The choice of the weighted product in the formula is not fortuitous: it can be justified by the fact that the global similarity must depend on the performance of all the elementary similarity degrees and not on the excellence of only certain degrees.
simO (S , S ) '
x
x x{type, cluster, view}
(2)
In a similar manner, the other intermediary similarity degrees are calculated accordingly to the following formulas:
simO (S , S )
simsignature (S , S ' )
'
x
(3)
x
x{input, output}
simconstra int (S , S ' )
simO (S , S )
(4)
simO (S , S )
(5)
x
'
x x{precondition, postcondition}
simnon functional ( S , S ' )
x
x x{QoS, visibility}
'
Therefore, we gradually define the global similarity degree as a combination of elementary degrees. We define an elementary degree as a degree that can not be divided in order to calculate it. For example, we consider in our approach that the two degrees siminput ( S , S ' ) and simoutput ( S , S ' ) as elementary because we consider them as not composite. The elementary degree is very important in the way that the problem of calculation of the global similarity degree is translated in the form of some simple problems of calculation of elementary degrees.
Semantic Service Matching in the Context of ODSOI Project
359
7 Semantic Ontology Concept Similarity For the calculation of the elementary similarity degrees, we exploit a variant that combines the structural topological dissimilarity [23] and the upward cotopic distance [16]. Based on these two measures, we define an appropriate measure that matches two ontology concepts of a same ontology and that takes into account the sensibility of the concepts depth. Given two concepts C and C' of the same ontology O with a depth Depth(O), the similarity degree between these two concepts is calculated accordingly to the following formula: 1 d d' D max( d , d ' ) 1 (1 min(d , d ' ) 1)( ) (6) sim (C , C ' ) (1 ( ) ) (1 ) 1 Depth(O) (min(d , d ' )) E d d' 1 Depth (O ) where D and E are parameters that control the flatness of the similarity function and that are defined as follows: D 1 max(d , d ' ), E log10 ( Depth(O)) (7) and where d and d' are respectively distances (augmented with one in order to avoid null denominators) between the concept C (resp. C') and the smallest common ancestor C0 :
d
dist (C , C0 ) 1, d ' dist (C ' , C0 ) 1
(8)
The common ancestor C0 of the two concepts C and C' is the solution of the optimization problem that allows to define the structural topological dissimilarity as introduced in [23] : (9) ?c: min[ dist (C , c ) dist (C ' , c )] c
Of course, the global concept similarity function sim is mathematically a similarity function, because it verifies the conditions of positivity, maximality and symmetry [14]. Furthermore, this function is normalized, since its values lie in [0, 1].
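As an illustration, the following Java sketch (ours, not the authors' code) computes formula (6) under the reading reconstructed above, taking the distances dist(C, C₀) and dist(C', C₀) and the ontology depth as inputs:

```java
// Illustrative transcription of formulas (6)-(8) as reconstructed above;
// distC and distCPrime stand for dist(C, C0) and dist(C', C0),
// depthO stands for Depth(O).
public class ConceptSimilarity {

    public static double sim(int distC, int distCPrime, int depthO) {
        double d = distC + 1.0, dp = distCPrime + 1.0;   // formula (8)
        double alpha = 1 + Math.max(d, dp);              // formula (7)
        double beta = Math.log10(depthO);                // formula (7)
        double f1 = 1 - Math.pow(Math.abs(d - dp) / (d + dp), alpha);
        double f2 = 1 - (Math.max(d, dp) - 1) / (1 + depthO);
        double f3 = 1 - (Math.min(d, dp) - 1) / (1 + depthO);
        double f4 = 1 / Math.pow(Math.min(d, dp), beta);
        return f1 * f2 * f3 * f4;                        // formula (6)
    }

    public static void main(String[] args) {
        System.out.println(sim(0, 0, 100)); // both concepts at the ancestor: 1.0
        System.out.println(sim(1, 1, 100)); // siblings one step below: ~0.245
    }
}
```

With this reading, sim reaches its maximum of 1 when both concepts coincide with their common ancestor, and decays as the concepts move away from it, consistent with the maximality property noted above.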
Fig. 1. Curves of the concept similarity function (1-a and 1-b show the function from two different angles of view)
As can be noted, the global similarity formula (formula 6) contains four factors. The first factor, (1 − (|d − d'| / (d + d'))^α), represents the global behaviour of the similarity calculation. It makes the measure sensitive to the depth of the two concepts: the nearer the concepts are to the common ancestor, the larger this quantity; conversely, it diminishes when the concepts are far from the common ancestor. Its disadvantage is that it favours cases where the distances d and d' are equal, even when one or both concepts are far from the common ancestor along the bisecting plane (d, d'). This disadvantage is corrected by the second factor, (1 − (max(d, d') − 1) / (1 + Depth(O))), which lowers the curve when the distances d and d' increase together. This yields the desired bell effect: similarity increases around the common ancestor and decreases as the concepts move away from it. The last two factors, (1 − (min(d, d') − 1) / (1 + Depth(O))) and 1 / (min(d, d'))^β, smooth the curve, notably along its contour where d = d'. Figures 1-a and 1-b show the curves of this similarity function from two different angles of view. Some values of the similarity function for Depth(O) = 100 are given in Table 1, quantitatively illustrating the asymptotic behaviour of this function. As shown, it conforms to the desired behaviour, namely an efficient measure that takes into account the depth of the concepts in the ontology, their neighbourhood, and the asymptotic behaviour of the measure when the concepts are far from the common ancestor.
Table 1. Some values of the concept similarity function
8 Preliminary Experimental Results and Main Lessons Learned

We conducted a preliminary evaluation, analyzing the efficiency and performance of our service matching approach, to show that exploiting such a method is pertinent and does not hinder the performance and scalability of the matching process. We implemented a matching prototype in Java under the Eclipse environment [6], using several APIs: mainly the OWL API [21] to extend and manage OWL ontologies, the OWL-S API [17] to extend and manipulate OWL-S descriptions, and Racer [9] to perform OWL inferences. Figure 2 illustrates the developed prototype. As shown, the graphical user interface contains graphical components for filling in the elements of the discovery request. In addition, the user can change the different weights and thresholds used for calculating the global similarity degree between the request and the published services.
Fig. 2. Snapshots of the developed prototype
To facilitate filling in the request, the prototype allows visualizing stored ontologies so that the user can select the desired concepts. Once the request is complete, the user can execute the discovery process by clicking the "Discover" button. The prototype then returns the matched services, ranked according to the calculated similarity degrees. One simple discovery request we experimented with searches for functional services within the cluster MaintenanceArea that produce Qualifications. The prototype executes the matching algorithm and returns answers ranked by service similarity degree. Within the returned answers, however, some services may present the same similarity degree (in this case we obtained two services with the same similarity degree, 0.8684). In order to differentiate them, we exploit other elements of the request, such as the enterprise view, which makes it possible to specify, for example, the type of activity held by the service (planning, prediction, reporting, etc.). This is important for increasing the performance (mainly precision and recall) of the prototype. By exploiting the Service View element, we can indicate precisely which kind of qualification-management activity we want to retrieve. For example, when we enter in the Service View text zone the functional concept "QualificationQueryFunction", a concept from the Functional Ontology, we can differentiate between the above services and we obtain two different
answers with two different degrees of similarity: 0.2921 and 0.8684. These similarity degrees may be further refined by exploiting the other request elements, or by choosing more restricted service clusters. Based on our experimentation with Semantic Web services, and more precisely with the semantic matching of a significant sample of enterprise services, we believe that the existing technologies (precisely the existing open OWL-S APIs and OWL inference engines) are sufficiently mature for industrial use, even if they are sometimes heavyweight in the sense that they require considerable computation time. The suggested matching method is very promising in that it proposes a pertinent quantitative ranking of enterprise services. We worked successfully with a sample of enterprise services, evaluated the performance, and are convinced of the value of the implemented prototype. An additional remark concerns the graphical user interface, which appears important for correctly guiding the process of discovering enterprise services. With such an interface, a user can build a request in a user-friendly way: by clicking the browsing buttons, the prototype displays the appropriate ontologies (for selecting ontology concepts), and the user gradually builds the request by simply choosing the desired concepts. Furthermore, the user can change the request weights (default values were set to one) to adapt them to the specific enterprise context, and can change the different thresholds to better control the failure condition in the discovery process. Our experimentation shows that the values of the different weights and thresholds must be determined carefully. The experimentation also revealed some limitations that call for future improvements, of which at least two should be mentioned. The first concerns taking numerical values into account in the formulation of discovery requests (for example, requesting a service reliability higher than 0.9, which is not possible in the current version of the prototype because we use the simplifying hypothesis that formalizes quality as discrete modalities); the second is the possibility of visualizing the answers graphically, using the principle of a cartography of enterprise services, which is more user-friendly.
9 Conclusions and Future Work

We have presented in this paper an approach for service matching in the context of intra-enterprise integration, and more precisely in the context of the ODSOI project. This approach, which extends existing matching methods, may improve service discovery so as to correctly search and rank services within and also outside the enterprise. Our service matching approach is based on matching all four elements that describe enterprise services: context, signature, constraints and non-functional attributes. Furthermore, our approach proposes a rigorous ranking method that provides an efficient measure taking into account the depth of the concepts in the ontology, their neighbourhood, and the asymptotic behaviour of the measure when the concepts are far from the common ancestor concept. However, our approach presents some limitations, and the main ones
concern its run-time cost, in the sense that it needs a longer execution path than a traditional matching approach. The benefits are, however, above all long-term, because the matching method gives more refined results. We have implemented a matching prototype and performed several experiments within a large enterprise, with good results. Future work will focus on reducing the time consumed, by cutting loading and inference time through managing and manipulating small interrelated ontologies. It will also focus on testing the prototype in larger real-world settings in order to validate its scalability and its value in the context of large enterprises and Semantic Web services.
Disclaimer Certain commercial software products are mentioned in this paper. These products were used only for demonstration purposes. This use does not imply approval or endorsement by our institution, nor does it imply these products are necessarily the best available for the purpose.
References

[1] Burstein M. H., Bussler C., Zaremba M., Finin T. W., Huhns M. N., Paolucci M., Sheth A. P. and Williams S. K., "A Semantic Web Services Architecture". IEEE Internet Computing 9(5): 72-81, 2005.
[2] Cardoso A. J. S., "Quality of Service and Semantic Composition of Workflows". PhD Thesis, University of Georgia, Athens, Georgia, 2002.
[3] Corallo A., Elia G., Lorenzo G. and Solazzo G., "A Semantic Recommender Engine Enabling an eTourism Scenario". ISWC 2005, International Semantic Web Conference, 6-10 November 2005, Galway, Ireland.
[4] Corella M. A. and Castells P., "A Heuristic Approach to Semantic Web Services Classification". 10th International Conference on Knowledge-Based & Intelligent Information & Engineering Systems (KES 2006), Invited Session on Engineered Applications of Semantic Web (SWEA), Bournemouth, UK, October 2006. Springer Verlag Lecture Notes in Computer Science, Vol. 4253, ISBN 978-3-540-46542-3, pp. 598-605.
[5] Dong X., Halevy A., Madhavan J., Nemes E. and Zhang J., "Similarity Search for Web Services". In Proceedings of the 30th VLDB Conference, Toronto, Canada, http://www.vldb.org/conf/2004/RS10P1.PDF, 2004.
[6] Eclipse, "Eclipse - an open development platform". 2005, http://www.eclipse.org (accessed 3 March 2005).
[7] Ehrig M., Haase P., Hefke M. and Stojanovic N., "Similarity for Ontologies - a Comprehensive Framework". In Workshop Enterprise Modelling and Ontology: Ingredients for Interoperability, 2004.
[8] Ganjisaffar Y., Abolhassani H., Neshati M. and Jamali M., "A Similarity Measure for OWL-S Annotated Web Services". Web Intelligence (WI'06), IEEE/WIC/ACM, pp. 621-624, 2006.
[9] Haarslev V. and Möller R., "RACER System Description". In Proceedings of the International Joint Conference on Automated Reasoning, June 18-23, 2001.
[10] Hau J., Lee W. and Darlington J., "A Semantic Measure for Semantic Web Services". WWW2005, May 10-14, 2005, Chiba, Japan.
[11] Izza S., Vincent L. and Burlat P., "A Framework for Semantic Enterprise Integration". In Proceedings of INTEROP-ESA'05, Geneva, Switzerland, pp. 78-89, 2005.
[12] Izza S., Vincent L., Burlat P., Lebrun P. and Solignac H., "Extending OWL-S to Solve Enterprise Application Integration Issues". Interoperability for Enterprise Software and Applications Conference (I-ESA'06), Bordeaux, France, 2006.
[13] Jaeger M. C., Rojec-Goldmann G., Liebetruth C., Mühl G. and Geihs K., "Ranked Matching for Service Descriptions Using OWL-S". Springer Berlin Heidelberg, 2005. ISBN: 978-3-540-24473-8.
[14] KnowledgeWeb, "Deliverables of KWEB Project". EU-IST-2004-507482, 2004, http://knowledgeweb.semanticweb.org/ (accessed 10 June 2006).
[15] Li L. and Horrocks I., "A Software Framework for Matchmaking Based on Semantic Web Technology". International Journal of Electronic Commerce, 8(4): 39-60, 2004.
[16] Mädche A. and Zacharias V., "Clustering Ontology-based Metadata in the Semantic Web". In Proceedings of the 13th ECML and 6th PKDD, Helsinki (FI), 2002.
[17] Mindswap Group, "OWL-S API". 2005, http://www.mindswap.org/2004/owl-s/api/ (accessed 24 March 2006).
[18] Paolucci M., Kawamura T., Payne T. and Sycara K., "Semantic Matching of Web Service Capabilities". In Proceedings of the First International Semantic Web Conference, 2002.
[19] Resnik P., "Semantic Similarity in a Taxonomy: An Information-based Measure and its Application to Problems of Ambiguity in Natural Language". Journal of Artificial Intelligence Research, volume 11, pages 95-130, July 1999.
[20] Rada R., Mili H., Bicknell E. and Blettner M., "Development and Application of a Metric on Semantic Nets". IEEE Transactions on Systems, Man, and Cybernetics, volume 19, Jan/Feb 1989.
[21] Sourceforge, "OWL API". http://owlapi.sourceforge.net/ (accessed April 2006).
[22] Sycara K., Widoff S., Klusch M. and Lu J., "Larks: Dynamic Matchmaking Among Heterogeneous Software Agents in Cyberspace". Autonomous Agents and Multi-Agent Systems, v5, issue 2, pp. 173-203, June 2002.
[23] Valtchev P. and Euzenat J., "Dissimilarity Measure for Collections of Objects and Values". In X. Liu, P. Cohen and M. Berthold, editors, Proceedings of the 2nd Symposium on Intelligent Data Analysis, Vol. 1280, pp. 259-272, 1997.
[24] Di Noia T., Di Sciascio E., Donini F. M. and Mongiello M., "A System for Principled Matchmaking in an Electronic Marketplace". In Proceedings of the Twelfth International Conference on World Wide Web (WWW), 2003.
Ontology-based Service Component Model for Interoperability of Service Systems Zhongjie Wang and Xiaofei Xu Research Center of Intelligent Computing for Enterprises and Services (ICES), School of Computer Science and Technology, Harbin Institute of Technology P.O.Box 315, No.92, West Dazhi Street, Harbin, China 150001 {rainy, xiaofei}@hit.edu.cn
Abstract. A service system, as the foundation for service providers and customers to create and capture value via a co-production relationship, is a complex socio-technological system composed of people, techniques and shared information. One of the most significant characteristics of a service system is that there exists a mass of dynamic, stochastic and semantic interactions (i.e., interoperability) between heterogeneous service elements. For better consideration of interoperability during service system design, we import an OWL-based service ontology to precisely express service semantics. Based on this ontology, heterogeneous service elements are elaborately classified and encapsulated in a unified form, the "Service Component", with a set of uniform interfaces to expose semantics-based services to, or acquire them from, the outside. In this way, the right service components can be chosen by ontology matching, and semantic conflicts between service elements can be easily discovered. Based on interface connections, the selected service components are weaved together to form a service system with good interoperability performance. Keywords: Ontology based methods and tools for interoperability, Design methodologies for interoperable systems, Model Driven Architectures for interoperability
1 Introduction

Over the past three decades, services have become the largest part of most industrialized nations' economies (Rouse and Baba 2006, Spohrer et al 2007), and researchers and practitioners have devoted more and more R&D to service-related activities. In 2004, IBM first presented the concept "Services Sciences, Management and Engineering (SSME)" (IBM, 2004) and promoted the creation of a new discipline, "Services Sciences", which tries to integrate across traditional disciplinary areas to obtain globally effective solutions in a service business
environment (Chesbrough and Spohrer, 2006), so as to better facilitate the development of the service industry. Generally speaking, a service is a provider-to-client interaction that creates and captures value while sharing risks (Wikipedia, 2007). In order to support the execution of a service, there should be a well-designed supporting service system. Consider an education service system as an example, in which universities act as service providers that aim to transform students' knowledge through agreements, relationships, and other exchanges among students and university faculty, including courses offered and taken, tuition paid, and work-study arrangements (Spohrer et al, 2007). A consensus has been reached that a service system is a value co-production configuration of people, technology, other internal and external service systems connected by value propositions, and shared information (such as language, processes, metrics, prices, policies, and laws) (Spohrer et al, 2007). It is a type of complex socio-technological system that combines features of natural systems and manufacturing systems (IBM, 2004). It is easy to see that a service system is not just a pure software system, but is additionally composed of various complex, heterogeneous and distributed service elements, e.g., people, techniques, resources, environments, etc. Besides, in order to fully accomplish service business, tight and continual collaboration between service elements provided by different participants cannot be avoided. From these two points of view, a service system essentially rests on interoperability between multiple sub-systems of different participants, or between multiple service elements. Formally speaking, we define service interoperability as the ability of people, resources, or behaviors to provide services to and accept services from other people, resources, or behaviors, and to use the services so exchanged to enable them to operate effectively together. During the service process, a smooth and effective interoperability channel between all service elements must be created and maintained to ensure coherent communication between participants, so as to realize the objective of "co-producing and sharing values". Concerning the state of the art in interoperability of service systems, current research mainly concentrates on the following two aspects: (1) Fundamental theories of service systems. Researchers try to borrow universal theoretical methods from other similar complex systems (e.g., natural ecosystems, social networking systems, ESA, etc.) to analyze the interoperability mechanisms between service elements. Such methods include the ecosystem-based iDesign method (Tung and Yuan 2007), small-world networks (Yuan and Zhang 2006), catastrophe theory (Liu et al 2006), etc. Unfortunately, at present these methods remain at the theoretical level, and concrete techniques to implement them in practical service systems are still lacking. (2) Methodology for service engineering. This aspect usually emphasizes step-by-step mapping from customer requirements to service systems following a model-driven approach to address interoperability issues (Tien and Berg 2003, Cai 2006). However, most of the related literature emphasizes the interoperability between softwarized services (e.g., web services, SCA, etc.) (Cox and Kreger 2005), in which ontology languages (e.g., OWL, RDF, DAML+OIL,
etc.) are adopted to eliminate the semantics gap between services (Vetere and Lenzerini 2005), while non-softwarized service elements are mostly ignored. In order to address the limitations mentioned above, in this paper we present a new concept, "Service Component (SC)", as the basic and uniform building block for encapsulating heterogeneous service elements. Each SC has a set of unified interfaces to interact with others, and a pre-defined domain-specific ontology is imported to formally express the semantics of the service elements encapsulated in an SC. In this way, structural and semantic diversities are largely eliminated. During the design and implementation of a service system, SCs can be precisely discovered, understood, invoked and composed together based on the ontological information attached to them. This paper is organized as follows. In section 2 we analyze the basic elements of a service system and possible interoperability scenarios between them. In section 3, we define the corresponding ontology for describing the semantics of each type of service element. In section 4 the concept of "Service Component" is put forward, including its classification and unified interfaces. Section 5 discusses how to deal with interoperability issues by SC composition. Finally, we conclude.
2 Service System and Typical Interoperability Requirements

2.1 Basic Constituents of a Service System
A service system is considered a socio-technological system composed of multiple types of service elements. Generally speaking, such elements are classified into two types, i.e., technological and non-technological ones, each of which depends closely on the other to achieve global optimization of service performance (IBM 2004). The distinctive characteristics of a service system have been specified as nine classes, i.e., customer, goals, input, output, process, human enabler, physical enabler, informatic enabler and environment (Karni and Kaner 2000). In our opinion, such a classification is too fine-grained and lacks clarity; therefore we re-classify typical service elements into the following four types based on IBM's viewpoint (Spohrer et al 2007):

- People, mainly referring to service customers and service providers, including their organizations, roles, human resources, professional skills and capabilities;
- Resource, including technological (virtual) and physical material resources, e.g., software, hardware and equipment, physical environments, etc., that can be utilized during the service process;
- Shared information, including paper or electronic files, reports and documents which are created, processed by and exchanged between service customers and providers;
- Behavior, referring to the physical, mental and social behaviors that are provided by, or arise as the responses of, people or resources during services.
Fig. 1. Service elements and their relationships
The reason that the interoperability of a service system is quite complex is not only that it contains a mass of non-technological elements (e.g., people, whose behaviors are difficult to model and simulate), but also that such interoperability is stochastic, nonlinear, difficult to forecast, and frequently changing along with requirements and resource provision. In Fig. 1, we present some typical service elements and the relationships between them.

2.2 Typical Interoperability Issues in Service Systems
Considering the different types of service elements, typical interoperability in service systems may fall into one of the following situations:

- Interoperability between people, i.e., verbal communication or behavior interactions face-to-face or via a collaborative web-based environment;
- Interoperability between people and software, i.e., people learn to use software, provide input information to it and receive returned information from it;
- Interoperability between people and information, i.e., people receive and try to understand information, then take specific actions or produce new information based on it;
- Interoperability between people and hardware, i.e., service behaviors initiated by people require the support of specific hardware or equipment;
- Interoperability between people and environment, i.e., only when people are located in a specific environment can they properly provide services to others;
- Interoperability between software, i.e., software belonging to different service participants should interoperate via function calls or information sharing to automatically accomplish specific service tasks;
- Interoperability between resources, i.e., one resource may use other resources for its own functions;
- Interoperability between resource and environment, i.e., a resource should be configured in a specific environment so as to be utilized by other service elements.
In real services, because service elements provided by different participants are quite heterogeneous, the interoperability scenarios mentioned above might not be easily realized. We summarize the possible interoperability issues into the following three types:

- Function mismatch
  - Functions provided by a service element are rather larger or smaller than the functions required by other elements. For example, the service behavior "Business Consulting" includes four packaged service activities (AS-IS modeling, component business analysis, business re-design, IT re-design), but a service customer might only require "AS-IS modeling";
  - Functions provided by a service element require too much input information that other elements cannot fully provide. For example, the service "Switch an ERP system from simulated running to formal online running" requires that complete BOM (Bill of Materials) and BOP (Bill of Process) data have been fully prepared, but almost no customer enterprise can usually provide complete BOM/BOP data;
  - The output of a service element contains too much information that is not required by other service elements.
- SLA mismatch. Quality is a very important aspect of a service. When multiple service elements are composed together to form a service system, the quality of the different service elements should be compliant with each other so that the total service quality reaches an accepted level. Even if two service elements are both of high quality, if they do not match, the composite quality may be poor. Typical SLA/QoS parameters include response time, cost, price, security, competence, etc., of service behaviors, resources and capabilities.
- Semantics mismatch
  - The syntax format of a service element's representation/description is inconsistent with others;
  - The semantics representations of service elements are inconsistent with those of other elements.
If such interoperability issues cannot be effectively eliminated, it is difficult for service providers and customers to establish favourable collaborations between their respective service elements; the performance of the service system will therefore remain low, and some mandatory functional service requirements will be difficult to cover. Since interoperability in service systems is quite similar to interoperability between software systems, we prefer to find possible interoperability scenarios and solve them at the service model level, then map them to the execution level of service systems. This is quite similar to the approach of Model-Driven Interoperability (MDI) in the domain of ESA interoperability (Elvesæter et al 2006).
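As a model-level illustration of the three mismatch types, the following Java sketch (all type and method names are our assumptions for this sketch, not the paper's model) classifies the mismatches between what one service element provides and what another requires:

```java
// Illustrative model-level mismatch check between a providing and a
// requiring service element; names are assumptions, not the paper's model.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

enum Mismatch { FUNCTION, SLA, SEMANTICS }

record Provided(String conceptUri, Set<String> requiredInputs, double responseTimeMs) {}
record Required(String conceptUri, Set<String> availableInputs, double maxResponseTimeMs) {}

final class MismatchChecker {
    static List<Mismatch> check(Provided p, Required r) {
        List<Mismatch> found = new ArrayList<>();
        // Semantics mismatch: the exchanged concepts do not line up (a real
        // implementation would also consult an OWL reasoner for subsumption).
        if (!p.conceptUri().equals(r.conceptUri()))
            found.add(Mismatch.SEMANTICS);
        // Function mismatch: the provider demands inputs the requester
        // cannot fully supply.
        if (!r.availableInputs().containsAll(p.requiredInputs()))
            found.add(Mismatch.FUNCTION);
        // SLA mismatch: quality levels are not compliant with each other.
        if (p.responseTimeMs() > r.maxResponseTimeMs())
            found.add(Mismatch.SLA);
        return found; // empty list: the two elements can interoperate
    }
}
```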
3 Domain-specific Service Ontology

A common approach to addressing interoperability is to import an ontology by which concepts are consolidated into a unified semantics space. In this paper we adopt a similar approach, i.e., we define a domain-specific service ontology and use it to express the semantics of service components and to support interoperability during service component composition. Table 1 lists some typical concepts and their property-based associations. Due to limited space, we do not explain each concept and property in detail; however, readers can understand their meanings from the names.

Table 1. Service ontology: concepts and properties (Concept: property -> Associated Concept)

Organization: has -> OrganizationProfile; contains -> People; provides -> Resource; provides -> Behavior; has -> Capability; inStateOf -> State
People: has -> PeopleProfile; belongsTo -> Organization; provides -> Behavior; provides -> Capability; has -> FeasibleDate; inStateOf -> State
Behavior: has -> Profile; has -> PreCondition; has -> PostCondition; hasBehaviorType -> BehaviorType; hasExecutionType -> ExecutionType; in -> GranularityLevel; has -> SLAParameter; isProvidedBy -> Organization or People; hasInput -> Information; hasOutput -> Information
Capability: has -> CapabilityProfile; has -> CapabilityRepresentation; inStateOf -> State (with metrics {QualitativeMetrics, QuantitativeMetrics}, capability types {Advantage, Knowledge, Experience, ControllingResCapability}, and CapabilityMetrics)
Resource: has -> ResourceProfile; provides -> Capability; requires -> Capability; provides -> Behavior; requires -> Resource; inStateOf -> State
Information: isRelatedTo -> Resource or Organization or People or Behavior or Capability; has -> InformationProfile
ServiceValue: fromState -> State; toState -> State
ServiceRisk: fromState -> State; toState -> State
State: contains -> StateParameter
SLAParameter: hasName -> Name; hasValue -> Value; hasMetrics -> Metrics
With flourishing research on the Semantic Web, numerous ontology description techniques have appeared, e.g., RDF, DAML+OIL, OWL, KIF, etc., among which OWL (Web Ontology Language) (W3C 2004) has become the most popular and the de facto standard ontology description language for semantics on the Web. Although web-based services are just one typical kind of service element in a service system, it is easy to use OWL to describe other non-softwarized services as well; therefore we adopt OWL as the tool to describe our service ontology. Fig. 2 is a snapshot of the service ontology defined in Protégé (Stanford 2007).
Fig. 2. A screen snapshot of service ontology definition by Protégé
Compared with other service elements, service behaviors place more emphasis on interactive processes. In the Semantic Web, OWL has been extended to OWL-S (OWL 2004) for defining interactive processes between web services, and here we also employ OWL-S as the tool for describing the inner detailed processes of service behaviors, each of which is represented as an OWL-S process model according to the pre-defined service behavior ontology. Fig. 3 shows an example of an IT consulting behavior process composed of five fine-grained service behaviors, i.e., (1) CBM (Component Business Modeling) based AS-IS modeling and analysis, (2) business value analysis to specify the value of each business component with respect to the strategic goals of an enterprise, (3) business transformation design, (4) business re-deployment and (5) IT asset re-planning, each of which can be further decomposed into more fine-grained behaviors. The ontology concepts listed above are universal to all service domains, but the concrete semantics of specific service domains are not included. When the ontology is applied to a given domain, it should be further refined and extended with more elaborate semantics.
Fig. 3. A screen snapshot of service behavior definition based on OWL-S
4 Interoperability-oriented Unified Service Component Model

In this section we introduce the concept "Service Component (SC)" to facilitate the encapsulation of heterogeneous service elements with a uniform structure and semantics representation. Inner descriptions and functions are exposed through interfaces, and their semantics are represented by the service ontology discussed in Section 3.

4.1 Classification of Service Components
A service component (SC) is defined as a reusable service element. Any service element can be mapped to a service component; service components are correspondingly classified into the following types:

- People-ware SC (PSC): a person with professional skills who provides behaviors to others during service execution, e.g., an IT consultant who has ten years' experience in the cement manufacturing industry and can provide consulting services to such enterprises;
- Resource SC (RSC): a resource with specific capabilities to provide specific behaviors, including
  - Software SC (SSC): a software entity with specific computation capabilities to provide specific service behaviors, e.g., a web service with WSDL-based interfaces, an SCA component, a database, an encapsulated legacy system, etc., offering web-based service behaviors;
  - Hardware SC (HSC): a hardware or physical equipment with specific capabilities, e.g., a machine for manufacturing, a computer server for hosting software, an instrument for measuring and checking, a GPS for indicating directions, a telephone for communicating with customers, etc.;
  - Environment SC (ESC): a location with specific facilities serving as a container where service behaviors take place, e.g., a meeting room with a council board, ten chairs, one projector and one whiteboard for Joint Application Design (JAD), or a call center with 120 seats and 250-line SPC telephones for providing help to customers, etc.;
- Behavior SC (BSC): processes, activities or actions that a person may perform to accomplish a service task, e.g., consulting, training, manipulating a machine, using a software system, reporting unanticipated problems, etc.;
- Information SC (ISC): a physical or electronic information entity that is exchanged and shared among software systems, people, etc., e.g., a sales order, a log of a call center, an ESA design document, a service manual for guidance, etc.
4.2 Architecture of Service Component
Similar to the architecture of a software component, a service component is designed as a black box, i.e., interfaces are the exclusive channels through which a service component communicates with the external environment or with other service components. There are two types of interfaces, i.e., the Providing Interface and the Required Interface: the former specifies the channel by which other components access the component's basic profile and functions (behaviors, capabilities), while the latter specifies the functions, information, resources and behaviors it requires from the environment or from other components, as shown in Fig. 4.

Fig. 4. Brief structure of a service component
For each type of service component, we have designed a set of specific interfaces, listed in Table 2; an illustrative sketch of the unified interface model follows the table.
Table 2. Specific interfaces designed for each type of service component (each interface has cardinality 1 or n)
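As an illustration of the black-box structure of Fig. 4, the following Java sketch models the two interface sets. Only the interface names Profile, RequiredResource, RequiredCapability, RequiredBehavior, InputInformation and OutputInformation are taken from Section 5; everything else is our assumption:

```java
// Illustrative sketch of the unified SC interface model; the interface names
// follow those queried in Section 5, everything else is an assumption.
import java.util.List;

/** Placeholder for a concept of the OWL service ontology of Section 3. */
record OntologyConcept(String uri) {}

interface ProvidingInterface {
    OntologyConcept profile();                       // Profile
    List<OntologyConcept> providedBehaviors();       // exposed behaviors
    List<OntologyConcept> providedCapabilities();    // exposed capabilities
}

interface RequiredInterface {
    List<OntologyConcept> requiredResources();       // RequiredResource
    List<OntologyConcept> requiredCapabilities();    // RequiredCapability
    List<OntologyConcept> requiredBehaviors();       // RequiredBehavior
    List<OntologyConcept> inputInformation();        // InputInformation
    List<OntologyConcept> outputInformation();       // OutputInformation
}

/** PSC, RSC (SSC/HSC/ESC), BSC and ISC would be concrete implementations. */
interface ServiceComponent extends ProvidingInterface, RequiredInterface {}
```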
5 SC Composition Based Service System Interoperability Solutions

When specific service requirements are jointly confirmed by service providers and customers, service components are selected from the repository and composed together to obtain the corresponding service system. During service system development, the service component model presented in Section 4 can effectively support interoperability, in that:

- domain-specific service ontology is imported for solving semantics conflicts;
- unified interfaces are designed for identifying function mismatches;
- SLA ontology is imported for solving SLA mismatches.
The process of interoperability-oriented service component composition includes the following steps (a sketch of the recursive matching loop follows Fig. 5):

Step 1: Specify service requirements and SLA according to the negotiation results between service providers and customers;
Step 2: Express each service requirement in the form of the service ontology;
Step 3: Find the BSCs that match the requirements by querying each BSC's Profile interface;
Step 4: For each selected BSC, query its RequiredResource, RequiredCapability, RequiredBehavior, InputInformation and OutputInformation interfaces to find what kinds of resources, capabilities, behaviors and information elements it requires;
Step 5: For each service element found in Step 4, query the corresponding SCs from the repository by interface semantics matching;
Step 6: Recursively query SCs from the repository until no further SCs are required;
Step 7: For the function and semantics mismatches between SCs, design adaptors between them to eliminate the gaps;
Step 8: Compose all selected SCs together.
Fig. 5. A simple example of service component composition
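The recursive resolution of Steps 3-6 can be pictured as a worklist over required interfaces. The following Java sketch (ours, reusing the ServiceComponent and OntologyConcept types sketched in Section 4; the Repository interface is an assumption, not the paper's API) illustrates it:

```java
// Illustrative worklist version of Steps 3-6; the Repository interface is an
// assumption of this sketch, not the paper's API.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

final class ScComposer {

    /** Hypothetical repository lookup by interface semantics matching (Step 5). */
    interface Repository {
        List<ServiceComponent> findByConcept(OntologyConcept required);
    }

    /** Resolve a selected BSC's requirements recursively (Steps 3-6). */
    static Set<ServiceComponent> resolve(ServiceComponent root, Repository repo) {
        Set<ServiceComponent> selected = new LinkedHashSet<>();
        Deque<ServiceComponent> worklist = new ArrayDeque<>();
        worklist.push(root);
        while (!worklist.isEmpty()) {
            ServiceComponent sc = worklist.pop();
            if (!selected.add(sc)) continue;              // already resolved
            List<OntologyConcept> needs = new ArrayList<>();
            needs.addAll(sc.requiredResources());         // Step 4: query the
            needs.addAll(sc.requiredCapabilities());      // required interfaces
            needs.addAll(sc.requiredBehaviors());
            needs.addAll(sc.inputInformation());
            for (OntologyConcept need : needs)
                worklist.addAll(repo.findByConcept(need)); // Steps 5-6: recurse
        }
        return selected; // Step 8 then composes these (with Step 7 adaptors)
    }
}
```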
6 Conclusions

A service system is the fundamental infrastructure that ensures a service is executed correctly and efficiently, co-producing and sharing value between service providers and customers by exerting their individual competitive advantages. It is a kind of complex socio-technological system composed of various heterogeneous service elements, and whether the interoperability channels between them remain smooth impacts service execution quality and efficiency to a large extent. In this paper, we carefully analyze the basic service elements of a service system and the possible
interoperability scenarios, and then construct a typical service ontology defined with OWL and Protégé. To support interoperability, service elements are defined as service components with unified interfaces, whose semantics are represented by the service ontology. Due to limited space, some technical details are not included in this paper. The key issue of how to find proper SCs by semantics matching during service component composition is a large topic in itself, which we will discuss in other papers.
Acknowledgement

Research work in this paper is partially supported by the National Natural Science Foundation (NSF) of China (60673025), the National High-Tech Research and Development Plan of China (2006AA01Z167, 2006AA04Z165) and the Development Program for Outstanding Young Teachers at Harbin Institute of Technology (HITQNJS.2007.033).
References

[1] B. Elvesæter, A. Hahn, A.-J. Berre, and T. Neple. Towards an interoperability framework for model-driven development of software systems. The 2nd International Conference on Interoperability of Enterprise Software and Applications. Springer London, 2006, 409-420.
[2] D.E. Cox and H. Kreger. Management of the service-oriented architecture life cycle. IBM Systems Journal, 2005, 44(4): 709-726.
[3] G. Vetere and M. Lenzerini. Models for semantic interoperability in service-oriented architectures. IBM Systems Journal, 2005, 44(4): 887-903.
[4] H. Cai. A two steps method for analyzing dependency of business services on IT services within a service life cycle. ICWS'06, 877-884.
[5] H. Chesbrough and J. Spohrer. A research manifesto for services science. Communications of the ACM, 2006, 49(7): 35-39.
[6] IBM. Service Sciences, Management and Engineering (SSME). http://www.research.ibm.com/SSME
[7] J. Spohrer, P. Maglio, J. Bailey, and D. Gruhl. Steps towards a science of service systems. IEEE Computer, 2007, 40(1): 71-77.
[8] J.M. Tien and D. Berg. Towards service systems engineering. IEEE International Conference on Systems, Man and Cybernetics, 2003, 5(5): 4890-4895.
[9] Q.G. Liu, J. Zhou, and J.Y. Li. Catastrophe modeling for service system in the networked environment. 2006 Asia Pacific Symposium on Service Science, Management and Engineering. Nov. 30-Dec. 1, 2006, Beijing, China.
[10] R. Karni and M. Kaner. An engineering tool for the conceptual design of service systems. In: Advances in Services Innovations. Springer Berlin Heidelberg, 2000, 65-83.
[11] R.D. Mascio. Service process control: a method to compare dynamic robustness of alternative service processes. Journal of Process Control, 2003, 13(7): 645-653.
[12] R.X. Yuan and X. Zhang. Spatial characteristics of agent behavior in small world networks. 2006 Asia Pacific Symposium on Service Science, Management and Engineering. Nov. 30-Dec. 1, 2006, Beijing, China.
[13] Stanford Medical Informatics. The Protégé ontology editor and knowledge acquisition system. http://protege.stanford.edu/
[14] The OWL Services Coalition. OWL-S: Semantic Markup for Web Services. http://www.daml.org/services/owl-s/1.0/owl-s.html
[15] W.B. Rouse and M.L. Baba. Enterprise transformation. Communications of the ACM, 2006, 49(7): 66-72.
[16] W.-F. Tung and S.-T. Yuan. iDesign: an intelligent design framework for service innovation. Proceedings of the 40th Hawaii International Conference on System Sciences (HICSS 40), Hawaii, USA, January 3-6, 2007, 64-73.
[17] W3C. OWL Web Ontology Language Overview. http://www.w3.org/TR/owl-features/
[18] Wikipedia. http://www.wikipedia.org/wiki/services
Supporting Adaptive Enterprise Collaboration through Semantic Knowledge Services

Keith Popplewell1, Nenad Stojanovic2, Andreas Abecker2, Dimitris Apostolou3, Gregoris Mentzas3, Jenny Harding4

1 Coventry University, Priory Street, Coventry CV1 5FB, United Kingdom [email protected]
2 Forschungszentrum Informatik, Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe, Germany {nstojano, abecker}@fzi.de
3 Institute of Communication and Computer Systems, 157 80 Athens, Greece {gmentzas, dapost}@mail.ntua.gr
4 Loughborough University, Loughborough, Leicestershire, LE11 3TU, United Kingdom [email protected]
Abstract. The next phase of enterprise interoperability will address the sharing of knowledge within a Virtual Organisation (VO) to the mutual benefit of all VO partners. Such knowledge will be a driver for new, enhanced collaborative enterprises, able to achieve the global visions of enterprise interoperability. This paper outlines the approach to be followed in the SYNERGY research project, which envisages the delivery of Collaboration Knowledge services through interoperability service utilities (ISUs): trusted third parties offering web-based, pay-on-use services. The aim of SYNERGY is to enhance support of the networked enterprise in the successful, timely creation of, and participation in, collaborative VOs by providing an infrastructure and services to discover, capture, deliver and apply knowledge relevant to collaboration creation and operation. The proposed approach aims to (a) provide semantic ontology-based modelling of knowledge structures on collaborative working; (b) develop a service-oriented self-adaptive solution for knowledge-based collaboration services; and (c) facilitate the testing and evaluation of the efficiency and effectiveness of the solution in concrete case studies. Keywords: enterprise interoperability, semantic web, knowledge services, knowledge management, trust, virtual organisation.
1 Introduction

In a recent roadmap of the European Commission (Enterprise Interoperability Research Roadmap – EIRR hereafter, [5]), four challenges were identified as
strategic directions of research in the area of Enterprise Interoperability: (i) interoperability service utility; (ii) web technologies for enterprise interoperability; (iii) knowledge-oriented collaboration; and (iv) a science base for enterprise interoperability. Here, we discuss the development of the necessary technological infrastructure for supporting the third grand challenge, i.e. the next phase of development of deeper enterprise interoperability functionality that will allow the sharing of knowledge within virtual organizations (VOs) to the mutual benefit of the VO partners. Such research will help address two primary needs of enterprises in successfully forming and exploiting VOs: rapid and reliable formation of collaborative consortia to exploit product opportunities; and application of enterprise and VO knowledge in operational and strategic decision-making, thereby leading to enhanced competitiveness and profitability. In this paper, we claim that research on semantic web services [12] [19] has the potential to satisfy these needs and to provide the underlying technological infrastructure for supporting adaptive enterprise collaboration through semantic knowledge services [14]. Specifically, we outline the major objectives and architectural directions of a multi-national research project (SYNERGY), which aims to enhance support for the successful and timely creation of, and participation in, collaborative virtual organizations by providing an infrastructure and services to discover, capture, deliver and apply knowledge relevant to collaboration creation and operation. Section 2 outlines the motivation of this research, while section 3 gives the overall objectives and conceptual architecture of the SYNERGY infrastructure, focusing on the three categories of knowledge services to be developed: moderator services, collaboration pattern services, and knowledge evolution services. Section 4 discusses related work, while the final section 5 presents the main conclusions and further work.
2 Motivation

The last decades show a clear trend in business: away from big, comprehensive trusts which can cover all stages of a value creation chain, and away from long-standing, well-established and stable supply chains; instead, companies increasingly focus on their core business competencies and often enter into flexible alliances for value creation and production. For example, in the automotive industry, market speed demands flexible configuration and re-configuration of supply chains; in typical "knowledge-oriented businesses" (like management consulting and software engineering), more and more freelancers and small specialized companies form project-specific coalitions for customer-specific products or services; while in life sciences and biotech, technological progress comes from research-based companies in co-opetitive relationships which require flexible and ad-hoc co-operations. This growing demand for flexibly interactive and efficiently integrated businesses and services has already led to a huge amount of scientific and technological research in enterprise interoperability. Although such research has already achieved promising results and has partially led to first commercial products
and service offerings, as well as to operational, deployed applications, these remain nevertheless at the level of data interoperability and information exchange; they hardly reach the level of knowledge integration, and certainly fall short of knowledge-based collaboration. Seen from the business-process perspective, today's approaches to business interoperability mainly address support processes (for instance, how to manage ordering and buying a given product); they hardly support the companies' core processes (e.g., deciding what product to buy), in which the companies' core knowledge assets are at the centre of value creation and competitive advantage. If we rely on typical definitions of the term "knowledge" as widely accepted in the Knowledge Management area [13], some of its key characteristics are that it is highly context-dependent, interlinked with other pieces of knowledge, action-oriented, and often either bound to people or expressed in complex, often logic-based, knowledge representation formalisms. This shows that today's business interoperability approaches usually address the level of information and application data, but clearly fail to reach the "knowledge level". This situation is sketched in Figure 1.
Fig. 1. Current Forms of Knowledge-Oriented Collaboration
Some of the identified shortcomings of the current situation are as follows. Existing solutions for automated business interoperability address data interoperability for the (more or less hard-coded) support of business processes as implemented, e.g., in ERP systems. All forms of "higher-level" interoperation in
knowledge-intensive processes ([1], [18]) usually take place in the form of isolated, selective, informal person-to-person contacts, such as e-mails, meetings, telephone conversations, etc. If the business partners do not already know each other and do not have deep insights into the other company's internal affairs, they cannot be aware of their partner's internal rules, regulations, experiences, core knowledge assets, etc., which easily leads to misunderstandings, wrong estimations, etc. Even worse, "uncontrolled" and unsystematic collaboration on complex issues is not only subject to inefficiencies, misunderstandings, or wrong decisions because of missing knowledge about the business partner; it is also exposed to the risk of unaware, accidental disclosure of corporate secrets and all kinds of confidential information. Furthermore, unmanaged knowledge exchange not only causes direct problems such as inefficiency, mistakes, or confidentiality breaches; there are also indirect problems, which stem from the fact that systematic assessment of new opportunities, continuous collaboration-process improvement, etc., can only happen on the basis of some level of formality and documentation.
Fig. 2. SYNERGY Vision of Knowledge-Oriented Collaboration
Figure 2 illustrates the overall idea of the SYNERGY project in terms of the TO-BE situation enabled by the SYNERGY project results. In this TO-BE situation, a Web-based and service-oriented software infrastructure will help all kinds of companies which need to engage in collaborative businesses to
discover, capture, deliver and apply knowledge relevant to collaboration creation and operation, thus helping them to effectively and efficiently participate in Virtual Organizations (VOs) whilst avoiding the above-mentioned shortcomings and problems. The next section outlines in more detail the objectives and conceptual architecture of our approach.
3 The SYNERGY Approach

3.1 Objectives
Following the vision and approach of the IST Enterprise Interoperability Research Roadmap (EIRR, [5]), the SYNERGY architecture takes up and refines the challenge of the Interoperability Service Utility (ISU), i.e. of an open, service-oriented platform which allows companies to use independently offered, intelligent infrastructure support to help plan, set up, and run complex knowledge-based collaboration. The ISU services to be investigated, designed, prototypically implemented and tested in the SYNERGY project can be organized in three groups (a schematic sketch follows the list):

- basic collaboration support, including: collaboration registry services that allow publication of and search for capabilities; and information and process interoperability services that may include, e.g., data mediation at the message level, protocol mediation at the service orchestration level, process mediation at the business level, etc. [19];
- enhanced collaboration support, including: partner-knowledge management services, which help a company that wants to enter the collaboration space to efficiently build up and manage a knowledge base of collaboration-oriented internal knowledge, together with sharing and exchange services which guarantee adequate treatment of confidentiality concerns; collaboration pattern services, as a means to use and reuse proven, useful, experience-based ways of doing and organizing communication and collaboration activities in specific collaborative tasks; and moderator services, which implement the role of a trusted third party helping to establish and run a specific collaborative engagement, to employ collaboration patterns, to mediate conflicts and communication problems, and to implement intelligent services at the partner site such as opportunity detection;
- collaboration evolution support, i.e. learning services which continuously accompany and evaluate ongoing activities in order to realize a continuous improvement of knowledge residing both in the central services (such as the collaboration patterns) and at the partner sites (partner-specific collaboration knowledge).
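Purely as an illustration of the three groups, the following Java sketch suggests what such ISU service facades might look like; every name here is our assumption, since SYNERGY's actual service interfaces are still to be designed:

```java
// Hypothetical facades for the three ISU service groups; all names are
// assumptions, not SYNERGY's published interfaces.
import java.util.List;

interface CollaborationRegistryService {        // basic collaboration support
    void publishCapability(String enterpriseId, String capabilityUri);
    List<String> searchCapabilities(String requiredCapabilityUri);
}

interface CollaborationPatternService {         // enhanced collaboration support
    String selectPattern(String collaborationContextUri); // pattern template id
}

interface ModeratorService {                    // enhanced collaboration support
    List<String> detectConflicts(String collaborationId);
}

interface LearningService {                     // collaboration evolution support
    void recordOutcome(String patternId, boolean successful);
}
```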
The overall aim of SYNERGY is to enhance support of the networked enterprise in the successful, timely creation of and participation in collaborative
VOs by providing an infrastructure and services to discover, capture, deliver and apply knowledge relevant to collaboration creation and operation. The infrastructure must facilitate the sharing of knowledge within an enterprise, between potential and actual VO partner enterprises, and across industrial sectors, whilst allowing, and indeed actively promoting, the protection of individual and shared commercial interests in operating knowledge, expertise and intellectual property rights (IPR). Note that whilst the results of this research are aimed at providing services which could be integrated into the offerings of an ISU, it is not the intention to duplicate research, carried out elsewhere, into the policy, strategy, delivery and operation of ISUs in general. Rather, our research effort aims to define the way in which ISUs in general can provide the essential infrastructure for knowledge-oriented collaboration.
3.2 Conceptual Architecture

SYNERGY supports the collection and preservation of individual enterprise knowledge about inter-organizational collaboration, and its secure integration and harmonization within the existing knowledge landscape of a networked enterprise, stored globally in the ISU or locally at the enterprise level. Through collaboration-knowledge services, SYNERGY provides an active platform for efficient maintenance of and access to shared and individual knowledge, and for moderation of inter-organizational collaboration, including the ability to continually learn from experience. In this section, we present in detail how we plan to realize this idea. Figure 3 presents an overview of the SYNERGY conceptual architecture.
Fig. 3. Overview of SYNERGY Conceptual Architecture: distributed knowledge repositories, residing locally in an enterprise or globally in the ISU (right-hand side), which can both be maintained and accessed through collaboration-knowledge ISU services (left-hand side)
Each network will develop, through its lifetime, project-specific knowledge. This is in part knowledge specific to the network’s product or service, and to the processes and technologies involved, but it is also related to the current state of the network in its life-cycle. In most cases, such knowledge needs to be maintained only for the network and its partners because it is of no use, and possibly even very confusing, outside that context. Nevertheless, there may be a need to analyse such knowledge and its evolution to provide improved patterns for the future, thus forming a basis for organisational learning. Such collaboration patterns may then enter the public domain to support future collaborations across, say, an industrial sector, but there will be (perhaps most) cases where it is precisely this knowledge which represents competitive advantage to the network or partners concerned, so there is a need to identify where this is the case, and how services might deliver new learning to a specified and appropriate audience of future users – perhaps only partners in the network generating this new knowledge. Within SYNERGY we aim to deliver a Collaboration Knowledge Services Framework (CKSF), a structural framework for knowledge repositories and collaboration services defining mechanisms to manage correct sharing and protection of knowledge. In order to effectively share information and knowledge, it is essential to know when sharing is advantageous: the CKSF will embody knowledge to provide this capability. The maintenance of a library of appropriate collaboration patterns, available as process and service templates to be specialised as necessary and applied to network enterprises either as they form or as they subsequently evolve, is central to the support of partner collaboration in a VO. The CKSF will therefore embody structures and services to define collaboration pattern templates and to select (according to the nature of a developing or existing/changing network), specialise and apply such templates. It is envisaged that the distribution of repository knowledge will reflect its commercial sensitivity, so that at one extreme the most sensitive, perhaps enterprise core competence, would reside within the enterprise’s own systems, whilst at the other extreme a set of collaboration patterns applicable to different scenarios common across an industrial sector would reside in the service provider’s repository. Between these extremes, services may deposit or access knowledge from the service provider or from other partners, but the significant issues relate to the control of access regardless of the location of knowledge. Enterprise knowledge relevant to the formation and operation of collaborative ventures will include, though not necessarily be limited to, (i) Enterprise Core Competence, (ii) Process knowledge for VO Formation, (iii) Process knowledge for partner selection, (iv) VO Operations Management Knowledge. The architecture will be mainly based on the development of a number of ISU services for collaboration: moderator services, pattern services and knowledge evolution services. They are examined in more detail in the following sections.
3.2.2 Collaboration Moderator Services

A critical aspect of effective knowledge sharing within VOs is the identification of the most appropriate knowledge for reuse or exploitation in a particular context, combined with the most efficient tools and mechanisms for its identification, sharing or transfer. Knowledge has a life-cycle: to maintain its value it must evolve through ongoing maintenance and update. These issues are addressed through the identification of appropriate knowledge sources and the concept of a Collaboration Knowledge Model to support knowledge-sharing activities. However, these elements on their own are insufficient to actively support knowledge sharing and interactions between collaborating partners in the VO. Partners also need to be aware of when knowledge needs to be shared, of the implications of doing so, and of when their decisions are likely to affect other partners within the collaboration. Therefore tools and methods are needed to support the identification, acquisition, maintenance and evolution of knowledge, and to support knowledge sharing by raising awareness of the possible consequences of actions and of other partners' needs during collaboration. SYNERGY addresses these issues by exploiting the identified sources of collaboration knowledge in the design of a Collaboration Moderator, which raises awareness of needs, possible consequences and likely outcomes in collaboration activities between the partners of the VO. Collaboration Moderator Services comprise the process of identifying key knowledge objects and processes, as well as understanding their relevance in context and their relationships. We will exploit previous research work [8], [11] and will also identify innovative knowledge acquisition approaches to extend existing moderator functionalities, enabling improved collaboration support through ongoing knowledge updating and maintenance.

3.2.3 Collaboration Pattern Services

The collaboration-computing experience is currently dominated by tools (e.g. groupware) and the boundaries they introduce to collaboration processes. As new integration methods (e.g. Web Services) enable users to work more seamlessly across tool boundaries and to mix and match services opportunistically as collaboration progresses, a new organisational model becomes desirable. The challenge is not simply to make integration easier, but also to help users deal with a multiplicity of collaboration-related information, tools and services. By adopting collaboration patterns as the organisational model of collaboration, users will work in a more complete context for their actions and be burdened by fewer manual integration tasks. At the same time, by dividing collaborative work into distinct collaboration activities, users can focus more readily on a particular activity and deal more easily with interruptions by suspending and resuming activities as needed. Collaboration patterns augment rather than replace collaboration services and tools. Through reference-based integration methods, collaboration patterns introduce new views and organisational schemes that cut across service and tool boundaries, thereby increasing the integrity of the representation of work and mitigating scatter.
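To make this organisational model tangible, a collaboration pattern can be pictured as a small template object linking roles, artefacts and tool services. The following Python sketch is purely illustrative; its class and field names are hypothetical and not drawn from the SYNERGY ontology.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CollaborationPattern:
    name: str            # e.g. "agenda feedback round"
    roles: List[str]     # people/roles involved
    artefacts: List[str] # documents, models, etc. worked on
    services: List[str]  # references to services exposed by tools
    steps: List[str]     # ordered collaboration activities

    def instantiate(self, bindings: Dict[str, str]) -> List[str]:
        """Specialise the template for a concrete network by substituting
        placeholders such as {organiser} with actual partner names."""
        return [step.format(**bindings) for step in self.steps]

pattern = CollaborationPattern(
    name="agenda feedback",
    roles=["organiser", "participant"],
    artefacts=["meeting agenda"],
    services=["email.send", "wiki.update"],
    steps=["{organiser} circulates agenda",
           "each participant emails feedback to {organiser}"],
)
print(pattern.instantiate({"organiser": "Partner-A"}))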
SYNERGY will assess the appropriate level of pattern granularity (abstraction) and represent collaboration patterns using ontologies. We will then develop the Collaboration Patterns Editor, a software component for defining collaboration patterns. The editor will represent collaboration patterns as a collection of relationships that emerge between people, the resources they use and the artefacts they work on, as well as the communication, coordination and business processes that are used to complete their work. Collaboration patterns will link to services that are already exposed by collaborative tools, such as workflow tools, word processing, wikis, etc. We envisage three ways of generating collaboration patterns: manually, from best-practice data; semi-automatically, by detecting prominent usage patterns using folksonomy techniques (e.g. users typically tend to send an email to all other users asking for their feedback when a meeting agenda is sent out); and by community members themselves, either from scratch or as refinements of existing patterns. A collaboration pattern created with the editor can be used as a template that guides the informal collaborative process without constraining it. We will also develop a simulator that takes as input information about ongoing collaborations recorded in a collaboration-pattern knowledge base. The simulator will focus on visualising collaborative networks as well as the transactions inside them.

3.2.5 Knowledge Evolution Services

One of the unique features of the SYNERGY approach is the idea that explicit management of knowledge-based collaborative work opens up completely new possibilities for (semi-)automatically verifying, evolving and continuously improving the collaboration-knowledge bases. We aim at a comprehensive management of all dynamic aspects of the proposed knowledge bases, including: (i) automated searching for inconsistencies and problems; (ii) automated recommendation of possible new collaboration opportunities; (iii) propagation of evolutionary changes in the collaboration environment toward dependent artefacts in the codified information space; and (iv) means for self-adaptive collaboration that enable learning from experience and continuous adaptation of collaboration patterns and their usage preconditions. Altogether, this will lead to a further development of the concept of the learning organisation toward a "learning virtual organisation" or, better, a "learning business ecosystem". Methodologically, we intend to extend ideas and approaches of the Learning Software Organization [9].
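As a toy illustration of aspects (i) and (iii) above, the following Python sketch searches a miniature triple store for dangling dependencies and propagates a change to dependent artefacts; the data and function names are illustrative assumptions only.

triples = {
    ("ProjectPlan_v2", "dependsOn", "Agenda_v1"),
    ("RiskReport", "dependsOn", "ProjectPlan_v2"),
}
existing_artefacts = {"ProjectPlan_v2", "RiskReport"}  # Agenda_v1 was deleted

def dangling_references(triples, existing):
    """(i) Inconsistency check: dependencies on artefacts that no longer exist."""
    return [(s, o) for s, p, o in triples if p == "dependsOn" and o not in existing]

def affected_by(changed, triples):
    """(iii) Transitively collect artefacts depending on a changed one."""
    hits, frontier = set(), {changed}
    while frontier:
        nxt = {s for s, p, o in triples if p == "dependsOn" and o in frontier}
        frontier = nxt - hits
        hits |= nxt
    return hits

print(dangling_references(triples, existing_artefacts))  # [('ProjectPlan_v2', 'Agenda_v1')]
print(affected_by("ProjectPlan_v2", triples))            # {'RiskReport'}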
4 Related Work

The European Commission's EIRR [5] identifies the state of the art in a number of research areas relevant to enterprise interoperability in general and to SYNERGY in particular. For example, the EIRR recognises ontology definition as necessary to the sharing of knowledge, almost by definition – Gruber [7] defines a domain ontology as "a formal, explicit specification of a shared conceptualisation".
Similarly, the EIRR anticipates that delivery of enterprise interoperability capabilities will be achieved through Interoperability Service Utilities (ISUs) providing services to clients, possibly on a pay-per-use business model that makes them affordable to SMEs. SYNERGY aims to define collaboration knowledge services [14] explicitly to be delivered as services in this way. Specifically, two of the main innovations of SYNERGY are: (i) a reference architecture for an inter-organisational knowledge-sharing system, based on knowledge services; and (ii) the formal representation of meta-knowledge for network enterprise collaboration, as well as for risk assessment, propagation and evaluation in networked enterprise collaborations.

Concerning moderator services, much research has been reported since the early work of Gaines et al. [6]. Although recent efforts have advanced the research agenda in this field (e.g. [10], [20], [23]), none of them has addressed interoperability and semantic heterogeneity issues. In SYNERGY, we will provide reference ontologies and example knowledge bases for Collaboration Moderator Services, and will design and develop a prototype implementation of semantic Collaboration Moderator Services.

Collaboration between partners in a VO from a wide variety of domains results in the need to share knowledge from varied sources, with different data types, file formats and software tools. To cope with this, [16] proposed an ontology-based approach to enable semantic interoperability; the accompanying case study demonstrates the effectiveness of ontologies in collaborative product development for supporting product-data exchange and information sharing. However, for interoperability to be achieved effectively, it is essential that the semantic definitions of the knowledge objects, processes, contexts and relationships are based on mathematically rigorous ontological foundations [11]. Much current work utilises the Web Ontology Language (OWL) for the representation of semantic objects, but OWL has a very limited capability in terms of process definition. Similarly, the Process Specification Language (PSL) has a strong process representation capability, but is weak in its representation of objects. Researchers are therefore increasingly identifying the need for heavyweight ontologies and improved knowledge formalisms [3, 22]. Within SYNERGY, we will develop a blueprint (requirements, language, system architecture, runtime experience) for a knowledge representation and reasoning solution dedicated to knowledge-based collaboration support, as well as a prototype implementation dedicated to knowledge-based collaboration-support requirements.

Patterns and pattern languages are becoming increasingly important in areas such as community informatics [2], activity management [15] and workflow management [21]. A pattern formalises the structure and content of an activity and the integration methods it depends on, thereby making it reusable as a template in future activities. Collaboration patterns can be regarded as abstractions of classes of similar cases, and thus describe best practices for the execution of specific collaboration activities. Collaboration patterns are useful because they may be used to suggest possible tasks to users, to provide information about dependencies between tasks, and to give insight into the roles required, the resources needed, etc. By sharing collaboration patterns, users can "socialise" best practices and reusable processes.
Recently, [4] provided a categorisation of collaboration patterns, while [17] presents a first collaboration-patterns ontology. However, to our knowledge no software tools exist that exploit collaboration patterns as a means to support collaboration in real time. In SYNERGY, we intend to develop a collaboration-pattern-based system that gathers and manipulates many types of content without relying on their native applications. Specifically, we will develop: (i) a reference ontology for collaboration-pattern representation; (ii) methods and service-based tools for collaboration-pattern management and use; and (iii) novel methods of collaborative work and knowledge-task management based on collaboration-pattern support and awareness mechanisms.

The ATHENA European FP6 integrated research project considered aspects of enterprise models for interoperability and model-driven interoperability. ATHENA focused on application interoperability and on the necessary ontological and semantic requirements to support interoperability of enterprise information. Reports of results and pilot implementations can be accessed through the ATHENA web site [24].
5 Conclusions

In this paper we have outlined the major objectives of the SYNERGY project, which aims to enhance support for the successful and timely creation of, and participation in, collaborative virtual organisations. Moreover, we have presented the architectural directions for a software infrastructure and services supporting collaborative virtual organisations in discovering, capturing, delivering and applying knowledge relevant to collaboration creation and operation.

SYNERGY is expected to benefit collaborating organisations in many ways. As a "learning organisation", a collaborating partner is better able to exploit past experience in responding to future opportunities, and in particular to opportunities for participation in collaborative networks. Improved risk assessment may enable collaborating partners to participate in more networks than previously seemed safe, whilst minimising exposure to survival-critical risk. Enhanced sharing of knowledge, with dynamic access control and security, accelerates and improves network decision making, shortens time to market and reduces network operating costs, whilst improved capture and especially re-use of enterprise and network knowledge reduces the cost of repeating the work of earlier projects, and of repeating past errors. Improved, risk-aware decision making reduces the costs of wrong decisions and failed collaborations.

The SYNERGY software infrastructure will be extensively evaluated against the sophisticated collaborations which arise in the pharmaceutical industry and the engineering domain. The participation in the project of collaborating organisations from more than one industrial sector will enable the evaluation of different aspects of collaboration and, at the same time, will ensure that SYNERGY is generic and not sector-specific.
References

[1] Abecker, A.: Business Process Oriented Knowledge Management – Concepts, Methods and Tools. PhD Thesis, University of Karlsruhe (2003)
[2] Chai, C.S., Khine, M.S.: An Analysis of Interaction and Participation Patterns in Online Community. Education Technology & Society, 9(1):250-261 (2006)
[3] Cutting-Decelle, A.-F., Young, B.I., Das, B.P., Case, K., et al.: A Review of Approaches to Supply Chain Communications: From Manufacturing to Construction. ITcon, vol. 12, pp. 73-102 (2007)
[4] den Hengst, M., Adkins, M.: Which Collaboration Patterns are Most Challenging: A Global Survey of Facilitators. In: Proc. 40th Hawaii Int. Conf. on System Sciences (2007)
[5] European Commission: Enterprise Interoperability: A Concerted Research Roadmap for Shaping Business Networking in the Knowledge-based Economy (2006)
[6] Gaines, B.R., Norrie, D.H., Lapsley, A.Z.: Mediator: An Intelligent Information System Supporting the Virtual Manufacturing Enterprise. In: Proc. 1995 IEEE Int. Conf. on Systems, Man and Cybernetics, pp. 964-969 (1995)
[7] Gruber, T.R.: Towards Principles for the Design of Ontologies used for Knowledge Sharing. Int. J. of Human-Computer Studies, vol. 43, pp. 907-928 (1993)
[8] Harding, J.A., Popplewell, K.: Driving Concurrency in a Distributed Concurrent Engineering Project Team: A Specification for an Engineering Moderator. Int. J. of Production Research, 34(3):841-861 (1996)
[9] Henninger, S., Maurer, F.: Advances in Learning Software Organizations: 4th Int. Workshop (LSO 2002). Springer, Heidelberg (2002)
[10] Huang, G.Q.: Web Based Support for Collaborative Product Design Review. Int. J. of Computers in Industry, 48(1):71-88 (2002)
[11] Lin, H.K., Harding, J.A.: A Manufacturing System Engineering Web Ontology Model on the Semantic Web for Inter-Enterprise Collaboration. Int. J. of Computers in Industry, 58(5):428-437 (2007)
[12] Martin, D., Domingue, J.: Semantic Web Services, Part 1. IEEE Intelligent Systems, September/October, pp. 12-17 (2007)
[13] Mentzas, G., Apostolou, D., Abecker, A., Young, R.: Knowledge Asset Networking: A Holistic Approach for Leveraging Corporate Knowledge. Springer, Heidelberg (2002)
[14] Mentzas, G., Kafentzis, K., Georgolios, G.: Knowledge Services on the Semantic Web. Comm. of the ACM, 50(10):53-58 (2007)
[15] Moody, P., Gruen, D., Muller, M.J., Tang, J., Moran, T.P.: Business Activity Patterns: A New Model for Collaborative Business Applications. IBM Systems Journal, 45(4):683-694 (2006)
[16] Mostefai, S., Bouras, A., Batouche, M.: Effective Collaboration in Product Development via a Common Sharable Ontology. Int. J. of Computational Intelligence, 2(4):206-216 (2005)
[17] Pattberg, J., Fluegge, M.: Towards an Ontology of Collaboration Patterns. In: Proc. Challenges in Collaborative Engineering 07 (2007)
[18] Remus, U.: Integrierte Prozess- und Kommunikationsmodellierung zur Verbesserung von wissensintensiven Geschäftsprozessen. In: Abecker, A., Hinkelmann, K., Maus, H., Müller, H.-J. (eds): Geschäftsprozessorientiertes Wissensmanagement, pp. 91-122. Springer, Heidelberg (2002). In German.
[19] Studer, R., Grimm, S., Abecker, A.: Semantic Web Services: Concepts, Technologies, and Applications. Springer, Heidelberg (2007)
[20] Ulieru, M., Norrie, D., Kremer, R., Shen, W.: A Multi-Resolution Collaborative Architecture for Web-Centric Global Manufacturing. Information Sciences, 127:3-21 (2000)
[21] van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B., Barros, A.P.: Workflow Patterns. Distributed and Parallel Databases, 14(3):5-51 (2003)
[22] Young, R.I.M., Gunendran, A.G., Cutting-Decelle, A.-F., Gruninger, M.: Manufacturing Knowledge Sharing in PLM: A Progression Towards the Use of Heavy Weight Ontologies. Int. J. of Production Research, 45(7):1506-1619 (2007)
[23] Zhou, L., Nagi, R.: Design of Distributed Information Systems for Agile Manufacturing Virtual Enterprises Using CORBA and STEP Standards. J. of Manufacturing Systems, 2(1):14-31 (2002)
[24] ATHENA Project Public Website: http://www.athena-ip.org/
Part V
Interoperability in Systems Engineering
Semantic Web Framework for Rule-Based Generation of Knowledge and Simulation of Manufacturing Systems

Markus Rabe, Pavel Gocev

Fraunhofer Institut Produktionsanlagen und Konstruktionstechnik (IPK), Pascalstrasse 8-9, 10587 Berlin, Germany
{markus.rabe, pavel.gocev}@ipk.fraunhofer.de
Abstract. The development of new products and manufacturing systems is usually performed in the form of projects. Frequently, a project takes more time than planned due to inconsistency, incompleteness and redundancy of data, which delays other project activities and influences the start of production (SOP). This paper proposes a semantic Web framework for cooperation and interoperability within product design and manufacturing engineering projects. Data and knowledge within the manufacturing domain are modelled within ontologies applying rule-based mapping. The framework facilitates the generation of new knowledge through rule-based inference that enriches the ontology. This enables a high level of model completeness in the early phase of product design and manufacturing system development, which is a basic prerequisite for the realisation of a proper simulation study and analysis. The simulation results can be integrated into the ontologies as knowledge that further extends the ontology.

Keywords: Semantic Web, Product Design, Manufacturing, Ontology, Knowledge Base, Rules, Inference, Modelling and Simulation.
1 Introduction

The design and development of new products and manufacturing systems is usually organised as a project that involves numerous project members, who cooperate and exchange data and information. The design and development activities result in new findings and conclusions, which constitute knowledge applied to a particular situation. Usually, at the beginning of the project there are strategic management decisions about the new products, documented as plain text or simple tables describing the features of the products. Product designers deliver the first
design and "look and feel" models of the new products. Simultaneously, technologists and production experts are involved in order to determine the technologies required for production, to develop the manufacturing system and to estimate the product cost. Various concepts of the product and the manufacturing system are usually verified through discrete event simulation of the production and the logistics.

The process of data and information exchange has a very important aspect: the understanding of meaning. In [27], Studer et al. give an overview of the understanding process supported by the Meaning Triangle, which describes the relations between symbols or words, thoughts and real objects. As the meaning of words depends highly on the context, the environment and the personal view of the project partner, the authors conclude that the lack of a common understanding leads to misunderstandings, wrong communication, inadequate interoperability and limited reuse of data and information within various software systems.

Most of the required data and information in the early project phase are informal and based on experience from already existing products and manufacturing systems. Generally, the data and information are taken from existing IT applications for Business Process Modelling (BPM), Enterprise Resource Planning (ERP), Product Data Management (PDM), Product Life Cycle Management (PLM), Computer Aided Process Planning (CAPP), Manufacturing Execution Systems (MES), and others. Usually, for project purposes, data are extracted into office-application formats such as text, spreadsheets, project plans, presentation slides, etc. The authors' experience confirms the widespread practice of using different files in multiple IT applications, with distinct structures and terminology, which causes inconsistency, redundancy and incompleteness. Regular meetings of the project members govern the sharing and exchange of information, as well as the presentation of achievements. The attendees of those meetings analyse, debate, reason and conclude in order to take decisions for their further project work. The differing skills and abilities of project members in knowledge perception, conclusion-drawing and context presentation are an additional reason why project work may be extended, very often affecting the critical milestones of the project. The complexity grows when supply networks have to be analysed, developed and simulated, where data and project members belong to several enterprises with different cultures and procedures for data management.

The complexity of cooperative project work can be reduced if project data and information are free of ambiguity and redundancy, and are presented in a format and structure that enables easy use by the project members. Explicit descriptions and defined data structures facilitate the integration of various sources (data and information) and therewith the interoperability within the project. The necessary level of domain description can be achieved through classes organised as taxonomies, with assigned individuals (instances) and defined relations between them in one model. These models are called ontologies, defined by Gruber [7] as an explicit, formal specification of a shared conceptualisation.
Ontologies can be used to develop a knowledge base for one particular domain under consensus of the involved partners, as a means to collect and structure existing knowledge as well as to generate new knowledge through reasoning.
This paper gives an overview of existing technologies and approaches for ontology development to describe manufacturing systems. The focus is on solutions that utilise the emerging semantic Web technologies (Section 2.1) and on existing and emerging standards and open standards for the modelling of manufacturing systems (Section 2.2). Section 3 summarises the challenges for development. Section 4 gives an overview of the proposed framework, which is based on semantic Web technologies and is utilised for modelling the manufacturing domain as well as for generating new knowledge.
2 Related Enabling Technologies and Developments

2.1 Semantic Web Technologies
Under the leadership of the World Wide Web Consortium (W3C) [12], new technologies suitable for data modelling and management have been developed. The definition of the Extensible Markup Language (XML) [13] provided a widely accepted basis for the syntax of structured data. The modelling of semantics was realised with the Resource Description Framework (RDF) [14] and the Resource Description Framework Schema (RDFS) [15], which enable the description of relations among data objects as statements in the form of subject-predicate-object (triples). The statements described with RDF can be processed by computers, but RDF does not offer concepts to model similarities, distinctions, cardinalities, restrictions, intersections, unions, characteristics of properties and other functionalities that are needed for the explicit description of a domain. These features are offered by the Web Ontology Language (OWL) [16]. However, OWL does not support the modelling of logical rules in the form of "If-Then" rules, which are essential for reasoning and inference. The requirements for logical reasoning can be satisfied with the Semantic Web Rule Language (SWRL) [17], which is still at the proposal stage at the W3C. The rules can be expressed as OWL concepts (classes, properties and instances), which enables easy integration with OWL ontologies. In terms of OWL syntax, the rules are axioms that comprise an antecedent part (body) and a consequent part (head). Both consist of atoms built with variables, classes, properties, individuals or built-ins. If all antecedent atoms are true, then all consequent atoms will be true, too. Example:

IF car has wheels AND ModelX isA car THEN ModelX has wheels.
The example shows how the knowledge about the individual ModelX can be extended. The result of inference upon the rule is a new statement as a triple (ModelX has wheels), which enriches the ontology and represents the newly generated knowledge.
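This inference step can be mimicked in a few lines of plain Python, shown below purely for illustration; a production system would use an OWL/SWRL reasoner rather than hand-written code.

facts = {
    ("car", "has", "wheels"),
    ("ModelX", "isA", "car"),
}

def apply_rule(facts):
    # IF (?x isA ?c) AND (?c has ?y) THEN (?x has ?y)
    inferred = {(x, "has", y)
                for (x, p1, c) in facts if p1 == "isA"
                for (c2, p2, y) in facts if p2 == "has" and c2 == c}
    return inferred - facts

new = apply_rule(facts)
print(new)        # {('ModelX', 'has', 'wheels')} -- the newly generated knowledge
facts |= new      # asserting the inferred triple enriches the ontology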
2.2 Standards and Initiatives for Modelling of Manufacturing Systems
An essential prerequisite for the integration of information and knowledge from different project members is the deployment of a common data structure. Since the processes and operations of manufacturing companies are supported by various IT applications, there are very often data redundancies, structural differences and semantic distinctions. Most companies are striving towards an integration of the business domain activities with production planning, scheduling, performance reporting and other related processes. As a result of these attempts, numerous standards, open standards and industrial initiatives have emerged or are still appearing. All objects and phases of the life cycle of manufacturing systems are covered by standards, but no single standard covers all aspects. The most frequently applied standards for the modelling and description of manufacturing systems include ISA-95 (ISO 62264) [18], OAGIS [19], ISO 10303 (STEP) [20], MIMOSA [21], RosettaNet [22], PIDX [23], ISO 15926 [24] and ISO 18629 [25]. For the design, development and analysis of manufacturing systems, standards or open standards can be used. Particularly suitable are the models and data structures for manufacturing systems defined in the standard for vertical business-to-manufacturing integration, ISA-95 Enterprise-Control System Integration, and in the open standard OAGIS. The main elements defined by both standards are:
- Hierarchy, functional and physical object models of manufacturing systems.
- Activity models for manufacturing operations management.
- Object models and attributes for information exchange.
- Transactions between applications performing business and manufacturing activities.
The latest developments for systems engineering within the Object Management Group (OMG) [28] have resulted in the Systems Modeling Language (SysML) [29]. Seven diagrams from the Unified Modeling Language 2.0 (UML 2.0) [30] have been adapted and two new ones developed in order to support the specification, design, analysis and testing of complex systems and systems-of-systems. Still, the applicability for explicit domain description and inference appears limited, due to the fragmentation of the diagrams and the complexity of the language, especially for non-software engineers. As the metadata interchange standard XML Metadata Interchange (XMI) [31] for SysML is based on XML, it could be applied for data and information exchange with RDF/OWL-based files.

2.3 Ontologies for Manufacturing Systems
A comprehensive state-of-the-art review of ontologies for the modelling and simulation of manufacturing systems is given in [2]. Several scientific communities conduct research on the development of ontologies for discrete event simulation. Most of the work is focused on the architecture of the simulation model and on its parts, such as entities, clock, resources, activities, queues, etc.
The most recent developments at the University of Georgia, Athens, are moving towards ontology-based model generation for simulation purposes [3], where the authors develop an Ontology-Driven Simulation (ODS) architecture based on a discrete event modelling ontology [4]. The suggested architecture generates simulation models for the healthcare domain. Different ontologies have been developed for manufacturing domain descriptions; however, these ontologies serve various purposes other than simulation. The project PABADIS'PROMISE [5] develops a manufacturing ontology [6] based on ISA-95, ISO 10303 and IEC 61499 [26]. The ontology is intended to provide a formal and unambiguous description of manufacturing systems that will support the new control architecture for manufacturing enterprises. A manufacturing system engineering (MSE) ontology [8] has been developed to provide a common understanding of manufacturing-related terms within globally extended manufacturing teams. The goal of the ontology for Supply Chain Management (Onto-SCM) [9] is to provide terms within the SCM domain and to model general concepts and relationships; it thereby presents a neutral basis for effective cooperation between different partners. A core manufacturing ontology [10] has been developed as an object-oriented UML model to be used in the communication between agents within an adaptive holonic control architecture for distributed manufacturing systems (ADACOR) [11].

The attempts and solutions mentioned are a significant step in deploying ontologies for the description of the manufacturing domain. However, a solution that supports cooperation between different project members, considering the multiple information structures and semantics involved, has not been suggested. The solutions found consider specific parts of the manufacturing domain and are still not sufficient for the evaluation of manufacturing systems through simulation. Moreover, the key performance indicators of manufacturing systems, which are essential for their analysis and evaluation, have not been considered either.
3 Functionalities Required for the Generation of Knowledge

In order to support collaborative work within development projects and to reduce development time, assistance is needed in the definition, structuring and generation of knowledge. Instead of having one or more glossaries as text documents, which are usually not related to the distributed data carriers, an explicit definition of the objects and terms within the manufacturing system, and of the relations between them, is required. This will enable unambiguous manipulation of the concepts and will facilitate the consideration of only related data and information within the collaboration activities in the design, modelling and simulation of manufacturing systems as a part of the digital factory. Thus, the following functionalities emerge for a framework for the generation of knowledge and the modelling and simulation of manufacturing systems:

- Embedding of domain knowledge within a single knowledge base.
- Inclusion of results from the daily work of project members involved in the design and development of new products and manufacturing systems.
- Discovery of relationships between the entities of the knowledge base.
- Generation of new knowledge through inference.
- Integration of generated knowledge into the knowledge base and therewith enrichment of the knowledge base.
- Facilitation of simulation and evaluation of a manufacturing system, and integration of the simulation results back into the knowledge base.
- Extraction of data and information from the knowledge base for the users and project members in a format needed for daily project work (e.g. spreadsheets, process models, documents, drawings, etc.).
4 Framework for the Generation of Knowledge within the Manufacturing Domain

The framework for knowledge generation has to enable a knowledge base that describes products, materials, resources, technologies and processes. Their models have to hold the information and knowledge needed for flawless project activities of the users. Figure 1 presents the main components of the framework:

- A standard-based core ontology of the manufacturing domain.
- A rule-based interface for the integration of dispersed information about the products, processes, resources, technologies and organisation of a particular manufacturing system.
- The Manufacturing Knowledge Base (MKB) as the entirety of information and knowledge for the particular manufacturing system.
- Rules for inference, for the generation of new knowledge and for the enrichment of the MKB.
- An interface for the modelling and simulation of the manufacturing system and for the integration of the simulation results into the knowledge base.
Fig. 1. Semantic Web Framework for Manufacturing Systems.
4.1 Core Manufacturing Ontology
The Core Manufacturing Ontology (OWL-M) is under development at Fraunhofer IPK and is built, using the RDF/OWL syntax, upon the object models and structures defined within the ISA-95 standard series and the open standard OAGIS. OWL-M is a further development of the data model for simulation explained in [1], structured according to the classes of ISA-95:
- Process segments as a business view of the production.
- Product definition with bills of materials and production plans.
- Resources and their subclasses (personnel, equipment and material).
- Work description of production, maintenance, quality tests and inventory as:
  - Capabilities, as the highest sustainable output rate that can be achieved for a given product mix, raw materials and resources;
  - Schedules for production, maintenance, quality tests and inventory;
  - Performance, as a report of the production responses.
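Purely as an illustration (the actual OWL-M is not reproduced here), the following sketch uses the open-source rdflib library to load a hypothetical OWL fragment mirroring the class structure listed above, under an assumed example namespace.

from rdflib import Graph

ttl = """
@prefix ex:   <http://example.org/owl-m#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:ProcessSegment    a owl:Class .
ex:ProductDefinition a owl:Class .
ex:Resource          a owl:Class .
ex:Personnel   a owl:Class ; rdfs:subClassOf ex:Resource .
ex:Equipment   a owl:Class ; rdfs:subClassOf ex:Resource .
ex:Material    a owl:Class ; rdfs:subClassOf ex:Resource .
ex:Capability  a owl:Class .
ex:Schedule    a owl:Class .
ex:Performance a owl:Class .
"""

g = Graph()
g.parse(data=ttl, format="turtle")  # the fragment becomes machine-processable triples
print(len(g), "axiom triples loaded")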
Since ISA-95 and OAGIS were developed for the execution of manufacturing systems, OWL-M comprises additional classes and properties that describe shift patterns, spatial elements for the layout, manufacturing engineering project phases (installation, qualification and ramp-up), resource status, queues, transporters, performance indicators, etc. OWL-M provides a core structure that can be used for the development of an extensive MKB, comprising the individuals, for a particular manufacturing system.

4.2 Rule-based Mediation of Source Ontologies

The Manufacturing Knowledge Base (MKB) comprises several interfaced ontologies around OWL-M. The input information from different IT applications is imported as XML files. These XML files can be transformed into OWL files with weak semantics, since they retain the original structure of the XML file while using the OWL syntax. The integration of the elements within these input OWL files into the structure of OWL-M can be realised through rule-based mediation. The mapping procedure yields the correspondences between the source OWL files and OWL-M. The alignment and matching of the ontologies are specifications of similarities. These specifications are the basis for rule development by the engineer. The rules govern the merging and the creation of the MKB on the skeleton of OWL-M. An inference engine (software) applies the rules, reasons over the OWL-M structure, populates the existing classes, properties and instances in the form of statements, and generates new ones.
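The following rdflib-based sketch illustrates the mediation step: a single engineer-defined mapping rule rewrites statements from a weak-semantics source ontology into OWL-M terminology. The namespaces and property names are assumptions for illustration; a real implementation would rely on an inference engine evaluating many such rules.

from rdflib import Graph, Namespace

SRC  = Namespace("http://example.org/erp-export#")   # source OWL file (ERP terminology)
OWLM = Namespace("http://example.org/owl-m#")        # OWL-M skeleton

source = Graph()
source.add((SRC.SiP_07, SRC.partList, SRC.RS_1))     # statement in source terminology

# Mapping rule from the alignment step: SRC.partList corresponds to OWLM.hasMaterial
property_map = {SRC.partList: OWLM.hasMaterial}

mkb = Graph()
for s, p, o in source:
    mkb.add((s, property_map.get(p, p), o))          # populate the MKB

for triple in mkb:
    print(triple)  # (SiP_07, owl-m:hasMaterial, RS_1)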
4.3 Inference and Enrichment of the Manufacturing Knowledge Base

Due to the variety of information sources upon which the MKB is generated, there is "hidden" knowledge within the MKB that is not yet "visible" to the user (person or IT application). For example, the bill of materials given by the product
designer frequently does not contain additional materials such as consumables used in production. The list of fixtures and tools needed for a particular product is not given by the product designer and is not available in the product description. Nor does a production plan exist yet. To produce all this information in a structure needed for further processing (e.g. bill of resources, production plan), discussions and information exchange between project members are usually needed. This goal can instead be achieved through rules and reasoning, performed again by an inference engine. The antecedent part of a rule consists of existing axioms and facts about the individuals (triples) from different classes or ontologies. The consequent part includes the facts and statements that have to be generated through inference. An overview of the whole process is given below through an example.

A product SiP_07 and the corresponding manufacturing system have to be designed. The product designer determines the list of components that the new product has to contain (Figure 2). Some of the product features are given, too. These result from the customer requirements and the experience of the designer.

Facts: SiP_07 has the materials RS_1, D_3, C_7 and T_5. The product SiP_07 has to be processed on a pick-and-place machine with a speed of 300. The printing has to be performed according to the Jig principle, and SiP_07 has to have a value of 5 for the feature distance.

Fig. 2. Description of the Instance SiP_07 by the Product Designer
The technologist has knowledge about the materials, components and their features as a result of his or her experience, and uses his or her own terminology (Figure 3). Facts: RS_1 needs the technologies Modules_Test, Reflow, Pick_and_Place, Jig_Printing, Sieve_Printing and Stacking. RS_1 needs either SB_21, with a length of 5, or SB_22, with a length of 7.
Fig. 3. Description of the Instance RS_1 by the Technologist.
The modeler is the designer of OWL-M and is responsible for the semantics within the ontology (Figure 4), as well as for the rules which enable reasoning and inference. Rules can be modelled as sets of atoms.

hasLength owl:equivalentProperty hasDistance
Fig. 4. Equivalency of two Properties.
A rule example is given in Table 1.

Table 1. Example of a Rule for Extension of the Bill of Material

Rule Atom                         | Example                        | Source
----------------------------------|--------------------------------|------------------
IF                                | IF                             |
(?P :hasMaterial ?C)              | SiP_07 hasMaterial RS_1        | Product Designer
(?C :needsMaterial ?M)            | RS_1 needs SB_21               | Technologist
(?P, ?PP, ?PPV)                   | SiP_07 hasDistance 5           | Product Designer
(?M, ?MP, ?MPV)                   | SB_21 hasLength 5              | Technologist
(?PP owl:equivalentProperty ?MP)  | Distance and Length are equal  | Modeler
equal(?PPV, ?MPV)                 | 5 = 5                          | Modeler
THEN                              | THEN                           |
(?P :hasMaterial ?M)              | SiP_07 hasMaterial SB_21       | Inference Engine
The first rule atom (?P :hasMaterial ?C) considers all products P and the instances C that are related to P through the property hasMaterial. The following atoms of the rule's body are additional premises of the rule. The head of the rule contains just one atom (triple), which has to be generated by the inference engine that
reasons upon the ontology. The rule checks whether all premises given in the antecedent are satisfied for a set of particular individuals; if so, the list of materials related to SiP_07 through the property hasMaterial is extended. All values for M that satisfy the conditions are added to the list, and therewith the MKB is augmented. In this example the technologist defined that RS_1 needs either SB_21, with a value of 5 for the feature hasLength, or SB_22, with a value of 7 for the same feature. Since, as per the last premise of the rule, the values of hasDistance and hasLength have to be equal, only SB_21 results from the inference. A new statement (SiP_07 :hasMaterial SB_21) enriches the bill of materials of the individual SiP_07, and therewith new knowledge is generated (Figure 5).

Fig. 5. Augmented Description of the Instance SiP_07 after the Inference.
The same procedure can be applied to other axioms and facts within the MKB through the deployment of additional rules specified to reach a particular goal. Deploying the knowledge of the production engineer, a production plan for SiP_07 can be generated, too. After the inference, the results can be seen as triples (subject-predicate-object). Only those triples that are selected by the user constitute the generated knowledge and can be asserted into the MKB.

4.4 Connecting the MKB with the Simulation Model
The enrichment of the MKB can yield a completion of the information needed for the simulation of the manufacturing system (bills of materials, production plans, available resources, production orders and shift plans). The more data about the manufacturing system are available, the more accurately and the closer to reality the simulation can be performed. Through transformation, the triples from the MKB can be used as input for the simulation model. After the end of the simulation, the results (e.g. throughput time, resource utilisation, buffer times and needed capacities) can be imported into the MKB through an XML-to-OWL transformation. Therewith the information and knowledge
gained from the simulation can be used for further inference and enrichment of the MKB.
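A sketch of such an XML-to-OWL transformation with rdflib is given below; the XML element and attribute names are assumptions, since the paper does not fix a result schema.

import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/owl-m#")

xml_results = """<results>
  <resource id="Press_01" utilisation="0.83"/>
  <product id="SiP_07" throughputTime="412.5"/>
</results>"""

g = Graph()
for el in ET.fromstring(xml_results):
    subject = EX[el.get("id")]
    for attr, value in el.attrib.items():
        if attr != "id":
            g.add((subject, EX[attr], Literal(float(value))))  # one triple per result

print(g.serialize(format="turtle"))  # ready for assertion into the MKB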
5 Conclusions and Future Developments

An ontology for the manufacturing domain is needed as a skeleton for modelling the knowledge about a particular manufacturing system. The basis for the core manufacturing ontology (OWL-M) presented in this paper is taken from the structures and object models defined within the ISA-95 series and the open standard OAGIS. The distributed information about one particular manufacturing system can be transferred from the source files into several ontologies. The objects of those ontologies can be integrated into the OWL-M skeleton through rule-based ontology mediation, yielding the Manufacturing Knowledge Base (MKB) for that manufacturing system. Rules can generate new knowledge, and through the assertion of inferred statements the MKB can be augmented. Since this paper describes a method and first applications, further developments are necessary in order to provide users with a friendlier interface for ontology modelling, as project members cannot be expected to be familiar with ontology modelling software.

The MKB, as a repository of knowledge about the particular manufacturing system, can be used for further analysis and developments such as simulation. Current practice is that the logical decisions at branching points of the material flow are embedded in the simulation models; examples are scheduling decisions, batching, matching, lot splitting, etc. These logic models could instead be part of the MKB and therewith free the simulation models from complex constructs of modules and logical elements. This approach would enable the rapid building of simulation models for complex manufacturing systems. The results obtained from the simulation could be integrated into the MKB as an additional enrichment. The inclusion of the scenarios in the MKB could provide the basis for a manufacturing system advisor that queries a knowledge base in which expert knowledge is stored and can be used later by other users or applications. Further developments could enable the connection of the MKB with existing Manufacturing Execution Systems: similar to the results from simulation, real information from the daily operation of the manufacturing system could augment the MKB.

The described semantic Web framework can substitute a part of the project activities that usually take the form of data and information exchange, understanding, agreeing, reasoning and knowledge generation. The modelling of the manufacturing domain with ontologies facilitates and accelerates the cooperation and collaboration processes within projects for product design and the development of manufacturing systems.
References

[1] Gocev, P., Rabe, M.: Simulation Models for Factory Planning through Connection of ERP and MES Systems. In: Tagungsband 12. ASIM-Fachtagung Simulation in Produktion und Logistik, pp. 223-232. Kassel (2006)
[2] Gocev, P.: Semantic Web Technologies for Simulation in Production and Logistics – A Survey. In: Simulation und Visualisierung 2007 – Doktorandenforum Diskrete Simulation, pp. 1-10. Magdeburg (2007)
[3] Silver, G., Hassan, O., Miller, J.: From Domain Ontologies to Modeling Ontologies to Executable Simulation Models. In: Proceedings of the 2007 Winter Simulation Conference, pp. 1108-1117 (2007)
[4] Miller, J., Fishwick, P.: Investigating Ontologies for Simulation Modelling. In: Proceedings of the 37th Annual Simulation Symposium (ANSS'04), pp. 55-63 (2004)
[5] Project PABADIS'PROMISE. www.pabadis-promise.org
[6] Development of Product and Production Process Description Language (PPPDL). www.uni-magdeburg.de/iaf/cvs/pabadispromise/dokumente/Del_3_1_Final.pdf
[7] Gruber, T.: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5(2):199-220 (1993)
[8] Lin, H.-K., Harding, J.A., Shahbaz, M.: Manufacturing System Engineering Ontology for Semantic Interoperability across Extended Project Teams. International Journal of Production Research, 42(24):5099-5118. Taylor & Francis (2004)
[9] Ye, Y., Yang, D., Jiang, Z., Tong, T.: Ontology-based Semantic Models for Supply Chain Management. The International Journal of Advanced Manufacturing Technology. Springer, London (2007)
[10] Borgo, S., Leitão, P.: Foundations for a Core Ontology of Manufacturing. In: Ontologies – A Handbook of Principles, Concepts and Applications in Information Systems, Vol. 14, Part 4, pp. 751-775. Springer (2007)
[11] Leitão, P., Colombo, A., Restivo, F.: ADACOR – A Collaborative Production Automation and Control Architecture. IEEE Intelligent Systems, 20(1):58-66 (2005)
[12] World Wide Web Consortium (W3C). www.w3.org
[13] Extensible Markup Language (XML). http://www.w3.org/XML
[14] Resource Description Framework (RDF). www.w3.org/RDF
[15] Resource Description Framework Schema (RDFS). www.w3.org/TR/rdf-schema
[16] Web Ontology Language (OWL). www.w3.org/2004/OWL
[17] Semantic Web Rule Language (SWRL). www.w3.org/Submission/SWRL
[18] Instrumentation, Systems, and Automation Society: Enterprise-Control System Integration, Parts 1-3, published 2000-2005. www.isa.org
[19] Open Applications Group Integration Specification (OAGIS). www.oagi.org
[20] Standard for the Exchange of Product Model Data (STEP). www.tc184-sc4.org/SC4_Open
[21] Machinery Information Management Open Systems Alliance (MIMOSA). www.mimosa.org
[22] RosettaNet Standards. www.rosettanet.org
[23] Petroleum Industry Data Exchange (PIDX). www.pidx.org
[24] Industrial Automation Systems and Integration – Integration of Life-Cycle Data for Process Plants Including Oil and Gas Production Facilities. www.iso.org; http://15926.org
[25] Industrial Automation Systems and Integration – Diagnostics, Capability Assessment, and Maintenance Applications Integration, Part 1 (under development, 2006). www.iso.org
[26] Function Blocks for Industrial-Process Measurement and Control Systems. www.iec.ch
[27] Studer, R., et al.: Arbeitsgerechte Bereitstellung von Wissen – Ontologien für das Wissensmanagement. Technical Report, Institut AIFB, Universität Karlsruhe (2001). www.aifb.uni-karlsruhe.de/WBS/ysu/publications/2001_wiif.pdf
[28] Object Management Group (OMG). www.omg.org
[29] Systems Modeling Language (SysML). www.sysml.org
[30] Unified Modeling Language (UML). www.uml.org
[31] XML Metadata Interchange (XMI). www.omg.org/technology/documents/formal/xmi.htm
Semantic Interoperability Requirements for Manufacturing Knowledge Sharing

N. Chungoora1 and R.I.M. Young2

1 Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Loughborough, LE11 3TU, UK
[email protected]
2 Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Loughborough, LE11 3TU, UK
[email protected]
Abstract. Nowadays, sophisticated Computer Aided Engineering applications are used to support concurrent and cross-enterprise product design and manufacture. However, at present, problems are still encountered whenever manufacturing information and knowledge have to be communicated and shared in computational form. One of the most prominent of these problems concerns semantic mismatches, which impinge on the achievement of seamless manufacturing interoperability. In this paper, the possible configuration of frameworks to capture semantically enriched manufacturing knowledge for manufacturing interoperability is discussed. Ontology-driven semantic frameworks, based on explicit definitions of manufacturing terminology and knowledge relationships, offer an attractive approach to solving manufacturing interoperability issues. The work described in this paper defines Hole Feature ontological models in order to identify and capture preliminary semantic requirements by considering different contexts in which hole features can be described.

Keywords: interoperability, semantics, manufacturing knowledge sharing
1 Introduction

Information and Communications Technology (ICT) infrastructures, coupled with appropriate manufacturing strategies and practices, can bring considerable benefits towards the survival and integration of manufacturing enterprises. According to Ray and Jones [1], interoperability is "the ability to share technical and business data, information and knowledge seamlessly across two or more software tools or application systems in an error free manner with minimal manual interventions". Seamless interoperability, although a fundamental requirement for ICT
infrastructures supporting efficient collaborative product development, is still not completely achievable. This lack of interoperability is costly to many globally distributed industries [2], where significant amounts of money are spent on overcoming interoperability problems [3]. Several problems are responsible for the lack of interoperability of manufacturing systems, the most common being incompatibility between the syntaxes of the languages and the semantics of the terms used by the languages of software application systems [4]. It has been asserted that the problems of interoperability are acute for manufacturing applications, as applications using process specifications do not necessarily share syntax and definitions of concepts [5]. Moreover, clear emphasis has been laid on the fact that either common terms are used to mean different things or different terms are used to mean the same thing, which leads to potentially substantial interoperability problems [1].

Several authors, such as Prawel [6], Liu [7] and Cutting-Decelle et al. [8], have recognised the importance of product data exchange and information modelling as a means of obtaining a certain level of systems integration. Systems and process integration and interoperability go hand in hand: for example, at the manufacturing level, the integration of mechanical analysis into the design process is one of the most obvious and crucial requirements, particularly during the early stages of design [9]. In modern PLM systems, manufacturing knowledge handled by decision support systems has to be communicated effectively across the entire lifecycle. In the design and manufacture domain, this knowledge is developed in activities such as Design for Function, Design for Assembly and Disassembly, Design for Manufacture and Manufacturing Planning. Current limitations of semantic interoperability therefore inevitably affect manufacturing knowledge sharing capability.

Efforts pursued through ontological approaches provide attractive potential solutions to the problem of semantic interoperability. However, the most significant difference between ontological approaches is the basis upon which the sharing of meaning is made, in relation to the level of rigour with which terms are defined [10]. Furthermore, it has been specified that interoperability between manufacturing activities is influenced by the context dependency of information [11]. Hence, an all-embracing framework to solve semantic manufacturing interoperability issues is likely to require rigorous ontological engineering which captures the contextual representations of information and knowledge. This paper provides an understanding of the potential that ontology-driven frameworks possess to solve semantic interoperability problems in the manufacturing domain. A hole feature ontology example has been devised to illustrate some of the requirements for capturing semantics, as well as to identify key areas on which to focus towards solving semantic manufacturing interoperability and manufacturing knowledge sharing issues.
2 Manufacturing Information and Knowledge Support Systems

The quest for PLM decision support systems with increasing decision-handling capabilities has driven the progression from information support systems to knowledge-based systems. Relevant information support for product design and manufacturing has been pursued through the use of Product and Manufacturing Models [12]. A Product Model may be defined as an information model which stores information related to a specific product [13]. The Product Model paradigm has gradually been extended over time, for instance through the inclusion of additional dimensions such as product family evolution [14]. A Manufacturing Model, on the other hand, is said to be a common repository of manufacturing capability information, whose use is justified by the way the relationships between all manufacturing capability elements are strictly defined [15].

Nowadays, new product development in large companies, operating for instance in the automotive and aerospace sectors, is supported by Knowledge-Based Engineering (KBE). KBE in industry is mostly used to automate design processes and to integrate knowledge and experience from different departments [16]. For example, in the design and manufacture domain, the generative technology of knowledge-based tools enables companies to create product definitions which incorporate the intuitive knowledge (experience) of designers and engineers about design and manufacturing processes [17]. The main claimed benefit of KBE lies in its ability to aid rapid product development in a collaborative way for increased productivity.

2.1 Integrating Product and Manufacturing Knowledge
The concept of acquiring manufacturing knowledge is partly based on having the appropriate system infrastructure to aid the integration of product and manufacturing information repositories. It has been noted that Manufacturing Information Models have not been shown to be fully integrated with each other or with a Product Information Model [18]. Manufacturing knowledge dissemination can be more specifically targeted at the useful interoperability of both product and manufacturing information repositories, in such a way that clear contexts, relationships, constraints and rules are defined. Previously, multi-view modelling attracted attention as a framework for gathering manufacturing systems knowledge. However, multi-view modelling to acquire manufacturing knowledge has been developed into solutions based on the use of UML, and therefore uses a lightweight ontological approach which is inappropriate for intersystem interoperability [10]. Therefore, more stringent approaches need to be devised in order to capture and share manufacturing knowledge.

2.2 Ontology-Driven Frameworks for Knowledge Support Systems
The area of ontological representation of knowledge is a subset of technologies for information and knowledge support [19], which implies that in one way or the
other, ontological approaches can be sought in order to set up platforms for knowledge-driven integration of Product and Manufacturing Models. In recent years, ontological engineering has been witnessed in the manufacturing domain: for instance, a Manufacturing System Engineering (MSE) ontology model that has the capability to enable communication and information exchange in inter-enterprise, multi-disciplinary engineering design teams has been proposed [20]. In another instance, a product ontology concerned with the development and refinement of the ISO 10303-AP236 standard to support information exchange for the furniture industry has been developed [21]. One fundamental observation made is that only a progression towards the use of heavyweight ontologies can provide greater confidence that the real meaning behind terms coming from different systems is the same [10]. Hence, heavyweight ontologies offer the potential of supporting semantic manufacturing interoperability.
3 Understanding Semantic Requirements for Knowledge Sharing

As an attempt to understand the need for semantic support in knowledge sharing between functional domains, Figure 1 has been proposed. A functional domain may be regarded as any group or community in which a particular body of knowledge is fully accepted, understood and shared for the realisation of a specific function. In a concurrent engineering environment, a functional domain could be synonymous with, for example, a team of people working in Design for Assembly or another group working in Design for Function. Having a communal acceptance of knowledge within a group implies that a specific ontology is being adopted within a functional domain. Therefore, it can be deduced that the common understanding of concepts, the communication and sharing of these concepts, and subsequent decision-making all depend on the semantics defined and used in a functional domain. Assuming that a functional domain has a semantically well-defined ontology as a basis for sharing knowledge and the meaning behind the knowledge, it becomes feasible to suggest that different domains are likely to develop their own ontologies. Hence, two functional domains, regardless of whether they operate within similar areas or not, may not necessarily achieve consensus whenever knowledge is to be shared between the two groups. This is because, although the ontologies can be well-formed, accepted and semantically correct in both individual groups, the semantics from both functional domains do not match when the groups have to communicate with each other. In concurrent engineering design and manufacture settings, semantic mismatches are primarily due to multiple manufacturing and product-related terminologies defined to mean similar things or to mean disparate concepts. As a consequence, a software system developed to suit the purpose of one functional domain, needing to communicate with another software system suited to another domain, does not always readily do so. These semantic problems can be carried downstream in the product lifecycle. At present, there still exist problems related to ambiguous semantics ([5], [10]) which prevent manufacturing knowledge from being captured and shared seamlessly. In Figure 1, the central ellipse
denotes the ongoing misfits in semantics, which lead to the problem of knowledge sharing.

Fig. 1. Knowledge Sharing and Semantics between Functional Domains
3.1 Hole Feature Ontology Model to Identify Semantic Requirements
To illustrate the issue identified previously, an example has been put forward in which aspects of two different but related domains (namely a design domain and a manufacture domain) have been captured. An ontological approach using the Protégé 3.3 tool has been exploited to model the two domains. The scope of the task is to identify a set of semantic requirements which can be used as specifications for the design of frameworks to promote semantic interoperability of knowledge resources within disparate contextual representations of features. It has been acknowledged that feature-based engineering bridges the gap between CAD and knowledge-based engineering systems [22]. Features play a key role in providing integration links between design and manufacture [23]. For these reasons, it was considered appropriate to build an ontology around a feature so as to incorporate some level of manufacturing knowledge. The feature ontology proposed has been developed to identify semantic requirements related specifically to holes as features. This is partly because hole feature manufacture is problematic and sometimes costly to industries, as a result of the diverse contexts, manufacturing processes and poorly established best practice methods associated with hole features. An example illustrating the prominence of contextual definitions in the designation of hole features is given in Figure 2. In the design functional domain, a counterbored hole accommodating a particular size of bolt may be
regarded as a bolt hole. In the manufacture functional domain, the functionality (of the hole acting as a bolt hole) can be of little importance; instead, the same hole may be designated as a counterbored hole. In the latter case, this could further imply that the counterbored hole needs to consist of an optional centre-drilling operation, a required drilling operation and a required counterboring operation. A hole feature may be considered from various contexts, and the semantics need to be defined for contexts such as functional, geometry, manufacturing, machining process and assembly [11].

Fig. 2. Considering a Counterbored Hole from Two Different Contexts
3.1.1 Design Hole Feature Ontology
A lightweight ontology for hole feature representation from a purely functional and geometry context, reflecting the design functional domain, was developed. Figure 3 depicts the class hierarchy of the Design Hole Feature ontology. This ontology may be regarded as one possible context in which hole features can be represented. The superclass “Design Hole Feature” has two subclasses, namely “Circular Cross-Section Hole” and “Shaped Hole”, implying shape and geometric property variations from these two parent classes.
Fig. 3. Class Hierarchy for the Design Hole Feature Ontology
Protégé allows the user to define classes and specify the properties or slots of these classes. These properties, also known as attributes in the object-oriented environment, describe the information elements which are the building blocks of a class. The necessity for parent-to-child class property inheritance (i.e. the is-a relationship) is significant. For example, the “depth” property of the class “Design Hole Feature” is inherited by the subclass “Circular Cross-Section Hole” and subsequent child classes. It is also possible to define additional slots for specific classes; for example, the “Circular Cross-Section Hole” class has a “primary diameter” property. It would not be reasonable for the class “Shaped Hole” to possess the property “primary diameter”, since a shaped hole consists of two or more geometries which define its cross-section. In the proposed Design Hole Feature ontology a few instances have been defined.
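As an illustration of the class hierarchy and slot inheritance just described, the fragment below sketches the Design Hole Feature ontology in OWL form. This is a minimal sketch using the present-day owlready2 Python library rather than the Protégé 3.3 frames environment used here; the class and property names follow Figure 3, and the instance values are invented for illustration.

```python
from owlready2 import Thing, DataProperty, get_ontology

onto = get_ontology("http://example.org/design_hole_feature.owl")

with onto:
    class DesignHoleFeature(Thing): pass
    class CircularCrossSectionHole(DesignHoleFeature): pass  # is-a relationship
    class ShapedHole(DesignHoleFeature): pass

    class depth(DataProperty):
        domain = [DesignHoleFeature]          # inherited by all child classes
        range = [float]

    class primary_diameter(DataProperty):
        domain = [CircularCrossSectionHole]   # not meaningful for ShapedHole
        range = [float]

# one illustrative instance
hole = CircularCrossSectionHole("bolt_hole_01")
hole.depth = [25.0]                # slot inherited from DesignHoleFeature
hole.primary_diameter = [10.0]     # slot specific to the subclass
```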
3.1.2 Machining Hole Feature Ontology

A similar approach to that used for the Design Hole Feature ontology has been adopted to devise a Machining Hole Feature ontology. The latter captures other contexts in which hole features can be represented, namely the machining and manufacturing process contexts, thus reflecting the manufacture functional domain. Provision for the geometric context has also been made, since it is impossible to describe a “Machining Hole Feature” without referring to the basic dimensions of the feature. Several classes and subclasses have been defined. Two main superclasses are present, namely “Machining Hole Feature” and “Hole Machining Operation”. The “Machining Hole Feature” class holds knowledge about particular classes of holes which can be encountered during production, while the “Hole Machining Operation” class holds knowledge about the capability of a particular process to machine a given hole feature. Figure 4 illustrates some of the relationships that can be made through slots. One type of relationship is the inverse relationship, whose semantics is well defined (in
the current ontology the “produced by” and “produces” slots, pertaining to the classes “Machining Hole Feature” and “Hole Machining Operation” respectively, are inverse slots), and Protégé allows the user to input this information through an “inverse-slot” option.
Fig. 4. Class Hierarchy and Knowledge Relationships in the Machining Hole Feature Ontology
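The inverse relationship described above can be sketched as follows, again as a hedged OWL approximation (via owlready2) of the Protégé frames model; the slot names mirror Figure 4.

```python
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/machining_hole_feature.owl")

with onto:
    class MachiningHoleFeature(Thing): pass
    class HoleMachiningOperation(Thing): pass

    class produced_by(ObjectProperty):
        domain = [MachiningHoleFeature]
        range = [HoleMachiningOperation]

    class produces(ObjectProperty):
        domain = [HoleMachiningOperation]
        range = [MachiningHoleFeature]
        inverse_property = produced_by   # asserting one direction implies the other

    class requires_machining_sequence(ObjectProperty):
        domain = [HoleMachiningOperation]
        range = [HoleMachiningOperation]  # e.g. reaming requires prior drilling
```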
The property defined as “requires machining sequence” is a requirement for the “Hole Machining Operation” class. This is because, for selected processes, a processing sequence may be required before a complete machined hole feature can be obtained. As previously seen, attributes or properties can be defined so that they represent relationships which state the behaviour of information elements between classes, slots and instances, thereby capturing some knowledge within the domain ontology. In order to understand occurrence and process-dependency aspects in hole manufacturing operations, a reaming operation required to produce a reamed hole has been taken into account. A “Reamed Hole” may be described as being “produced by” a certain reaming operation which “produces” the hole feature and achieves the necessary dimensional target. The reaming operation involves the use of an available manufacturing resource, “Machine Chucking 10.02mm”. Also, it is possible to define that a reaming operation “requires machining sequence” and to use this property to identify other manufacturing operations and resources pertinent to a reaming operation. The instances diagram in Figure 5 gives a clear idea of the level of semantic linking that needs to be defined through relationships.
Fig. 5. Implications of Producing a Reamed Hole from Knowledge Contained in Instances
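The reamed-hole scenario can then be expressed as instances linked through these slots. The sketch below continues the fragment above; the instance names are ours, and the dimensional details are adapted from the text and Figure 5 (a 9.52 mm drilled hole produced with a stub-length 9.50 mm drill, followed by the 10.02 mm reaming resource).

```python
# continuing from the Machining Hole Feature sketch above
drilling = onto.HoleMachiningOperation("drilling_9_52mm")  # stub-length 9.50 mm drill
reaming = onto.HoleMachiningOperation("reaming_10_02mm")   # uses "Machine Chucking 10.02mm"
reaming.requires_machining_sequence = [drilling]           # ream only after drilling

reamed_hole = onto.MachiningHoleFeature("reamed_hole_01")
reamed_hole.produced_by = [reaming]

# owlready2 maintains the declared inverse slot automatically:
print(reaming.produces)   # -> [machining_hole_feature.reamed_hole_01]
```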
4 Discussions and Conclusions

The feature-oriented ontologies developed with well-defined semantic relationships reflect a potential way forward for the integration and sharing of product and manufacturing knowledge. It is clear that the functional and geometry contexts from the Design Hole Feature ontology capture some relevant aspects of the Product Model perspective. On the other hand, the machining process context
based on manufacturing methods, witnessed in the Machining Hole Feature ontology, reflects the Manufacturing Model perspective. In Design for Manufacture, having context-specific hole feature representations is important, but these representations should not be used in isolation from each other. Thus, a basis needs to be defined to enable the successful management and matching of feature-oriented ontologies constructed from different contextual views for knowledge sharing. To meet the purpose of manufacturing interoperability, unambiguous semantic relationships need to be set up among these context-specific ontologies so that multi-context manufacturing knowledge becomes interoperable and subsequently shareable. In the experiment, the “requires machining sequence” property gives the relationship between multiple hole machining operations and introduces a knowledge element to the system. Although present, the basis on which the “requires machining sequence” property has been defined is still not explicit enough, and it would be an advantage to create a more rigorous statement. Some of the questions which could be asked concerning this issue are: is the sequence an ordered sequence? Is the machining sequence in ascending order of dimensional targets? Is the sequence in descending order of importance? One highly promising direction for solving this and similar issues is to include meta-modelling of the classes and slots in such a manner that the semantics behind classes and properties are fully captured, thus removing the ambiguities present in semantic linking through property definitions. Given that the manufacturing knowledge in question is captured using different methods employed by different groups and cutting across varying contexts, two important questions need to be reviewed in the quest for an ontology-driven semantically interoperable framework for manufacturing knowledge sharing. These main questions are identified below:
- How can methods of manufacturing knowledge capture be refined in such a way that a level of semantic enrichment of the knowledge is achieved to enable comprehensive knowledge sharing?
- To what extent can a semantic framework verify what segments of manufacturing knowledge are appropriate for sharing? Conversely, what segments of manufacturing knowledge prove to be semantically dissimilar, so that they cannot be shared?
At this stage, it is possible to list a number of distinct semantic requirements to be satisfied with the intention of promoting semantic interoperability for manufacturing knowledge sharing. These requirements are as follows:
- It is necessary to provide an adequate basis for sharing design and manufacture related meaning through comprehensive ontological frameworks (e.g. through the construction of domain ontologies).
- These frameworks should entail sufficient complexity in the way information and knowledge are structured (e.g. through meta-modelling).
- Semantic definitions need to cut across several contexts so as to provide a basis for matching semantics from different functional domains (e.g. geometric, assembly and machining contexts).
- Semantic linking should be made through well-defined knowledge relationships, thereby bridging the semantic gaps between contexts.
- It is essential to provide an underlying mathematical rigour to formalise semantic statements (e.g. where dependencies on options, sequences, activities and event-based parameters are present).
Future work, based on the RDF(S)/OWL ontology markup languages and using the Protégé OWL and Altova SemanticWorks tools, will provide further insight into answering the above questions. Furthermore, the subsequent exploration of heavyweight feature-oriented ontologies using the Process Specification Language (PSL) will provide additional support for overcoming the problem of semantic interoperability for manufacturing knowledge sharing.
References

[1] Ray SR, Jones AT, (2003) Manufacturing interoperability. Concurrent Engineering, Enhanced Interoperable Systems. Proceedings of the 10th ISPE International Conference, Madeira Island, Portugal: 535–540
[2] National Institute of Standards and Technology, (1999) Interoperability cost analysis of the U.S. automotive supply chain. http://www.nist.gov/director/prog-ofc/report991.pdf
[3] Brunnermeier SB, Martin SA, (2002) Interoperability costs in U.S. automotive supply chain. Supply Chain Management: an International Journal 7(2): 71–82
[4] Das B, Cutting-Decelle AF, Young RIM, Case K, Rahimifard S, Anumba CJ, Bouchlaghem N, (2007) Towards the understanding of the requirements of a communication language to support process interoperation in cross-disciplinary supply chains. International Journal of Computer Integrated Manufacturing 20(4): 396–410
[5] Pouchard L, Ivezic N, Schlenoff C, (2000) Ontology engineering for distributed collaboration in manufacturing. AIS2000 Conference. http://www.acmis.arizona.edu/CONFERENCES/ais2000/Papers.back/Papers/PDF/a026pouchardlc.pdf
[6] Prawel D, (2003) Interoperability best practices: advice from the real world. TCT 2003 Conference organised by Rapid News and Time Compression Technologies, NEC, UK
[7] Liu S, (2004) Manufacturing information and knowledge models to support global manufacturing coordination. PhD Thesis, Loughborough University, Loughborough, UK
[8] Cutting-Decelle AF, Das BP, Young RIM, Case K, Rahimifard S, Anumba CJ, Bouchlaghem NM, (2006) Building supply chain communication systems: a review of methods and techniques. Data Science Journal 5: 26–51
[9] Aifaoui N, Deneux D, Soenen R, (2006) Feature-based interoperability between design and analysis processes. Journal of Intelligent Manufacturing 17: 13–27
[10] Young RIM, Gunendran AG, Cutting-Decelle AF, Gruninger M, (2007) Manufacturing knowledge sharing in PLM: a progression towards the use of heavyweight ontologies. International Journal of Production Research 45(7): 1505–1519
[11] Gunendran AG, Young RIM, Cutting-Decelle AF, Bourey JP, (2007) Organising manufacturing information for engineering interoperability. Interoperability for Enterprise Software and Applications Conference, Madeira Island, Portugal
[12] Costa CA, Young RIM, (2001) Product range models supporting design knowledge reuse. IMechE Part B Journal of Engineering Manufacture 215(3): 323–337
[13] Molina A, Ellis TIA, Young RIM, Bell R, (1995) Modelling manufacturing capability to support concurrent engineering. Concurrent Engineering Research and Applications 3(1): 29–42
[14] Sudarsan R, Fenves SJ, Sriram RD, Wang F, (2005) A product information modelling framework for product lifecycle management. Computer Aided Design 37: 1399–1411
[15] Liu S, Young RIM, (2004) Utilizing information and knowledge models to support global manufacturing co-ordination decisions. International Journal of Computer Integrated Manufacturing 17(4): 479–492
[16] Liening A, Blount GN, (1998) Influences of KBE on the aircraft brake industry. Aircraft Engineering and Aerospace Technology 70(6): 439–444
[17] Kochan A, (1999) Jaguar uses knowledge-based tools to reduce model development times. Assembly Automation 19(2): 114–117
[18] Feng SC, Song EY, (2003) A manufacturing process information model for design and process planning integration. Journal of Manufacturing Systems 22(1): 1–16
[19] Chandra C, Kamrani AK, (2003) Knowledge management for consumer-focused product design. Journal of Intelligent Manufacturing 14: 557–580
[20] Lin HK, Harding JA, (2007) A manufacturing engineering ontology model on the semantic web for inter-enterprise collaboration. Computers in Industry 58(5): 428–437
[21] Costa CA, Salvador VL, Meira LM, Rechden GF, Koliver C, (2007) Product ontology supporting information exchanging in global furniture industry: 278–280. In: Goncalves RJ et al. (eds.) Enterprise interoperability II: new challenges and approaches, Springer-Verlag London Limited, London, UK
[22] Otto HE, (2001) From concepts to consistent object specifications: translation of a domain-oriented feature framework into practice. Journal of Computer Science and Technology 16(3): 208–230
[23] Brimson J, Downey PJ, (1986) Feature technology: a key to manufacturing integration. Computer Integrated Manufacture Review
Collaborative Product Development: EADS Pilot Based on ATHENA

Nicolas Figay1 and Parisa Ghodous2

1 EADS IW, 12 rue Pasteur, 92152 Paris Cedex, France [email protected]
2 Université Claude Bernard Lyon 1, Bâtiment Nautibus, 43 bd du 11 novembre 1918, 69622 Villeurbanne Cedex, France [email protected] URL: http://liris.cnrs.fr/
Abstract. When willing to support collaboration within an enterprise or between enterprises, it is necessary to support the fast establishment of communication and interactions between numerous organizations, disciplines and actors. More and more, this implies being able to interconnect the enterprise applications that support the involved partners and their communication, authoring and management processes. This paper presents an innovative federation framework, and its associated usage, which composes in an effective way already existing enterprise, knowledge and application interoperability frameworks that are themselves standardized. The framework is defined according to the ATHENA vision, addressing interoperability at the enterprise, knowledge and information/communication technology levels and establishing links at the semantic level by means of ontologies for information, services and processes. The framework aims to address the governance, organizational and technological obstacles identified in industrial contexts when trying to establish fast and effective eCollaboration, providing modelling and execution platforms able to produce executable collaboration models and supporting round-trip development, which is a prerequisite when federating the legacy solutions of the partners involved in the collaboration. It proposes policies for the choice, composition and usage of de jure and de facto standards that will remove these obstacles. It will be validated within the particular domain of collaborative product development in the aerospace sector. Keywords: Interoperability of Enterprise Application, Federation, Model Driven Interoperability, Semantic Preservation
1 Introduction: Interoperability Needs and Issues for Emerging Networked Organizations

In order to face an increasingly competitive environment, enterprises rely more and more on enterprise applications, which support the main functions and processes of the enterprise: Enterprise Resource Planning applications, human resources management applications, and so on. A characteristic of these applications is that they support, enact, monitor, control and sometimes execute business processes of the enterprise for numerous activities and categories of users, which are geographically distributed. In addition, they manage the informational resources of the enterprise, which are increasingly electronic: databases, documents, models, knowledge… These are produced by authoring tools such as document authoring tools (e.g. Microsoft Word), relational database systems and Computer Aided Design tools (e.g. Dassault Systèmes CATIA). As the functions of the enterprise need to be interconnected in order to support transversal processes, in particular those related to clients and attached to the creation of value and benefits, the integration of the enterprise information system became a key issue in recent years, with integration frameworks, integration systems (e.g. Enterprise Application Integration systems), middleware (e.g. the Common Object Request Broker Architecture [3]), data/document exchange facilities (e.g. XML) and service oriented architectures and systems as means. In order to govern the evolution of the whole enterprise information system and to align it with the strategic objectives of the enterprise, new approaches have been created, such as controlled urbanization of the enterprise information system with associated enterprise modelling capabilities (e.g. business process modelling or decision modelling). Nevertheless, the integration of enterprise applications remains difficult, due to the heterogeneity of legacy applications and to the fact that the software used was not initially designed to be interoperable (cf. the interoperability anti-patterns defined in ATHENA [13]). In addition, the integration solutions provided by the market are most of the time not interoperable between themselves, adding technological or software product silos to the functional silos.

Because of the globalization of the economy and the necessity to focus on core high-value business activities, enterprises have also had to establish partnerships with other companies specialized in other domains, which are required to support their activities but are outside their core activities. For example, in-house software development has been replaced by the selection of commercial off-the-shelf (COTS) software products. Another example is the aerospace sector, where the amount of subcontracted activity, including design activity, is today targeted at 60%. This led to the creation of what is called the Virtual Enterprise, where integrators have to coordinate the activities of all the partners and bring them onto the information and communication system supporting product development (what is called the Extended Enterprise). The challenge of the Extended Enterprise is very often seen as being able to integrate the information systems of several enterprises. But as each enterprise information system was established independently, such integration is a very difficult challenge.
Due to the new relationships established between enterprises, such integration is also, most of the time, neither desired nor possible, in particular because partners and members of the supply chain are
working with several partners, clients and programmes that are independent, and also because internal information systems are based on heterogeneous core domains and activities. How, then, can an integrator constrain a partner to work with tools that relate to a domain the partner is not expert in and that is outside its core activity? In such a situation, collaboration has to be established between partners having heterogeneous core domain activities, processes, applications and information/communication technologies, targeting not integration but fast, time-limited interconnection of the applications supporting the collaboration within a collaboration space. For such a challenge, the existence and usage of an accurate set of de jure and de facto standards, addressing interoperability at the enterprise level, the knowledge domain level and the information and communication technology level, are critical.

The competition of numerous standardization bodies and solution providers, pushing valuable but incompatible and overlapping solutions, does not facilitate the task. The ATHENA research programme on interoperability of enterprise applications highlighted the difficulties of using solutions coming from these different communities simultaneously when trying to integrate them within piloting activities. For example, semantic mediation prototypes were based on Resource Description Framework schemas, while service oriented execution components were based on messages structured according to XML Schema [8]. This required work on the mapping of schema definition languages. Similar issues exist for model interchange formats (XML Metadata Interchange, XML Process Definition Language, etc.). From the enterprise piloting activities' point of view, this was particularly important because the different models had to be coherent and projected onto a robust service oriented execution platform.

Finally, business domain communities (such as manufacturing, health, etc.) are creating their own standardization communities collaborating with several information and communication technology interoperability initiatives. Some try to formalize their needs in a way that is independent of information and communication technologies, developing their own specification framework (e.g. the ISO STEP community) that includes bindings to important interoperability technologies (e.g., for the ISO STEP community, bindings to XML, UML [12], Java, etc.), but also contributing to vertical specification definitions concerning their domain in liaison with different other communities (e.g. PDM Enablers and PLM services [4] within the ManTIs [5] group of the OMG, or the PLCS consortium at OASIS defining the PLCS PLM services or Reference Data Libraries). An important difficulty is related to incompatibilities between the paradigms used and the solutions developed for each technological framework. This was highlighted, for example, within the SAVE project when trying to use STEP AP214 and PDM Enablers simultaneously. Another difficulty is related to incompatible implementations of the standards, due to insufficient conformance testing frameworks and certification processes. Consequently, there is a strong need today for new interoperability solutions enabling enterprises to collaborate in a federated way, and to interconnect their legacy integrated information systems by using the different relevant technological interoperability frameworks together in a coherent way, in order to avoid technological silos.
2 State of the Art Concerning Standard Based Technological Interoperability Frameworks to Reuse

The ATHENA research programme provided a foundation for establishing the interoperability of enterprise applications, but also highlighted a set of important issues that are still not solved. Existing interoperability frameworks can be assessed according to the different viewpoints defined by ATHENA.

2.1 ATHENA
The ATHENA Integrated Project proposed a vision for the interoperability of enterprise applications, to be addressed in a holistic way at the enterprise, knowledge and ICT (Information and Communication Technologies) levels, with semantic models as the glue between levels and enterprise applications. It is a basis for the establishment of the federation framework and for the identification of relevant standards to use within this framework.
Fig. 1. Interoperability supported at all the layers of the enterprise by ATHENA
Within ATHENA, several sectors (aerospace, telecom, automotive and furniture) and domains (enterprise modelling, executable service oriented platforms) were involved. It is important to note, however, that an open federative platform for several organizations was not really addressed by the researchers, who targeted one-to-one connections, as reflected in the model of reference. Neither federation issues related to data (e.g. multiple identification, naming and typing rules) nor the usage of different legacy formalisms within a single modelling environment were addressed. This was an issue when trying to integrate the innovative solutions produced by ATHENA within an open collaborative framework.

2.2 Standard Based Technological Interoperability Frameworks to Reuse
Numerous solutions and approaches have been developed over the last decade in order to address interoperability. A state of the art review was carried out in order to cover the different layers to be considered for enterprise application interoperability (enterprise, knowledge, ICT with semantic mediation), covering the information, service and process aspects, and with openness, standardization and the existence of implementations as commodities on the web (i.e. free, open source
and robust implementations) as prerequisites. According to ATHENA, the different viewpoints to consider are domain knowledge, enterprise, and ICT (Information and Communication Technologies).

Domain knowledge related standards have been defined to address some interoperability issues, with specific technological frameworks and bindings to other technologies. This is the case for the manufacturing domain, which is used for validation of the federation framework through collaborative design business scenarios. This community has been producing, through the ISO 10303 STEP standard, a set of application protocols (formal and computational information models in EXPRESS [7]) in order to address the exchange, sharing and long-term retention of data describing a product (e.g. an aircraft or an automobile) between several organizations and software products. Bindings to existing technologies of interest are provided. This community is also producing sets of standardized object oriented interfaces (Product Data Management Enablers) and web services (Product Lifecycle Customer Support – PLCS [24] – and ManTIs Product Lifecycle Management or PLM services) based respectively on the Common Object Request Broker Architecture and the Web Services Description Language [6]. In order to use these standards together, several initiatives have addressed the issue of federating different domain models, such as PLCS. PLCS produced the STEP AP239 standard, the PLCS PLM services and the Reference Data Libraries standards and specifications, which attempt the mixed usage of several technologies and provide ways to interrelate different domain ontologies with the product data model provided by AP239.

Enterprise modelling related standards are just emerging, and it is difficult to really identify those that should be used. Some address the modelling of an enterprise as a system, providing modelling constructs for an enterprise, such as the Unified Enterprise Modelling Language. But such standards are mainly used by consultants and by the functions of the enterprise dealing with organization, not for application engineering. Application specifications are often formalized by means of UML use cases, in combination with business process modelling. The usage of different modelling languages for application specifications, combined with project boundaries, leads to the “islandization” of applications, i.e. the creation of independent, non-interoperable applications within the enterprise. Finally, emerging approaches related to executable business process models and the related standardized modelling languages should be considered, as part of the enterprise modelling constructs, or as components of the enterprise models to be considered. Without a real consensus for the usage of a single enterprise modelling de jure standard, it is difficult today to select one. A more pragmatic approach is to establish, for a given community, a set of collaboration processes, shared services and information models of reference for collaboration, within a federated environment. More and more communities are dealing with federation issues: federated authentication (the Liberty Alliance project), web services federation (OASIS' SAML), and the Federated Enterprise Reference Architecture (FERA, by Collaborative Product Development Associates, LLC, or CPDA). FERA proposes a framework for loosely coupled business process integration as part of its product value management and product lifecycle management infrastructure research services, with an implementation on an ebXML platform.
It consequently sits within a technological silo, and it creates a new overlapping model of reference in
the PLM domain, which is not related to other PLM standards such as PLCS, the PDM Enablers or the STEP application protocols. The FERA approach is nevertheless important, as it relates to product value management, which is not necessarily considered by the other PLM standardization communities.

At the information and communication technology level, it is important to distinguish the modelling platforms, the development platforms and the execution platforms. The most promising execution platforms are the service oriented platforms encompassing standardized application servers, integration frameworks, process enactment and execution platforms, presentation integration (portals), federated authentication and single sign-on components. Such an execution platform was already described by the author [1] [2] within the ATHENA Networked Collaborative Product Development Platform of the aerospace pilot. The federation platform will be an extension of this platform. For the development platform, the standards used should be sufficiently mature open de jure and de facto standards for application modelling, programming and model transformation. An extension of the Eclipse EUROPA [25] platform was identified as an excellent candidate: the Papyrus UML platform, which supports most of the latest versions of the OMG standards for UML modelling, interchange (diagrams and models) and transformation languages (Model to Model). In addition, it should be extended with Model to Text and Text to Model capabilities in order to be able to import and export views of enterprise modelling platforms and operational execution platforms. For the modelling platforms, the most appropriate technologies and standards are those related to MDA [16] on the one hand and to the semantic web on the other, in conjunction with the usage of models relevant for a community (e.g. emerging enterprise/application modelling languages and models). The Papyrus platform, in conjunction with appropriate profiles and transformations, is a good candidate as an MDA platform. Some of these transformations are being developed by the author within the scope of the OpenDevFactory project [20], as components of the federation framework described later on. For ontological models, the modelling platform is the Protégé ontology editor, coupled with standards based querying tools (SPARQL [11] based tools such as Virtuoso) and reasoning tools (DL [14] based tools such as Pellet [18]). It is necessary to find a way to federate heterogeneous models (different modelling languages and different viewpoints) within each environment and to interchange federated models between the two modelling environments. This issue is addressed within the proposed federation framework through the definition of extended multi-ground hyper models.
3 Proposed Federation Framework

The proposed federation framework addresses the different interoperability needs and issues identified in the previous sections. It first aims to establish a federation of applications on a collaborative space that will allow collaboration between enterprises having heterogeneous internal private and specific processes, information, organizations, modelling platforms and execution platforms. The challenge is to identify eligible legacy solutions based on open
standards which could be combined to provide organization modelling, knowledge modelling and application modelling platforms, themselves combinable to support model driven collaboration and round-trip transformation between service oriented execution platforms and modelling platforms. The proposed federation framework encompasses a federated organization reference model, specifications and principles enabling collaborative and B2B standard based platforms, and finally innovative enabling concepts to break the technological silos: round-trip transformation and heterogeneous model aggregation.

3.1 The Federated Organisational Model
The federated organization is a network in which each member is an organization or a company with its own objectives, private business processes and specific business objects. Each information process is supported by applications that provide services to human end users or to other applications. The relationship between business processes and applications is established through business use cases, which define interactions between applications and users in a specific context, constrained by business rules coming from discipline methods, application usage guidelines and software product usage. User interfaces are sequenced in order to give access to basic Create, Read, Update and Delete (CRUD) operations on information, or to provide service invocation templates and result consultation templates. Internally, an application is a business information container (data, documents) and a service provider, which implements some business logic. Interfacing with other applications is done by publishing sets of services that can be composed by means of executable business processes, defined by means of programs, batches, compositions or workflows. In order to be integrated with the outside, the application can support business data/document exchange and sharing, business service access for applications and business service access for human users (by means, for example, of portlets). Most of the time, members of the network do not have a model oriented approach, and no enterprise models or application models exist, only documents. The internal applications are also not structured this way. It is, however, a prerequisite to consider that the front-office applications of the enterprise are structured this way and can publish what is described. This represents the state of the practice, through the usage of application servers, portals and process enactment systems.
Fig. 2. Front Office Application of a Network Member
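A deliberately simplified sketch of the front-office structure just described — a business information container plus a service provider exposing CRUD operations and published services — might look as follows (the class and method names are ours, introduced only for illustration):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class FrontOfficeApplication:
    """Business information container and service provider of a network member."""
    business_objects: Dict[str, dict] = field(default_factory=dict)
    published_services: Dict[str, Callable] = field(default_factory=dict)

    # basic CRUD operations, reachable through sequenced user interfaces
    def create(self, oid: str, data: dict) -> None:
        self.business_objects[oid] = data

    def read(self, oid: str) -> dict:
        return self.business_objects[oid]

    def update(self, oid: str, data: dict) -> None:
        self.business_objects[oid].update(data)

    def delete(self, oid: str) -> None:
        self.business_objects.pop(oid)

    def publish(self, name: str, service: Callable) -> None:
        """Publish a business service so that a collaboration process can compose it."""
        self.published_services[name] = service
```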
A federated organization is a network where the collaboration processes and business rules are not those of one member of the network but those of the network itself, and where each member of the network has its own specific internal private organization, processes and set of business objects. A collaborative platform for a federated organization is a place where collaboration processes can interconnect legacy application systems, through different means such as service publication and consumption or service composition. In addition, it should allow business information exchange and sharing in a secured way, on the basis of a common business language. It is important to point out that such a platform does not correspond to the model of the Extended Enterprise, which is just an extension of the enterprise boundary to actors external to the enterprise.
Fig. 3. Collaborative Platform for federated organization
3.2 Collaborative Platform Specifications
The network collaboration platform for the federated organization provides a collaboration execution platform, with business logic containers including heterogeneous sets of shared and private, aggregated business data and metadata, which can be used and accessed through application servers, process enactment systems and user application interaction enactment. In addition, it provides a shared governance platform based on computation independent enterprise business models. It also provides a collaboration modelling platform for executable business models, shared business services and business information models. It provides a development platform that makes the link between the business models and the application models, but also implements the business logic when required. Two other components are a resource repository (services, models, processes) and a communication and transformation platform. All these platforms are interconnected in order to support round-trip transformations between enterprise models, application models and execution platforms. They each provide sets of services, such as governance services, modelling-time related services, business runtime services and, finally, transversal enactment services. Complementary services that are not domain specific, such as security, authentication, transactions, etc., are also addressed through services that are plugged onto the business containers (CCM/EJB models for application servers).
Fig. 4. Platform architecture requires semantic preservation and aggregation
The first principle is the alignment of business logic between the enterprise models of the governance, modelling and development platforms and the business logic container of the execution platform, together with semantic preservation between the formal representations on each of these platforms. The second principle is the simultaneous existence within
the collaboration system of aggregated heterogeneous business logic, data and schemas.

3.3 Enabling Concepts: Extended Multi-ground Hyper Models for Heterogeneous Grounds
The modelling platform to provide is not a simple application modelling platform, as it should be able to produce annotated CIM, PIM and PSM models, with the capability to import and export models using different modelling languages. Round-trip transformation should be possible in order to maintain coherency between models, code and binaries. Finally, some semantic mediation and transformation should be possible in order to exchange information between the different members of the network, while still being able to perform the reverse transformation and the reconciliation of models and data. As part of the federation framework, concepts were defined for a meta-modelling workshop responding to all these needs.

First, the concept of a modelling “ground” was defined. A modelling ground is a concrete modelling environment, based on a modelling paradigm with an associated standardized language. Activities, services and tools are associated with this modelling ground, aiming to implement the vision of the community that defined the paradigm and the associated standards. An example is the object paradigm, with UML as its associated standard, dedicated to software engineering with representations of several aspects of an application (usage with use cases, objects with class diagrams, sequencing with dynamic diagrams, and deployment with deployment diagrams). A concrete ground could consequently be the Eclipse environment with the UML2 plug-in, based on UML 2.1, XMI 1.4 [9] and EMF 2.0. Another modelling ground is, for example, Protégé 3.3, based on the semantic web paradigm and the OWL 1.0 [10], RDF Schema, SPARQL and OWL-S standards.

The second concept is the semantic preservation of the business concept. For the users of an application, the way a business object was logically defined in order to be interpreted by a computer is not important. For example, a “person” concept remains the same whether it is modelled as a UML class, an OWL class, an EXPRESS entity or an XSD entity. So when a conceptual business model is moved between different grounds, a risk exists of losing semantics due to “impedance mismatch”, i.e. the loss of information when translating from one language to another, as the languages are not equivalent. The idea for avoiding semantic loss is to extend the idea developed by hypermodels, in order to support the preservation of the semantics of the business models when moving a model from one ground (e.g. the UML2 meta-workshop) to another ground (e.g. the OWL meta-workshop). The idea is also to be able to import business models formalized on other grounds (e.g. product data exchange based on EXPRESS models) while keeping track of the original modelling concepts. For example, when reusing an application protocol EXPRESS schema containing an entity “Person”, the resulting UML class Person should keep track of the fact that it can be considered as a STEP entity within the model. This is true for the class concept, but also for all the other modelling concepts and their relationships.
Fig. 5. Semantic Preservation of Business Concept Person within different grounds
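The preservation mechanism of Figure 5 can be sketched as a provenance annotation attached to each model element as it crosses grounds. The following is a minimal illustration under our own naming, not the paper's actual metamodel:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GroundAnnotation:
    """How a concept was expressed on one modelling ground."""
    ground: str                  # e.g. "ISO 10303 EXPRESS", "UML 2.1", "OWL 1.0"
    construct: str               # e.g. "ENTITY", "Class", "owl:Class"
    lost_details: dict = field(default_factory=dict)  # details a translation would drop

@dataclass
class HyperModelElement:
    """A business concept carried across grounds without losing its origins."""
    name: str
    annotations: List[GroundAnnotation] = field(default_factory=list)

# 'Person', imported from an EXPRESS application protocol onto a UML ground:
person = HyperModelElement("Person")
person.annotations.append(
    GroundAnnotation("ISO 10303 EXPRESS", "ENTITY", {"schema": "example_ap_schema"}))
person.annotations.append(GroundAnnotation("UML 2.1", "Class"))
# a reverse transformation can rebuild the EXPRESS view from these annotations
```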
So in a collaboration space federating numerous and heterogeneous systems, with important information flows and round-trip generation, it is very important not to lose information, in order to keep the system coherent. This is nearly impossible with business model translation from one language to another, unless the original language constructs are kept track of, together with additional information concerning the details lost during the transformation, allowing the reverse transformation to be made. A concrete federation framework is consequently considered as a set of grounds for modelling, coding or execution that have to be robust in terms of standards compliance, in order to allow the usage of several paradigms and associated modelling languages within the collaboration space. Such an approach should resolve some issues related to technological silos and make it possible to mix the usage of different technological frameworks.
4 Application to a Product Lifecycle Management Collaborative Platform

These principles have been applied to the establishment of a collaborative PLM federated organization, with the STEP application protocols as business information models of reference, the PLM services as business services of reference, and, as collaborative processes of reference, processes such as Engineering Data Package, Change and Configuration Management or the co-review of a product. It is currently being evaluated through an industrial research project called SEINE [22]. A Networked Collaborative Product Development Platform for collaboration is being defined and developed, extending the one defined in the ATHENA aerospace pilot, with in particular an application server based on EJB3 for the execution platform
and a model based development platform based on AndroMDA and UML2 profiled modelling for enterprise applications on the web. Finally, the semantic preservation of STEP application protocols between the modelling and execution platforms is currently being addressed. An EXPRESS UML2 profile was defined, allowing an application protocol to be transformed into UML models stereotyped as an EXPRESS model on the one hand (CIM provenance) and as EJB entities and Value Objects on the other hand (PSM targets). A similar way of annotating the model with an OWL UML2 profile, derived from the OMG ODM, is under development in order to be able to easily export the extended hypermodel onto the Protégé 3.3 ground.
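As a rough analogue of this profile based annotation (the actual work relies on UML2 profiles and OMG ODM, which are not shown here), the sketch below tags an OWL class with its EXPRESS provenance via a custom annotation property, so that the exported model keeps its CIM origin when opened on the Protégé ground:

```python
from owlready2 import Thing, AnnotationProperty, get_ontology

onto = get_ontology("http://example.org/ap_export.owl")

with onto:
    class express_construct(AnnotationProperty): pass  # provenance marker

    class Person(Thing): pass

# record the CIM provenance, mirroring an <<EXPRESS entity>> stereotype
Person.express_construct = ["ENTITY person (from a STEP application protocol)"]

onto.save(file="ap_export.owl")  # the file can then be opened in Protégé
```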
5 Conclusion and Perspectives

Numerous component solutions exist for the establishment of an open and standards based collaboration platform aiming to respond to the emerging needs for fast establishment of collaboration for federated organizations, allowing the interconnection of the enterprise information systems involved. The federation framework currently proposes ways to address this, including a federated organization model of reference, standards based platform specifications and principles, and finally innovative concepts for extended hyper-models allowing semantic preservation on heterogeneous grounds. This framework, which is being developed iteratively, needs to be extended in order to take into consideration not only information models but also service and process models. Some investigation should also be done on the data themselves (as managed individuals or object instances). These extensions will be described in future papers. Once its robustness is demonstrated, methodological approaches will be defined for the different actors involved in the establishment and usage of the collaboration space.
References

[1] Figay N, (2006) Technical enterprise applications interoperability to support collaboration within the virtual enterprise all along the lifecycle of the product. I-ESA 2006 Doctoral Symposium
[2] Figay N, (2006) Collaborative product development: EADS pilot based on ATHENA results. In: Leading the web in concurrent engineering – next generation concurrent engineering. IOS Press, ISBN 1-58603-651-3
[3] CORBA - http://www.omg.org/gettingstarted/history_of_corba.htm
[4] PLM services - http://www.prostep.org/en/standards/plmservices/
[5] ManTIs - http://mantis.omg.org/index.htm
[6] WSDL - http://www.w3.org/TR/wsdl
[7] ISO 10303-11:1994 Industrial automation systems and integration -- Product data representation and exchange -- Part 11: Description methods: The EXPRESS language reference manual
[8] XML Schema - http://www.w3.org/TR/xmlschema-0/
[9] XML Metadata Interchange (XMI) - http://www.omg.org/technology/documents/modeling_spec_catalog.htm#XMI
[10] OWL Web Ontology Language Overview - http://www.w3.org/TR/owl-features/, June 2006. Web Ontology Language – Description Logic - http://www.w3.org/2004/OWL/
[11] SPARQL - http://www.w3.org/TR/rdf-sparql-query/
[12] UML Unified Modeling Language - http://www.omg.org/technology/documents/formal/uml.htm
[13] ATHENA - http://www.athena-ip.org
[14] Baader F, Calvanese D, McGuinness D, Nardi D, Patel-Schneider P (eds), (2003) The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press, United Kingdom, 555 pp.
[15] ATHENA aerospace piloting web site - http://nfig.hd.free.fr
[16] MDA: Model Driven Architecture official web site - http://www.omg.org/mda/
[17] Virtuoso web site - http://virtuoso.openlinksw.com/wiki/main/Main/VOSSPARQL
[18] Pellet web site - http://pellet.owldl.com/
[19] Object Management Group - http://www.omg.org
[20] OpenDevFactory, as part of the Usine Logicielle project - http://www.usinelogicielle.org
[21] Papyrus UML - http://www.papyrusuml.org/
[22] S.E.I.N.E. - http://seine-plm.org/
[23] Enterprise Java Beans specification - http://java.sun.com/products/ejb/docs.html
[24] PLCS web site - http://www.plcs-resources.org/
[25] Eclipse Europa - http://www.eclipse.org/
Contribution to Knowledge-based Methodology for Collaborative Process Definition: Knowledge Extraction from 6napse Platform

V. Rajsiri1, A-M. Barthe1, F. Bénaben2, J-P. Lorré1 and H. Pingaud2

1 EBM WebSourcing, 10 Avenue de l'Europe, 31520 Ramonville St-Agne, France {netty.rajsiri, anne-marie.barthe, jean-pierre.lorre}@ebmwebsourcing.com
2 Centre de Génie Industriel, Ecole des Mines d'Albi-Carmaux, 81000 Albi, France {benaben, pingaud}@enstimac.fr
Abstract. This paper presents a knowledge-based methodology dedicated to automating the specification of virtual organization collaborative processes. Our approach takes as input knowledge about collaboration coming from a collaborative platform called 6napse, developed by EBM WebSourcing, and produces as output a BPMN (Business Process Modeling Notation) compliant process. The 6napse platform provides knowledge to instantiate the ontology contributing to the collaborative process definition. The ontology is in the collaborative network domain, consisting of (i) collaboration attributes, (ii) descriptions of participants and (iii) collaborative processes inspired by the MIT Process Handbook. Keywords: Ontology based methods and tools for interoperability, Tools for interoperability, Open and interoperable platforms supporting collaborative businesses
1 Introduction

Nowadays, companies tend to open themselves to their partners and enter into one or more networks in order to gain access to a broader range of market opportunities. The heterogeneity of partners (e.g. location, language, information system), long-term relationships and the establishment of mutual trust between partners form the ideal context for the creation of collaborative networks. Interoperability is a possible way towards facilitating the integration of networks [6] [18]. A general issue for each company in a collaboration is to establish connections with its partners. Partners have no precise idea of what their collaboration will be, but they know what they expect from it. This means that
partners can express their collaboration requirements (knowledge) informally and partially. But how can these requirements be made more formalized and complete? In principle, partners collaborate through their information systems. The concept of the collaborative information system (CIS) has evolved to deal with these interoperability issues. According to [16], this concept focuses on combining the information systems of different partners into a unique system. Developing such a CIS involves the transformation of a BPMN collaborative process model into a SOA (Service Oriented Architecture) model of the CIS. This is comparable to the Model Driven Architecture (MDA) approach [9], as discussed in [17]. The BPMN model supports the Computation Independent Model (CIM) of MDA, while the SOA-based CIS supports the Platform Independent Model (PIM). Consequently, our research interest concerns the CIM model. The main focus is to formalize the informal and partial knowledge expressed by the partners in the form of a BPMN-compliant process. But how do we obtain the BPMN process? The answer is shown in Fig. 1:
Fig. 1. Our approach for defining a BPMN collaborative process
The schema above shows our approach, composed of (i) two gathering methods: interview and knowledge extraction, (ii) two repositories: collaboration characteristics (participant and collaboration) and collaborative processes, and (iii) a transformation. The approach starts by gathering knowledge, by interviewing or by extracting it from a platform called 6napse. This knowledge is classified and kept in the corresponding repositories. The main difference between the two gathering methods is that the interview provides knowledge about the participants (e.g., name, role, business, service) and their collaborations (e.g., relationship, common objective) for the characteristics repository, while the extraction from 6napse provides not only the same knowledge as the interview but also the collaborative process (e.g., CIS, CIS services). Both repositories make it possible to analyze, keep and construct knowledge in the form of collaborative processes. Defining these two repositories requires implementing a knowledge-based methodology. This methodology uses an ontology and reasoning to automate the specification of collaborative processes. The ontology covers the collaborative network domain and maintains the repositories of collaboration characteristics and collaborative processes, as shown in Fig. 1. The reasoning methodology
establishes the interactions between the repositories in order to build collaborative processes. The paper first introduces the 6napse platform. Second, the ontology describing the collaborative network domain is presented. Finally, the knowledge extraction from the platform and an application scenario are discussed.
2 6napse Collaborative Platform

Globalisation pushes enterprises to open themselves to their partners. The need to create a network depends on various elements, for example competition, communication and product complexity. The collaboration is set up around business tools corresponding to a collaborative process between the enterprises (e.g., group buying services, supplier-customer services). The current market offers many collaboration tools addressing various functionalities, for example communication (e.g. e-mail, instant messaging), document sharing (e.g. blogs), knowledge management (e.g. wikis, e-yellow pages) and project management (e.g. calendar sharing). However, one of the functionalities users request most is the ability to integrate the functionalities emerging from their own activity domain directly into the platform, as mentioned in [7]. EBM WebSourcing, an open source software provider, was founded in late 2004. Its business focuses on editing and developing solutions dedicated to SME clusters. EBM WebSourcing is now developing a collaborative platform called "6napse", intended for enterprises that would like to work together. The main idea is to provide a trustworthy space in which members can establish (or not) commercial relations among themselves. The platform plays the role of mediator between the information systems of the enterprises. It differs from the other products currently on the market in that it integrates business services. The development of this platform is based on the social network paradigm. It aims at (i) creating a dynamic ecosystem of enterprises which communicate by using the services provided by the platform (e.g. send/receive documents, send mails, share documents), (ii) creating a network by viral propagation in the same way as Viadeo or LinkedIn, and (iii) being the first step towards integrating the information systems of the partners and defining more complex collaborative processes (e.g. supply chain, group buying, co-design). The third aim led to the concept of the CIS and to collaborative process definition using the knowledge-based methodology (Fig. 1). The following are some examples of functionalities that the platform offers to its members:

- Registering the enterprise and the user.
- Logging the user in and out of the platform.
- Creating or consulting the profile of an enterprise or user.
- Inviting partners to join the network, creating a partnership and a collaboration.
- Searching for enterprises and services via keywords (e.g. service, localization, tag).
Through these functionalities, members can create partnerships and collaborate. A user of the 6napse platform is an individual attached to an enterprise: an enterprise is considered as a frame grouping its individuals (employees). The enterprise also serves as a reference for external individuals, because one normally needs to recognize the enterprise before being able to identify the personnel belonging to it. Collaboration on the platform is therefore established at the individual level.
3 Collaborative Network Ontology (CNO)

In Artificial Intelligence, according to [3], knowledge representation and reasoning aim at designing computer systems that reason about a machine-interpretable representation of the world, similar to human reasoning. A knowledge-based system maintains a knowledge base which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Our knowledge-based methodology builds on this approach to deal with collaborative process design. Fundamentally, the methodology starts by analyzing the input knowledge regarding the collaborative behaviors of the participants and ends by providing a corresponding BPMN collaborative process. The input knowledge required by the methodology concerns the collaborative characteristics or behaviors of all partners involved in the network, for example business sectors, services (competencies) and roles. This knowledge is extractable from the 6napse platform, which was discussed in the previous section; the knowledge extraction itself is presented in Section 4. The expected outputs of the methodology are the network participants, exchanged data, business services and coordination services. These elements are essential for designing a BPMN collaborative process. Thus, to enable the methodology to produce these elements, we need (i) to define an ontology and rules describing the collaborative network domain and (ii) to use an inference engine to deduce these modeling elements from the input knowledge. According to [4], an ontology is a specification of a conceptualization. It contains a set of concepts relevant to a given domain, together with their definitions and interrelationships. To define the domain and scope of an ontology, [8] suggests starting by answering several basic questions concerning, for example, the domain of interest, the users and the expected result of the ontology. Developing an ontology is often akin to defining a set of data and their structure for programs to use: problem-solving methods and domain-independent applications use ontologies, and the knowledge bases built from them, as data. Our domain of interest is the collaborative network domain, especially the design of collaborative processes. The
knowledge base built from this ontology will cover the two repositories shown in Fig. 1. It will be used in applications by the consultants of EBM WebSourcing to suggest to their clients a collaborative process relevant to given collaboration behaviors. Three key concepts underlie the collaborative network ontology (CNO): (i) the participant concept, (ii) the collaboration concept and (iii) the collaborative process concept. An ontology must define not only concepts, relations and properties, but also rules that reflect the notion of consequence. The following are examples of rules in the collaboration domain: if decision-making power is equal and duration is discontinuous then topology is peer-to-peer; if role is seller then the participant provides the delivering-goods service. The following paragraphs describe the three concepts with their relations, properties and rules. The participant concept, see Fig. 2, concerns the description of a participant, based on the characterization criteria of collaboration [13]. A participant provides several high-level services (discussed under the collaborative process concept) and resources (e.g., machine, container, technology), plays roles (e.g., seller, buyer, producer) and has business sectors (e.g., construction, industry, logistics).
Fig. 2. RDF graph representing the participant concept
From the above figure, reasoning by deduction can occur, for example, between role and service. Role and service are not both compulsory; at least one of them is required, because each can be completed from the other by deduction. This means that related services can be derived from a given role, and vice versa. For example, if the role is computer maker, then its services are making screens, making keyboards, and so on. The collaboration concept, see Fig. 3, concerns the characterization criteria of collaboration [13] and the collaborative process meta-model [17]. Common objective, resource, relationship and topology are the characterization criteria, while the CIS and CIS services are part of the collaborative process meta-model.
442
V. Rajsiri, A-M. Barthe, F. Bénaben, J-P. Lorré and H. Pingaud
Fig. 3. RDF graph representing the collaboration attributes
A collaborative network has a common objective (e.g., grouping the same products to buy together) and a CIS. A CIS has its own CIS services, which can be generic (e.g., send documents/mails) or specific (e.g., a select-supplier service). A network can have several topologies, which can be star, peer-to-peer, chain or a combination of these three structures. A topology has duration and decision-making power characteristics: decision-making power can be central, equal or hierarchic, and duration can be continuous or discontinuous. A topology contains relationships, which can be group of interest, supplier/customer or competition. Topology is deduced by rules such as: if decision-making power is equal and duration is discontinuous then topology is peer-to-peer; if decision-making power is hierarchic, whatever the duration, then topology is chain. The collaborative process concept, see Fig. 4, is an extension of the concepts developed by the MIT Process Handbook project [8] and of the value chain of [12]. The value concept provides a list of services describing competencies at a very high and generic level (e.g., vehicle manufacturing, software development), while the Process Handbook provides business services at the functional level (e.g. assemble components of a computer). A service can be divided into business services and coordination services. A business service describes a task at the functional level, and the business services corresponding to a service can be derived from it. For example, if the service is making keyboards, then the business services are assembling circuit boards, testing boards, and so on.
Fig. 4. RDF graph representing the service
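The derivations described above behave like simple forward-chaining rules. The following Python sketch illustrates the idea; the rule base and the fallback value are our own illustrative assumptions, restricted to the example rules quoted in the text, and are not the actual CNO rule base.

    ROLE_TO_SERVICES = {
        "computer maker": {"making screen", "making keyboard"},
        "seller": {"delivering goods"},
    }
    SERVICE_TO_BUSINESS_SERVICES = {
        "making keyboard": {"assembling circuit board", "testing board"},
    }

    def derive_topology(decision_power, duration):
        # Rules quoted in the text.
        if decision_power == "equal" and duration == "discontinuous":
            return "peer-to-peer"
        if decision_power == "hierarchic":
            return "chain"
        return None  # other cases are not specified in the text

    def complete_participant(roles, services):
        # Role and service complete each other by deduction.
        for role in roles:
            services |= ROLE_TO_SERVICES.get(role, set())
        business_services = set()
        for service in services:
            business_services |= SERVICE_TO_BUSINESS_SERVICES.get(service, set())
        return services, business_services

    services, business = complete_participant({"computer maker"}, set())
    print(services, business, derive_topology("equal", "discontinuous"))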
The concept of dependencies (flows) of resources is also included. To deduce a dependency, following [2], we consider the possible combinations of services using resources. Each dependency can be associated with a coordination service (e.g. manage the flow of material from one business service to another). The concepts of dependency and coordination are related because coordination is seen as a response to problems
caused by dependencies. This means a coordination service is in charge of managing a dependency. For example, if the placing-order service of a buyer produces a purchase order as output and the obtaining-order service of a seller uses a purchase order as input, then there is a resource dependency between these two services, and we can use the forwarding-document coordination service to manage it. Collaborative networks usually have several participants, resources and relationships, and a common objective. The common objective is achieved by services which use resources and are performed by the appropriate roles of the participants. A relationship binds two participants together, and its type depends on the roles of the participants (e.g. if two participants play the seller and buyer roles, the relationship will be supplier/customer). The following figure shows how these expressions unite the three concepts above:
Fig. 5. Union of the participant, collaboration and collaborative process concepts.
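The dependency deduction of [2], matching one business service's output resource to another's input and then attaching a coordination service, can be sketched as follows. The service descriptions are invented for illustration; only the purchase-order example comes from the text.

    # Each business service declares the resources it uses and produces.
    services = {
        ("buyer", "placing order"):    {"in": set(), "out": {"purchase order"}},
        ("seller", "obtaining order"): {"in": {"purchase order"}, "out": set()},
    }

    def deduce_dependencies(services):
        # A dependency exists where a produced resource is used by a service
        # of another participant; a coordination service then manages it.
        for (p1, s1), io1 in services.items():
            for (p2, s2), io2 in services.items():
                if p1 != p2:
                    for resource in io1["out"] & io2["in"]:
                        yield (s1, s2, resource, "forwarding document")

    for dep in deduce_dependencies(services):
        print(dep)  # ('placing order', 'obtaining order', 'purchase order', ...)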
Once the CNO has been informally defined, we need to formalize it in a language with rigorous syntax and semantics. OWL (Web Ontology Language), a W3C recommendation, is the most recent development in standard ontology languages. There are three OWL variants, but the most appropriate one in our case is OWL-DL (Description Logics) because it is suited to automated reasoning: it guarantees the completeness of reasoning (all inferences are computable) and decidable logics. To use this language, we need an editor to create the ontology's elements (classes, relations, individuals and rules). We use Protégé, an open-source OWL editor developed by Stanford University [11]. To reason over the ontology, we use the inference engine Pellet, an open-source OWL-DL reasoner in Java developed at the University of Maryland's Mindswap Lab [15].
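The paper's tool chain is Protégé for editing and Pellet for reasoning, driven interactively. Purely as an illustration of how the same classification step could be scripted, and as an assumption on our part rather than the authors' setup, the Python package owlready2 can load an OWL-DL ontology and invoke Pellet; the file name cno.owl is hypothetical.

    # Hedged sketch: classifying a CNO ontology with Pellet via owlready2.
    from owlready2 import get_ontology, sync_reasoner_pellet

    onto = get_ontology("file://cno.owl").load()  # hypothetical file
    with onto:
        # Runs the Pellet reasoner and asserts inferred class memberships
        # and property values back into the ontology.
        sync_reasoner_pellet(infer_property_values=True)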
4 Knowledge Extraction from the 6napse Platform

The two previous sections described the CNO and the 6napse platform. This section deals with using the 6napse platform together with the CNO, as discussed in [1]. We focus on extracting knowledge from the platform, which will be used to instantiate the CNO. The initial idea of using 6napse is that when partners have no clear view of what their collaboration is supposed to be, or would like to see it more clearly, they can try to collaborate through 6napse. In principle, we try to extract knowledge and to find patterns behind the collaborations occurring on 6napse. The extracted knowledge and patterns will
be used to improve existing collaborations and to define more complex collaborative processes. In this section, we discuss knowledge extraction from the 6napse platform and then present an application scenario based on a supplier-customer use case.

4.1 Two-level Knowledge Extraction
Extracting knowledge from 6napse can occur at two levels, corresponding to the life cycle of an enterprise on the platform: individual registration and collaboration. The first level occurs when an enterprise registers on the platform. The second level is when enterprises collaborate or start exchanging data with their partners; it can only occur once the first has been completed. We detail the extractable knowledge at each level of the platform in comparison with the requirements of the ontology defined in Section 3. The individual registration level provides knowledge describing the participants, i.e. the enterprise itself seen as an organization. This knowledge is available as soon as the participants have individually registered themselves on the platform, and it does not vary with the collaboration. We can find this knowledge on the "my company" and "my profile" pages of the platform. Table 1 compares the information that the ontology requires as input with what we can extract from 6napse.

Table 1. Requirements of the ontology vs. extractable knowledge from 6napse

Requirements of the ontology    Extractable knowledge from 6napse
Name of participants            Name of the enterprise
Business sector                 Activity sector
Services                        List of the services
Business services               List of the business services (or functions of the individuals belonging to the enterprise)
Relationships                   List of the partners
The following is an example of the information on the "service" tab of the "my company" page. The service tab shows the list of services describing the competencies of the enterprise.
Fig. 6. Screenshot of the service tab of an enterprise
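As an indication of how such registration-level fields could populate the participant side of the ontology (the field names below are our own; the paper does not publish 6napse's data model), the mapping of Table 1 amounts to a simple record translation:

    # Hypothetical "my company" record extracted at registration level.
    company_page = {
        "name": "ACME",
        "activity_sector": "Manufacturing",
        "services": ["making computers", "sales"],
        "partners": ["S", "W"],
    }

    # Table 1 mapping: platform fields -> ontology input knowledge.
    participant_instance = {
        "participant_name":  company_page["name"],
        "business_sector":   company_page["activity_sector"],
        "provides_services": company_page["services"],
        "has_relationships": company_page["partners"],
    }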
The collaboration level provides knowledge about the enterprise considered as a member of the network. To enter the collaboration level, enterprises need to declare their partnerships and create their collaborations (networks) on the platform. This can occur once the individual registration of each enterprise has been done. After registration, an enterprise can invite other 6napse members to become its partners in the network and then create collaborations in the collaborative space of the platform. While the partners are collaborating via the platform (e.g., transferring documents), we extract their collaboration knowledge. Through this knowledge, we can understand what is happening in the real collaboration. This knowledge is available in the collaborative space, which includes the "share service". The knowledge we expect to extract at this level includes, for example, the number of participants in the network, the CIS services they are using and the documents transferred from one to another, see Table 2.

Table 2. Requirements of the ontology vs. extractable knowledge from 6napse

Requirements of the ontology                                 Extractable knowledge from 6napse
Number of participants                                       Number of members in a collaboration
CIS services                                                 CIS generic services of the platform
Transferred resources                                        Documents shared on the platform
Business service                                             Shared by (individual who shares the document)
Common objective                                             Description of why the collaboration was created
Duration of the collaboration (continuous, discontinuous)    Measurement of the duration of the collaboration
The following is an example of the knowledge on the "share" tab of a collaboration on the platform. The share tab shows the documents transferred between the partners of this collaboration.
Fig. 7. Screenshot of a collaboration on the 6napse platform.
The knowledge extracted at this level is, for example, that an individual named Laura shares the test case documents, just as Pascal shares the how-to text with their partners. Comments can be written on each shared document. However, some working environments force users to reroute several times before finding the right information; for example, B needs to see the document shared by A before transferring it to C. We cannot study the exchange flows of documents on the platform without taking the network topology and the type of relationship into account. This is significant for improving or defining more complex collaborative processes afterwards.

4.2 Supplier-customer Scenario
To illustrate the principles of knowledge extraction, we introduce a supplier-customer use case. The input knowledge is extracted from the 6napse platform, some reasoning examples are briefly explained, and the output result is shown at the end. Tables 3 and 4 show the knowledge extracted from 6napse during registration and collaboration respectively.

Table 3. Description of each participant in the network

Participant   Business sector      Services                                            Relationships
M             Manufacturing        Making computers; sales                             S, W (supplier-customer)
S             Part manufacturing   Supplying the parts of computers                    M (supplier-customer)
W             Logistics            Stocking materials and transporting to customers    M (supplier-customer)

Table 4. The extracted knowledge from 6napse while collaborating

Knowledge                   Extracted value
Nb of participants          3 (M, S, W)
CIS services used           …
Resources transferred       …
Business services           Identifying needs, transferring materials, paying…
Common objective            Fulfill the supply chain for manufacturing products to stock
Duration of collaboration   Continuous
The knowledge in Tables 3 and 4 is required as input to the ontology and is used to instantiate the ontology defined in Section 3. Once we have the input knowledge (instances in the ontology), we can first derive the topology and the characteristics of the network. This network has two chain topologies because, by deduction, the decision-making power is hierarchic and the duration is continuous. We continue the collaboration definition by deducing the roles and business services. If any information is missing at this level, it is completed by the ontology. In this case, for example, since M provides making computers and sales, M plays the manufacturer role, which performs the identifying-needs, receiving-materials, paying and producing business services. Once the business services provided by the participants have been inferred, the next step is to derive all possible dependencies between business services belonging to different participants. These dependencies lead to the deduction of the coordination and CIS services in the collaborative process. After that, we check whether any services required in the collaboration cannot be provided by any participant. If so, we create CIS services to perform them. For example, the control and evaluation service cannot be performed by any participant, so it belongs to the CIS. At the end, we deduce once again the dependencies between the CIS services and the coordination services. The result is shown in the following figure:
Fig. 8. A solution of collaborative process of the network.
We must stress that the collaborative process obtained at the end is just one solution for the given use case. Other solutions may exist that match the collaborative behaviors of the participants better than the proposed one.
5 Conclusion

The 6napse collaborative platform is still in the development phase. It allows knowledge to be extracted before collaborations are set up. The contribution of 6napse is dedicated to partners who have no idea what their collaboration is supposed to be, or who would like to see it more clearly. The partners can capitalize on their collaboration knowledge to collaborate better in the future. 6napse also plays an important role in enriching the knowledge (instances) in the CNO, in order to improve existing collaborations of the partners and to define more complex collaborative processes. The collaborative process obtained from the ontology (Fig. 8) is close to a BPMN-compliant process but is not yet complete. Some elements are missing, such as gateways and events. These elements need to be added to actual collaborative processes because they make the process more dynamic. Our current work focuses firstly on extracting knowledge from real collaborations occurring on 6napse, and secondly on adding the dynamic aspect to the current reasoning methodology by taking event and gateway elements into account. The knowledge-based methodology itself, including its concepts, rules and reasoning steps, also needs to be finalized and validated.
References

[1] Barthe, A-M.: La plateforme 6napse – présentation et perspectives d'intégration de cet outil et du recueil d'informations qu'il autorise. Mémoire de Master Recherche, Génie Industriel, Ecole des Mines d'Albi-Carmaux (2007)
[2] Crowston, K.: A Taxonomy of Organizational Dependencies and Coordination Mechanisms. Working Paper No. 3718-94, Massachusetts Institute of Technology, Sloan School of Management (1994)
[3] Grimm, S., Hitzler, P., Abecker, A.: Knowledge Representation and Ontologies: Logic, Ontologies and Semantic Web Languages. In: Semantic Web Services (2007)
[4] Gruber, T.R.: A translation approach to portable ontologies. Knowledge Acquisition 5(2), pp. 199-220 (1993)
[5] Katzy, B., Hermann, L.: Virtual Enterprise Research: State of the art and ways forward, pp. 1-20 (2003)
[6] Konstantas, D., Bourrières, J-P., Léonard, M., Boudjlida, N.: Interoperability of Enterprise Software and Applications, INTEROP-ESA'05. Springer-Verlag (2005)
[7] Lorré, J-P.: Etat de l'art. EBM WebSourcing (2007)
[8] Malone, T.W., Crowston, K., Lee, J., Pentland, B.: Tools for inventing organizations: Toward a Handbook of Organizational Processes. Management Science 45(3) (1999). Process Handbook Online: http://ccs.mit.edu/ph/
[9] Miller, J., Mukerji, J.: MDA Guide Version 1.0.1 (2003), available at http://www.omg.org
[10] Noy, N.F., McGuinness, D.L.: Ontology Development 101: A Guide to Creating Your First Ontology. Stanford University, Stanford, CA, USA (2001)
[11] Protégé: http://protege.stanford.edu (2000)
[12] Porter, M.: L'avantage concurrentiel. InterEdition, Paris, p. 52 (1986)
[13] Rajsiri, V., Lorré, J-P., Bénaben, F., Pingaud, H.: Cartography for designing collaborative processes. In: Interoperability of Enterprise Software and Applications (2007)
[14] Rajsiri, V., Lorré, J-P., Bénaben, F., Pingaud, H.: Cartography based methodology for collaborative process definition. In: Establishing the Foundation of Collaborative Networks, Springer, pp. 479-486 (2007)
[15] Sirin, E., Parsia, B., Grau, B.C., Kalyanpur, A., Katz, Y.: Pellet: A practical OWL-DL reasoner. Journal of Web Semantics 5(2) (2007)
[16] Touzi, J., Lorré, J-P., Bénaben, F., Pingaud, H.: Interoperability through model based generation: the case of the Collaborative IS. In: Enterprise Interoperability (2006)
[17] Touzi, J., Bénaben, F., Lorré, J-P., Pingaud, H.: A Service Oriented Architecture approach for collaborative information system design. In: IESM'07 (2007)
[18] Vernadat, F.B.: Interoperable enterprise systems: architectures and methods. In: INCOM'06 Conference (2006)
SQFD: QFD-based Service Quality Assurance for the Lifecycle of Services Shu Liu, Xiaofei Xu and Zhongjie Wang Research Centre of Intelligent Computing for Enterprises and Services (ICES), School of Computer Science and Technology, Harbin Institute of Technology, 150001, Harbin, China {sliu, xiaofei, rainy}@hit.edu.cn
Abstract. Service providers offer services to customers on the basis of a service system, and the quality of that service system has a great influence on customers. To provide better services, it is therefore necessary to assure quality across the whole lifecycle of services. Drawing on the experience accumulated in developing and implementing typical IT systems in manufacturing enterprises over the past decade, we have proposed a new service engineering methodology named SMDA to help service providers build better service systems. As part of SMDA, Service Quality Function Deployment (SQFD) has been proposed to address the quality aspects of a service system. SQFD, which is adapted from QFD, focuses on the design, evaluation and optimization of service quality across the lifecycle of services. The three phases of SQFD, i.e., build-time QFD-oriented service quality design, run-time service performance evaluation and service performance optimization, are illustrated in this paper. Keywords: Quality and performance management of interoperable business processes, Service oriented Architectures for interoperability, Model Driven Architectures for interoperability
1 Introduction

In the past decade the world economy has changed rapidly. Especially in the advanced countries, a modern service-centered economy is emerging, with a growing GDP share for the service sector and the advanced development of information technology. In order to gain advantages in the competition of this new economy, attention has focused on service quality. While there have been many efforts to study service quality, there is no general agreement on an effective way to measure and improve it. One service quality measurement model that has been extensively applied is the SERVQUAL model developed by Parasuraman, Zeithaml and Berry. The SERVQUAL instrument has been the predominant method used to measure
consumers' perceptions of service quality. The SERVPERF model, another service quality measurement instrument, was developed later by Cronin and Taylor; it inherits from and expands SERVQUAL. Both models focus on service quality measurement and pay little attention to service quality design and quality assurance across the lifecycle of services. Against this background, the Research Centre on Intelligent Computing for Enterprises and Services (ICES) of the School of Computer Science and Technology at Harbin Institute of Technology (HIT) is one of the pioneers. Drawing on rich experience accumulated in developing and implementing typical IT systems (e.g., ERP, SCM, CRM) in manufacturing enterprises over the past decade, ICES proposed a new service engineering methodology named Service Model Driven Architecture (SMDA) to help service providers build their service systems in an MDA style, e.g., modeling customer requirements and gradually transforming the models into an executable service system [1][2]. As part of SMDA, Service Quality Function Deployment (SQFD) has been proposed to address the quality aspects of a service system. SQFD originates from Quality Function Deployment (QFD), which was developed at the Kobe Shipyard of Mitsubishi Heavy Industries, Ltd. as a way to expand and implement the view of quality. QFD has since been widely applied in many industries worldwide, such as automobiles, electronics, food processing, and computer hardware and software. SQFD focuses on service quality assurance for the lifecycle of services, including service quality design, service system evaluation and optimization. The purpose of this paper is to illustrate the approaches of SQFD. The paper is organized as follows. Section 2 explains service quality and SQFD in detail. Section 3 discusses build-time QFD-oriented service quality design. Run-time service quality evaluation and service lifecycle optimization are introduced in Sections 4 and 5 respectively. The conclusion follows.
2 Service Quality and SQFD

2.1 Service Model Driven Architecture
In order to understand the mechanism of SQFD, it is necessary to explain the architecture of SMDA first. SMDA, a new service engineering methodology, contains three layers of service models and a service system, as shown in Fig. 1.
Fig. 1. The architecture of SMDA
The first layer of SMDA is the Service Requirement Model (SRM), which captures the Voice of the Customer (VOC). The second layer, the Service Behavior and Capability Model (SBCM), is used to design service behaviors and capabilities by transforming the VOC defined in the SRM. The Service Execution Model (SEM), the third layer of SMDA, further transforms service behaviors and capabilities from the SBCM into executable service component sets by selecting appropriate service components. The SEM is then mapped to a Service Execution System (SES) of a specific service. There is a top-down transformation between the models in SMDA through three mappings: SRM to SBCM, SBCM to SEM, and SEM to SES. The first two mappings constitute the service modeling process and belong to build-time; the third mapping is the implementation and execution of the service system and belongs to run-time [2].

2.2 Service Quality in SMDA
Service quality is a concept that has aroused considerable interest and debate in the research literature because of the difficulties in both defining and measuring it, with no overall consensus emerging on either [3]. Among the many definitions of service quality, one that is widely accepted defines it as the difference between customer expectations of a service and the perceived service. Analyzing and measuring the gap between expected and perceived service is the starting point in designing and improving traditional service quality. Figure 2 is a conceptual model of service quality with 5 major gaps, proposed by Parasuraman, Zeithaml and Berry. As shown in the figure, gap 5 is the discrepancy between customer expectations and their perceptions of the delivered service, and it is the sum of the other 4 gaps. To improve service quality in traditional services, efforts have to focus on reducing gap 5, which requires being able to measure gaps 1 to 4 and reduce each of them.
Fig. 2. Gaps in Service Quality in Traditional Service
With a similar approach, the conceptual model of service quality in SMDA is proposed with 5 major gaps, as shown in Fig. 3.
Fig. 3. Gaps in Service Quality of SMDA
- Gap 1: the difference between the customer's expected quality of service and the quality characteristics captured in the SRM.
- Gap 2: the difference between the quality characteristics captured in the SRM and the quality characteristics transformed into the SBCM.
- Gap 3: the difference between the quality of the SBCM and that transformed into the SEM.
- Gap 4: the difference between the quality of the SEM and the external communications to customers.
- Gap 5: the difference between the customer's expected service quality and the customer's perceived service quality of the service system.
As seen in Fig. 3, the key to improving service quality in SMDA is reducing gap 5. Since gap 5 is the sum of the other 4 gaps, reducing each gap becomes the ultimate goal. As part of SMDA, SQFD was proposed to capture the VOC and transform it through the SRM to the SBCM to the SEM, and finally to the service execution system (SES).

2.3 SQFD
Traditional quality evaluation systems aim at minimizing negative quality, such as eliminating defects or reducing operational errors. QFD is quite different in that it seeks out customer requirements and maximizes "positive" quality by designing and transforming quality factors throughout the product development process [4]. QFD facilitates the translation of a prioritized set of subjective customer requirements into a set of system-level requirements during system conceptual design [5]. These system-level requirements are then further translated into more detailed sets of requirements at each stage of the design and development process [6]. SQFD is proposed with a similar approach: it translates a set of customer requirements of a service into an executable service system through the service modeling process. SQFD consists of three phases:

- Build-time QFD-oriented service quality design
- Run-time service quality/performance evaluation
- Service quality/performance optimization
As shown in Fig. 4, the service quality design process is depicted along the black arrows. The VOC is captured in a Service Level Agreement (SLA) through negotiation between customers and service providers. As input to the SRM, the quality aspect of the VOC is transformed through the SBCM and SEM into the service system by the QFD approach. The red arrows depict the optimization process of the service system, through performance monitoring, data collection and evaluation.
Fig. 4. Structure of SQFD
3 Build-time QFD-oriented Service Quality Design

As the core of QFD, the House of Quality (HoQ) can be used to incorporate the VOC into every manufacturing activity. Four levels of quality houses were proposed in the typical manufacturing case, as shown in Fig. 5. They help to trace what design engineers and manufacturers do for what customers want [7].
Fig. 5. The Four Houses of QFD
With a similar approach, three levels of quality houses have been designed, each level corresponding to one of the models in SMDA, as shown in Fig. 6.
Fig. 6. The three levels of SHoQs
The three-level service house of quality (SHoQ) incorporates the customer's voice into every modeling activity of SMDA. It helps to make the customer an integral part of early design synthesis and evaluation activities. In building each level of the quality house of SQFD, the four dimensional views of SMDA should be considered, as shown in Table 1.

Table 1. Modeling Views in SMDA

Model   Four dimensional views of SMDA                                       HoQ of SQFD
SRM     Service-Provider-Customer (S-PC) View, Service Organization (SO)
        View, Service Resource (SR) View, Service Information (SI) View     Level-1 SHoQ
SBCM    Service Behavior (SB) View, Service Role (SL) View,
        Service Capability (SC) View, Service Information (SI) View         Level-2 SHoQ
SEM     Service Action (SA) View, Capability Configuration (CC) View,
        Service Participation (SP) View, Service Information (SI) View      Level-3 SHoQ
The level-1 SHoQ is used to capture the "Voice of the Customer" for a specific service and transform it into the SRM with a top-down approach. Identifying and extracting the customer's essential needs is the first step in building the house of quality. For customer requirement acquisition, methods such as interviews, surveys, market investigation and trend analysis are often used; it is important to ensure that complete, consistent, non-redundant and true customer requirements are identified and specified [8][9]. When building the level-1 SHoQ, the four dimensional views of the SRM should be considered: the Service-Provider-Customer View, Service Organization View, Service Resource View and Service Information View. Figure 6 (a) depicts the level-1 SHoQ: it takes the values and risks extracted from customer requirements as inputs, and outputs the quality targets for each task of the specific service.
The level-2 SHoQ takes the outputs of level 1 as its inputs and transforms them into the SBCM with a top-down approach. Figure 6 (b) depicts the level-2 SHoQ of the four views: it takes the target values for each task from level 1 as input and outputs target values for each behavior of a specific task. The level-3 SHoQ takes the target values of each behavior from level 2 as its inputs and transforms them, again top-down, into target values for each action of a specific behavior, as shown in Fig. 6 (c). Meanwhile, a bottom-up approach is also used to select available service components of different granularity that meet the target values of tasks, behaviors and actions.
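In classical QFD, the output targets of one house are obtained by propagating the input weights through the house's relationship matrix, so cascading the three SHoQs reduces to repeated matrix-vector products. The following Python sketch is schematic; the weights and matrices are illustrative, not taken from the paper.

    def propagate(weights, relationship):
        # One house of quality: target_j = sum_i weight_i * R[i][j].
        cols = len(relationship[0])
        return [sum(w * row[j] for w, row in zip(weights, relationship))
                for j in range(cols)]

    voc_weights = [5, 3]                # two customer requirements (illustrative)
    level1 = [[9, 1, 3], [1, 9, 3]]     # requirements x tasks
    level2 = [[9, 3], [1, 9], [3, 3]]   # tasks x behaviors

    task_targets = propagate(voc_weights, level1)       # level-1 SHoQ output
    behavior_targets = propagate(task_targets, level2)  # level-2 SHoQ output
    print(task_targets, behavior_targets)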
4 Run-time Quality/Performance Evaluation

Figure 7 depicts the run-time quality/performance monitoring mechanism. While the service system is running, the monitoring software periodically and automatically sends queries to, and receives reports from, the different service component environments. The run-time Key Performance Indicators (KPIs) of the system are evaluated based on the collected data. The quality/performance of the service system can be evaluated from three aspects of different granularity, as shown in Fig. 8: KPIs of service components, KPIs of service orchestration, and KPIs of service choreography are each collected and evaluated. The results of the run-time quality/performance evaluation are analyzed for further optimization of the service system.
Fig. 7. Run-time quality/performance monitoring
Fig. 8. Aspects of service quality/performance evaluation
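The monitoring loop described in this section amounts to periodically collecting KPIs at the three granularities and comparing them against the build-time target values. A schematic sketch under our own naming assumptions (the paper does not specify KPI names or an API):

    GRANULARITIES = ("component", "orchestration", "choreography")

    def collect_kpis(granularity):
        # Stand-in for the periodic queries sent to the component environments.
        return {"response_time": 0.8, "availability": 0.99}

    def evaluate_run_time(targets):
        # Compare collected KPIs against build-time target values; the gaps
        # feed the optimization phase of Section 5.
        gaps = {}
        for g in GRANULARITIES:
            kpis = collect_kpis(g)
            gaps[g] = {k: kpis[k] - targets[g][k] for k in kpis}
        return gaps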
5 Service Quality/Performance Optimization

Service quality/performance optimization is the third phase of SQFD; it is the reverse of the service quality design process, as shown in Fig. 9. Blue arrows represent the quality design process, and the red arrows in the opposite direction represent the optimization process. The gaps between the customer's expected service quality from the SRM and the customer's perceived service quality from the service system are identified, analyzed, and traced backward to earlier phases to determine their causes. To improve the quality of the service system, optimization efforts consider re-design, re-configuration and/or re-negotiation. This closes the quality assurance loop for the lifecycle of services.
SRM (System)
SBCM Performance
SBCM (System)
Expected Service Quality Gap
SEM Performance
SEM (System)
SQFD based quality design
Service System Performance
Service System
Reversed SQFD based quality optimization
Fig. 9. Service quality/performance optimization of SMDA
Perceived Service Quality
6 Conclusion

In this paper, a SQFD method based on SMDA is proposed to ensure service quality across the lifecycle of services. SQFD has three phases: service quality design, service quality evaluation, and service quality optimization. The QFD method is adopted and applied to design service quality by building three levels of quality houses. Run-time service quality evaluation and optimization are also depicted briefly. Further research work includes refining SQFD based on real case studies and defining quality parameter sets for specific typical service sectors.
Acknowledgement

The research in this paper is partially supported by the National High-Tech Research and Development Plan of China (2006AA01Z167, 2006AA04Z165) and the National Natural Science Foundation (NSF) of China (60673025).
References

[1] Xiaofei Xu, Zhongjie Wang, Tong Mo: SMDA: a New Methodology of Service Engineering. 2006 Asia Pacific Symposium on Service Science, Management and Engineering, Nov. 30-Dec. 1, 2006, Beijing, China
[2] Xiaofei Xu, Tong Mo, Zhongjie Wang: SMDA: A Service Model Driven Architecture. The 3rd International Conference on Interoperability for Enterprise Software and Applications, Mar. 28-30, 2007, Madeira Island, Portugal
[3] Parasuraman, A., Zeithaml, V.A., Berry, L.L.: A conceptual model of service quality and its implication. Journal of Marketing, Vol. 49, Fall 1985, pp. 41-50
[4] Xiaoqing Frank Liu: Software Quality Function Deployment. IEEE Potentials
[5] Jong-Seok Shin, Kwang-jae Kim, M. Jeya Chandra: Consistency check of a house of quality chart. International Journal of Quality & Reliability Management, Vol. 19, No. 4, 2002, pp. 471-484
[6] Kwai-Sang Chin, Kit-Fai Pun, W.M. Leung, Henry Lau: A quality function deployment approach for improving technical library and information services: a case study. Library Management, Vol. 22, Issue 4/5, Jun 2001, pp. 195-204
[7] Ghobadian, A., Terry, A.J.: How Alitalia improves service quality through quality function deployment. Managing Service Quality, Vol. 5, Issue 5, 1995, pp. 31-35
[8] Anne M. Smith, Moira Fischbacher, Francis A. Wilson: New Service Development: From Panoramas to Precision. European Management Journal, Vol. 25, No. 5, October 2007, pp. 370-383
[9] Eleonora Bottani, Antonio Rizzi: Strategic Management of Logistics Service: A Fuzzy QFD Approach. Int. J. Production Economics 103 (2006), pp. 585-599
Coevolutionary Computation Based Iterative MultiAttribute Auctions Lanshun Nie, Xiaofei Xu, Dechen Zhan Harbin Institute of Technology, Harbin 150001, P.R. China {nls, xiaofei, dechen}@hit.edu.cn
Abstract. Multi-attribute auctions extend traditional auction settings. In addition to price, multi-attribute auctions allow negotiation over non-price attributes such as quality and terms-of-delivery, and promise to improve market efficiency. Multi-attribute auctions are central to B2B markets, enterprise procurement and negotiation in multi-agent systems. A novel iterative multi-attribute auction mechanism for reverse auction settings with one buyer and many sellers is proposed, based on competitive equilibrium. The auctions support incremental preference elicitation and revelation for both the buyer and the sellers. A coevolutionary computation method is incorporated into the mechanism to support economic learning and strategies for the sellers. The myopic best-response strategy it provides is in equilibrium for sellers, assuming a truthful buyer strategy. Moreover, the auctions are nearly efficient. Experimental results show that the coevolutionary-computation-based iterative multi-attribute auction is a practical and nearly efficient mechanism. The proposed mechanism and framework can be realized as a multi-agent based software system to support supplier selection and deal decisions for both the buyer and the suppliers in B2B markets and supply chains. Keywords: Socio-technical impact of interoperability, Decentralized and evolutionary approaches to interoperability, Enterprise application Integration for interoperability
1 Introduction

Auctions are important mechanisms for allocating resources and services among agents [1]. They have found widespread use as a technique for supporting and automating negotiations in business-to-business online markets, industrial procurement and multi-agent systems. Multi-attribute auctions [2, 3] extend the traditional auction setting to allow negotiation over price and non-price attributes such as quality and terms-of-delivery, and promise to improve market efficiency in markets with configurable goods. Traditional auction mechanisms, such as the English, Dutch, and first- (or second-) price sealed-bid auctions, cannot be extended straightforwardly to
the multi-attribute setting, because there is private information on both sides of the auction. Several researchers have considered attributes other than price in the procurement setting from a purely theoretical perspective. Most of them adopt mechanism design theory and focus on the agents' best-response strategies. Che [2] first studied the optimal auction with only two attributes, price and quality. He proposes a buyer payoff-maximizing one-shot sealed-bid auction protocol with first-price and second-price payoff functions. Branco [4] extends this protocol to the case where the seller cost functions are correlated. Milgrom [5] has shown that efficiency can be achieved if the auctioneer announces his true utility function as the scoring rule and conducts a Vickrey (second-price sealed-bid) auction based on the resulting scores. Beil and Wein [6] propose an iterative payoff-maximizing auction procedure for a class of parameterized utility functions (with K parameters) with known functional forms and naive suppliers. The buyer uses K rounds to estimate the seller cost functions deterministically, and for the final round designs a scoring function that maximizes buyer payoff. Although theoretical progress has been made, there are some limitations: (1) agents are required to have complete information about their preferences, yet preference elicitation is often costly, and bidders would prefer not to determine an exact value tradeoff across all combinations of attribute levels; (2) agents are required to reveal a great deal of private information, although they would prefer to reveal as little as possible about costs and preferences; (3) agents are often required to have complete rationality. Another approach to the auction problem is competitive equilibrium theory. In this model an agent plays a best response to the current price and allocation in the market without knowing either the strategies of other agents or the effect of its own actions on the future state of the market. Iterative auction mechanisms based on it, which allow agents to provide incremental information about their preferences, are especially important for the application of multi-attribute auctions. Research in this direction is just beginning. As far as we know, only Parkes and Kalagnanam [3] have proposed an iterative primal-dual based multi-attribute auction mechanism for reverse auction settings. Their auctions are price-directed; a myopic best-response strategy is in equilibrium for sellers assuming a class of consistent buyer strategies, and the auctions are efficient with a truthful buyer. In this study, we propose an iterative multi-attribute auction mechanism based on competitive equilibrium theory. A coevolutionary computation method [7] is incorporated into the mechanism to support economic learning and strategies for the sellers. Section 2 formulates the general multi-attribute auction problem. Section 3 introduces an iterative auction mechanism for it. The coevolutionary computation based multi-attribute auction method is presented in Section 4. Computational results are reported in Section 5. We discuss some characteristics of the proposed mechanism in Section 6.
2 Multi-attribute Auction Problem

In the multi-attribute auction problem there are $N$ sellers, one buyer, and $M$ attributes. Let $I$ denote the set of sellers and $J$ the set of attributes. Each attribute $j \in J$ has a domain of possible attribute values, denoted $\Theta_j$. The joint domain across all attributes is denoted $\Theta = \Theta_1 \times \Theta_2 \times \cdots \times \Theta_M$. For an attribute bundle $\theta \in \Theta$, each seller $i \in I$ has a cost function $c_i(\theta) \ge 0$, and the buyer has a value function $v(\theta) \ge 0$. We restrict our attention to problems in which a single buyer negotiates with multiple sellers in a reverse auction and will eventually select a single seller. Also, we assume that agents have quasilinear utility functions. The utility to seller $i$ of selling an item with attribute bundle $\theta$ at price $p$ is the difference between the price and its cost, i.e. $u_i(\theta, p) = p - c_i(\theta)$. Similarly, the utility to the buyer of buying an item with attribute bundle $\theta$ at price $p$ is the difference between its value and the price, i.e. $u_B(\theta, p) = v(\theta) - p$. We believe that efficiency is a more appropriate goal for multi-attribute auctions, since they are usually applied in business-to-business markets and procurement activities [3]. In addition to efficiency, individual rationality, budget balance, and low rationality requirements on agents are all desirable properties of a multi-attribute auction mechanism.
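For later reference, with quasilinear utilities the efficient outcome is the seller-bundle pair maximizing total surplus, and the price only divides that surplus between the winner and the buyer:

\[
(i^{*}, \theta^{*}) \in \operatorname*{arg\,max}_{i \in I,\; \theta \in \Theta} \left[ v(\theta) - c_i(\theta) \right],
\qquad
u_{i^{*}}(\theta^{*}, p) + u_B(\theta^{*}, p) = v(\theta^{*}) - c_{i^{*}}(\theta^{*}).
\]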
3 Iterative Multi-attribute Auction Mechanism

We propose the following iterative mechanism (IMA) for the multi-attribute auction problem.

Step 1. The buyer announces minimal constraints on bids, the maximal number of rounds MAX_R, the number of bids OPT per round, and the number of failed rounds QUIT_R after which a seller quits. The current round is CUR_R = 0. Each seller submits an initial bid.
Step 2. The buyer determines the best bid of this round, Bestbid_CUR_R, and announces it to the sellers.
Step 3. CUR_R++. Each seller i, except the one providing Bestbid_CUR_R-1, proposes at most OPT new bids to the buyer, and the buyer reports whether these bids are better than Bestbid_CUR_R-1. If some proposed bids are better, seller i selects one of them as its bid for this round. Otherwise, i.e. if no bids are better than Bestbid_CUR_R-1, the number of consecutive rounds in which seller i has failed is counted; if it is larger than QUIT_R, seller i loses.
Step 4. If there is only one active seller, this seller wins. The deal is Bestbid_CUR_R-1, and the whole auction terminates in success.
Step 5. If CUR_R < MAX_R, go to Step 2; otherwise the whole auction terminates in failure.

In the IMA mechanism, agents do not have to know complete information about their preferences. Preference elicitation happens on demand, when a new bid is proposed, and agents can simply compare bids without knowing or determining their exact utility values. Little information is revealed in IMA. The sellers learn the
preferences of the buyer through interaction and information exchange, and improve their bids in order to find a deal with high efficiency. At the same time, since the buyer selects only one seller, the relationship between sellers is purely competitive and no collusion will happen [8]. So every seller strives to propose a bid that the buyer prefers, in order to win and maximize its utility. We assume that the buyer is truthful, i.e. the buyer's utility is achieved through natural competition between the sellers rather than through cheating. This is reasonable for long-term market design and cooperation between enterprises. As for the sellers, preparing a bid that matches the auctioneer's requirements, beats the bids of the other seller agents and also maximizes the seller agent's own utility is a sophisticated task when the agent does not completely know the buyer's preferences or the other sellers' costs. Learning capability and strategic capability are therefore critical to the seller agent.
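The round structure of IMA can be summarized in a few lines of Python. This is a schematic simulation, not the authors' implementation: the seller objects with initial_bid and propose methods, and the buyer's scoring callback, are our own abstractions (in the actual mechanism the buyer only reports pairwise comparisons, which the score function emulates here).

    def run_ima(sellers, score, MAX_R=100, OPT=5, QUIT_R=3):
        bids = {s: s.initial_bid() for s in sellers}
        best = max(bids.values(), key=score)
        fails = dict.fromkeys(sellers, 0)
        active = set(sellers)
        for _ in range(MAX_R):
            for s in list(active):
                if bids[s] == best:
                    continue                   # current best bidder rests
                better = [b for b in s.propose(best, OPT)
                          if score(b) > score(best)]
                if better:
                    bids[s], fails[s] = better[0], 0
                else:
                    fails[s] += 1
                    if fails[s] > QUIT_R:
                        active.discard(s)      # seller loses and quits
            if len(active) == 1:
                return active.pop(), best      # success: deal on best bid
            best = max((bids[s] for s in active), key=score)
        return None, None                      # failure after MAX_R rounds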
4 Coevolutionary Computation Based Multi-attribute Auction

Agent-based computational economics (ACE) [9] is the computational study of economies modeled as evolving systems of autonomous interacting agents. In recent years, ACE has become a powerful tool for mechanism design. Coevolutionary computation is a dominant approach within ACE. In this study, we incorporate ACE tools into the multi-attribute auction problem and propose a coevolutionary computation based multi-attribute auction model for the first time. In doing so, two important goals can be achieved. First, the auction mechanism IMA is simulated appropriately and its performance is evaluated. Second, we provide a powerful tool for supporting the strategies of the seller agents in the IMA mechanism. Coevolutionary computation developed from traditional evolutionary algorithms; it simulates the coevolutionary mechanism of species in nature and adopts the notion of an ecosystem. Multiple species coevolve and interact with each other, resulting in the continuous evolution of the ecosystem. The species are genetically isolated: individuals only mate with other members of their species [7, 10]. The coevolutionary computation model for the iterative multi-attribute auction mechanism IMA is shown in Fig. 1. Each seller in the auction is represented by a species in the ecosystem. Each species evolves a set of individuals, which represent the bidding strategies of the corresponding seller, through the repeated application of a conventional evolutionary algorithm. The buyer is represented by the buyer agent. Fig. 1 shows the fitness evaluation phase of the evolutionary algorithm from the perspective of species 1. To evaluate an individual (bidding strategy) from species 1, a collaboration is formed with representatives (representative bidding strategies) from each of the other species. The buyer agent determines whether this individual is better than the other representatives and returns the result to species 1. Species 1 then uses the result to evaluate the fitness of its individual; here the fitness is the utility the individual provides to the corresponding seller if it wins. There are many possible methods for
choosing representatives, for example the current best individual or a random individual [10].
Fig. 1. Coevolutionary Computation Framework for the IMA mechanism
The coevolutionary computation procedure, entitled IMA-CGA, for the iterative multi-attribute auction mechanism IMA is as follows; the evolution of each species is handled by a standard Genetic Algorithm.

Pseudo-code of IMA-CGA:

    k = 0
    for each species i do begin
        initialize the parameters of the species' GA
        initialize the species' population Pop_0^i
        evaluate fitness of each individual Ind_{0,n}^i in Pop_0^i
    end
    while termination condition = false do begin
        for each species i do begin
            choose a representative Rep_k^i from Pop_k^i
            reproduction from Pop_k^i to get Mate_k^i
            crossover and mutation from Mate_k^i to get Pop_{k+1}^i
            evaluate fitness of each individual Ind_{k+1,n}^i in Pop_{k+1}^i
        end
        k = k + 1
    end
    output the winner and the winning bid in Pop_{k+1}^i
Here the details of the species' fitness evaluation procedure and the buyer agent's winner determination procedure are omitted, since they are straightforward. The key variables, data structures and procedures of IMA-CGA are defined as follows. k is the generation counter of the evolutionary process and corresponds to CUR_R. n is the sequence number of an individual within a species. Every individual corresponds to a bidding strategy, composed of the values of the price and non-price attributes; every attribute is encoded in a chromosome. Simple roulette wheel selection is used for reproduction, a uniform strategy for crossover, and a swap strategy for mutation. The current best individual is selected as the representative of each species; this is consistent with the self-interested nature of the sellers and also accelerates the evolution. If the fitness of all individuals of a species is zero for the last QUIT_R consecutive generations, the corresponding seller loses and is removed from the auction. When the last competitor quits, the remaining seller is the winner, and the winning bid is the representative it proposed in the last generation. Of course, if k exceeds MAX_R of IMA, the auction terminates in failure. The coevolutionary computation model completely simulates the iterative multi-attribute auction mechanism IMA, including the buyer, the sellers, the competition between the sellers, the interaction between the buyer and the sellers, and the iterative auction procedure. At the same time, the coevolution of the species provides the corresponding sellers with economic learning and strategic capabilities, so the strategies of the sellers in IMA can be supported by this tool.
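The fitness rule implied above (an individual earns its seller's utility only if the buyer agent reports that it beats every other species' representative, and zero otherwise) can be written compactly; the function and parameter names are ours.

    def fitness(bid, representatives, buyer_utility, cost, margin=0.1):
        # bid and each representative are (attribute bundle, price) pairs.
        theta, price = bid
        mine = buyer_utility(theta, price)
        # The buyer agent only reports the comparison outcome, not the values;
        # the winning margin mirrors the 0.1 threshold used in Section 5.
        if all(mine >= buyer_utility(t, p) + margin for t, p in representatives):
            return price - cost(theta)   # quasilinear seller utility if it wins
        return 0.0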
non-price attributes. Every attribute is encoded in a chromosome. Simple roulette wheel selection strategy is used for reproduction. Uniform strategy is used for crossover. Swap strategy is used for mutation. The current best individual is selected as the representative of each species. This is consistent with the selfinterested nature of the sellers and also accelerates the speed of evolution. If the fitness of all the individuals of a species are zero for the last QUIT_R consecutive generations, the corresponding seller loses and is thrown out from the auction. When the last competitor quits, the seller left is the winner, and the winning bid is the representative proposed by it in the last generation. Of course, if k is large than MAX_R of IMA, the auction terminates in failure. The coevoluationary computation model completely simulates the iterative multi-attribute auction mechanism IMA, including the buyer, the sellers, the competition between the sellers, the interaction between the buyer and the sellers, and the iterative auction procedure. At the same time, the coevolution of the species provides the corresponding seller with economic learning capability and strategy capability, so the strategy of the sellers in IMA can be supported by this tool.
5 Computational Results

The problem from Beil and Wein [6] is adopted here to illustrate our mechanism because the buyer value function and the seller cost functions are all complex and nonlinear. The problem has N=2 sellers and one non-price attribute, quality. The cost function of seller 1 is q^3 + 4q^15, the cost function of seller 2 is 2q + q^3, and the value function of the buyer is 5q^0.9. IMA-CGA is implemented in C++ based on GAlib [11]. The price and quality attributes are encoded as real numbers. The population size is 50, the crossover probability is set to 0.9, and the mutation probability to 0.1. QUIT_R is set to 3. When a bid provides the buyer with a utility at least 0.1 higher than the other bid, it wins. The problem is solved by IMA-CGA and the procedure is shown in Fig. 2. In Fig. 2, S1U and S1toB denote the utility to seller 1 and to the buyer, respectively, provided by the best individual of the current generation; S2U and S2toB denote the corresponding utilities for seller 2. The competition and interaction between the two sellers are mainly reflected in the trends of S1toB and S2toB. As a whole, competition from the opponent compels each seller agent to provide better and better bids for the buyer in order to defeat the opponent, while its own utility decreases. From the 6th generation on, the utility to the buyer provided by seller 2, S2toB = 1.8717, cannot defeat that provided by seller 1, S1toB = 1.9787, and this does not change over the following 4 generations. So seller 2 quits and seller 1 wins. The outcome is a price p of 2.2096, a quality q of 0.8213, and a utility of 1.4466 for seller 1.
Fig. 2. Coevolutionary computation procedure for the multi-attribute auction problem
Let us analyze whether the outcome reached by the coevolutionary computation procedure is an equilibrium. First, assuming complete information and the individual-rationality constraint, the maximal utility to the buyer that seller 2 can provide is 2.0198. Since 2.0198 is less than 1.9787 + 0.1, there exists no strategy better than the outcome, i.e. losing, for seller 2. As for seller 1, its maximal utility is 1.4588, constrained by beating seller 2. This deviates merely 0.8% from the utility in the IMA outcome, i.e. 1.4466, so for seller 1 there is nearly no strategy better than the outcome. In a word, the strategy provided by the coevolutionary computation procedure IMA-CGA is an ex post Nash equilibrium for the sellers, assuming the buyer is truthful. Let us now turn to the efficiency of mechanism IMA. The Vickrey auction mechanism proposed by Milgrom [5] is efficient assuming the buyer announces his truthful utility function as the scoring rule; we use it as the benchmark for evaluating IMA. The outcome of the Vickrey mechanism is that seller 1 wins, the utility of the buyer is 2.0198, and the utility of seller 1 is 1.4177. The utility of the buyer in the outcome of mechanism IMA is 2% less than under Vickrey, and the utility of seller 1 is 2% more. The total utility of the market under mechanism IMA is 3.4253, which deviates only 0.3% from that of Vickrey (3.4375). Hence, the iterative multi-attribute auction mechanism IMA, with the coevolutionary computation procedure IMA-CGA, is nearly efficient for the multi-attribute auction market.
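These figures can be checked directly against the stated cost and value functions. The following short Java snippet (ours, not part of the authors' C++ implementation) recomputes the reported utilities from the outcome p = 2.2096, q = 0.8213:

    // Sanity check of the reported outcome using the example's functions.
    public class OutcomeCheck {
        public static void main(String[] args) {
            double p = 2.2096, q = 0.8213;
            double buyerValue = 5 * Math.pow(q, 0.9);                  // 5q^0.9
            double seller1Cost = Math.pow(q, 3) + 4 * Math.pow(q, 15); // q^3 + 4q^15
            double buyerUtility = buyerValue - p;                      // ~1.9787 (S1toB)
            double seller1Utility = p - seller1Cost;                   // ~1.4466
            double totalIMA = buyerUtility + seller1Utility;           // ~3.4253
            double totalVickrey = 2.0198 + 1.4177;                     // 3.4375 (benchmark)
            System.out.printf("deviation from Vickrey: %.2f%%%n",
                    100 * (totalVickrey - totalIMA) / totalVickrey);   // ~0.36%, the reported ~0.3%
        }
    }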
6 Discussion

Some characteristics of the proposed mechanism IMA, with procedure IMA-CGA, for the multi-attribute auction problem are as follows: (1) The multi-attribute auction model in this study places no limitations on attributes and utility functions, hence it has
wide applicability. (2) Mechanism IMA supports incremental preference elicitation on demand. Moreover, agents can simply compare bids without having to determine their exact utility values. (3) Agents do not have to reveal their complete utility or scoring functions; little information is revealed and private information is well preserved. (4) Not only does the coevolutionary computation procedure IMA-CGA support myopic best-response strategies for the sellers, these strategies also constitute an ex post Nash equilibrium. (5) Mechanism IMA is nearly efficient for the multi-attribute auction market and converges quickly.
7 Conclusions

Multi-attribute auctions are central to negotiation in business-to-business markets, procurement activity, and multi-agent systems. A novel iterative multi-attribute auction mechanism IMA for reverse auction settings with one buyer and many sellers is proposed based on competitive equilibrium theory. The auctions support incremental preference elicitation and revelation for both the buyer and the sellers. A coevolutionary computation method is incorporated into the mechanism to support economic learning and strategy making for the sellers. Experimental results show that the myopic best-response strategy provided by it is an ex post Nash equilibrium for the sellers, assuming a truthful buyer strategy, and that the auction is nearly efficient, i.e. the total utility of IMA deviates a mere 0.3% from that of the multi-attribute Vickrey auction mechanism. An efficient outcome can be reached quickly by the practical mechanism IMA in settings where agents do not have complete information about their preferences, are unwilling to reveal much private information, or lack complete rationality. Moreover, the proposed mechanism and framework may be extended to the multi-attribute, multi-unit auction problem [12], which is so complex that, as far as we know, there are nearly no theoretical or practical approaches for it to this day. A multi-agent-based system, in which the proposed mechanism and framework are realized, has been developed for supporting reverse multi-attribute auctions in B2B markets and supply chain procurement. The system supports supplier selection and deal decisions for both the buyer and the suppliers. The software will be applied and verified in a supply chain composed of an automobile company and its main suppliers.
References
[1] McAfee R.P., McMillan J.: Auctions and bidding. Journal of Economic Literature 25(2) (1987) 699–738
[2] Che Y.K.: Design competition through multidimensional auctions. RAND Journal of Economics 24 (1993) 668–680
[3] Parkes D.C., Kalagnanam J.: Models for Iterative Multiattribute Procurement Auctions. Management Science 51(3) (2005) 435–451
[4] Branco F.: The design of multidimensional auctions. RAND Journal of Economics 28(1) (1997) 63–81
[5] Milgrom P.: An economist's vision of the B-to-B marketplace. Executive white paper, http://www.perfect.com
[6] Beil D.R., Wein L.M.: An inverse-optimization-based auction mechanism to support a multiattribute RFQ process. Management Science 49(11) (2003) 1529–1545
[7] Potter M.A., De Jong K.A.: Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation 8(1) (2000) 1–29
[8] Klemperer P.D.: Auction theory: a guide to the literature. Journal of Economic Surveys 13 (1999) 227–286
[9] Tesfatsion L.: Agent-based computational economics: Growing economies from the bottom up. Artificial Life 8(1) (2002) 55–82
[10] Wiegand R.P.: An analysis of cooperative coevolutionary algorithms. Ph.D. dissertation, George Mason University (2003)
[11] Wall M.: GAlib: A C++ Library of Genetic Algorithm Components, version 2.4. MIT (1996)
[12] Bichler M., Kalagnanam J.: Configurable offers and winner determination in multi-attribute auctions. European Journal of Operational Research 160(2) (2005) 380–394
Knowledge Integration in Global Engineering

Prof. Reiner Anderl, Diana Völz, Thomas Rollmann
Petersenstraße 30, 64287 Darmstadt, Germany
{anderl, voelz, [email protected]}
Abstract. In the globalised economy, the organisational structures and methods used in product development have changed. Companies and organisations have responded by reengineering the processes involved and increasing the use of information and communication technologies. Although the topic is not new, innovative concepts are required to fully support the resulting distributed virtual product development process. Modern information systems provide access to information as well as communication, but current software tools do not address the changing circumstances that arise from working in an intercultural field. Global Engineering can create misunderstandings in the product development process, leading to a possible lack of information and knowledge circulation. Knowledge Integration in collaboration supports successful communication and cooperation; it also supports finding common ground, processes, and a project language. The aim of this paper is to advance knowledge integration in product development to tackle the new challenges in Global Engineering.

Keywords: Global Engineering, Knowledge Integration, cultural diversity in collaborative team work, different organisational structures, product development
1 Introduction

Due to the intensified processes of globalisation, organisations have reengineered product development: engineers work together in global teams embedded in virtual environments. The new circumstances of intercultural global team work offer new possibilities to create innovative products. However, the resulting virtual team environments also require new strategies to overcome the challenges of global product development. Team collaboration in product development is vital for developing successful products. This study presents an empirical survey in Global Engineering, including interviews, that identifies the current reengineering of working processes in global organisational structures. An overview is given of the important aspects impacting global team work which should be taken into consideration to improve global product development. The focus of this paper is
the issue of Knowledge Integration as a key for improved virtual team collaboration in intercultural fields. It will be argued that Knowledge Integration is the basis for mutual understanding in Global Engineering.
2 Global Engineering

2.1 Why has Engineering gone global?
Worldwide networked engineers and modern information and communication technologies offer new forms of collaboration in product development. An engineer can better understand customers' needs and feelings when located in the local sales market; therefore customers' requirements can be integrated earlier in the product development process, which reduces time-consuming and cost-intensive developments. Today it is necessary for companies to be represented in the global market to be competitive. From another point of view, Global Engineering makes it possible to combine worldwide dispersed expertise on one project. Special knowledge support is sometimes needed in engineering, e.g. the application of special software simulation tools; in Global Engineering the expert is able to work on several projects at the same time without leaving his location. Furthermore, product development in a multicultural field can evoke synergy effects. "Such teams have the potential to provide companies with a more practiced and economical way to develop new products and services." [1]

2.2 Global Teams
Global teams are both geographically dispersed and culturally diverse. They work together in virtual environments and are mostly embedded in different organisational structures. Research suggests, however, that companies are struggling to deal with the myriad problems arising from the use of such global teams [1]. Much of this struggle seems to stem from the nature of global teams.
3 Challenges in Global Engineering

3.1 An Investigation in Global Engineering
This research aims to investigate the support of Global Engineering by the use of Computer Supported Cooperative Work (CSCW) technologies. An empirical survey addressing the shortcomings in communication, coordination and cooperation, as well as the application of CSCW technologies in practical Global Engineering, was developed to tackle the challenges in global team work.
The survey was started at the Global Annual PACE-Forum [17] and additionally distributed via e-mail to engineers working in the global field. The first part of the survey asked the respondents about the most important factors affecting the Global Engineering process; the second part was meant to capture the benefits of team building processes. The number of international respondents was forty-six. Effects in Global Engineering were measured by asking the respondents to indicate the extent to which seven factors influence global team work and the project's results. A five-item scale was used to evaluate each factor. The mean of each factor, which represents its overall importance, is shown in Figure 1a). The respondents rated "different organisational cultures" as the most influential factor in global team work.
Fig. 1. Factors impacting Global Engineering team work and project results
The organisational culture was the most vital factor influencing Global Engineering regardless of the different roles in the global team (team member, project leader, and process leader). However, the second and third factors varied depending on the position in the team (Fig. 1b); the findings are influenced by the different areas of responsibility associated with the role of the engineer in the team. Figure 2 presents the responses about the benefits of teambuilding in Global Engineering projects. A four-item scale was used to evaluate each factor; the mean of each factor, based on all responses, is presented in Figure 2. According to most respondents, teambuilding arrangements can lead to a better understanding of common project goals, and Knowledge Integration between the project members was found to be the second most important factor affecting team cooperation. It is also striking that, according to the questionnaire, the number of 24-hour developments in product development has increased.
In addition, five interviews with representatives of engineering companies working in the global field showed the global re-engineering of product development processes and its consequences. After having merged, companies reorganised their departments and established new competence centres. Work processes directly integrate suppliers into product development. The standardisation of those processes makes it possible for testing procedures, virtual modelling, and assembly to be processed anywhere and at any time by any engineer.
Fig. 2. Desired benefits of teambuilding methods
Global management regulates the different engineering competence departments worldwide. Members of global teams are confronted with a global supervisor who usually does not work at the same location; thus they do not necessarily know their supervisors personally. The globalisation of organisational structures and the standardisation of processes enable the distribution of tasks depending on free resources and available competences across all locations. Product data management systems permit access to worldwide distributed development data, and CSCW technologies enable communication, coordination, and cooperation in team collaboration. Misunderstandings and lack of trust are crucial issues which every organisation solves in its own way: some organisations arrange "face-to-face" meetings or a "coming together" to build cohesion in global teams, whereas others include cultural trainers in collaborative team work to coach the team members in cultural issues. To support cooperation and to overcome language difficulties in communication, engineers use visualisations of the product. 3D-CAD models or digital mock-ups represent the real form of the product, and the engineers use them to discuss development questions over wide distances and across cultural boundaries. According to statements of the interviewed persons, visualisations can help to build a common project goal and improve mutual understanding. Visualisation tools, like shared desktops or viewers, offer the possibility of working together in globally dispersed teams.

3.2 An Outline in Global Engineering
Due to the geographical extent of collaboration, product development processes are affected by new challenges.
The challenges for personal interaction, including different language skills as well as different sets of cultural beliefs, can lead to misunderstandings and more room for interpretation in global team work. Different countries have a different focus on skills, mirrored by education and working attitudes, and different companies have various approaches to work, all of which have to be combined in cooperation. The challenges for managing global teams stem from the nature of global teams: in Global Engineering the team members are geographically dispersed and often live in different time zones, communication only takes place virtually, and team members working in networks seldom know each other personally. This can lead to a lack of trust and cooperation between team members. Finally, these aspects make it complicated for project managers to build a common goal and a task strategy as well as cohesion between the team members. Sometimes "face-to-face meetings" or a "coming together" are arranged to overcome this problem in Global Engineering. In addition, in cooperation between companies there is often non-uniform technical support, which implies that the software systems are not compatible. Therefore, it is difficult for an engineer to get the right information when it is needed. The transparency of information and knowledge is vital in product development. The challenges can be summarized as challenges in the communication, coordination, and cooperation of global teams that are embedded in virtual team environments.
4 Global Virtual Team Environments

4.1 Global Virtual Team Rooms
A comparison of Real Team Rooms (the room where co-workers meet face-to-face) and Virtual Team Rooms (the communication network linking global teams) shows the differences between both working environments. Today's Real Team Rooms offer an all-encompassing working environment where team members interact personally, which leads to team cohesion in a shared environment. In contrast, the members of Virtual Team Rooms are networked by information and communication applications, and the tools utilised depend on organisational structures. Product visualisation is the connecting factor of virtual team work and fosters mutual understanding in such teams.

4.2 Communication, Coordination and Cooperation in Global Teamwork
Empirical studies about team work in product development deal with the key points of successful team collaboration: communication and coordination, as referenced in [2], [3], and [4]. In his "3C-Model", Teufel [5] added the key point of cooperation.
The "3C-Model" consists of Communication, Coordination, and Cooperation, terms that must be kept in mind in interdisciplinary team collaboration and that can be extended to Global Engineering. Communication means the exchange of development data as well as project management information in verbal, textual, or graphical form. Coordination is needed to structure distributed and interdisciplinary team work when planning time schedules, work processes, and milestones, as well as when networking knowledge, expertise, information, and data. Cooperation in team work means being responsible for one's own field of duty and being aware of the interfaces to one's team colleagues; it is driven by the open-minded behaviour of team members in transferring information, expertise, and results to their colleagues.

4.3 CSCW-Technologies
"Computer Supported Cooperative Work (CSCW) is a generic term which combines the way people work in groups with the enabling technologies of computer networking and associated hardware, software, services, and techniques", as outlined by Wilson in [16]. CSCW technologies remove the limitations of time and space in collaboration; thus team meetings can take place anywhere and at any time. Groupware enables the synchronous communication of many team colleagues over great distances. Communication becomes more focused on the project tasks because important project feedback can be given when needed. The disadvantages of CSCW technology use are that team members are confronted with a flow of redundant information and a lack of social contact. A classification of CSCW technologies regarding time and space, the CSCW-Matrix, was developed by Johansen in 1988 [19]. This model was extended by distinguishing between synchronous and asynchronous CSCW technologies. Burger [6] integrated the classification of CSCW technologies into the 3C-Model of Teufel (Fig. 3).
Fig. 3. CSCW-Triangle-Matrix according to [5]
The support of communication is vital for verbal, textual or graphical information exchange in collaboration. Communication tools can be subdivided into tools enabling asynchronous communication, e.g. e-mail exchange, and synchronous communication technology such as chats or video conferencing systems. Communication tools can be used individually or can be connected with further groupware systems. Coordination tools should enable an unhindered and efficient flow of work in collaboration. Coordination can be handled by the project members themselves using the aforementioned communication tools, or by special workflow management systems. Workflows should advance the project's progress by coordinating the activities and resources of team members. Systems for coordination also enable access to development data and allow for easier administration of project information. Cooperation is driven by the interaction and open-minded behaviour of the team members. Cooperation tools should fulfil different requirements depending on the project phase; therefore a number of systems with various applications are needed, such as those permitting synchronous as well as asynchronous working on common documents. In summary, a lot of CSCW technology exists to support team collaboration in the global field. According to [7] and the survey, most technology is intuitively applied in the correct way, depending on the richness of communication and information transfer required by the users. Facial expressions and body language are important nuances that are lost in asynchronous communication. Only the video conferencing system is less convenient to use because of the preparation time needed. Steinheider and Legrady [8] introduced a cooperation model based on the 3C-Model [5]. In this model cooperation is replaced by Knowledge Integration as the basis of communication (Fig. 4), defined as the partial exchange of knowledge between participants [8].
Fig. 4. Extended CSCW-Triangle-Matrix according to [6]
According to this research, the IT support of Knowledge Integration needs further attention.
5 Knowledge Integration: a Key in Global Engineering?

5.1 Knowledge Integration
Knowledge Integration means the application of tacit knowledge to a specific situation. Team members pool their tacit knowledge in solving a collective task, e.g. when developing innovative products during the Product Development Process. Effective teamwork requires joint problem solving to integrate and apply knowledge to the task at hand. This process requires an environment of easy, frequent, content- and context-rich interpersonal interactions. Knowledge Integration develops over time, primarily through direct interaction between team colleagues; interaction takes place through rich and iterative communication and information transfer in team work. In virtual team environments, the dispersion of team members and their different cultural backgrounds, working attitudes, expertise, and languages constrain the development of Knowledge Integration in Global Engineering. In these settings Knowledge Integration can suffer, so that tacit knowledge is not integrated and not brought to bear on the Product Development Process [9]. Based on Knowledge Integration, cooperation can take place to define a common project goal and project content as well as a common project language; for this, information and knowledge have to become productive. The gained Knowledge Integration should be the background for further communication. Further research has to be done to support the communication process leading to Knowledge Integration in team collaboration [8]. According to Clark and his colleagues, "common ground" is a key ingredient of effective communication and collaboration in team settings [10], [11], [12]. Gaining Knowledge Integration requires flexibility: with IT support, strong rules are necessary but can lead to unevenly shared knowledge across the virtual environment. A prerequisite for effective Knowledge Integration in teams is knowing who has the required knowledge and expertise, where the knowledge and expertise are located, and where and when they are needed. Knowledge exists in firms and networks and is bound up in employees. Knowledge Integration enables the ability of organisations to sense, interpret, and respond to new business opportunities, e.g. the re-engineering of Product Development Processes [9]. The investigation in [8] points out that common Knowledge Integration is more easily gained by using visualisations of the project item in collaboration. Especially in product development, where engineers work together, visualisation plays an important role: realising the idea in a digital model helps to ensure the level of Knowledge Integration. Documentation and visualisation play an important role for Knowledge Integration in the Product Development Process; this can be realized through graphically supported communication. Engineers in particular are used to communicating via visualisation. Current research in linguistic science
investigates the interplay of texts and drawings in communicating knowledge in the field of engineering [13]. The connecting factor for all locations in product development is the product, which can be visualised by its 3D geometry. The digital model can be helpful to ensure a fixed level of Knowledge Integration in team work. Several studies revealed that enhanced 3D representations are immediately understood by about 90% of engineers, compared to only 10–30% for plain texts or 2D mechanical drawings. Current research dealing with the establishment of Knowledge Management Systems (KMS) does not take into consideration that Knowledge Integration is the first step to take before developing a KMS.

5.2 The Concept of a Computer Supported Knowledge Integration System
The results of the survey showed a lack of Knowledge Integration in Global Engineering team work, and according to the last section no dedicated technology exists that supports Knowledge Integration in Global Engineering. To build Knowledge Integration within a virtual team environment, a number of requirements have to be taken into account. Some of the important requirements of Computer Supported Knowledge Integration Systems (CSKIS) are listed in Table 1 and described afterwards:

Table 1. Requirements of Computer Supported Knowledge Integration Systems
- flexible: adaptable to different teams, different tasks in product development, different organisation structures
- socio-technical: allow direct interaction between team members, simple but well structured
- accessible: dispersed team members should have access to the system as well as to information regardless of their location
- context based: allow communication on a common project task
- social: facilitate virtual team building
- well arranged data representation: representing ongoing actual project information and development data in the user's view
- generally understandable: facilitate mutual understanding regardless of culture, language, expertise and working attitude
- digital model based: as the language of engineers and product-oriented data representation
- compatible: to heterogeneous software systems
- adaptable: compatible to existing software systems
- storable: for further knowledge management systems and lessons learned as well as technical product documentation
- network structured: guideline to stored knowledge (who knows what and where it is)
- common ground based: basic information about one's own and other cultures, working attitudes, expertise (competence matrix, yellow pages)
Knowledge Integration significantly depends on the interaction in teams. The behaviour of team members is relevant in building Knowledge Integration, such as diligence, openness, acceptance, and the readiness to change one's perspective towards a
better understanding of a colleague. These factors can hardly be managed by software tools alone, but rather by a socio-technical management system that includes team members interactively working together based on a common context [14]. A CSKIS can foster strong ties within weakly coupled subnetworks of individuals to build trust and cohesion by facilitating Knowledge Integration [9]. "Acquiring knowledge" by communication [15] assimilates external knowledge into projects; a CSKIS should offer a collaboration platform to build common ground in global teams. Common ground about other professional disciplines, cultures, and organisational structures is a key point for Knowledge Integration. "Identifying knowledge" is the task of analysing internal and external organisational knowledge to gain an overview of the available competences. This means that data, information and competences available in databases, the internet, or literature have to be identified; in this process teams should become aware of the competences involved in the project. In Global Engineering this is part of teambuilding: the formation of teams is a key point for the knowledge involved in a project, since only knowledge involved in teams can circulate through networked projects. The distribution of knowledge plays an important role in global teamwork because communicated knowledge is mostly distributed by communication technologies or in the form of texts and graphics. In intercultural team work it is vital to document and visualise development results and experiences to gain common ground. Visualisation is mentioned by the respondents as an important tool to advance mutual understanding in global team work. The best approach is the use of interactive documents including visualisation, because Knowledge Integration is a dynamic process requiring the activity of all team members. A further requirement of a CSKIS is to represent information in the user's view to guarantee the distribution of knowledge. The integrated knowledge should be stored, for example, in content management systems to enable global team members to access knowledge anywhere at any time.

5.3 Further steps
Knowledge Integration has to be classified within the Product Development Process to focus the knowledge-based interaction between team members in Global Engineering; current research does not take into account the special knowledge-based tasks during the Product Development Process. The ongoing phases of the Product Development Process change from technically to socially based accommodation with varying depths of team interaction. Further steps are to investigate applied collaboration platforms and to integrate dynamic visualisation technology into them, e.g. the Portable Document Format technology linked with Product Data Management systems. The advantage is that existing platform software is interlinked with the actual 3D geometry, independent of expensive 3D-CAD software tools. 3D-PDF represents compressed but exact 3D data, which reduces time-consuming upload processes; the system is therefore portable and applicable in every kind of team collaboration. In early 2005 PDF received numerous new functionalities regarding the visualisation of 3D geometries. The functions cover the possibility of including
user interactions to alter the current state of the graphical representation. Users viewing the document can easily rotate, pan, tilt, and zoom the 3D object as well as trigger pre-loaded animations of it; measurements are also possible. Aside from the well-known functions for the representation of textual information, PDF was enhanced to combine graphical and textual information in one document. Communication processes with the ability to transfer information also via 3D representations enable collaborative engineering to work more efficiently, as technical issues and problems can be demonstrated more vividly. They are also more easily understood without the need for special terminologies if 3D models are used as interactive contents. Further advantages of the PDF technology are listed in Table 2 [18].

Table 2. Advantages of the PDF-Technology
- Tools and plug-ins can be inserted into PDF-based applications
- High degree of awareness and diffusion rate
- Data integrity: PDF documents integrate textual information, fonts, images and figures, graphs, audio, video and 3D objects
- Protection of data privacy: PDF documents can be signed with digital keys, or they can be cryptographically secured
Also, the system can integrate additional functions and information, like project wikis, "yellow pages" containing profiles of team members specifying their areas of specialisation, searchable libraries, and electronic bulletin boards where team members can post questions and seek other team members' assistance and knowledge. A further step should be an investigation of functions already implemented in collaboration platforms that help to gain Knowledge Integration, e.g. videoconferencing, groupware, e-mail and shared whiteboards, and the testing of the possible integration of new technologies, e.g. hypervideo functions.
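To illustrate the "yellow pages" idea, a minimal competence-matrix structure could look like the following sketch; all names and fields are our own invention, not a specification from the paper.

    // Illustrative "yellow pages" entry: who knows what, and where.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class YellowPages {
        record Profile(String name, String location, List<String> expertise) {}

        private final Map<String, Profile> profiles = new HashMap<>();

        public void register(Profile p) { profiles.put(p.name(), p); }

        // find dispersed colleagues offering a given competence
        public List<Profile> findExperts(String competence) {
            return profiles.values().stream()
                    .filter(p -> p.expertise().contains(competence))
                    .collect(Collectors.toList());
        }
    }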
6 Conclusion

The purpose of this paper is to show the challenges in Global Engineering and the re-engineering of organisational structures as well as product development processes, which leads to new requirements for CSCW technologies. The existing communication and collaboration tools are applied intuitively in the correct way, but they are not able to store the Knowledge Integration gained while communicating. Therefore, new functions have to be implemented in collaboration platforms enabling flexible, interactive, and context-based global team work. In engineering, communication using the evolving 3D geometry of the product is a sensible way of bridging language difficulties in Global Engineering. The communication data should be stored to preserve the Knowledge Integration of the current team project, both as "lessons learned" for further global projects and for use in the technical product documentation. Further research is needed to enhance
Knowledge Integration during the product development phases and to test the integration of 3D-PDF in collaboration platforms.
References
[1] McDonough E.F., Kahn K.B., Barczak G.: An investigation of the use of global, virtual and collocated new product development teams. The Journal of Product Innovation Management (2001) 110–120
[2] Wehner T., Raeithel A., Clases C., Endres E.: Von der Mühe und den Wegen der Zusammenarbeit. In: Endres E., Wehner T. (Hrsg.): Zwischenbetriebliche Kooperation – Die Gestaltung von Lieferbeziehungen. Psychologie Verlags-Union, Weinheim (1996)
[3] Badke-Schaub P., Frankenberger E.: Analysing Teams by Critical Situations: Empirical Investigations of Teamwork in Engineering Design Practice. Presentation at the Fifth European Congress of Psychology, Dublin, Ireland (1997)
[4] Frankenberger E.: Arbeitsteilige Produktentwicklung – Empirische Untersuchung und Empfehlungen zur Gruppenarbeit in der Konstruktion. VDI-Verlag, Düsseldorf (1997)
[5] Teufel S., Sauter C., Mühlherr T., Bauknecht K.: Computerunterstützung für die Gruppenarbeit. Addison-Wesley, Bonn (1995)
[6] Burger C.: Groupware: Kooperationsunterstützung für verteilte Anwendungen. dpunkt-Verlag (1997)
[7] Hertel G., Konradt U.: Telekooperation und virtuelle Teamarbeit – Interaktive Medien. Oldenbourg, München (2007)
[8] Steinheider B., Legrady G.: Kooperation in interdisziplinären Teams in Forschung, Produktentwicklung und Kunst. Fraunhofer Institut für Arbeitswirtschaft und Organisation, Stuttgart (2001)
[9] Alavi M., Tiwana A.: Knowledge Integration in Virtual Teams. Journal of the American Society for Information Science and Technology (October 2002)
[10] Clark H.: Using Language. Cambridge University Press, New York (1996)
[11] Clark H., Carlson T.: Speech acts and hearers' beliefs. In: Smith N. (Ed.): Mutual Knowledge, pp. 1–36. Academic Press, New York
[12] Clark H., Marshall C.: Definite reference and mutual knowledge. In: Joshi A., Sag I., Webber B. (Eds.): Elements of Discourse Understanding, pp. 10–63. Cambridge University Press, New York (1981)
[13] Teich E.: http://www.linglit.tu-darmstadt.de/index.php?id=1169#c2272 (2007)
[14] Troxler P., Lauche L.: Knowledge Management and Learning Culture in Distributed Engineering. ICED International Conference on Engineering Design, Stockholm (August 2003)
[15] Gausemeier J., Hahn A., Kespohl H.D., Seifert L.: Vernetzte Produktentwicklung. Der erfolgreiche Weg zum Global Engineering Networking. pp. 311–313, Carl Hanser Verlag, München/Wien (2006)
[16] Anderl R.: Skript Produktdatentechnologie B – Produktdatenmanagement SS2007. Technische Universität Darmstadt (2007)
[17] PACE-Forum: http://www.pace-digiman.de/ (2007)
[18] Lee K.: Einsatzmöglichkeiten der 3D-PDF-Technologie in der Technischen Produktdokumentation. Master Thesis, Technische Universität Darmstadt, Institute Datenverarbeitung in der Konstruktion (2007)
[19] Johansen R.: GroupWare: Computer Support for Business Teams. The Free Press, New York, NY (1988)
Part VI
Modelling and Meta-modelling Methods and Tools for Interoperability
A Framework for Executable Enterprise Application Integration Patterns

Thorsten Scheibler and Frank Leymann
Universität Stuttgart, Institute of Architecture of Application Systems (IAAS)
Universitätsstrasse 38, 70569 Stuttgart, Germany
{leymann, scheibler}@iaas.uni-stuttgart.de
Abstract. A great challenge for enterprises is improving the utilization of their landscape of heterogeneous applications in complex EAI (Enterprise Application Integration) scenarios. Enterprise Application Integration patterns help to address this challenge by describing recurring EAI problems and proposing possible solutions at an abstract level. However, EAI patterns are documentation only, used by systems architects and developers to decide how to implement an integration solution; they do not specify how to produce the code that actually implements the solution described by the pattern on a specific middleware. In this paper we introduce a framework that provides configuration capabilities for EAI patterns. The framework also allows executable integration code to be generated from EAI patterns using a model-driven architecture approach. Furthermore, we present a tool realising this framework.

Keywords: Enterprise Application Integration for interoperability, Service Oriented Architectures for interoperability, Model Driven Architectures for interoperability
1 Introduction

Enterprise Application Integration (EAI) means the integration of huge application system landscapes comprising different kinds of heterogeneous artifacts (e.g. data, services, applications, processes). EAI applications function by coordinating standalone, independent applications, each able to run by itself, in the enterprise landscape in a loosely coupled manner. EAI therefore does not mean creating a single application that is distributed across several systems. A common way of integrating applications is by using messaging [15]. This technology eases integration by enabling multiple applications to exchange data using a "send-and-forget" paradigm, while providing a certain level of Quality of Service (QoS) thanks to the "store-and-forward" approach. Often, business processes (i.e. workflows) are used to describe and execute the integration
of multiple services in a message-oriented system [14]. Workflows invoke the distributed services in a coordinated manner, using e.g. coordination protocols and compensation logic. While designing EAI scenarios, one recognizes that the same solutions are used multiple times to address recurring problems. Such problems and their solutions are generally described in terms of patterns [2]. Besides the well-established patterns in the field of object-oriented software design [11], there is an emerging area of patterns for software and enterprise architecture [5, 9]. For instance, [12] defines a set of patterns covering prominent examples of situations encountered in message-based EAI solutions. Basically, the development of EAI scenarios comprises three layers: modeling, integration, and implementation. Figure 1 depicts exactly this approach: (i) on the top level, architects draw system landscapes in an abstract way to describe the system architecture without specifying the technologies used in the system; this level eases the communication between architects, who may not know all technical details, and technical personnel. (ii) The middle layer, also known as the integration layer, enables the gluing together of various services; the code produced represents business logic developed to achieve the business goals. (iii) At the lowest level reside the implementations of enterprise applications (legacy systems, databases, ...) providing the different services that eventually help to achieve the business goals.
Fig. 1. Three levels of Enterprise Application Integration
Having the architecture of the desired system landscape for a particular EAI scenario on the one hand and the implemented services on the other, no continuous method currently exists explaining how to get from the top level down to the lower level. This reveals the so-called Architecture-and-IT gap: a vacuum between the persons who are familiar with the overall architecture (including business processes) and the staff with expertise in technological issues independent of the business goals targeted by a particular scenario. In this paper we present an approach to fill this gap. We introduce a framework that allows executable integration code to be generated from EAI patterns using a model-driven architecture (MDA) [10] approach. The framework also provides configuration capabilities for EAI patterns in a platform independent manner, which serve as the basis for the generation logic. The paper is structured as follows: in Section 2 we discuss the connection between EAI patterns and MDA, including a short
comparison of the Pipes-and-Filter (PaF) architecture and workflows, which are the two basic alternatives available at the integration level. In Section 3 we introduce the general approach of configuring EAI patterns and present concrete properties of selected patterns. In Section 4 we show, by example, how an integration solution designed with those EAI patterns can be translated into a specific implementation technology, and we present a tool enabling the framework. The paper concludes with an outlook in Section 5.
2 EAI Patterns and MDA

Regarding the three levels of EAI (see Figure 1), no standard exists to model and design EAI scenarios. EAI patterns [12] are a way of documenting the architecture of enterprise application systems at a higher abstraction level. These patterns are intended for use by analysts or systems architects to draw IT landscapes in a standardized way: they are pieces of advice for integrating various systems and documentation explaining the underlying architecture while leaving out specific technical details. At this level nothing is said about how the visual representations of the patterns can be implemented. Currently, the realization is done solely manually, which underlines the lack of a methodology or tool. This situation has been neglected in research and has not been studied yet. Nevertheless, comparable work exists in several other areas. In the area of Workflow Patterns [19] it has been shown that such patterns can be mapped to BPEL [13, 16, 17]. For Service Interaction Patterns [4], the authors suggest how to represent the patterns with the help of WS-* [20] and BPEL [1] technology [3]. With respect to EAI patterns, first implementation suggestions exist that provide pattern support in an Enterprise Service Bus (ESB) [6] in a non-systematic manner. Right now there are two proposals: (i) within the open source ESB Mule (http://mule.codehaus.org/), and (ii) within the routing engine Apache Camel (http://activemq.apache.org/camel/) on top of Apache ActiveMQ. However, none of these approaches offers a coherent method for getting from the design down to the technical details. Actually, the (automatic) generation of executable code from EAI patterns has not been addressed at all; there is no existing work on the representation of patterns and their configuration, and no existing tool to support such a methodology. Considering the illustration of EAI patterns in Figure 1, by default based on the PaF architecture, one can recognize that the structure is comparable to graphs in which nodes are connected via edges: in EAI patterns, filters are working units representing nodes and channels are data links between two filters representing edges. Such a graph-oriented notation is comparable to workflow graphs, with nodes standing for activities and control links connecting the nodes. Based on this similarity, a PaF architecture can be viewed as a workflow in which data flow and control flow are identical. Nevertheless, both architectural styles have fundamental differences [18]. The main distinction lies in the support of instances. In workflow based systems an incoming request message starts a new
instance of a business process. Hence, messages are associated with a particular workflow instance, in contrast to PaF, where messages traverse the filters independent of any notion of an instance: when a filter receives a message it processes it, forwards it, and immediately processes the next incoming message. Therefore a message is not connected to an instance of a filter and vice versa. But the concept of an instance is very important in production environments, enabling monitoring, auditing, repair, etc. [14]. In an EAI scenario the implementation layer is realized by legacy applications, standard applications, or roll-your-own applications available at an enterprise. Often, the interfaces of those applications are described using some Interface Definition Language (IDL), allowing the integration layer to integrate the systems in an architected manner (e.g. based on the J2EE Connector Architecture, http://java.sun.com/j2ee/connector/). Nowadays those interfaces are described using Web Service technology (WS-*) [20], which is the latest integration technology. The IDL in the WS-* stack is the Web Service Description Language (WSDL [7]). In a Web Service environment, BPEL (Business Process Execution Language [1]) is the standard for describing and executing business processes; BPEL processes reference the WSDL descriptions of the enterprise applications to be integrated.
Fig. 2. EAI patterns transformed into executable workflows
In MDA the so-called Platform Independent Model (PIM) describes a solution to a problem in a technology independent manner. At PIM level a system, thus, is described platform neutral. The PIM can be translated into a Platform Specific Model (PSM) with a transformation logic based on a Platform Model (PM) as well as on Marks which configures the generation of particular code. In case of the three levels modelling EAI solutions (see Figure 1) the analogy to MDA is obvious: EAI patterns are at the level of a PIM, BPEL as orchestration language and Web Services as application interfaces reflect the level of a PSM (see Figure 2), and the gap between those levels can be filled if an appropriate generation algorithm can be developed which automatically maps from the non-executable PIM into an 5http://java.sun.com/j2ee/connector/
executable PSM. The transformation algorithm needs configuration guiding the generation of appropriate code. Especially in the case of EAI patterns as the PIM, we need further information on how the utilized patterns are intended to be used. Therefore, we introduce parameters for each pattern as well as for the whole system to support the transformation algorithm and to enhance the PIM with information about the (technical) execution environment. Moreover, these parameters enable reusability when developing different algorithms for various target technologies.
3 Parameterization of EAI Patterns

Current approaches (see Section 2) lack configurability of patterns. With configured EAI patterns we introduce a new way to develop a common basis for various transformation algorithms targeting different technologies: this configuration is used by the transformation algorithm to generate platform specific code. We call the configuration properties of individual types of patterns parameters. Evaluating EAI patterns, we identified four categories of properties which form the parameters and describe the patterns in a more specific way. The categories are as follows (see Figure 3):
- Input: This category describes the number of incoming (request) messages.
- Output: This category describes the number of outgoing (request) messages.
- Characteristics: This is the main category. Its parameters describe the detailed behavior of each type of EAI pattern, i.e. different choices and settings lead to different characteristics of the platform specific code.
- Control: This category describes the format of control messages which dynamically lead to new behavior of a pattern, in terms of the manageability of the pattern implementation.

We choose a simple graphical notation to visualize these categories of parameters. The characteristics category has the main impact on the transformation algorithm: different settings may lead to different results of the transformation because the algorithm generates different code depending on the settings. This will be discussed in Section 4.
Fig. 3. Generic visual representation of EAI patterns parameters
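To make the four categories concrete, one possible representation of a configured pattern is sketched below; this data structure is our own illustration and not prescribed by the framework.

    // Illustrative container for the four parameter categories of a configured
    // EAI pattern; all names are invented for this sketch.
    import java.util.Map;

    public class PatternConfiguration {
        private final String patternType;                   // e.g. "ContentBasedRouter"
        private final int inputMessages;                    // Input: number of incoming messages
        private final int outputMessages;                   // Output: number of outgoing messages
        private final Map<String, String> characteristics;  // Characteristics: behavior settings
        private final String controlMessageFormat;          // Control: format of control messages

        public PatternConfiguration(String patternType, int inputMessages, int outputMessages,
                                    Map<String, String> characteristics, String controlMessageFormat) {
            this.patternType = patternType;
            this.inputMessages = inputMessages;
            this.outputMessages = outputMessages;
            this.characteristics = characteristics;
            this.controlMessageFormat = controlMessageFormat;
        }

        public Map<String, String> getCharacteristics() { return characteristics; }
    }

A content-based router from Section 3.4, for instance, would then carry its routing logic and its optional "else" channel among the characteristics.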
Next, we describe the parameters of some patterns. As an example we show one pattern from each category of EAI patterns described in [12], namely Message Construction, Messaging Endpoints, Messaging Channels, Message Routing and Message Transformation. We describe in detail how each of the selected patterns can be expressed in a parameterized form. The parameterized representation of other patterns can be found in [8].

3.1 Message Construction
Message Construction patterns suggest ways in which applications (i.e. filters) that use Message Channels can actually exchange a piece of information. The Message pattern, the most prominent one, describes how to structure information in the form of a message consisting of a header and a body part. The header contains information relevant for message delivery (e.g. the destination address) or processing (e.g. security tokens) and is intended to be interpreted by the communication infrastructure. The body of a message contains the actual payload that is to be delivered to a receiver. Since the header part is used by the transmitting middleware and is typically set by the sending application or the middleware itself, we do not reflect that part in our parameters: parameters only describe the representation of the payload. A message in the system is not a processing unit; thus it has no input, no output, and no control parameters. The message is characterized only by the format of its content, which is therefore the only parameter describing this pattern.
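A minimal sketch of such a message, assuming the header/body split described above (field names are ours):

    // Sketch of the Message pattern: a middleware-interpreted header plus payload.
    import java.util.HashMap;
    import java.util.Map;

    public class Message {
        private final Map<String, String> header = new HashMap<>(); // e.g. destination, security token
        private final byte[] body;     // the actual payload for the receiver
        private final String format;   // the pattern's only parameter, e.g. "XML"

        public Message(byte[] body, String format) {
            this.body = body;
            this.format = format;
        }

        public void setHeader(String key, String value) { header.put(key, value); }
        public String getFormat() { return format; }
        public byte[] getBody() { return body; }
    }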
3.2 Messaging Endpoints

Messaging Endpoint patterns describe how applications are connected to messaging systems and thus how applications send and consume messages. Message reception through message endpoints can be further detailed by the way messages are received: in the case of a Polling Consumer, the recipient actively polls the communication infrastructure for data items to consume. In contrast, an Event-driven Consumer is notified by the communication infrastructure as soon as a new data item becomes available. In the case of a Selective Consumer, the pattern can be described by the kind of message the consumer will receive through the connected filter. The consumer can further be parameterized by the selection criteria which select messages based on their content and thus filter the messages transmitted to the consumer. These criteria can be simple key/value pairs, ranges of values, or even complex selection logic (e.g. sophisticated XPath queries in the case of XML-based message content).
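In JMS terms (one possible concrete realization, not the one prescribed by the paper), the three consumer variants map onto the standard API roughly as follows; queue and session setup are omitted and the selector string is a made-up example:

    // Sketch: the three consumer variants expressed with the standard JMS API.
    import javax.jms.*;

    public class ConsumerVariants {
        void demo(Session session, Queue queue) throws JMSException {
            // Polling Consumer: the recipient actively asks for the next message
            MessageConsumer polling = session.createConsumer(queue);
            Message next = polling.receive(1000);  // wait up to one second

            // Event-driven Consumer: the infrastructure notifies the recipient
            MessageConsumer eventDriven = session.createConsumer(queue);
            eventDriven.setMessageListener(msg -> System.out.println("received " + msg));

            // Selective Consumer: a selector filters on simple key/value criteria
            MessageConsumer selective = session.createConsumer(queue, "kind = 'shoe'");
        }
    }

JMS selectors cover the simple key/value case; content-based selection such as XPath queries would have to be implemented in the filter itself.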
3.3 Messaging Channels

Messaging Channels are used to establish connections between applications and therefore cover the transfer aspects of a messaging-based application. As mentioned earlier, each channel is associated with an assumption about the particular "sort of messages" communicated over it. Applications choose a channel to
communicate messages based on the expectation of the type of messages that run over that channel. There are a number of parameters describing all kinds of message channels, independent of the concrete occurrence of the pattern (e.g. a Data Type Channel). Common to all channel patterns is that only one message is put into the channel; the same does not hold for the output of a channel, e.g. in the case of a Publish-Subscribe Channel possibly more than one message leaves the channel. A channel can furthermore be characterized by the Quality of Service it has to provide: (i) reliable (yes/no), (ii) secure (kind of security, security token, security algorithm used), and (iii) persistent delivery (yes/no). Moreover, it can be important to annotate the logical address of the channel, the maximum size of a single message, and the buffer size for messages to be temporarily stored. We also have to define whether a Dead Letter Channel is associated with the channel and under which circumstances (e.g. time-out, buffer size exceeded, ...) messages are forwarded to it. The Data Type Channel uses all of the parameters mentioned; additionally, the pattern is characterized by one input and one output message as well as by the kind of message which can be transmitted through the channel.
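Collected in one place, these channel parameters could be captured by a configuration object along the following lines (field names and defaults are our assumptions):

    // Illustrative configuration object for the channel parameters listed above.
    public class ChannelParameters {
        boolean reliable;             // (i) reliable delivery yes/no
        String securityKind;          // (ii) kind of security ("" = none)
        String securityToken;         //      security token, if any
        String securityAlgorithm;     //      algorithm used, if any
        boolean persistentDelivery;   // (iii) persistent delivery yes/no
        String logicalAddress;        // logical address of the channel
        int maxMessageSize;           // maximum size of a single message
        int bufferSize;               // messages buffered temporarily
        String deadLetterChannel;     // associated Dead Letter Channel (null = none)
        String deadLetterCondition;   // e.g. "time-out" or "buffer size exceeded"
        String messageType;           // Data Type Channel: the admissible message type
    }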
3.4 Message Routing

In order to decouple a message source from its ultimate destination, message-oriented middleware employs Routers to guide messages through a network of channels. Routers not only direct messages through the system; they also split, aggregate, or rearrange messages while these traverse the middleware. Routers are messaging clients that forward messages between channels and decide, e.g. based on content or application context, to which channel a message or parts of it should be forwarded.
Fig. 4. Parameters of a Content-based Router
Fig. 5. Parameters of a Message Translator
The Content-based Router pattern (see Figure 4) is described by one message received by the pattern and one (the same) message as its output. In contrast to most patterns, a content-based router has more than one exit (channel); however, only one exit is activated when the router fires. Furthermore, a parameter describes the routing logic, which specifies the outgoing channel depending on the content of the incoming message. The definition optionally includes an "else" channel used if no matching channel can be found within the
routing logic. A possible alternative to specifying the routing rules directly is to utilize an external service. In this case we have to define the address of the external service; the result of the service call indicates the channel to which the message has to be transmitted.

The Message Transformation patterns help to reduce the coupling between applications introduced by the message format when messaging technology is used to integrate applications which do not share a common data format. In such cases it is typically required to transform messages during their transmission from one application to another. This can be accomplished by using, e.g., the Message Translator pattern (see Figure 5). This pattern is characterized by one input and one output message, and by the types of the messages going in and out, respectively. Moreover, the transformation logic has to be defined. This may range from a simple mapping of one part of the incoming message to one part of the outgoing message up to a complex transformation algorithm (e.g. a complicated XSLT stylesheet in the case of an XML message). If the capabilities of the transformation logic do not meet the requirements, an external service can be used to achieve the desired transformation.
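As a minimal sketch of such a translator, the following Java class applies a configured XSLT stylesheet to an incoming XML message using the standard javax.xml.transform API. The class name and constructor are assumptions made for illustration, not the framework's actual implementation.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/** Minimal Message Translator: applies a configured XSLT stylesheet
 *  to an incoming XML message and returns the transformed message. */
public class MessageTranslator {
    private final Transformer transformer;

    public MessageTranslator(String xsltStylesheet) throws TransformerException {
        transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsltStylesheet)));
    }

    public String translate(String inputMessage) throws TransformerException {
        StringWriter out = new StringWriter();
        transformer.transform(
                new StreamSource(new StringReader(inputMessage)),
                new StreamResult(out));
        return out.toString();
    }
}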
4 Model Transformation
In this section we discuss the mapping of configured EAI patterns to BPEL processes. Consider the following example (see Figure 6), where we transform a parameterized EAI solution. The patterns used in the scenario are those discussed in Section 3. We take this model as a basis and fill the parameters with appropriate values to enable the transformation algorithm to generate platform-specific code. In our case we generate a BPEL process and a corresponding WSDL file, which can be deployed onto an appropriate workflow engine (e.g. ActiveBPEL).
Fig. 6. Scenario of the example mapping
The example illustrating the functionality of our framework is a simple order scenario where a customer sends an order for either a pair of shoes or jewels to a system. The system sends a delivery instruction to a shoe store or a jeweler's shop depending on the content of the customer order. This decision is accomplished by a routing node. As the incoming message format differs from the delivery request
message format, we need a transformation between both formats (i.e. a message translator). All channels in the scenario are data type channels, each with one input and one output and the characteristic "message type"; no further parameters are set. Channels A, B, and C have the customer request message as data type, channel D has the shoe store message type, and channel E the jeweler's shop message type. The customer request message consists of four parts: (i) the kind of article (shoe or jewel), (ii) the customer address, (iii) the name of the article, and (iv) the price of the article. The message types of the shoe and jeweler requests differ only in that the part "kind" is left out. All message types are described using XML Schema. The content-based router offers two exit channels; no "else" channel is set. The outgoing channel is determined based on the content of the part "kind" of the incoming message: the message path ending at the shoe store is selected if the value is "shoe"; if the value is equal to "jewel", the other exit is chosen. Both message translators use an XSL stylesheet to describe the transformation logic, which is a simple mapping of values from one type to another. The integration scenario ends in sending the shoe request message to the shoe store and the jeweler request message to the jeweler's shop service, respectively. To finish the parameterization, the endpoints of each used service (jeweler's shop and shoe store) have to be defined. Now the parameterized EAI patterns are ready to be used for generating BPEL.
4.1 Mapping to BPEL
Each entity of the EAI patterns diagram is transformed to a BPEL snippet. The snippets are connected according to the message flow, resulting in one BPEL process representing the integration scenario. The incoming request sent out by a message endpoint is received via a Receive activity. The content-based router corresponds to an If activity in BPEL; the routing rules are evaluated using XPath expressions (e.g. "sendOrderRequest/kind = 'shoe'") as conditions. Each message translator is represented by an Assign activity using a "doXslTransform" operation, wherein an XSL stylesheet is applied to transform the messages. The message endpoints "shoe store" and "jeweler's shop" are mapped to Invoke activities. All messages are represented by variables; the message types are described with XML Schema. The single BPEL snippets are glued together via Links to form a complete process; hence, those links represent the channels in the integration scenario. Besides the BPEL process, all additionally needed artifacts are generated automatically by the transformation algorithm (e.g. a WSDL file describing the interface of the process, Partner Links used by the process, namespace definitions, imports, and so on).
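The following Java sketch indicates how a configured content-based router could be turned into such an If snippet. It is a simplified stand-in for the actual transformation algorithm described above; the partner link and operation names are invented for the order example and are not part of the tool's real output.

import java.util.LinkedHashMap;
import java.util.Map;

/** Simplified generator mapping a content-based router configuration
 *  (XPath condition -> target invocation) onto a BPEL If activity. */
public class ContentBasedRouterMapper {

    public String toBpelIf(Map<String, String> routingRules) {
        StringBuilder bpel = new StringBuilder("<if>\n");
        boolean first = true;
        for (Map.Entry<String, String> rule : routingRules.entrySet()) {
            if (!first) bpel.append("<elseif>\n");
            bpel.append("  <condition>").append(rule.getKey()).append("</condition>\n");
            bpel.append("  ").append(rule.getValue()).append("\n");
            if (!first) bpel.append("</elseif>\n");
            first = false;
        }
        return bpel.append("</if>").toString();
    }

    public static void main(String[] args) {
        // Routing rules of the order scenario; names are illustrative only.
        Map<String, String> rules = new LinkedHashMap<>();
        rules.put("$sendOrderRequest/kind = 'shoe'",
                  "<invoke partnerLink=\"shoeStore\" operation=\"deliver\"/>");
        rules.put("$sendOrderRequest/kind = 'jewel'",
                  "<invoke partnerLink=\"jewelersShop\" operation=\"deliver\"/>");
        System.out.println(new ContentBasedRouterMapper().toBpelIf(rules));
    }
}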
Fig. 7. The Parameterization Tool
A tool was developed to allow the parameterization of EAI patterns in a comfortable, visual manner, including a transformation algorithm generating platform-specific code. The tool is available as a plug-in for the Eclipse IDE 7. Figure 7 presents a screenshot of this tool. It provides a toolbar on the right including all available patterns, and a parameter configuration facility for each pattern and for the overall system (in the "Properties" tab). It automatically checks whether the connected patterns fit together by verifying, for instance, that the message type transmitted over a channel matches the message type received by a filter. Moreover, it verifies that all artifacts needed for generating BPEL (e.g. XSD and WSDL files) are imported into the system. Thus, the tool only generates BPEL code if all properties are properly set. Figure 8 shows the generated BPEL of the presented scenario. To ease error detection for the user, the tool uses the problems view of Eclipse. The current implementation for generating BPEL code from EAI patterns supports only a subset of all EAI patterns: all basic patterns are represented in our implementation of the framework, whereas composed patterns (e.g. Message Processor) are not yet fully supported. Some EAI patterns and their corresponding parameterizations are very complex and cannot be transformed into BPEL in a manner that produces acceptable BPEL code. Therefore, we have to provide a special kind of service hiding this complexity. Until now, the framework depends on those services; that means these special, external services are not generated automatically within the framework.
7 http://www.eclipse.org
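A minimal sketch of the kind of static compatibility check described above might look as follows in Java. The class and its fields are assumptions made for illustration, not the tool's actual API; they only capture the idea that the message type leaving a channel must match the type the connected consumer expects.

/** Sketch of a connection check: compares the declared message type of an
 *  outgoing pattern port with the type expected by the connected port. */
public class ConnectionValidator {

    public static class PatternPort {
        final String owner;       // pattern or channel name, used in error messages
        final String messageType; // XML Schema type declared for the port
        public PatternPort(String owner, String messageType) {
            this.owner = owner;
            this.messageType = messageType;
        }
    }

    /** Returns null if the connection is valid, otherwise a problem message
     *  that a tool could show, e.g., in the Eclipse problems view. */
    public String check(PatternPort out, PatternPort in) {
        if (!out.messageType.equals(in.messageType)) {
            return "Type mismatch: " + out.owner + " emits " + out.messageType
                 + " but " + in.owner + " expects " + in.messageType;
        }
        return null;
    }
}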
Fig. 8. Generated BPEL file (in Active BPEL Designer notation)
5 Conclusions and Outlook
In this paper we presented a compact framework offering a configuration facility for EAI patterns and a transformation algorithm to generate platform-specific code (in this case BPEL). We described a tool implementing our framework that supports a subset of the EAI patterns and generates a valid BPEL file with an additional WSDL description. The architect still needs knowledge of the underlying system (like WSDL messages, port types, and operations); however, with the framework we reduce the gap between the architect's view on a system and the technology used. In our future work we will target improvements in the development of EAI pattern parameters with the help of decision trees [21]. In order to support the whole catalogue of EAI patterns, the implementation of System Management patterns and composed patterns (like Message Broker) has to be completed. So far, our MDA approach only supports generating BPEL as platform-specific code. Investigations of other targets apart from workflow-based
systems are needed, for instance a mapping of EAI patterns onto a generic Enterprise Service Bus (ESB). We have shown that a model-driven architecture approach can be used to automatically generate executable code from EAI patterns. Parameterization of patterns facilitates the reusability of code representing the patterns. We demonstrated a method for transforming the non-executable documentation of EAI solutions into executable artifacts that can be deployed on a BPEL workflow engine.
References
[1] Web Services Business Process Execution Language Version 2.0 - Committee Specification. Technical report, OASIS, Jan 2007.
[2] C. Alexander. The Timeless Way of Building. Oxford University Press, 1979.
[3] A. Barros, M. Dumas, and A. ter Hofstede. Service Interaction Patterns. Proceedings of the 3rd International Conference on Business Process Management, pages 302-318, September 2005.
[4] A. Barros, M. Dumas, and A. ter Hofstede. Service Interaction Patterns: Towards a Reference Framework for Service-based Business Process Interconnection. Technical Report FIT-TR-2005-02, Faculty of Information Technology, Queensland University of Technology, Brisbane, Australia, March 2005.
[5] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. Pattern-Oriented Software Architecture. Wiley, 1996.
[6] D. Chappell. Enterprise Service Bus: Theory in Practice. O'Reilly Media, January 2004.
[7] E. Christensen, F. Curbera, G. Meredith, and S. Weerawarana. Web Services Description Language (WSDL) 1.1. Technical report, Mar 2001.
[8] B. Druckenmüller. Parametrisierung von EAI Patterns. Master's thesis, Universität Stuttgart, 2007.
[9] M. Fowler. Patterns of Enterprise Application Architecture. Addison-Wesley Professional, 2002.
[10] D. Frankel. Model-Driven Architecture: Applying MDA to Enterprise Computing. Wiley, January 2003.
[11] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Longman, Boston, MA, USA, 1995.
[12] G. Hohpe, B. Woolf, and K. Brown. Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley Professional, 2003.
[13] V. Kramberg. Pattern-based Evaluation of IBM WebSphere BPEL. Master's thesis, Universität Stuttgart, 2006.
[14] F. Leymann and D. Roller. Production Workflow: Concepts and Techniques. Prentice Hall PTR, September 1999.
[15] R. Monson-Haefel and D. Chappell. Java Message Service. O'Reilly, 2000.
[16] N. Mulyar. Pattern-based Evaluation of Oracle-BPEL (v.10.1.2). Technical report, Department of Technology Management, Eindhoven University of Technology, 2005.
[17] T. Scheibler and F. Leymann. Realizing Enterprise Integration Patterns in WebSphere. Technical Report 2005/09, Universität Stuttgart, October 2005.
[18] J. Trautvetter. Analyse der 'Pipes and Filter' Architektur gegenüber instanzbasierten Ansätzen bei Workflows. Master's thesis, Universität Stuttgart, 2006.
[19] W. M. P. van der Aalst, A. ter Hofstede, B. Kiepuszewski, and A. P. Barros. Workflow Patterns. Distributed and Parallel Databases, 14(1):5-51, July 2003.
[20] S. Weerawarana, F. Curbera, F. Leymann, T. Storey, and D. Ferguson. Web Services Platform Architecture. Prentice Hall, 2005.
[21] O. Zimmermann, J. Grundler, S. Tai, and F. Leymann. Architectural Decisions and Patterns for Transactional Workflows in SOA. Proceedings of the 5th International Conference on Service-Oriented Computing (ICSOC 2007), September 2007.
Experiences of Tool Integration: Development and Validation
J-P. Pesola, J. Eskeli, P. Parviainen, R. Kommeren and M. Gramza
Abstract. Generally in software development, there is a need to link the development work products with each other, e.g. requirements with the corresponding design artefacts, the resulting software and the associated test cases. This enables, for instance, efficient change impact analysis and reporting facilities during the different phases of the software development life cycle. Establishing and maintaining these links manually is a laborious and error-prone task, so tool support is needed. This paper describes a configurable tool integration solution (the Merlin ToolChain) that integrates project management, requirements management, configuration management and testing tools. The paper introduces the architecture of the ToolChain and describes the development and validation activities carried out. Experiences from a real-life industrial case showed that the ToolChain works and is useful in collaborative software development. Keywords: Industrial case studies and demonstrators of interoperability, Tools for interoperability, Engineering interoperable systems
1 Introduction
The software size in embedded systems grows exponentially; in consumer electronics, for instance, it doubles every one to two years. Given the growing size and complexity of embedded systems, companies are not able to develop all the required functionality by themselves. As a result, suppliers specialize in specific functionality or specific skills which they can sell to others. This is clearly visible in the strongly growing number of outsourcing arrangements in the past years. Collaborative engineering of embedded systems has become a fact of life, and there is no way back anymore; companies have
already outsourced large parts of their development to other companies, with the result that the related skills are no longer available in their own organisations. Instead, the companies need to manage a complex situation with many partners, subcontractors, suppliers, software platforms and so on. Poor interoperability of development tools is one of the major problems in embedded systems development today. Currently there are many product development tools, often with dedicated, specific strengths. These tools need to be connected, and these connections managed, in order to enable traceability between development phases as well as between collaboration partners, to ensure consistency of the work products as the development progresses, and to provide visibility beyond partner borders. However, the existing tools are hardly integrated and poorly interoperable. At worst, manual work is required to transfer data between tools. Industrial experiences indicate that development performance can be increased by tens of percent if development is optimally supported by well-integrated tools. This is true for product development within one company, but even more for multi-company development, where many different tools are used. Currently the main available tool integration solutions are the integrated tool sets offered by the large tool vendors. These bundled tool chains solve the problems only partly and create dependency on the particular tool vendor. For instance, the introduction of a good new tool from another vendor impedes the total development process dramatically, due to the poor integrability of single tools. This paper describes the further development of the tool integration environment called the 'Merlin ToolChain' to support collaborative software development. The first version was introduced at I-ESA 2007 [1]. The development of this tool integration has been carried out in the Merlin project 8, a three-year research project within the ITEA 9 framework. The paper is organised as follows: Chapter 2 describes related work on existing tool integration approaches. Chapter 3 introduces the general concepts for the tool integration in the ToolChain and the first prototype of the ToolChain. The further development and generalisation of the tool chain is discussed in Chapter 4. Chapter 5 presents the results and findings of the ToolChain validation in an industrial project. Finally, conclusions and thoughts for future research are discussed in Chapter 6.
2 Background: Existing Tool Integration Solutions
In this chapter we list existing approaches to tool integration, explain what is meant by application lifecycle management (ALM), briefly discuss existing ALM solutions and, finally, address the general weaknesses of existing tool integrations.
8 Merlin project home page: http://www.merlinproject.org
9 ITEA home page: http://www.itea-office.org
2.1 Tool Integration Approaches
The current approaches to tool integration are described in the following table (Table 1). The descriptions are based on [2, 3, 4, 5].
Table 1. Current approaches to tool integration

Piecemeal: Tools are applied to achieve improvements in specific life cycle phases, and migration between the development phases is done manually. This approach focuses on specific phases and lacks a view of the overall improvement across phases.

Single-vendor: This approach attempts to improve all life cycle phases with a full-lifecycle product from a single vendor. It requires selecting a vendor with the technical expertise required to effectively support all life cycle phases, creating a risk of locking the organization into a costly, proprietary solution.

Best-in-class: This approach attempts to integrate the best tools of the respective domains, typically from different vendors, for specific life cycle phases. Best-in-class has two variations:
- Point-to-point integrations, or one-to-one integrations, are built specifically between two defined tools. They are adequate only for small numbers of integration endpoints and typically create more complexity in developing and managing tools than they solve. This is the most common type of interface between systems engineering tools [6].
- Framework-based integrations attempt to classify tools and provide integration between tool classes based on vendor-neutral interfaces and mechanisms. The framework-based approach provides an integration environment and a common look and feel without limiting the choice of tools.
Another relevant topic regarding tool integration is application lifecycle management (ALM), meaning the coordination of development life-cycle activities (including requirements, modelling, development, build, and testing) through: enforcement of processes that span these activities; management of relationships between development artefacts used or produced by these activities; and reporting on the progress of the development effort as a whole [7]. ALM operates with the artefacts produced and used during the lifecycle of SW products by providing visibility into the status of the evolving SW product. A variety of solutions provide mechanisms to represent different types of traceability links between development artefacts. However, the interpretation of the meaning of such links is often left to the user.
2.2 Existing Implementations
The existing implementations of tool integration include company-specific tool integrations (piecemeal or point-to-point) that are not publicly available (and thus not discussed in this chapter), various frameworks, and single-vendor solutions for ALM. In addition, development environments offering a complete product development environment, including integrated development tools, have appeared. Examples of these implementations are presented in Table 2.
Table 2. Examples of tool integration implementations

Borland Open ALM (ALM solution): Several Borland tools (e.g., CaliberRM and Caliber DefineIT, Together, SilkTest, StarTeam) are integrated with each other. CaliberRM also integrates with MS VSTS and Eclipse.

IBM ALM solutions (ALM solution): The IBM tool portfolio covers all development lifecycle stages, integrating several IBM Rational tools (e.g., RequisitePro, Rational Rose, Rational Software Architect, Rational Functional Tester, Rational Performance Tester and ClearCase CM).

Microsoft Visual Studio Team System (VSTS) (ALM solution): A development platform that supports various phases of the SW development lifecycle. The backbone is the Team Foundation Server, the central point of contact for project and process management. Process guidance is provided via the Microsoft Solutions Framework (MSF).

Eclipse (framework): Eclipse provides mechanisms to use, and rules to follow, to integrate tools. The Eclipse Platform offers good support for extending its functionality via plug-ins. Eclipse simplifies tool integration by allowing tools to integrate with the platform instead of each other [4, 8].

MODELBUS (framework): MODELBUS offers a tool-API-independent layer of abstraction for exchanging models using Eclipse EMF metamodelling technology. The aim is to simplify access to tool data in a distributed environment based on SOAP middleware. MODELBUS does not offer support for, e.g., management of traceability links across tools.

ALF (framework): The Application Integration Framework (ALF) is part of the Eclipse Foundation. The project aims to provide the logical definition of the overall interoperability business process. This technology handles the exchange of information, the business logic governing the sequencing of tools in support of the application lifecycle process, and the routing of significant events as tools interact.

IBM Jazz (framework): IBM Jazz attempts to build a scalable, extensible team-collaboration platform for seamlessly integrating tasks across the software lifecycle. Jazz is based on Eclipse and is a kind of middleware layer for linking development assets.

Model-based tool management and integration platform (framework): The platform supports model integration, where models defined in different tools for different aspects of the same system are related such that they may share and exchange data. The integration platform also enables model management functionalities on a fine-grained level. The approach is based on the Matrix PDM tool and stores a copy of the data in the tool, which then creates consistency problems [9].

Fujaba (framework): A mechanism to integrate different tools on the meta-model level. A consistency management system is included, especially for the integration of different or enhanced meta-models. The Meta-Model Extension and Meta-Model Integration patterns enable the integration of data in different scenarios on the meta-model level [6].

CollabNet (development environment): CollabNet is a collaborative development environment where developers and IT project managers collaborate online through CollabNet.
As can be seen from Table 2, many tool integration frameworks and solutions already exist. However, these solutions do not solve the integration problem optimally. For example, the existing ALM solutions are single-vendor tool integrations creating dependency on the vendor, regarding for example version support and future development of the tools. Single-vendor solutions also limit the choice of tools; companies cannot choose the individual tools that would fit the situation and specific purpose best, but have to consider the development tools as a whole. This results in having to use a set of on-average best tools available for the situation, whereas a free selection of individual tools would provide better support when well integrated. The same is true for the development environments. Tool integration frameworks, on the other hand, are often generic, providing flexibility for the integration but requiring the integration to be planned and built from scratch each time, which results in a large amount of effort. The Merlin ToolChain uses existing knowledge and solutions (e.g., Eclipse) for tool chain integration and tries to come up with new ways to solve problems such as data replication and the manual transfer of data between tools. These aspects are discussed in more detail in the following chapters.
3 Tool Integration Concepts
This chapter describes the concepts of the Merlin ToolChain, including the data model and the architecture of the ToolChain.

3.1 Merlin ToolChain Data Model
The data model of the ToolChain presents how the development artefacts from the different tools map to each other. As stated before, the ToolChain integrates four different types of tools: project management (PM), configuration management (CM), test management (TM), and requirements management (RM). The following figure (Fig. 1) shows simplified models of example tools and how they are traced to each other via unique identifiers in the ToolChain.
Fig. 1. Data model of Merlin ToolChain
The data model is divided into four parts according to the tools integrated in the ToolChain. For each tool, the model presents the entities that are connected to another entity in a certain relationship in the ToolChain. This linking of the connections is based on the entities' unique identifiers, which can be, for example, id strings. Based on these unique identifiers, traceability links are created and stored into the traceability database.
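A minimal sketch of this idea in Java (using modern Java records, 16+) is given below: artefacts are referenced only by tool type and unique identifier, and a traceability link is a stored pair of such references. The class and method names are illustrative, not the actual Merlin implementation.

import java.util.ArrayList;
import java.util.List;

/** Sketch of the traceability data model: artefacts from the four tool
 *  types are referenced only by their unique identifiers, and a link is a
 *  pair of such identifiers stored in the traceability database. */
public class TraceabilityStore {

    public record ArtefactId(String toolType, String id) {}   // e.g. ("RM", "REQ-17"), hypothetical
    public record TraceLink(ArtefactId from, ArtefactId to) {}

    private final List<TraceLink> links = new ArrayList<>();

    public void link(ArtefactId from, ArtefactId to) {
        links.add(new TraceLink(from, to));
    }

    /** All artefacts traced from the given one, e.g. code files for a requirement. */
    public List<ArtefactId> tracedFrom(ArtefactId a) {
        return links.stream()
                    .filter(l -> l.from().equals(a))
                    .map(TraceLink::to)
                    .toList();
    }
}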
3.2 Merlin ToolChain Architecture
The basic idea is to have several Eclipse plug-ins making it possible to connect the different tools into the same environment. The Merlin plug-in is a kind of "master plug-in"; it is composed of a set of extension points to which the tool plug-ins connect. The Merlin plug-in communicates with tool-specific plug-ins through
tree and table interfaces. Each tool integrated into the ToolChain is connected via the defined interface, which provides the Merlin plug-in with the data from the tool. The Merlin plug-in then delivers the data to the Traceability View plug-in, which visualizes the information and status to the end user. The links between the data from the different tools are stored into a traceability database that is also operated by the Merlin plug-in. The following figure (Fig. 2) introduces the conceptual architecture of the Merlin ToolChain.
Fig. 2. Merlin ToolChain conceptual architecture
This architecture means that the Merlin plug-in does not need to know how a particular plug-in obtains the data it needs, so there are no restrictions on how tool-specific plug-ins are designed and implemented. A plug-in developer can create a plug-in for a new tool as they wish and afterwards integrate the new plug-in into the Merlin ToolChain by implementing the supplied interfaces (a sketch of such an interface follows below). The main idea of the Merlin ToolChain is to integrate different tools together and bring only the most essential information from every tool to the shared view. This essential information was defined in co-operation with the Merlin industrial partners. Also, for ease of use, a similar look and feel was used for the views.
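The following Java interface is a hedged sketch of what such a supplied interface could look like. The method names and the string-based row representation are assumptions made for illustration, not the actual Merlin API.

import java.util.List;

/** Sketch of an interface a tool-specific plug-in could implement so that
 *  the master plug-in can obtain its data for the shared views. */
public interface ToolPlugin {

    /** Tool category: "PM", "RM", "CM" or "TM". */
    String toolType();

    /** Rows for the shared table view: each row starts with the artefact's
     *  unique identifier, followed by display columns. */
    List<String[]> tableRows();

    /** Unique identifiers of the artefacts this tool contributes,
     *  used to create traceability links. */
    List<String> artefactIds();
}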
3.3 The First Prototype
This chapter briefly describes the first prototype of the Merlin ToolChain. This work was done in close co-operation within the Merlin project consortium, including valuable input from the industrial partners (e.g., Philips) on the requirements and potential uses of the ToolChain. The development and the resulting first prototype itself are described in detail in [1]. The tools integrated in the first prototype of the Merlin ToolChain were IBM Rational RequisitePro for requirements management and, for project management,
the Philips Project Assist Tool (PAT), with Telelogic Synergy/CM for configuration management and Philips SoftFab for test management. The development of the Merlin ToolChain started from learning how to create and implement Eclipse plug-ins. The first tool integrated into the Eclipse environment was IBM Rational RequisitePro; later on, the ToolChain was expanded into a set of separate tool-specific plug-ins for the other above-mentioned tools. After the tool-specific plug-ins were ready, the integration between these tools was created. The goal was to increase traceability during the project life cycle. Items from the requirements management tool (RequisitePro) were selected to be "the integration point" where the other tools' traceability links should be created. This creation of links was handled differently and separately for each tool: creating a link, for example, between project tasks (from PAT) and requirements (from RequisitePro) was done in a plug-in built especially for linking these items, while code files (from Synergy/CM) were linked with these requirements in another plug-in. Thus, the first prototype did not yet follow the architecture described in Fig. 2. The first prototype of the ToolChain proved quite promising. We had several demonstrations and try-outs with the ToolChain and received encouraging comments and experiences. However, the ToolChain still needed some improvements to make it more valuable and useful in real-life projects, and during the Merlin project it has been further developed, as described in the following chapters.
4 Generalisation of ToolChain
The first prototype proved the concept of tool integration via Eclipse plug-ins; the next goal for the ToolChain development was to create a more generic tool integration solution. The generalisation work continued in close co-operation with the Merlin project consortium; in bi-monthly workshops the status of the development was presented and future development directions were agreed. This work started by developing integrations to alternative tools of the project management (PM), configuration management (CM), requirements management (RM) and test management (TM) tool types. This was done in order to study the differences between plug-ins for different tools and thus to find out the generalisation possibilities for the integration. Another aim was to prove that the ToolChain concept was robust with respect to changing tools: interoperability of tools should be maintained when plugging out a tool and replacing it by another. The following tools were integrated into the tool chain using the same point-to-point method as in the first prototype solution: Telelogic DOORS, OSRMT (Open Source Requirements Management Tool), Open Workbench, Subversion and TestLink. These tools were chosen based both on the industrial partners' input and on the decision to use open source tools. Based on these integrations, the generic interfaces for PM, TM, CM and RM tools to the ToolChain could be specified. This enables the changeability of the tools and also easy integration of other similar tools into the ToolChain.
Currently the ToolChain supports two PM tools (Open Workbench, PAT), three RM tools (DOORS, OSRMT, RequisitePro), two CM tools (Subversion, Synergy/CM) and two TM tools (TestLink and SoftFab). Adding other PM, RM, CM or TM tools to the ToolChain is also easy using the defined API. Using the tool selector feature of the ToolChain, any combination of these tools can be taken into use; however, if the tools are changed later on, the traceability database must be reset and therefore all the traceability data will be lost. Creating a purely open source tool set from the existing integrations in the ToolChain is also possible. However, the ToolChain itself is not yet available as open source, but it will be shared in the near future. The Traceability View (see Fig. 3) gathers all the information from the selected tools. Users can create dependencies between different development artefacts in this view by drag & drop. When new dependencies are created, the data is immediately available to the other ToolChain users, who can then inspect the existing relations and create new ones if necessary.
Fig. 3. The Traceability View, showing requirements, source code related to a task, task status, and test cases related to requirements
The traceability links are created using a drag & drop mechanism; e.g., tracing a code file to a requirement is done by selecting the requirement and then dragging the code files from the CM tool view and dropping them into the code files window in the Traceability View. The dragged items come from the tools' own plug-ins, and the same mechanism works for all existing and new tools. This simplifies adding new tools to the ToolChain significantly; in the first prototype, changing tools and defining relationships between their artefacts had to be done by modifying the source code and the traceability database structure. At the moment, all the plug-ins handle their own user interfaces themselves, but the Merlin plug-in synchronizes the views. Thus, all views are always up to date for all ToolChain users, regardless of whether they have the underlying tool installed on their computer, or where they are located.
5 Experiences of Implementation: Validation of the ToolChain
In this chapter we discuss the ToolChain validation case, which was carried out in cooperation between Philips and VTT. The trial was performed in the OSIB project, realized by Philips Applied Technologies, whose purpose is to provide the Integrated Ambient Experience™ 12 for a new hotel chain 13. The software is designed and developed for several hardware subsystems interconnected through well-defined software interfaces. The subsystems of the OSIB solution are as follows:

- Moodpad: an advanced remote control with a touch screen and hardware buttons. The Moodpad provides the user interface to the hotel room (controlling the light, TV, Venetian blinds, sunscreen curtain and room climate).
- Room Controller: implements the hotel room logic and interacts with the Moodpad, room TV, climate control system and RFID door lock.
- Ambient Experience server: a subsystem functioning as a gateway between the hotel rooms and the external components: the Property Management System of the hotel chain and the remote hotel diagnostics centre.
All the devices communicate with each other via either Ethernet or wireless (Wi-Fi) connections. The project uses an agile way of working, where development is done in increments of two weeks and requirements are selected for each increment together with the customer. The goals for the case from the ToolChain development viewpoint were to evaluate the usability and usefulness of the ToolChain in real-life product development. The aim was to gain experiences from setting up the tool chain in an industrial situation, from its use in practice, and from the adaptability of the selected concepts to the needs appearing during the use of the ToolChain. From the case project viewpoint, the goals for using the tool chain were to improve traceability and visibility of the project progress while not interrupting the product development work.

5.1 Preparation and Setup
The case was the first time that the ToolChain was tried out in a real-life setting, so the technical improvements of the ToolChain during the case were significant. First, the ToolChain was adapted to the Philips environment by integrating the tools they had in use into the ToolChain. The toolset used in the case was the Project Assist Tool (PAT), DOORS, Subversion and SoftFab. In practice, this meant developing a new plug-in for DOORS, adopting the available Subversion plug-in, and updating the PAT and
12 See http://www.medical.philips.com/main/company/ambient_exp/ for a description of the Ambient Experience for medical environments. The concepts mentioned there can be applied in other domains as well, for example in hospitality.
13 More information about the hotel itself and the room technology can be found at http://www.citizenm.com/new-hotel-room-technology.php.
SoftFab plug-ins, as the versions used by the project were not the same as those used in the development of the ToolChain. The installation of the ToolChain in the case project environment was done within a week, during which the ToolChain developers from VTT visited Philips to set it up. Some compatibility problems were encountered at first, but they could be handled quite easily as they were mainly configuration problems. Also, some features considered essential by the case project were added to the ToolChain during the installation. The trial showed that the ToolChain could be adopted and set up for an industrial project environment relatively quickly; only a week was needed for the setup. Considering that the ToolChain integration is only a research prototype, the setup period can be further shortened in the future.

5.2 Experiences from Using the ToolChain
Next, the OSIB project members used the ToolChain in their everyday development work. During this period the OSIB project members wrote down all the encountered problems and ideas for improvement. This feedback was given to the ToolChain developers bi-weekly, and urgent problems were handled immediately. The feedback was also analysed and prioritized in the bi-monthly workshops with the Merlin partners. Examples of the improvements made to the ToolChain based on the feedback include: the development of the Tool Selector that enables selecting the tools that are in use, improved usage of the available space in the user interface, and showing more information to the user. Also, the database was changed to an open source alternative during this development period. The new version of the ToolChain was sent to the industrial partner after it was finalized. In conclusion, during the evaluation period many new features were added to the ToolChain and it became more robust and durable in practical use. The case also proved that the ToolChain works in daily operations in a natural way without complicating things, rather making things easier by providing traceability between development artefacts like requirements and code files.
6 Conclusions and Further Research
In this paper, further developments of a tool integration solution called the Merlin ToolChain were discussed. The Merlin ToolChain provides customizable tool integration for requirements management, configuration management, project management and test management tools. The ToolChain has been evaluated in a real industrial product development project, where it proved to improve the traceability of product data and the visibility of the project progress without disturbing the product development. The ToolChain also proved to be relatively easy to install and take into use. As a result, we conclude that the current version of the ToolChain works and can be useful in collaborative software development, but it should be noted that it is still a research prototype; it has not been extensively tested or used in different situations.
Future improvements of the ToolChain include, for example, developing a robust user rights management / security model, and extending the current traceability model to support direct tracing between any types of artefacts. Also, adding new types of tools (e.g., for change management and design) and developing means to control the integrated tools from the Merlin ToolChain would improve the usefulness of the tool.
Acknowledgements
The authors would like to thank all Merlin 14 research partners for their assistance and cooperation. In particular, the authors would like to thank the industrial partners for their investment of time and support for the study, especially the Philips representatives and the OSIB project members.
References
[1] Kääriäinen, J., Heinonen, S. & Takalo, J. Challenges in collaboration: tool chain enables transparency beyond partner borders. Proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2007), March 28-30, 2007.
[2] Kanwalinder, S. Tool Integration Frameworks - Facts and Fiction. IEEE Proceedings of the National Aerospace and Electronics Conference 2, pp. 750-756, 1993.
[3] Nghiem, A. Web Services Part 6: Models of Integration, 2002. http://www.awprofessional.com/articles/article.asp?p=28713&seqNum=2
[4] Eclipse. http://www.eclipse.org/proposals/eclipse-almiff/main.html
[5] Digital. Framework-Based Environment Design Center, Version 2.0, SPD 56.03.00. http://h18000.www1.hp.com/info/SP5603/SP5603PF.PDF
[6] S. Burmester, H. Giese, et al. "Tool integration at the meta-model level: the Fujaba approach". International Journal on Software Tools for Technology Transfer, Springer, vol. 6, no. 3, pp. 203-218, 2004.
[7] Schwaber, C. The Changing Face of Application Life-Cycle Management. Forrester Research Inc., August 18, 2006.
[8] Amsden, J. Levels Of Integration: Five ways you can integrate with the Eclipse Platform. OTI, 2001. http://www.eclipse.org/articles/index.html
[9] El-khoury, J., Redell, O., Törngren, M. A tool integration platform for multi-disciplinary development. 31st EUROMICRO Conference on Software Engineering and Advanced Applications, 30 Aug.-3 Sept. 2005, pp. 442-449.
14 Merlin is a European co-operation project carried out in the ITEA framework. Merlin web pages: http://www.merlinproject.org; ITEA web pages: http://www.itea-office.org
Interoperability – Network Systems for SMEs
Kai Mertins, Thomas Knothe, Frank-Walter Jäkel
Fraunhofer Institute Production Systems and Design Technology, Pascalstr. 8-9, D-10587 Berlin
{kai.mertins, thomas.knothe, Frank-Walter.Jaekel}@ipk.fraunhofer.de
Abstract. Small and medium-sized enterprises suffer more than others from interoperability challenges. On the one hand, these companies often depend on different larger enterprises that dictate methods, standards and technologies for collaboration. On the other hand, they have to form cooperations and networks in order to fulfil customer requirements, e.g. to provide complete components as known in the car industry. The paper gives an overview of the SME situation regarding Enterprise Interoperability and related research activities. In the second part, a framework and integrated methodological and software service solutions are introduced for tackling SME challenges in establishing and operating cooperations, starting and ending from a business perspective. One part of this portfolio, the extended MO²GO-based process assistant, is explained in more detail. At the end, a research and technological outlook is provided. Keywords: Interoperability in SME, Enterprise Modelling in the Context of Interoperability, Frameworks
1 Introduction
Short-term and flexible exploitation of market opportunities requires close cooperation between different core competencies and their bundling in common value-added chains. This is often limited by the lack of compatibility between IT solutions, but also by organizational and cultural differences between the companies. In this situation, large companies tend to react with the establishment of subsidiaries to better manage specific conditions in different regions around the world, such as laws, rules, mentality etc. An example is the requirement in some countries, such as China, that products can only be sold if a production site exists in the country. Environmental costs and logistics costs are also considered. Especially the large companies can benefit from fewer restrictions in terms of environmental and social
responsibility, which reduces their production costs and increases their competitiveness on the European market. In contrast, small and medium-sized enterprises (SMEs) have to rely much more on cooperation and networking. Unlike large companies, small and medium-sized companies cannot rely on company-wide standards and require far more flexible solutions and interoperability [1] to act in different networks with different partners. Applications for network building and the operation of networks, such as M + E Network [2], [3], VIENTO, and Supply Chain Mapping (www.supply-chain-mapping.de), consider both the short-term benefits as well as the long-term ability of SMEs to operate in different collaborations. This increases the need for coordination and interoperation of IT systems and processes. "Interoperability" is needed to facilitate the technological and organizational requirements for simple and flexible networking with different companies. Some problems linked with the word "interoperability" are:

- Format differences in the processing of orders or supplier data, and thus costs for manual activities
- The requirement of transparency by customers
- Many different supplier portals, marketplaces and customer-specific software
- Support of joint product development processes
- Mastery of a menagerie of different IT systems
- Data quality at the interface with customers and suppliers
- Low IT accessibility; sometimes only a web browser can be expected
- Different certification requirements of the customers
The list can be further expanded and illustrates the variety of aspects which can be summarized under "Enterprise Interoperability". "Enterprise Interoperability" is defined as a field of activity with the aim to improve the manner in which enterprises, by means of Information and Communication Technologies (ICT), interoperate with other enterprises, organisations, or with other business units of the same enterprise, in order to conduct their business [4].
2 Situation in Research and Industry
Potential solutions for the interoperability of companies are provided by the results of European projects like INTEROP-NoE and ATHENA-IP [1]. These projects also addressed the handling of the influence of mutual trust between cooperation partners (e.g. may I approach my cooperation partner's clients or not?) and possible cultural aspects (e.g. different ways of initiating contact in different countries, or different decision-making structures). The cultural aspects of companies and their collaborations were a major issue in the project IKOPA (innovation capability through cooperation and participation) [2]. Within IKOPA, among others, the influence of corporate culture and cooperation on the innovativeness of the company was analyzed and optimized. On the EU level, projects such as DBE-IP (Digital Business Ecosystem) have tackled the cultural issues, and they are also addressed within the roadmap of the Enterprise Interoperability cluster.
3 Solutions Provided to Industry
The Corporate Management (CM) division at Fraunhofer IPK Berlin has developed new concepts and tools for the establishment of interoperability in enterprises in two of the largest EU projects for interoperability as well as in the context of industrial projects. Key points of the CM work are the issues of synchronization and interoperability of companies, with specific emphasis on SMEs [6, 9, 10]. Drawing on further results from projects on enterprise networks and cooperation, in the following a framework of IPK to support the interoperability of companies is outlined and one solution component is presented in more detail.

3.1 Framework for the Improvement of SME Interoperability
A flexible but secure and, above all, inexpensive network of partners allows for significantly higher agility. Implementing such networking means a high investment risk for many companies, which have to adjust their business processes and software to different partners without knowing how much common business activity will actually take place. This requires continuous support of the cooperation throughout its development stages. Fig. 2 shows the general phases of cooperation based on [10].

Fig. 2. Cooperation phases based on [12]: Phase 1 Identify Demand; Phase 2 Search Partners; Phase 3 Establish Network; Phase 4 Operate Collaboration; Phase 5 Optimise Collaboration
First, in Phase 1, the company's own need for external cooperation is identified. This requires transparency about the company's own profile as well as the competency and skill profiles of potential partners. Finding partners and the partner selection in Phase 2 are characterized by matching the desired profile with those of potential partners [12]. This results in the compilation of a joint cooperation profile with which the business for both partners can be assured; legal aspects play a not insignificant role here. To establish the network, it is necessary to examine technical specifications for business and production process integration (for example for data exchange or packing conventions) as well as organizational schemes (accountability for quality audits and returns). The operational phase is characterized by monitoring the performance and identifying optimization opportunities, which are then implemented in the ongoing operation during Phase 5. The different phases are not strictly separated, and supporting methods and models should be applied throughout the phases.
3.2 Solution Modules
For all of the partners, simple methods to cooperate along the stages are necessary, but it must be possible to integrate them case by case. The integration is based on Integrated Enterprise Modelling (IEM). The appropriate models and approaches serve as a "backbone" for all methods and tools of the cooperation. Uniform transparency is offered by the network model, which is gradually built up along the cooperation phases (Fig. 2). The model includes descriptions of the connected network products, processes, systems and organizations. In Fig. 3, the solution components of the Corporate Management division of IPK are aligned along the phases to improve the interoperability of cooperations. Essential is the model integration of the solutions, so that the components can be combined in an application-oriented way.

Fig. 3. IPK portfolio for continuous support of networked enterprises: the components Interoperability Maturity Models, Integrated Enterprise Modelling MO²GO, Benchmarking Index, Intellectual Capital Statement and Integrated Cooperation Reference Models (analysis), VIENTO Platform, Supply Chain Mapping and Model Data Exchange (design), and Implementation Assistant, Process Assistant and SPIDER-WIN Order Data Management (operation), aligned along the five cooperation phases
Linked with the model of the network, the solution components of IPK are:

- The "Intellectual Capital Statement" identifies the need for structural, relational and human capital of a company. Based on the analysis of the relational capital, optimization potential in existing collaborations as well as newly needed skills can be unlocked [14, 15].
- Often, particularly in the phase of finding partners, the abilities to cooperate are not transparent. The "EIMM maturity model" for interoperability enables rapid identification of the capabilities of a company to cooperate without giving internals to the outside [16].
- A transparent description of the processes and structures of the company allows for the easy definition of "external" views for fast linking of process chains. IPK deploys its MO²GO system [8], which implements Integrated Enterprise Modelling and enables very fast and easy-to-understand modelling.
- The "VIENTO platform" helps to define the necessary operational skills, the development of the joint production process and the relevant legal aspects of a cooperation of partners facing the end customer [5]. The software AMERIGO visualizes all supplier relationships (supply chain mapping) and clarifies delivery risks, so that alternative strategies for reorganizing the supply network can be chosen.
- Many companies already have enterprise and process models. With the help of developments from INTEROP and ATHENA, individual companies can exchange models and synchronize content to jointly design processes and principles of the cooperation [9, 15].
- With the help of an implementation assistant, operational processes of the cooperation can be developed and adapted in such a way that the necessary specifications for IT implementation or adaptation of the software systems are directly available [18]. The process assistant is the extension of the implementation assistant with all management and support processes of the cooperation, and serves as an integral foundation for an integrated and, above all, complete digital management system based on the cooperation model [19].
- With the help of the "SPIDER-WIN order data management system" it is possible to automatically distribute order data over several stages (tiers) of the supply chain, monitor processing status in the supply chain and react to disturbances. Orders are automatically split into their subcontracts and given to the suppliers and subcontractors. In addition, the model-configured system offers overall visibility of the stocks in the entire supply chain [7].
- With the help of the benchmark index [20], the performance of the cooperation can easily be compared with best practices from more than 15 countries around the world. The benchmark index is updated annually with benchmarking data for over 100,000 businesses. With this instrument, improvement potentials can be identified quickly, which can be linked, for example, with the results of the Intellectual Capital Statement.
In addition to the syntactic and semantic backbone, the underlying business model leads to a common description and communication base. As an example, the next subchapter presents the Process Assistant module, which extends the implementation assistant to support Phase 3 and Phase 4 of the reference procedure, in a use case from the real estate business.

3.3 Use Case - Process Assistant in Distributed Work Environments
The real estate business in Berlin is radically changing from public to private ownership. In this context, more efficiency in managing complete living environments is demanded, as well as new tasks on the way to becoming a full-service provider. In our
case, several small real estate management companies had to merge. In order to realize efficiency effects, harmonised work environments, including the common usage of software systems, are required. As a first step, a preliminary enterprise model based on the IEM methodology was developed by the corporate management team. This model only provided the frame which the refined definitions had to follow. These refinement activities were performed by knowledge workers from the operative business, coming from the distributed subsidiaries of the company, with coaching support by the modelling experts. This participative approach ensures the elaboration of realistic enterprise representations and a high acceptance of the modelling activities. In the process, the work procedures and the use of software were harmonised by taking best process and structure practices into account. By using the MO²GO capabilities for automated import and export of document linking information, and newly developed services for connecting IBM Lotus Notes content, more operative information can be linked to the model. From this, the Process Assistant was generated in order to give all company employees access to the model-based information (see Fig. 4).

Fig. 4. Operationalized Enterprise Architecture to support day-to-day business: the Lotus Notes environment, the Process Assistant and MO²GO enterprise modelling connected via model-based linking to content and content transformation
The Process Assistant provides reflecting views as an HTML report with an integrated graphical model viewer. The representation of the reflecting views is independent of the modelling language and thus easy to use for employees who are not familiar with enterprise modelling but need support to fulfil their duties. On the other hand, operational data and documents (responsibility lists, actual figures) were elaborated decentrally without the model behind them being visible. By applying this approach, not only conceptual modelling elements, work instructions and checklists are integrated into the Process Assistant, but also information that changes from day to day. The beauty is that knowledge workers do not notice the model behind their work, while many colleagues receive the benefits through the Process Assistant. In order to integrate the Process Assistant, the MO²GO tool and IBM Lotus Notes, services were developed for the automatic linking of
documents to the model and for the transformation of document data through the modelling tool to the Process Assistant. This solution extends the Active Knowledge Modelling approach by Karlsen and Lillehagen (compare [21]) with functionality to integrate non-modelled content into a joint enterprise architecture without explicit modelling by users. It is a complementary approach between static model representation and operation and the concept of model-generated workplaces (MGWP).
4 Conclusion and Outlook

The medium- and long-term goal is a dramatic change of the current situation, in which SMEs mostly react directly to the pressure coming from large companies, both customers and suppliers. Today the result is the elaboration of partial solutions for just one challenge related to one customer or supplier. Consequently, European SMEs might not be at a good technological edge for the competition on market places around the world. In the future, by contrast, SMEs should have the capability to provide their interoperability services to their collaboration partners (customers, suppliers and cluster/network partners). This requires:
- Transparency of the processes towards the outside of the company and the possibility to create different interconnected (external) views of these processes on demand,
- Preparedness to operate in a multicultural environment as well as for the management of different degrees of confidence (trust levels),
- Implementation of a strategic cooperation management and training of the employees (management and operative staff) to act in global networks,
- Clear definitions of the terminology used in the daily business (possibly by the application of reference ontology technologies and enterprise modelling).
To be sustainable, solutions towards interoperability require development in the direction of standards. Two aspects are in focus:

- Interoperability to enable efficient collaboration and cooperation within SME networks, dynamically according to market demands, to ensure that SMEs as a network create a market position in which they become equal partners for large international companies.
- Interoperability as a precondition for cooperation on global markets, especially between businesses of different sizes: the capability of an organisation to participate simultaneously in various collaborations and networks without adapting the internal systems and process architecture and without providing specific proprietary systems for each partner.
Sustainability of enterprise interoperability requires the capability of SMEs to participate in collaborations and enterprise networks on demand without high additional investments in IT or process support. The vision is the availability of
easy-to-use and open services supporting collaborations. Organisations such as the INTEROP-VLab on the European level, with its localised poles such as the DFI in Germany, have been established to coordinate the work on this vision. The introduced use case of a distributed mid-sized company indicates that these kinds of companies still have to overcome their own interoperability gaps. The introduced interoperability framework solutions can help companies on the way to better business. In particular, the extended process assistant and its interfaces to the internal operational content enable, on the one hand, more structured and efficient business processes by reducing unnecessary conceptual work and delays. On the other hand, employees can stay in their accustomed environment while being connected, through the model hidden behind it, to the entire company. One of the next steps for improving the solution is the introduction of semantic services to extract content from business operations semi-automatically into the enterprise model. The experiences from the real use case are very encouraging.
References

[1] Müller, J.P.; Mertins, K.; Zelm, M.: Enterprise Interoperability II. New Challenges and Approaches. Springer-Verlag, London, 2007.
[2] Schallock, B.; Nojarov, S.: Mit Wirksystem und Prozessmodellierung zur leistungsorientierten Innovationskultur. In: Doleschal, R. et al. (Hrsg.): Innovationen systematisch gestalten. Beiträge zum Innovationskongress 2006, Schriftenreihe des KOM, FH Lippe und Höxter, Lemgo, 2007, S. 107 ff.
[3] Enterprise Interoperability Research Roadmap, Final Version (V4.0), 31 July 2006, published by the European Commission, http://cordis.europa.eu.ist.ict-ent-net/eiroadmap_en.htm
[4] Institute of Electrical and Electronics Engineers: IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York, NY, 1990.
[5] Schallock, B.; Yalniz, Z.: Network management tools supporting market access and product innovation - cases from VIENTO, cDIE and InnoRegio. Proceedings of the International Concurrent Engineering Conference ICE 2004, Sevilla, 13.-16. June 2004, BIBA Bremen Press, S. 248-256.
[6] Paganelli, P.; Peterson, S.A.; Schallock, B.: Feature-based analysis framework for interoperability in networked organisations. In: Camarinha-Matos, L.M.; Afsarmanesh, H.; Ortiz, A. (Hrsg.): Collaborative networks and their breeding environments. New York: Springer, 2005, S. 467-474.
[7] Rabe, M.; Weinaug, H.: Unterstützung von KMU bei Prozessgestaltung und Supply Chain Execution. In: Uhlmann, E. (Editor): 3D-Erfahrungsforum - Innovation Werkzeug- und Formenbau. Tagungsband, IPK Berlin, 17.-18. Mai 2006, S. 53-63.
[8] Mertins, K.; Jaekel, F.-W.: MO²GO: User Oriented Enterprise Models for Organizational and IT Solutions. In: Bernus, P.; Mertins, K.; Schmidt, G. (eds.): Handbook on Architectures of Information Systems. Second Edition. Springer-Verlag, Berlin Heidelberg New York, 2006, S. 649-663.
[9] Jaekel, F.-W.; Rabe, M.; Zelm, M.: Praxisnahe Interoperabilität von Unternehmenssoftware in KMU-Netzwerken. Industrie-Management 21 (2005) 4, Gito Verlag, Berlin, 2005, S. 49-52.
[10] Jaekel, F.-W.; Perry, N.; Campos, C.; Mertins, K.; Chalmeta, R. (2005): Interoperability Supported by Enterprise Modelling. In: Meersman, R.; Tari, Z.; Herrero, P. (Eds.): On the Move to Meaningful Internet Systems 2005: OTM Workshops, Agia Napa, Cyprus, October 31 - November 4, 2005, Proceedings. Lecture Notes in Computer Science, Volume 3762, Springer-Verlag, p. 552. ISBN 3-540-29739-1, ISSN 0302-9743.
[11] Greiner, U.; Lippe, S.; Kahl, T.; Ziemann, J.; Jäkel, F.-W. (2006): A multi-level modeling framework for designing and implementing cross-organizational business processes. Proc. of TCoB Workshop, Cyprus, 2006.
[12] http://www.ec-net.de/ECNet/Navigation/Kooperationen/kooperationsschritte,did=101812.html
[13] Schallock, B.; Bading, N.: Kundenintegrierende Innovation. In: Doleschal, R. et al. (Hrsg.): Innovationen systematisch gestalten. Beiträge zum Innovationskongress 2006, Schriftenreihe des KOM, FH Lippe und Höxter, Lemgo, 2007, S. 145 ff.
[14] Alwert, K.: Wissensbilanzen für mittelständische Organisationen. Entwicklung und prototypische Anwendung einer geeigneten Implementierungsmethode. Hrsg.: Mertins, K. Stuttgart: Fraunhofer IRB Verlag, 2006, X, 181 S. Zugl.: Berlin, TU, Diss., 2005. Berichte aus dem Produktionstechnischen Zentrum Berlin. ISBN 3-8167-7033-9.
[15] Mertins, K.; Will, M.: Intellectual capital statement: The basis for knowledge management. In: APE 2007, International Conference on Advances in Production Engineering, 14-16 June 2007, Warsaw, Poland. Warsaw University of Technology, Faculty of Production Engineering / CIRP, Warsaw, 2007, S. 43-51.
[16] Knothe, T.; Kahl, T.; Schneider, K.; Böll, D.: Framework for Establishing Enterprise Modeling in the Context of Collaborative Enterprises. Proceedings of the Hawaii International Conference on System Sciences (HICSS) 2007.
[17] Ziemann, J.; Ohren, O.; Jäkel, F.-W.; Kahl, T.; Knothe, T.: Achieving Enterprise Model Interoperability Applying a Common Enterprise Metamodel. In: Doumeingts, G.; Müller, J.; Morel, G.; Vallespir, B. (Eds.): Enterprise Interoperability - New Challenges and Approaches. Springer-Verlag London Limited, 2007, S. 199-208.
[18] Jankovic, M.; Ivezic, N.; Knothe, T.; Marjanovic, Z.; Snack, P.: A Case Study in Enterprise Modelling for Interoperable Cross-enterprise Data Exchange. In: Müller, J.P.; Mertins, K.; Zelm, M.: Enterprise Interoperability II. New Challenges and Approaches. Springer-Verlag London Limited, 2007, S. 541-552.
[19] Knothe, T.: Synchronization of Service Engineering and Operational Process. In: Proceedings of COMA 2004, Stellenbosch, South Africa.
[20] Mertins, K.; Görmer, M.; Kohl, H.: Benchmarking 2005. Best Practices: Lösungen für den Mittelstand, Potenziale und Handlungsfelder von Benchmarking, 17.-18. November 2005, Fraunhofer-Institut für Produktionsanlagen und Konstruktionstechnik IPK, Berlin. Stuttgart: Fraunhofer IRB Verlag, 2005, S. 283. ISBN 3-8167-6941-1.
[21] Karlsen, D.: CPPD Methodology Building Collaborative Design Services, http://193.71.42.92/websolution/FileRepository/Attachment_Pub2_DownLoad.asp?SystemID=270&ID=2, last visited 01.10.2007.
Engineer to Order Supply Chain Improvement Based on the GRAI Meta-model for Interoperability: An Empirical Study

A. Errasti1 and R. Poler2

1 Supply Chain Consulting, SCC, Ulma Handling Systems, Spain
  [email protected]
2 Research Centre on Production Management and Engineering, Polytechnic University of Valencia, Spain
  [email protected]
Abstract. Companies compete in saturated markets trying to be more productive and more efficient. In this context, managing the entire supply network to optimize overall performance becomes critical. Enterprise Modelling of Decision Systems is an important instrument to structure this complexity. This paper explores the methodology in which the redesign of internal and external operational integrated processes should be done, applying the GRAI Meta-model and the Design Principles for Interoperability, in order to improve the overall performance of an Engineer to Order supply chain. This research has also conducted a case study in the Producer Goods Sector from an Original Equipment Manufacturer (OEM) point of view. The main conclusions related to the aim of the paper are that the reengineered system integrates the supply chain network more effectively, as well as achieving the customers' objectives in terms of time and cost.

Keywords: Business Process Reengineering in interoperable scenarios, Enterprise modelling for interoperability, Modelling cross-enterprise business processes, Experiments and case studies in interoperability
1 Introduction: Engineer to Order Supply Chains

The opening of the boundaries of all markets has created the necessity to compete with large multinational organisations in niches where only some traditional SMEs operated a few decades ago. In this context, organisations have to survive in saturated markets by trying to be more productive, more efficient or more innovative (Lummus et al., 1999). Three levels of interaction between enterprises can be defined:
- Communication: exchange of necessary information to work together.
- Coordination: exchange of information and agreements in decision making processes with specific objectives (individual and common).
- Cooperation: exchange of information and collaborative decision making with a common goal.
SMEs need, more than ever, to cooperate in a supply chain context in order to exploit their complementary core competences and optimise the overall performance of the entire network. Cooperation can be achieved through integration of the different ways of doing business, but this solution normally clashes with the desired individual autonomy. Each company has its own models and applications to manage its business processes, thus the solution is to achieve the necessary degree of interoperability between models and enterprise applications. Different problems have to be faced from the point of view of interoperability depending on the decoupling point. This point shows how deeply the customer order penetrates the supply system. Four commonly used classifications of the decoupling point are: Make to Stock (MTS), Assemble to Order (ATO), Make to Order (MTO) and Engineer to Order (ETO). The Make to Order concept captures the idea that value-adding activities - manufacturing, assembly, even distribution - are triggered by orders rather than by forecasts. By performing value-adding activities to order, a company avoids incurring the risks of forecasting uncertain events, which may ultimately lead to excess inventory and poor service levels (Salvador et al., 2006). Thus, Make to Order requires a supply chain which has product and process flexibility. In this context, flexibility means the ability to identify the need for a change and then adapt to the change in a manner that benefits the manufacturing system performance. Some Make to Order production systems have to customise the product, so an engineering design stage has to be integrated before manufacturing. These production systems, called Engineer to Order, are usually project-based production systems, where the element of repetition is usually limited, in other words "low volume - high variety" (Slack et al., 2004). Suppliers are usually devoted to manufacturing components or subsystems that are then supplied and assembled at the manufacturing or assembly sites (Ramsay, 2005). Some subcontractors may have expertise in certain components or subsystems, and are therefore required to supply these components and subsystems, or variations of them, in more than one project. Detail designs are usually planned to overlap with the manufacturing period. This creates multiple and parallel processes. Parallel operations are at the same time interdependent and can therefore be expected to interrupt and disturb the manufacturing processes. Some authors (Caron et al., 1995) add that the lack of supply chain and logistics related management in a project environment has a significant influence on the manufacturing systems. Thus, the decisional interoperability of the internal functions such as engineering, purchasing, manufacturing, etc. and external suppliers to operate in a whole and coherent way must be taken into account in such a way that the decision
points and the action points interact to ensure the smooth functioning of the manufacturing system.
2 Research Objectives

Even if some authors argue that few companies are actually engaged in such extensive supply chain integration (Fawcett et al., 2002), especially when it requires strategic collaboration (Bititci et al., 2004), the authors of this paper state that the GRAI Meta-model for Interoperability approach can be useful for accomplishing an integration reengineering project in order to improve overall supply chain performance. Some GRAI tools (GRAI Grid and Nets, DGRAI Model) have already been applied successfully while integrating a Make to Stock production system (Errasti et al., 2006), but there is no evidence of application in Engineer to Order production systems. Moreover, although Engineer to Order production systems are usually "low volume - high variety" (Slack et al., 2004), the need for coordination, synchronisation and interoperability has not been studied in depth in case studies of "high volume - high variety" Engineer to Order production systems. Thus, this paper explores the method in which the redesign of internal and external integrated processes should be carried out, facing the interoperability problems and aided by the GRAI Meta-model for Interoperability approach and the GRAI Grid and DGRAI Model, in order to improve the overall performance of the supply chain in an Engineer to Order project-based production system. This research has also conducted a case study in the Producer Goods Sector from an Original Equipment Manufacturer point of view, answering the difficulties mentioned above.
3 Research Methodology

The research methodology behind the work presented in this paper consists of a theory building phase, a theory testing phase and a synthesis phase:

- Theory building: started with an extensive literature review to identify the issues/factors to be considered in the implementation of the GRAI Meta-model for Interoperability in a supply chain.
- Theory testing: the theory-testing phase of the research was designed around action research principles. Action research can be seen as a variation of case research, in which the action researcher is not an independent observer (Coughlan, 2002) (Voss, 2002).
- Conclusions/synthesis: in the synthesis phase the conclusions of the case study and the findings are shown, which increase the understanding of the reengineering process and the techniques based on Enterprise Modelling of Decision Systems to integrate the supply chain.
3.1 Theory Building: Issues/Factors to be Considered
Given the implications of redesigning the operational processes of a supply chain, the decision to do so, and how, should be considered a strategic issue for the firm. Thus, the concepts identified in the literature were set around the steps that are typically needed in a strategy development process. Authors like Acur (2000) state that a dynamic strategy development process needs four stages (inputs and analysis, strategy formulation, strategy implementation and strategy review) and that management and analytical tools can be used for this purpose.
Fig. 1. Schematic representation of the methodology/guide and factors to be considered
The authors of this paper have accepted this approach; nevertheless this research simplifies this process and adapts it to the business unit's operational strategy (Platts, 1990), considering the following factors (Figure 1):

- The methodology/guide takes into account the position of the business unit in the value chain (Browne, 1995) and sets the stages which should help value creation (Porter, 1980) (Martinez et al., 2006).
- A diagnosis or input stage is used to analyse the factors (Gunn, 1987). In this stage the GRAI Meta-model and the Design Principles for Interoperability (Dassisti et al., 2006), the GRAI Grid (Doumeingts, 1998) and the DGRAI (Poler et al., 2002) analytical tools support the analysis of the AS IS system and the TO BE system.
- The diagnosis contributes to choosing the content of the strategy (Gunn, 1987) and to defining or formulating the strategy (Platts, 1990). The diagnosis also contributes to monitoring the advantages/disadvantages of the future decision system related to information technology (hardware and software specifications), manufacturing technology (tools and equipment specifications) and organisation (physical system and management structure).
- After that, a deployment stage of the formulated strategy is set (Feurer, 1990). The deployment is a project-oriented task (Marucheck et al., 1990), where a process of monitoring and reviewing is set up to facilitate the alignment of the organisation to the strategy (Kaplan et al., 2001).
3.2 Theory Testing: Empirical Study
3.2.1 Supply Chain Presentation and Methodology/Guide Adapted for the Case Study

The OEM which led the supply chain improvement was a Spanish company devoted to the design, manufacturing and assembly of vertical transportation equipment (lifts, escalators, etc.). In order to ease the comprehension of the case study, the main characteristics of the external supply chain (distributors and suppliers network) and the internal supply chain (materials warehouse and manufacturing plant) are described in this section. Figure 2 shows the different types of nodes of the supply chain taken into account in this study: the external distributors' supply chain, the internal supply chain and the external suppliers' supply chain.
[Figure: supply chain nodes from the suppliers' external supply chain through the OEM internal supply chain (warehouse, manufacturing plant) to the distributors'/assemblers' external supply chain (warehouse, plant, construction site), governed by an Engineer to Order replenishment planning and inventory system]
Fig. 2. Internal and external supply chain considered in the case study
- External distributors' supply chain: the modules and subsystems produced by the OEM are delivered by distributors to the construction site, where they are assembled.
- Internal supply chain: the OEM production is planned when the design team has finished the engineering stage of each customised lift or escalator. Thus, the manufacturing plant works as an Engineer to Order system. The manufacturing plant process is based on mass customisation concepts, which try to exploit the advantages of mass production together with the customisation of the product. For this purpose the product and the process have high modularity and commonality, which allows a high degree of customisation by combining different parts or modules in the final part of the process (assembly).
- External suppliers' supply chain: the OEM has classified the suppliers, taking into account the logistic volume or weight per supplied unit, the number of supplied references, the distance to the OEM location and the value per supplied unit, as J.I.T. volume suppliers, J.I.T. module suppliers, traditional Make to Order suppliers and traditional Make to Stock suppliers (Figure 3).

[Figure: classification matrix relating the factors logistic volume or weight per supplied unit, number of supplied references, distance to the OEM location and value per supplied unit to the supplier classes J.I.T. volume, J.I.T. module, traditional Make to Order and traditional Make to Stock]
Fig. 3. Suppliers' classification taking into account some logistics factors
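The paper does not give the decision rules behind Fig. 3; the following Python sketch shows, under assumed thresholds and rules, how such a factor-based classification could be expressed. Every rule here is an illustrative assumption, not the OEM's actual policy.

```python
# Illustrative sketch (not from the paper) of the four-class supplier
# classification: each factor is rated high/low and a simple rule maps
# the profile to one of the classes named in Fig. 3.
def classify_supplier(volume_per_unit, n_references, distance, value_per_unit):
    """Each argument is 'high' or 'low'; thresholds and rules are assumptions."""
    if volume_per_unit == "high" and distance == "low":
        return "J.I.T. volume supplier"
    if n_references == "high" and value_per_unit == "high":
        return "J.I.T. module supplier"
    if value_per_unit == "high":
        return "traditional Make to Order supplier"
    return "traditional Make to Stock supplier"

print(classify_supplier("high", "low", "low", "low"))    # J.I.T. volume supplier
print(classify_supplier("low", "high", "high", "high"))  # J.I.T. module supplier
```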
In construction industry supply chains, design consultants produce the design based on the client's brief. Thereafter, the project is passed on to the contractor, who takes responsibility for the construction of the facility (Thomas, 2002). The contractor subcontracts work to subcontractors; the company considered here usually works as such a subcontractor. These subcontractors are subject to tremendous pressures in terms of quality, service and cost (Errasti et al., 2005). In this context, the company tried to gain a sustainable competitive advantage. For this purpose, the research group helped it to apply the methodology/guide to the business unit (Figure 4).
[Figure: methodology/guide adapted to the case study - change drivers (service, quality and cost competitive factors), scope (elevation business unit from the OEM point of view) and purpose (better performance through process reengineering) feed a dynamic strategy management process consisting of diagnosis (analysis of the current/future operational planning system and suppliers/customers integration through GRAI Grid and DGRAI, avoidance of interoperability problems by applying the Design Principles for Interoperability, future system specifications), operational strategy (quality service improvement, total cost reduction), strategy deployment (future operational planning system implementation) and monitoring/reviewing (key performance indicators: product delivery date, customer order fulfilment, stock turnover, logistic and production total lead time)]
In the diagnosis stage the GRAI grid and DGRAI tools to analysis the Current System (AS IS) and design the Future System (TO BE) were used and the Design Principles for Interoperability were applied. The Operational Strategy defined was to improve customer service (reduce delivery date and increase order fulfilment) and reduce the total cost of the chain through a redesign of the production and inventory planning system of the OEM and suppliers´ network. The implementation of the future system was monitored with key performance indicators of Customer Service (Delivery date, Order fulfilment) and Cost (Stock, Manpower Costs). 3.2.2 Supply Chain AS IS Analysis and TO BE Design To analyze the production and inventory/material planning current system of the internal supply chain and the suppliers´ network (AS IS), the decision system was monitored using GRAI Grid based on the Meta-model Basis Unit (MMBU) reference (Dassisti et al., 2006). The GRAI Grid showed the main characteristics (decision levels, decision centres, planning periods, planning frequency, decision alternatives, information). To design the TO BE system the Design Principles for Interoperability were embedded in the implementation. In particular some of them were explicitly applied in this study: x
- 1st DP: "When designing an organisation, evaluate its domain and boundaries of action inside the networked enterprise chain, identifying the necessary actors and the material and immaterial exchanges to perform." In the analysed supply chain, the inner and suppliers' teams to improve the supply chain, the key points of their work, the core competencies to develop and the necessary physical and informative infrastructures were identified.
- 2nd DP: "When designing an organisation, consider first of all the global reference pattern of its networked enterprise chain. This pattern should represent a networked enterprise chain as interoperable as possible." In the analysed supply chain, the inner and suppliers' key roles and activities to improve the overall performance, as well as the agents involved and their relationships, were defined.
- 3rd DP: "When designing an organisation, apply the Meta-model basic unit (MMBU), built from the interoperable networked enterprise chain pattern." A simplified version of the MMBU GRAI Grid was used to represent the strategic level related to planning and definition activities (design, inventory, quality policies, business plan, etc.) and the operational level related to the operative aspects (general scheduling, synchronisation of processes, scheduling of deliveries, etc.).
- 4th DP: "When designing an organisation, build its aggregate decision structure model. A useful tool for this goal is the GRAI grid." The GRAI grid was used to map the decision systems of the production and inventory/material planning system. Special attention was paid to the clear definition of the information and decision flows and to the verification of the consistency of the model using GRAI rules.
- 7th DP: "Implement a mechanism to exchange decision frames between deciders." In the analysed supply chain, the connected nodes exchange decision frames concerning production plans in a collaborative way. If the decision periods are different, many inconsistencies will result.
- 8th DP: "When designing and re-engineering an organisation, pay particular attention to local decision centres linked to local decision centres of the other partners. A good solution could be using for linked decisional centres the same horizon but especially the same period." The production and inventory/material planning system of the suppliers was coordinated with the inventory/material planning system of the OEM in order to avoid inconsistencies deriving from updated decisions of the partners (see the sketch after this list).
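As a small illustration of the 7th and 8th design principles, the sketch below checks whether two linked decision centres share the same planning period and horizon; the data structure and the check are our assumptions, not part of the GRAI tooling.

```python
# Minimal sketch of the consistency check suggested by the 7th and 8th
# design principles: linked decision centres should share the same
# planning period (and ideally the same horizon), otherwise exchanged
# decision frames go stale between re-planning runs.
from dataclasses import dataclass

@dataclass
class DecisionCentre:
    name: str
    horizon_days: int
    period_days: int

def check_link(a: DecisionCentre, b: DecisionCentre):
    issues = []
    if a.period_days != b.period_days:
        issues.append(f"{a.name} and {b.name} re-plan at different periods")
    if a.horizon_days != b.horizon_days:
        issues.append(f"{a.name} and {b.name} use different horizons")
    return issues or ["link is consistent"]

oem = DecisionCentre("OEM material planning", horizon_days=20, period_days=2)
supplier = DecisionCentre("Supplier production planning", horizon_days=20, period_days=5)
print(check_link(oem, supplier))  # flags the period mismatch
```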
The production and inventory/material planning is concerned with balancing supply and demand. It tries to keep the material flow and the value-adding activity in manufacturing going on without interruptions. In this case, the production and material planning system worked as a Make to Order system once the engineering stage was done. It was a weekly based fixed order point system called "S+5:5", because the manufacturing plant supplies to the distributors, five weeks later, the orders received in week "S". Thus, the planning period was one week and the planning horizon was six weeks. When monitoring the GRAI Grid, a policy constraint was detected: the whole flow of materials was limited by the master production scheduling (MPS) decision level. At this decision level, the planning period and planning horizon of the master production scheduling had a great
impact on the system lead time and consequently on the delivery date and on the stocks needed to keep production going without interruptions. Thus, the analysis and design stage of the future system was set around the alternatives of reducing the planning period and the planning horizon of the MPS. In particular, the "D+18:2" planning system, with a planning period of two days and a planning horizon of twenty days, was analysed. This replenishment system could be expected to perform better, because the period and planning horizon were reduced. In theory, the new system could work with less work in process, because it implied a quicker response of the manufacturing plant to the finished product warehouse. Nevertheless, in order to analyse the feasibility of the future system, the information technology, manufacturing technology and organisation critical factors of the internal and external suppliers'/customers' network, as well as other efficient supply systems (cross docking, milk run, ...), were monitored (Figure 5).
[Figure: critical factors for "S+5:5" vs "D+18:2":
- Information technology - internal supply chain: production order generation in the OEM; purchasing order generation and submission to suppliers. External supply chain: suppliers' order reception.
- Manufacturing technology - internal supply chain: lot or batch size in the production system; bottleneck efficiency; work in process. External supply chain: lot or batch size in the suppliers' production system.
- Organisation - internal supply chain: plant manufacturing pull system; parts and raw materials stored in OEM warehouses; efficiency of the planning system (management time); efficiency of the inventory system (management time). External supply chain: parts stored in suppliers' finished products warehouses; efficient supply system from suppliers (milk run with third party logistics).]
Fig. 5. Critical information technology, manufacturing technology and organisation factors taken into account when implementing the future system
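As a rough back-of-the-envelope illustration (not the DGRAI simulation reported below), the following sketch compares the two planning systems on one simple effect of the planning period: an incoming order waits, on average, half a planning period for the next master-plan run.

```python
# Back-of-the-envelope sketch (an assumption, not the paper's simulation)
# of why a shorter planning period reduces response time: an order waits
# for the next planning run, so the mean wait is half the period.
def avg_wait_days(period_days):
    return period_days / 2.0

# S+5:5: weekly period (5 working days), six-week horizon (30 working days);
# D+18:2: two-day period, twenty-day horizon.
for name, period, horizon in [("S+5:5", 5, 30), ("D+18:2", 2, 20)]:
    print(f"{name}: period {period} working days, horizon {horizon} days, "
          f"mean wait for next master-plan run ~{avg_wait_days(period):.1f} days")
```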
In order to analyse the efficiency of operating with the future planning and inventory/material system (Figure 6), a simulation of the decision system was done using the DGRAI tool. DGRAI allowed monitoring whether there were bottlenecks or capacity problems in the production planning and inventory/material management teams, as well as the total quality of the future system. In order to compare the dynamic behaviour of both systems, a simulation of one year was performed. The conclusions were:
- Both planning systems were correct from the point of view of supply coordination. The simulation did not show synchronisation problems between the internal and external supply chain.
- Regarding manpower consumption, the "D+18:2" planning system recalculated the master program every two days (consequently using the same period for programming and ordering), versus the "S+5:5" planning system, which used a weekly recalculation. The simulation showed that "D+18:2" uses 40% more hours of decision makers than "S+5:5". Therefore "D+18:2" was more expensive than "S+5:5" from the point of view of manpower cost.
- Concerning the impact of the human resource capacity on the decision system performance, an interesting indicator is the quantity of decisions in the queues of decision makers over time. Both planning systems had a similar behaviour concerning the maximum number of decisions in queue, but the queue time was higher in "D+18:2": the average number of decisions in queue was 50% higher in "D+18:2" than in "S+5:5". In particular, the planning chief of the internal supply chain was overloaded all the time in "D+18:2".
- With regard to the evolution of the Total Quality of Decision System (TQDS) indicator, "D+18:2" was 4% better in mean than "S+5:5"; most importantly, it was 10% better in minimum. TQDS was calculated as the weighted mean of the quality of decisions at a given moment. The quality of a decision depends on the quality of the information used and the quality of the decision maker, and decreases with time until its re-generation (an illustrative computation follows this list).
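The exact TQDS formula is not given in the paper; the sketch below reconstructs the described idea under stated assumptions: decision quality is taken as the product of information quality and decision-maker quality, decays linearly with the age of the decision, and TQDS is the weighted mean over all decisions.

```python
# Hedged reconstruction of the TQDS idea described above; the decay shape,
# the multiplicative quality model and the half-life value are assumptions.
def decision_quality(info_q, maker_q, age_days, half_life_days=10.0):
    decay = max(0.0, 1.0 - age_days / (2 * half_life_days))  # linear decay
    return info_q * maker_q * decay

def tqds(decisions):
    """decisions: list of (weight, info_quality, maker_quality, age_days)."""
    total_w = sum(w for w, *_ in decisions)
    return sum(w * decision_quality(i, m, a) for w, i, m, a in decisions) / total_w

print(round(tqds([(1.0, 0.9, 0.8, 2), (2.0, 0.7, 0.9, 8)]), 3))  # 0.468
```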
[Figure: GRAI Grid of the production and inventory/material planning decision system, covering engineering, the production chain (cabins, doors, machines, controls, chassis) and the suppliers' chain (J.I.T. volume, J.I.S. component, MTO and MTS suppliers) across decision levels ranging from H = 3 months / P = 1 month down to H = 2 days / P = 1 day, from demand forecasting and master production plans to daily programs, reprogramming and deliveries]

Fig. 6. GRAI Grid
3.2.3 Results in Terms of Effectiveness

The business unit managers balanced the advantages/disadvantages shown by the study and decided to implement the new production and inventory/material planning system. The performance of the future production and material/inventory planning system was set in quantitative planning parameters such as days of stock reduction and customer service. Five years after the start of the reengineering process, the outstanding changes due to the new production and material/inventory planning system were:

- 35% delivery date reduction
- 50% increase in order fulfilment
- 15% of manpower consumption when planning master programs
- 40% work in process reduction
- 60% stock reduction in J.I.T. module suppliers' warehouses
- 20% stock increase in MTO suppliers forced to work as MTS suppliers
- 30% stock reduction of Make to Stock suppliers' parts in the OEM warehouse
4 Conclusions

The redesign of the internal and external integrated processes of an OEM in the Producer Goods Sector has been carried out, applying the GRAI Meta-model, the Design Principles for Interoperability and the DGRAI Model, with the objective of improving the overall performance of the supply chain. The main conclusion related to the aim of the paper is that the adoption of methodologies based on the GRAI Meta-model and the Design Principles for Interoperability can be useful when improving the performance of an Engineer to Order supply chain network.
References

[1] Acur N, Bititci U (2000) Active assessment of strategy performance. Proceedings of the IFIP WG 5.7 International Conference on Production Management, Tromso, Norway.
[2] Bititci US, Martinez V, Albores P, Parung J (2004) Creating and managing value in collaborative networks. International Journal of Physical Distribution and Logistics Management, Vol. 34, No. 3/4, pp. 251-268.
[3] Browne J, Sackett PJ, Wortmann JC (1995) Future manufacturing systems - towards the extended enterprise. Computers in Industry, Vol. 25, pp. 235-254.
[4] Caron F, Fiore A (1995) Engineer to order companies: how to integrate manufacturing and innovative processes. International Journal of Project Management, Vol. 13, No. 5, pp. 313-319.
[5] Coughlan P, Coughlan D (2002) Action research: Action research for operations management. International Journal of Operations and Production Management, Vol. 22, No. 2, pp. 220-240.
[6] Dassisti M, Chen D, Scorziello F (2006) GRAI and SCOR Meta-model and Design Principles for Interoperability. I-ESA'06 Conference, Bordeaux, France.
[7] Doumeingts G, Vallespir B, Chen D (1998) GRAI Grid decisional modelling. In: Bernus P, Mertins K, Schmidt G (eds) Handbook on Architectures of Information Systems, Springer Verlag, pp. 313-337.
[8] Errasti A, Oyarbide A, Santos J (2005) Case research to gain competitive advantage through construction process reengineering. FAIM 2005, 15th International Conference on Flexible Automation & Intelligent Manufacturing.
[9] Errasti A, Poler R, Oyarbide A, Santos J (2006) Supply chain improvement based on the GRAI Method: an empirical study. Proceedings of the 13th EurOMA International Conference, Glasgow, Great Britain.
[10] Fawcett S, Magnan G (2002) The rhetoric and reality of supply chain integration. International Journal of Physical Distribution and Logistics Management, Vol. 32, No. 5, pp. 339-361.
[11] Feurer R, Chaharbaghi K, Wargin J (1995) Analysis of strategy formulation and implementation at Hewlett-Packard. Management Decision, Vol. 33, No. 10, pp. 4-16.
[12] Gunn TG (1987) Manufacturing for Competitive Advantage: Becoming a World Class Manufacturer. Ballinger Publishing Company, Boston, MA.
[13] Kaplan RS, Norton DP (2001) The Strategy Focused Organization. Harvard Business School Press, Boston, Massachusetts.
[14] Lummus R, Vokurka R (1999) Defining supply chain management: a historical perspective and practical guidelines. Industrial Management and Data Systems, No. 99/1, pp. 11-17.
[15] Martinez V, Bititci U (2006) Aligning value propositions in supply chains. International Journal of Value Chain Management, 1, 6-18.
[16] Marucheck A, Pannesi R, Anderson C (1990) An exploratory study of the manufacturing strategy in practice. Journal of Operations Management, Vol. 9, No. 1, pp. 101-123.
[17] Platts KW (1990) Manufacturing audit in the process of strategy formulation. PhD dissertation, University of Cambridge, Cambridge.
[18] Poler R, Lario FC, Doumeingts G (2002) Dynamic Model of Decision Systems (DMDS). Computers in Industry, Vol. 49, pp. 175-193.
[19] Porter ME (1980) Competitive Strategy: techniques for analyzing industries and competitors. The Free Press.
[20] Ramsay J (2005) The real meaning of value in trading relationships. International Journal of Operations & Production Management, 25, 549.
[21] Salvador F, Rungtusanatham M, Forza C, Trentin A (2005) Understanding synergies and trade-offs between volume flexibility and mix flexibility in Build-to-Order strategies. Proceedings of the 12th EurOMA International Conference, Budapest, Hungary.
[22] Slack N, Chambers S, Johnston R (2004) Operations Management, 4th ed., Pearson, London.
[23] Thomas S (2002) Contractors' risk in Design, Novate and Construct contracts. International Journal of Project Management, 20, 119-126.
[24] Voss C, Tsikriktsis N, Frohlich M (2002) Case research in operations management. International Journal of Operations and Production Management, Vol. 22, No. 2, pp. 195-219.
Proposal for an Object Oriented Process Modeling Language

Prof. Dr. Reiner Anderl, Dipl.-Ing. Jens Malzacher, Dipl.-Ing. Jochen Raßler

Department of Computer Integrated Design (DiK), Technische Universität Darmstadt, Petersenstr. 30, 64287 Darmstadt
{Anderl, Malzacher, Rassler}@dik.tu-darmstadt.de
Abstract. Processes are very important for success in many business fields. They define the proper application of methods, technologies and company structures in order to reach business goals. Not only manufacturing processes have to be defined from their start to their end; other processes, such as product development processes, also need a proper description to achieve success. In the automotive industry, for example, complex product development processes are necessary and are defined prior to product development. Over the last decades, product modeling languages have evolved towards object oriented modeling languages such as UML, but the process modeling languages in use are still procedural. The paradigm shift caused by object oriented description in product modeling languages has to be transferred to process modeling languages. This paper introduces an object oriented approach to process modeling. Using UML as a starting point, an object oriented process modeling method is derived. The basic concepts needed for process modeling are put into an object oriented context and explained. The paper also deals with the most important methods behind object oriented process modeling and gives an outlook on what can be achieved by this approach.

Keywords: process modeling, object orientation, UML, modeling language
1 Introduction

Throughout industry the necessity of well-defined and powerful processes is well known. These processes range from manufacturing processes over business processes to product development processes. During the last decades they have been analyzed and defined in the respective companies. Within most areas, however, these well-known and defined processes are still represented with old methodologies, while discipline-specific methods have been developed to a new level. Applications for this are cross-enterprise collaboration, e.g. in manufacturing
or product development networks, and cross-discipline collaboration such as mechatronic product development. Within cross-enterprise collaboration the involved companies are no longer integrated merely by means of deliverables, but are integrated into the complete processes. Cross-discipline collaboration is similar: it used to be integration by means of interfaces and key objectives, but now integration takes place at any time during the process. Both examples lead to two major problems. First, cross integration is new every time a new collaboration starts. Typically a company is involved in several different collaboration networks at a time; they are all different but in principle support the same process. Second, most existing process descriptions are based on procedural process description, which is not powerful enough to meet the requirements of describing cross collaboration. A short example illustrates the problem. Within product development, VDI 2221 describes the sequential process of product development, which allows some iteration. The ideas behind that process are roughly 50 years old now. Products, and with them product development, have changed dramatically. Mechatronic product development requires a coordinated development process across several disciplines such as mechanics, electrical and electronic devices, and software development. Software development in particular does not fit properly into the VDI 2221 process. With the "Münchner Vorgehensmodell" (MVM) a new approach to a process model for product development was defined using important stages. Depending on the problem, every lived process according to the MVM may take its own way between these stages. Still there is no proper description for flexible processes, although processes like the MVM or cross collaborations require one. A new process modeling language shall therefore meet the following requirements:

1. Support of hierarchical structures
2. Support of flexible interpretation of a defined process without getting incompatible - support of generalization and specialization
3. Robust process definition for flexible sequences of activities without losing process comparability - support of interchangeability of processes
4. Support of different integration scenarios and levels without changing the process description at any time - support of flexibility of processes
5. Easy to learn and read - the audience of these process definitions is very broad

Comparing these requirements to the paradigm change in information modeling caused by the introduction of object orientation, a similar approach seems straightforward for the progress of powerful process description. In this paper some existing procedural process modeling languages, as well as those which call themselves object oriented, are discussed, and a new object oriented process modeling language is introduced. A conclusion closes this paper.
2 Existing Process Modeling Languages

In this chapter some existing process modeling languages are reviewed: IDEF0/SADT, the Event-driven Process Chain (EPC), process modeling with UML, the Business Process Modeling Notation (BPMN), Integrated Enterprise Modeling, the Process Specification Language (PSL) and processes in the Semantic Object Model. These are not all process modeling languages, but they seem to be the most important ones. A short statement on why each language does not meet the requirements of modern process definition is included.

2.1 IDEF0 / SADT
IDEF0 is a procedural process modeling language. It explicitly supports the hierarchic definition of processes: complex processes can be defined at different levels of detail, and each activity can be detailed as its own sub-process. [1], [2] As it is a procedural language, its descriptions are not very flexible regarding changes in the sequence of activities; IDEF0 insists on unchangeable sequences. It is easy to read but tends to get very complicated for complex processes. Except for the support of hierarchy, IDEF0 meets none of the mentioned requirements.

2.2 Event-Driven Process Chain (EPC)
The Event-driven Process Chain is a procedural process modeling language. Compared with IDEF0 it has more objects in the language: EPC supports events and activities between events. For the process flow, EPC allows explicit branching and aggregation of processes. Furthermore there are additional objects which support the process, namely information and/or resource objects, persons and/or organizations. [3], [4] EPC is a procedural language supporting different levels of detail. Compared with IDEF0 it is more powerful in supporting different integration scenarios and levels. EPC is not very flexible regarding changes in the sequence of activities. It is easy to read but tends to get complicated for complex processes. Therefore EPC meets only two of the mentioned requirements.

2.3 Process Modeling with UML
The Unified Modeling Language (UML) offers an all-spanning modeling language. Regarding the data and information model, the language is object oriented. Looking at the process modeling capabilities of UML, however, processes are still described in a procedural way. The most important process diagrams within UML are the activity, state chart and sequence diagrams. Each of these process diagrams shows the process in exactly one instance. For each instantiation, the object oriented nature of the data model allows different processes, but the processes cannot be described in a generic way within these diagrams. Only the use case diagram does not seem to fit this picture: there, some kind of process understanding is modeled in an abstract way. [5], [6], [7]
UML is thus not an object oriented language for process modeling. Each activity is seen as an object, but relations between activities are still based on logical states. Processes defined with UML are not very flexible regarding changes in the sequence of activities. Like EPC, process modeling with UML supports different levels of detail and different integration scenarios and levels. It is quite easy to read and to handle. Therefore process modeling with UML meets three of the mentioned requirements.

2.4 Business Process Modeling Notation (BPMN)
The Business Process Modeling Notation (BPMN) was developed to provide a notation for process descriptions. The specification includes the visual appearance of the elements and their semantics. Furthermore it deals with the exchange of process definitions, either between tools or as scripts (e.g. mapped onto the Business Process Execution Language). [8], [2] The BPMN representation of processes is quite similar to the UML activity diagram: processes are defined as a sequence of activities in swim lanes. Again, this is a state-based connection between object oriented activities. The verdict upon BPMN is therefore similar to the UML verdict: BPMN meets three of the five mentioned requirements.

2.5 Integrated Enterprise Modeling
Integrated Enterprise Modeling was developed out of SADT. It encapsulates activities as objects and adds static information such as jobs, products or resources. Due to this further information it is possible to generate views on the complete enterprise, not only on its processes. [9], [2] Integrated Enterprise Modeling represents processes in a SADT kind of style. Because it retains the logical sequence of activities, it has no real advantage in modeling flexible processes; it still lacks powerful support for process flexibility.

2.6 Process Specification Language (PSL)
The Process Specification Language (PSL) is a neutral language for process specification, intended to serve as an interchange language to integrate multiple process-related applications throughout the manufacturing process life cycle. As it is defined only in an informal manner, it has no formal or graphical constructs. Therefore it is not suitable for process modeling for a broad audience. [10]

2.7 Semantic Object Model
The Semantic Object Model methodology is an enterprise architecture which allows an enterprise model to be divided into the model layers enterprise plan, business process model and resources, each of them describing a business system completely and from a specific point of view. Within the process model the activity objects are connected with events. This concept allows flexible and robust process modeling. Out of this diagram the interaction scheme and the task-event scheme
are developed. [2] Within the interaction scheme, relations also seem to be object oriented, but not within the task-event scheme, so both worlds seem to be mixed up. Due to the integrated approach of enterprise plan, process and resources, the constructs are difficult to understand.

2.8 Short Statement on the Languages
Looking at the process modeling languages above, we see that most of them already think in terms of object oriented activities; here the paradigm change has already been carried out. The definition of relations, however, mostly remains based on states and fixed sequences of activities. Only SOM seems to go beyond that, but it is not well clarified. To meet all requirements, a new approach shall be started.
3 Proposal for an Object Oriented Process Modeling Language

The previous chapter has summarized the known and used process modeling languages and their limitations regarding the requirements introduced in chapter 1. Therefore a new approach for a process modeling language is introduced in this chapter, which uses object oriented techniques and hence meets all requirements.

3.1 Towards an Object Oriented Process Modeling Language
UML is a well known and widely used modeling language for large software systems that uses object oriented techniques to obtain modularization, software reuse, flexibility and easy maintenance, among others. Extensions such as SysML introduce additional methods to use UML in contexts other than software engineering. The development of BPMN also shows that UML is a technology with wide acceptance among users, developers and managers. Thus UML is a good starting point for the development of an object oriented process modeling language. Fig. 1 shows the definition of a UML class diagram. The first field shows the class name, the second field lists the attributes, and the third field lists the methods which can be used within the context of the class. The class itself is time invariant, as it is a generic description of the content of the context. But the instance of a class, an object, is time variant, because it holds characteristic values that can be checked at given times and can change over time. That is, the values can change, but the general structure of an object (number and kind of attributes) cannot change.

[Diagram: a class box with three compartments - class, attributes, methods]
Fig. 1. UML class diagram
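To make the time invariant/time variant distinction concrete, here is a minimal Python illustration (our own example, not from the paper): the class structure stays fixed while an instance's attribute values change over discrete time steps.

```python
# The class is time invariant: its structure (attributes, methods) never
# changes. An instance is time variant: its attribute values evolve.
class Shaft:
    def __init__(self, diameter_mm):
        self.diameter_mm = diameter_mm   # attribute (information)

    def turn(self, removal_mm):          # method acting on the attribute
        self.diameter_mm -= removal_mm

part = Shaft(50.0)     # time variant object
for step in range(3):  # discrete time T, as used in the derivation below
    part.turn(0.5)
    print(f"T={step}: diameter = {part.diameter_mm} mm")
```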
Having a time variant object, it can be derived with respect to time, following [11]:

\lim_{T \to T_0} \frac{Object(T) - Object(T_0)}{T - T_0} = \frac{dObject}{dT} = \dot{Object}.   (1)
Equation (1) shows that the content of an object, which means the attributes of an instance of a class, may change over time. Given a rule to change the attributes of an object, one can express the change of the object's content as a process instance, which is shown in (2). It is necessary to mention that we use a discrete time T instead of continuous time t to implement "time steps". This is due to the result of the derivation, as different process instances may need different time intervals to execute.

\dot{Object} = Process\ instance   (2)
As we have derived the object, we now have to derive the object's content. Fig. 1 uses the word attributes as defined in UML; in equation (3) we derive the attributes, but use the word information to make the meaning clearer and more generic.

\lim_{T \to T_0} \frac{Information(T) - Information(T_0)}{T - T_0} = \frac{dInformation}{dT} = \dot{Information}   (3)
The derivation of information shows that the information may change over time. So the change of information, i.e. the change of attributes or data, can be expressed as a method, which is shown in (4).

\dot{Information} = Method   (4)
The last field of a UML class diagram, and thus of the object, holds the methods which act on the attributes. In the following we use the term operation for UML methods to differentiate between UML and our introduction. Operation and the just derived method are quite similar and are the same in several cases. In the following we derive the operation, which is shown in equation (5).

\lim_{T \to T_0} \frac{Operation(T) - Operation(T_0)}{T - T_0} = \frac{dOperation}{dT} = \dot{Operation}   (5)
The meaning of the derivation of an operation is quite complex. To express this mathematically we can use equations (3) to (5), which show that dOperation/dT is the first derivative of an operation, or the second derivative of information. This means dOperation/dT is the gradient of an operation, or the curvature of information. The expression "gradient of an operation" seems quite handy and opens the question: what results in the change of an operation? Or, more exactly, what results in a change of the quality of the execution of a method? Think also of the similarity of operation and method. This question directly leads to the answer to the problem, which is

\dot{Operation} = Resource.   (6)
Resources influence the execution of an operation. The use of more or fewer resources leads to faster or slower execution, influences the quality of the output, may lead to more innovation and so on. Equations (1) through (6) have shown the derivation from a time variant object to a time variant process instance. Generalizing the process instance we get a process class, which again is time invariant. The diagram of a process is shown in Fig. 2.

[Diagram: a process box with three compartments - process, methods, resources]
Fig. 2. PML Process class diagram
Further, we introduce the term PML, which stands for Process Modeling Language and can be seen as an extension to UML, as SysML is. Thus the known techniques of inheritance, association and cardinalities can be used. Implementing those techniques, processes can be modeled hierarchically with modularization, structure, exchangeability and reusability. Hence all the requirements described in chapter one are fulfilled. A last topic of the class diagrams that has to be covered is assurances, which is done in the following in a qualitative way. In UML, assurances can be defined to guarantee the co-domain of attributes. The assurances are conditions or constraints that have to be met by methods changing those attributes. In PML those assurances are important too, but apply to methods. This is obvious in the context of the derivation. Further, a condition can be seen as a constant signal: with the beginning of the lifetime of an attribute, which is the same as the lifetime of an object, the condition starts and remains constant over time. That is, the condition must hold for the lifetime of the attribute. Deriving the constant signal is straightforward using a Fourier transformation. The transformation results in the delta function; its derivative is a constant frequency signal. Transformed back to the time domain, the result is a Dirac impulse [12], which can be interpreted as an event. The event can actually be a condition becoming true, the trigger from a finished method, or information becoming available. Thus the assurances are derived too and can be used for process modeling, which is shown in Fig. 3.

[Diagram: a process box with three compartments - process, methods {event}, resources]
Fig. 3. PML class diagram with assurances
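To make the derived structure concrete, the following sketch renders the three fields of the PML process class from Fig. 3 (name, methods guarded by event assurances, resources) in plain Python. This is only an illustrative reading of the diagram; all identifiers are invented and not part of PML itself.

```python
# Minimal sketch of a PML process class (hypothetical names): a process
# bundles methods, the events (assurances) guarding them, and the
# resources that influence the quality and speed of execution.

class Event:
    """An assurance: a condition becoming true, the trigger from a
    finished method, or information becoming available."""
    def __init__(self, description):
        self.description = description
        self.occurred = False

class ProcessClass:
    def __init__(self, name, resources):
        self.name = name
        self.resources = resources      # resources influence execution (eq. 6)
        self.methods = {}               # method name -> (guarding event, body)

    def add_method(self, name, event, body):
        self.methods[name] = (event, body)

    def invoke(self, name, *args):
        event, body = self.methods[name]
        if not event.occurred:          # the assurance must hold first
            raise RuntimeError(f"event '{event.description}' has not occurred")
        return body(self.resources, *args)

design = ProcessClass("design part", resources=["CAD system"])
released = Event("order released")
design.add_method("create_model", released,
                  lambda res, part: f"model of {part} using {res[0]}")
released.occurred = True
print(design.invoke("create_model", "gear"))
```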
3.2 Meaning of PML
Above we have shown the mathematical derivation of PML. We have introduced the terms process instance, which can also be called project, and process as a generic class description. We now want to clarify those terms and their meaning. Fig. 4 shows the way PML was derived: starting from the time invariant UML class we instantiate a time variant object; this object is derived with respect to time and leads to a process instance, or project, which is time variant, and is finally generalized to a time invariant process.

Fig. 4. Derivation loop of PML
Looking at the application level, the meaning of all four constructs becomes clearer. The generic class model is used to represent the data model in a PDM system, e.g. as STEP AP214. Within one project the class is instantiated, and the object holds the actual data of the designed model. The class therefore describes the product in a generic way, while the real contents are stored in its instantiation. The same is true on the process level. The PML process class describes the process in a generic way. It allows one to define all methods with the assurances and resources needed for the process. The instantiation of a process is a project. This means the instance of a process defines the current occurrence of resources, the data models used, etc. This leads to a paradigm change not only in process modeling, but also in the view of processes and projects.

3.3 Using UML Constructs in PML
In this section we give a short overview of using UML constructs within PML to gain the capability of hierarchical and modular modeling. Inheritance. The concepts for inheritance of process classes follow the notation of standard UML classes. Fig. 5 shows the inheritance of process classes. Starting with a generic Creativity Process, which has a Creativity Method and requires a problem becoming available as an event, but no Resource, two subclasses are derived: the Intuitive Process, which adds two Resources, and the TRIZ Process, which adds one Resource. Both subclasses inherit the Creativity Method from their superclass Creativity Process; the TRIZ Process overwrites the Creativity Method with its own creativity technique. Brainstorming and Brainwriting are subclasses of the Intuitive Process, inheriting the Resources and the Creativity Method, which they overwrite. Each of these two subclasses defines its own additional Resource.
The Creativity Method takes an argument, problem, which can have assurances; this means that starting and ending conditions can be defined.

Fig. 5. PML inheritance diagram
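The inheritance structure of Fig. 5 can be mirrored in code roughly as follows. The class and resource names are taken from the figure; the coding mechanics are an illustrative assumption, not a prescribed PML implementation.

```python
class CreativityProcess:
    resources = []                           # requires no Resource
    def creativity_method(self, problem):    # guarded by {problem avail.}
        raise NotImplementedError("abstract creativity method")

class IntuitiveProcess(CreativityProcess):
    resources = ["Persons", "Pens"]          # adds two Resources

class TRIZProcess(CreativityProcess):
    resources = ["TRIZ solver"]              # adds one Resource
    def creativity_method(self, problem):    # overwrites with its own technique
        return f"TRIZ solution for {problem}"

class Brainstorming(IntuitiveProcess):
    resources = IntuitiveProcess.resources + ["Whiteboard"]
    def creativity_method(self, problem):
        return f"brainstormed ideas for {problem}"

class Brainwriting(IntuitiveProcess):
    resources = IntuitiveProcess.resources + ["Paper"]
    def creativity_method(self, problem):
        return f"written ideas for {problem}"

print(Brainstorming().creativity_method("new product"))
```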
The process class also supports abstract methods, as well as public and private methods. Associations. The known concepts of associations from UML classes can be used for process classes. Fig. 6 through Fig. 8 show the concepts of associations, aggregations, and compositions. All associations can use cardinalities to specify the number of relations they use. The techniques introduced above enable generic process modeling that supports structural and hierarchical modeling, including modularization and flexible design. As classes are time invariant, no statement about the "running time" is made, e.g. about sequential or parallel process execution. The actual execution is determined by instantiating the process classes and can be further described within the project with activity diagrams (logical description) or sequence diagrams (temporal description). State diagrams can be used to describe the project states at given times. These instantiation diagrams are quite similar to those of the state-of-the-art object oriented process modeling languages.

Fig. 6. Association

Fig. 7. Aggregation

Fig. 8. Composition
3.4 An Example
To illustrate our approach to process modeling we prepared a short example from manufacturing: a small enterprise focused on shape-cutting manufacturing. Its production process description is shown in Fig. 9. Note that the process looks very similar to a UML class diagram.

Fig. 9. Example process diagram for a small enterprise specialized on shape cutting
There are two main processes, which are connected. The technologies adopted within the enterprise inherit their attributes from these processes. Let us assume the enterprise wants to manufacture the product shown in Fig. 10.
In the first manufacturing lot, 30 pieces of that product are produced. For reasons of machine and technician availability and lot size, the project is instantiated as shown in Fig. 11.

Fig. 11. Example process diagram for a small enterprise specialized on shape cutting
After the first lot, the enterprise wants to produce a second lot of five more pieces. Therefore, the machine used was changed (see Fig. 12). Please note that it is still the same product and the same process; only the instantiation has changed. It is a new project.

Fig. 12. Example process diagram for a small enterprise specialized on shape cutting
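Read as code, the two lots are two instantiations of one generic process class; a minimal sketch with assumed names:

```python
# One generic process class; each project binds concrete resources.
class ShapeCuttingProcess:
    def __init__(self, machine, nc_code, technician):
        self.machine = machine          # concrete resource chosen per project
        self.nc_code = nc_code
        self.technician = technician

lot1 = ShapeCuttingProcess("Machine 4", "NC Code 15", "Technician")  # Fig. 11
lot2 = ShapeCuttingProcess("Machine 5", "NC Code 34", "Technician")  # Fig. 12
```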
The enterprise also produces another product, as shown in Fig. 13.

Fig. 13. Example process diagram for a small enterprise specialized on shape cutting
For the production of a certain lot, the project shown in Fig. 14 is used. Now we have a different product from a different instantiation, based on the same process description.

Fig. 14. Example process diagram for a small enterprise specialized on shape cutting
This example shows how powerful process modeling and description with PML are: once defined, a process can be instantiated in different ways, leading to the same or to different results.

3.5 Further Topics to be Mentioned
In the present paper we have only used class diagrams and instances to show our concept. The other UML diagrams also seem suitable for PML. The derivation and usage of those diagrams will be covered in future work. The most important diagrams, and those in widest use in UML, are sequence, activity and state diagrams. State diagrams can be used directly to show
the state of a process instance. Sequence diagrams are more complex and show the running time of a process instance in a given occurrence; that is, they show which sub-processes run in parallel or sequentially, the instance's lifetime, etc. We have started working on this topic, and it appears natural to use discrete Fourier transformations or z-transformations to derive sequence diagrams from instance diagrams using time-based states of the project. Activity diagrams follow a similar derivation, but the goal is a logical description of the project running time instead of a time-dependent one. PML and UML can be linked together using two techniques. The data classes (data generated along the value creation chain) can be used as input for methods or be written to the edges of associations. If company structures, resources, and similar information are modeled, this may best fit as associations for the resources. Further, there exists the attributed association, which can be used for explicit cross modeling between PML and UML. Process management will undergo an enormous change, since process management largely reduces to process modeling. For existing processes this means that changes in the generic process description lead to extensions of the process description, using e.g. inheritance to specialize or modify given processes. Another important topic is project management. It is obvious that project management directly influences the process instances. This concerns the scheduling of activities and the allocation of resources.
4 Conclusion

The strength of the presented approach to process modeling is the completely object oriented view of processes and the differentiation of, and linkage between, processes and projects. As in data modeling, process modeling can now be done in a generic way. The introduced process description fits well into PDM systems with the process class descriptions. Hence process management is now process modeling at running time. A process in a PDM system can be extended by further classes that extend or specialize existing ones. The instances of those processes are used in projects, which define the parameters of the instances. The implemented technique of processes and projects within PDM systems is then similar to data models, where object orientation has been a standard for years. Further work will focus on the topics of process and project management and will introduce examples of how to use the object oriented process modeling approach to implement real world projects. The object oriented approach to process modeling introduces a paradigm change not only in the view of process and project management, but also enables new possibilities for interoperability. Heavy use of modularization enables exchangeability and process reusability and hence strengthens the integration of third-party processes. This leads to more powerful cross-enterprise collaboration. Another important point is the certification of processes. Depending on products or customers it is necessary to have certified processes; think of ISO 9000 or certification for medical applications. With PML the process is only certified
once but can lead to different instantiations, regardless of the project (in terms of the same or a different product). Summarizing the development of PML: we have examined existing process modeling languages which call themselves object oriented, but this is only true for the modeled activities; the modeled processes therefore still look like sequences of activities. In this paper we have developed a new approach. By deriving time-dependent objects we obtained process instances. Taken alone, these do not use the complete power of object oriented modeling. The step from process instances to process classes helped us to define a new object oriented process modeling language. As processes are no longer modeled as instances, but as abstract classes, we have a completely new representation of processes. Running processes are then projects. With this definition we need a new comprehension of process and project management.
References

[1] IEEE Std 1320.1-1998: IEEE Standard for Functional Modeling Language – Syntax and Semantics for IDEF0. IEEE, New York (1998)
[2] Bernus, P., Mertins, K., Schmidt, G. (eds.): Handbook on Architectures of Information Systems, 2nd edition. Springer, Berlin, Heidelberg (2006)
[3] Scheer, A.-W.: ARIS – Business Process Frameworks, 2nd edition. Berlin (1998)
[4] Scheer, A.-W.: ARIS – Business Process Modeling, 2nd edition. Berlin (1999)
[5] OMG: Unified Modeling Language: Superstructure v2.1.1, Feb 2007, www.omg.org (2007)
[6] Eriksson, H.-E., Penker, M.: Business Modeling with UML: Business Patterns at Work. John Wiley & Sons, New York (2000)
[7] Burkhardt, R.: UML – Unified Modeling Language: Objektorientierte Modellierung für die Praxis. Addison-Wesley-Longman, Bonn (1997)
[8] OMG: Business Process Modeling Notation Specification, Feb 2006, www.omg.org (2006)
[9] Spur, G., Mertins, K., Jochem, R., Warnecke, H.J.: Integrierte Unternehmensmodellierung. Beuth Verlag (1993)
[10] International Standards Organization (ISO): ISO 18629 Series: Process Specification Language (2004), www.iso.org
[11] Luh, W.: Mathematik für Naturwissenschaftler, Bd. 1: Differentialrechnung und Integralrechnung, Folgen und Reihen. Aula, Wiesbaden (1987)
[12] Clausert, H., Wiesemann, G.: Grundgebiete der Elektrotechnik 2: Wechselströme, Leitungen, Anwendungen der Laplace- und Z-Transformation. Oldenbourg, München (2000)
Enterprise Modeling Based Application Development for Interoperability Problem Solving

Marija Jankovic, Zoran Kokovic, Vuk Ljubicic, Zoran Marjanovic, and Thomas Knothe
Abstract. This paper presents an approach to IV&I (Inventory Visibility and Interoperability) business application development which is based on business processes and user requirements represented in the form of an enterprise model. This approach is beneficial in supporting cross-enterprise business application integration when used in conjunction with semantic mediation tools. Finally, an account of the lessons learned and plans for the future is provided.

Keywords: enterprise modeling, business process, interoperability, application development
1 Introduction

In recent years, technology support needs have become more pronounced due to enterprise infrastructure complexity caused by global expansion. As advanced technologies continue to be developed, additional tools need to be implemented in order to support interoperable integration. It is important to show how industry could lower costs, improve speed-to-market and reuse IT (Information Technology) investments by developing a more capable data management approach based on new technologies [8]. A global automotive, chemical, electronics, or other industry group (comprising users, application vendors, and other stakeholders) engages its stakeholders to jointly develop and adopt interoperability artefacts that are implemented by the application vendors and, ultimately, tested for conformance and later deployed in the users' environments.
This paper presents the application of the AIF (ATHENA Interoperability Framework) [1] to bridge the gap between the business and IT domains. The AIF initiative promotes the usage of enterprise models to describe business processes and user requirements. Applying an MDD (Model Driven Development) vision to software development is important in order to build model based systems and to avoid or reduce the loss of information, as well as to increase the separation of concerns, flexibility and traceability. The paper is based on the validation results of the ATHENA Sub-Project B5.10: Piloting Including Technology Testing Coordination – IV&I (Inventory Visibility and Interoperability), in which we participated. The original IV&I concept was specified by the Automotive Industry Action Group (AIAG). The primary purpose of the sub-project was to validate the ATHENA results (i.e., advanced tools and methodologies) on an industrial inventory visibility scenario. Within the validation pilot, a collection of ATHENA tools was chosen to enhance the traditional standards-based development method within a number of specific phases of that development approach. To enable the integration and use of the ATHENA tools for the validation pilot, we developed an additional tool: the APOLLO (FOS) inventory visibility application, which supported the visibility business process relying on an RDFS (Resource Description Framework Schema)-based data exchange interface. Our specific goal was to propose an efficient development process for a new generation of IV&I tools that achieves interoperable communication with other ASF (ATHENA Semantic Framework) tools.
2 Problem Statement

The motivation to explore new development approaches in cross-enterprise business application integration follows from the need to address high integration costs and efforts, particularly in complex industrial supply chains [2]. In particular, the current generation of IV&I applications lacks the capability to exchange information about inventory replenishment events in an interoperable manner. An interoperable exchange of replenishment information among IV&I applications is critical for efficient supply chain management. Given that IV&I tools are not interoperable, the goal is to arrive at a set of standard electronic messages that may be exchanged in an agreed-upon protocol among IV&I-enabled tools. Once a standard message set is implemented by the IV&I tools, the supplier will select only one of the IV&I tools to communicate with all of its customers.
3 Business Case Description

The business process is eKanban, in which both suppliers and customers have a web view of Kanban signals and suppliers are required to cover customer material requirements based on contractual business rules. The guideline that documents
the business process for eKanban is developed by the IVBP (Inventory Visibility & Interoperability Business Process Workgroup). It includes detailed specifications for implementing and applying the eKanban concept across enterprises (the target interoperable eKanban material replenishment and data exchange protocols). This concept is relevant for supplier parks around a large OEM (Original Equipment Manufacturer) manufacturing location. Suppliers in the automotive industry are very large and have already selected their IV&I tools. For a collaborative eKanban implementation an interoperable data exchange between partners is required. The eKanban Use Case Diagram is illustrated in Figure 1.

Fig. 1. eKanban Use Case Diagram
The eKanban electronic collaboration process is based on the 'Customer Managed Kanban' version of the eKanban process and supports two roles – Customer and Supplier – and four messages – SyncKanbanConsumption, SyncShipmentSchedule, SyncShipment and SyncDeliveryReceipt. The actors interact via the IV&I tool(s) by sending appropriate signals in the following scenario: the Customer consumes a Kanban container; when it decides to authorize the replenishment of the Kanban container, it sends an authorization signal; the Supplier ships the Kanban container; and when the Customer receives the Kanban container, it sends a receipt signal to the Supplier.
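As a reading aid, this message sequence can be sketched as a small state machine over the container life cycle. The four message names come from the guideline; the mapping of the authorization signal to SyncShipmentSchedule, like the rest of the code, is an illustrative assumption.

```python
# One replenishment cycle for a single Kanban container (assumed states).
EKANBAN_FLOW = {
    # current state -> (message sent,            next state,   sender)
    "FULL":       ("SyncKanbanConsumption", "CONSUMED",   "Customer"),
    "CONSUMED":   ("SyncShipmentSchedule",  "AUTHORIZED", "Customer"),  # authorization
    "AUTHORIZED": ("SyncShipment",          "SHIPPED",    "Supplier"),
    "SHIPPED":    ("SyncDeliveryReceipt",   "FULL",       "Customer"),
}

def replenishment_cycle(state="FULL"):
    """Yield the messages exchanged during one full Kanban cycle."""
    for _ in range(len(EKANBAN_FLOW)):
        message, state, sender = EKANBAN_FLOW[state]
        yield sender, message, state

for sender, message, state in replenishment_cycle():
    print(f"{sender} sends {message:24s} -> container state {state}")
```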
4 Development Process

The proposed development approach may be seen as containing two phases:

• eKanban business process model and data exchange specification, and
• IV&I tool implementation.
Since the AIAG IV&I specification is a set of fragmented documents, a holistic model was created using the MO²GO tool and its supporting IEM (Integrated Enterprise Modelling) methodology. The initial goal was to create a computational system model for performing requirements identification, gathering and analysis. With a computational representation, a much more precise description of roles (e.g. Carrier, Supplier), business control concepts (signals, messages) and business objects, as well as their reflecting views, was possible. The model developed on the basis of the IV&I specification document does not have only a specification character; it plays more of a guiding role for an interoperability project that applies lean supply chain management principles such as Kanban. At process modelling time, we were concerned with the different functions and their logical processes, the actors involved in the function execution, the data documents, the information systems, and the existing data flows that are regarded as necessary. With the MO²GO tool it was possible to integrate all those data into one consistent model. The key benefit that the IV&I Reference Model brings to the eKanban engineering is that it enables a focus on a standard way of requirements representation [10]. Its purpose is to provide a foundation for the systematic development of the AIAG eKanban [7] business process by providing a common frame of reference for different modeling entities at the same abstraction level. The IV&I Reference Model facilitates a standard description of the business environment (e.g. roles, business objects), processes, services and data exchange requirements. Our approach to integrating PSM (Platform Specific Model) and PIM (Platform Independent Model) into CIM (Computation Independent Model) is shown in Figure 2.
Fig. 2. Enterprise Model Interoperability System specification
All relevant concepts and business objects are represented in the eKanban model as instances of the reference class structures. The main focus of the IV&I eKanban project is the communication between Customer and Supplier, supported by IT systems with visualization capability, for the purpose of transmitting order requirements and the fulfilment response by the Supplier. The information flow is based on the order elements; therefore, the orders are modelled in more detail than the product elements. An important benefit of the eKanban Reference Model was the standardization of information for planning and control. As basic structures of the model, high-level class structures for orders were introduced to express the four eKanban messages as subclasses of the Message class. The instances are linked to the appropriate business processes in the model, and implementation-relevant specifications are related to them. For example, Figure 3 shows that an OAGI BOD message definition is linked to the SyncKanbanConsumption message, which is represented as an instance of the Message subclass of the Order class. According to the AIAG eKanban use case definition and the IEM principle of hierarchical decomposition, the total business process is captured at the highest level of the model and contains two activities: (1) Run eKanban and (2) Establish cooperation by setting up documents. The Run eKanban activity is an abstraction of the eKanban execution. At the top model level, all operational processes are present together in the form of the Run eKanban activity. Establish cooperation by setting up documents represents an integration of the planning and establishing processes that provide the resources needed for executing the Run eKanban activity and the related operational processes. Next, an additional modeling activity refines the eKanban business process model and data exchange requirements down to the atomic concepts and relationships that define the intended data exchange. (In the future, the IV&I enterprise model should be a basis for the development of an IV&I reference ontology in support of the eKanban data exchange.) A desired outcome at this stage is to capture the data exchange requirements between supplier and customer in a computable form. Data exchange requirements are represented in the form of a table and linked to the relevant elements in the model (see Figure 2).
Fig. 3. Implementation relevant specifications related to the Enterprise Modeling Objects
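Rendered as a class hierarchy (with invented Python names and all attributes omitted), this structure looks roughly as follows:

```python
class Order:
    """High-level class structure for orders (planning and control)."""

class Message(Order):
    """Superclass of the four eKanban messages."""

class SyncKanbanConsumption(Message): pass
class SyncShipmentSchedule(Message): pass
class SyncShipment(Message): pass
class SyncDeliveryReceipt(Message): pass

# An instance of a Message subclass corresponds to a concrete signal
# linked to a business process in the enterprise model.
consumption = SyncKanbanConsumption()
```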
The ability to work directly from a computational model of the business process reduced the time needed to understand the related artefacts and provided a good basis for the IV&I tool implementation. The primary direction of exploration in developing the IV&I application was to apply the MO²GO model and to assess the new Semantic Web-based technology for an interoperable data messaging exchange interface in the context of the newly proposed semantic mediation architecture within the ATHENA project [5]. The following five major steps were applied in developing the Apollo application:

• Identify the IV Functionality. The IEM methodology supported by the MO²GO tool has proven to be a very good basis for IV&I tool development. Using the MO²GO model we considerably reduced the time necessary for understanding the business processes and the requirements to be met by the IV&I application. Out of the many processes described in the model, we chose the ones that were important for the fulfilment of our goals. In the model they are represented as subprocesses of the Run eKanban process. The following four processes were identified for IV&I tool development: (1) Communicate consumed kanban; (2) Authorize kanban; (3) Initiate ship kanban container; (4) Receive kanban container.
• Define IV&I eKanban Protocol to Process Messages. All four processes are similar in that each has a resource identified as a message, which is sent from the customer's to the supplier's IV tool or vice versa. As a necessary step to assure data exchange in an IV&I eKanban conformant manner, the IV&I eKanban messages need to be generated and consumed according to the specified protocol. As an output from this phase, detailed mechanisms were defined to follow the logic of the prescribed protocol.
• Define IV&I eKanban BOD Processing Functionality. As another necessary step to assure IV&I eKanban conformant data exchange, the semantics of all the eKanban BOD elements were defined. The output from this phase was a detailed description of the semantics of all the fields within the system, with reference to the corresponding BOD elements. As the basis for defining the semantics of fields and creating the data model, we used the XML messages attached to processes in the MO²GO tool as Message states, as well as the data type requirements expressed as properties inside the Event states (see Figure 3).
• Determine Message Exchange Framework. Within the ATHENA-provided Web Services (WS) execution support, a Web Service communication capability needed to be specified for Apollo to effectively use the ATHENA WS support [6].
• Perform Semantic Reconciliation of Messages. To utilize the ATHENA-provided Semantic Mediation support, semantic reconciliation of the local interface elements (i.e., the message schema) needed to be performed with respect to the IV&I eKanban Reference Ontology.
5 IV&I Tool Implementation

The following are the main components of the developed IV&I tool:

• IVI Web Client. This component has the responsibility to accept and transfer data entered by the user to the Business Component for the required processing. It is realized as a Web application that can be accessed using a standard Web browser. Sun's implementation of JavaServer Faces (JSF) 1.2 was the choice for the client component [12].
• Business Component. This component encapsulates the whole business logic of the IV tool. It has the responsibility of taking input data from both the client interface and the XML/RDF Adapter and processing it. This represents one of the benefits of the applied architecture: the specific business logic is implemented only once for all input data, no matter how they reach the system. For the implementation of this component, the Glassfish implementation of EJB 3.0 (Enterprise JavaBeans) was used, with Session Beans for the business logic and Entity Beans for OR (Object-Relational) mapping. Where high performance was needed, direct JDBC queries were used [15].
• Web Service Interface. This Web service interface is designed to support interoperable communication between tools. This interoperability is gained through a set of open standards such as WSDL, SOAP, UDDI, etc. [13]. When the system is reached through the web service interface instead of the client user interface, the IV&I tool has to do one more thing before processing: the message is passed to the XML/RDF Adapter, which interprets the message content and makes it understandable to the IV&I Business Component. SOAP with Attachments API for Java (the SAAJ standard implementation) was used for the Web Service interface development.
• XML/RDF Adapter. This component is responsible for the transformation of the domain entity model into the common message exchange schema. This means that a BOD document is received in RDF or XML format and needs to be recognized, validated, processed and stored in the database. Recognition and validation are done by the XML/RDF Adapter, while processing and updating the database are done by the previously described Business Component. For developing the XML/RDF Adapter we used Sesame, Elmo and related OS (Open-Source) technologies [17].
6 RDF Based Data Exchange Interface

The ATHENA semantic mediation capability uses an ontology-based integration approach to address the issues of semantic interoperability. In our case, the IV&I eKanban Reference Ontology (RO) is one such ontology that provides a meta-data model to define possible IV application interface data models in terms of the application domain concepts, their properties, and the relationships among them. Using the ATHENA approach, the reconciliation rules between each application
data exchange interface schema and the RO may be defined to map the logic of that IV application data interface using the RO elements. Once these reconciliation rules are available, IV&I eKanban messages may be transformed from one form to another, and interoperable data exchange may be achieved using a semantic mediator to run the reconciliation rules. We defined the data exchange interface using the Resource Description Framework Schema (RDFS) [14]. Developing the RDF interface consisted of the following activities:

• Defining the eKanban model for understanding received messages. In order for the application to process the messages as intended, it has to access a definition of the language used to implement the messages. The easiest approach was to define message patterns (including class and property definitions) that the application can rely on when processing the message content.
• Testing the information contained in the message payload. Even when a mapping of the message content onto a message pattern is successfully completed, correct processing of the message is not guaranteed. Our approach was to perform certain tests of the message content before deciding on the message meaning and starting a transaction that could bring various databases out of sync.
• Enabling translation of domain entities into RDF instances. Domain entities carry data from the domain database, and these data needed to be transformed into an RDF message [18], as sketched below.
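For illustration, a SyncKanbanConsumption payload could be assembled as RDF roughly as follows, here using the Python rdflib library rather than the project's Java/Sesame stack. The namespace URI and property names are invented stand-ins for the actual IV&I eKanban vocabulary defined by the Reference Ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical eKanban vocabulary; the real schema comes from the RO.
EK = Namespace("http://example.org/ivi/ekanban#")

g = Graph()
g.bind("ek", EK)

msg = EK["msg-001"]
g.add((msg, RDF.type, EK.SyncKanbanConsumption))
g.add((msg, EK.kanbanContainer, Literal("KC-4711")))
g.add((msg, EK.partNumber, Literal("P-1234")))
g.add((msg, EK.quantity, Literal(30)))

print(g.serialize(format="xml"))   # RDF/XML payload for the WS interface
```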
A successful result with one message was achieved between the applications at General Motors and FOS on one side and a "Test Harness" on the other, based on a limited amount of data being transmitted. The results were sufficiently encouraging to merit further research to improve the message exchange. The following figure shows the developed IV&I tool at the moment of message sending during the AIAG demo.
Fig. 4. The IV&I tool look
7 Lessons Learned

Overall, we were very pleased with how the IEM methodology and the MO²GO tool supported the modelling of the eKanban business process, leading to a transformation of the eKanban business process specification into a consistent enterprise model that formed the basis for the IV&I tool implementation. We found the eKanban business process model very valuable for communication, coordination and collaboration with AIAG business users and business analysts in order to understand and collect their eKanban requirements. Modeling the networked organization in IEM results in models that capture an extensive set of relationships between goals, organizations, people, processes and resources. Hence, we discovered it was important to adapt the general IEM method by further revising and extending the methodology concepts in support of the IV&I tools implementation. We were able to introduce specific rules and constraints for the eKanban model by using attributes in the class structure. For instance, semantic disambiguation was important to emphasize the difference between concepts like Signal and Event. A documentation attribute of the Signal subclass states precisely that a signal is generated by an Inventory Visibility tool. That information was essential to account for the necessary IT requirements. This is particularly useful considering the dynamic nature of collaboration between Customer and Supplier in the eKanban supply chain. For new partners
joining the network, the rich eKanban model provides a valuable source of knowledge on how to "behave" in the network. The interactive nature of the eKanban BPM (we were free to refine it during the execution and implementation phases) increases its potential as a source of experience and knowledge. As such, it documents details of how the work was actually done, not only how it was planned. From a knowledge management perspective, process models are carriers of process knowledge: knowledge of how to do things. But through the possibility in IEM of attaching information resources to tasks at any level, such a model also imposes a structure upon the set of information resources relevant to the work described by the process model. In that way, the process models themselves form the basis for information management. Model enrichment with XML Schema and BOD message definitions enhanced this even further. IEM can be extended to link to the languages used in the SOA and MDA approaches, particularly UML, through the visual features of MO²GO. It was very useful for IV&I tool development to have UML diagrams included directly in the model, because UML is a de facto modeling standard recognized by most tool providers and can be converted to schema or code. Besides, most tool providers can consume the UML data exchange format. The ability to work directly from a computational model of the business process reduced the time needed to implement the IV&I application. The model was very helpful during the identification of the required application functionalities. The standard way of representing data exchange requirements and their link to the model was very important. Because this phase of the development process was relatively easy, more time could be spent on dealing with interoperability issues. Implementing the RDF interface challenged the RDF processing module in an additional way. In fact, there are many different ways one can write an RDF instance. The differences may be reflected in the structure of the elements as well as in the element types. In addition, the attribute definitions may differ from one application user to another, with different sets of attributes used by different users. We tried to cover a broad set of attributes and possible message formats because in this part of the system robustness is crucial: the system must be robust in the case of missing data or unknown message structures. Our design priority was to pull the maximum of information out of a received message.
8 Conclusion and Future Work

For systems and enterprises to evolve in a coordinated manner, there is a need for representing knowledge in a way understandable to business users, system analysts and software developers alike. In this paper, we demonstrated that a well-structured IV&I Reference Model has turned out to be a very efficient means for IV&I tool implementation, allowing the capture of eKanban business requirements in a simple and comprehensive way. The most important benefit of the selected approach is the strong support for implementing eKanban based on the integration of the operational processes that execute eKanban control, and the structures and processes that establish the connection and collaboration between partners. Here, the integrated
specifications which are used in the eKanban execution ensure common understanding between all stakeholders and consistency of data and processes. Modeling has been touted as an appropriate way of providing the necessary abstraction mechanism to comprehend and analyze complex problems in this regard. It appears that no single modeling technique or approach is applicable across the whole spectrum of process and stakeholder types. This paper has outlined early work on how results from modeling approaches from the fields of Enterprise Modeling, Ontologies and MDA architecture can be combined to provide a more complete coverage of the overall problem area. Over the next years we will work on these problems, both standalone and combined, testing solutions on industrial cases, with a specific focus on achieving both system and business interoperability. Future work is planned in several directions. We plan to use the experience and results from the validation interviews (to be completed with the domain business analysts' involvement) to develop a structured test specification for the analysis of the consistency and completeness of the models created. Such a structured test specification will allow easier validation planning of enterprise modelling results in the context of cross-enterprise interoperable data exchange development. Another future challenge is to integrate a business process and workflow management system with the semantic mediation system. The idea is to use the identified data exchange requirements at the EM level to assist in creating messaging data models and, eventually, the XML Schema or RDF Schema definitions for the implementation of eKanban-conformant data interfaces. This capability could help drive the efficiency and quality of real implementations of interoperable interfaces with all technical details included.
References

[1] Janković, M., Ivezić, N., Marjanović, Z., Knothe, T., Snack, P.: A Case Study in Enterprise Modeling for Interoperable Cross-Enterprise Data Exchange. In: Goncalves, R.J., Müller, J.P., Mertins, K., Zelm, M. (eds.): Enterprise Interoperability II: New Challenges and Approaches. Springer, London (2007), pp. 541-552
[2] IDEAS, Interoperability Development for Enterprise Application and Software Project (2005), http://www.ideas-roadmap.net
[3] Novičić, I., Koković, Z., Jakovljević, N., Ljubičić, V., Bacetić, M., Aničić, N., Marjanović, Z., Ivezić, N.: A Case Study in Business Application Development Using Open-Source and Semantic Web Technologies. In: Goncalves, R.J., Müller, J.P., Mertins, K., Zelm, M. (eds.): Enterprise Interoperability II: New Challenges and Approaches. Springer, London (2007)
[4] INTEROP, Interoperability Research for Networked Enterprises Applications and Software NoE (IST-2003-508011) (2005), http://www.interop-noe.org
[5] AIAG, Automotive Industry Action Group (2006): Inventory Visibility & Interoperability Electronic Kanban Business Process (IBP-2)
[6] ATHENA (2005): Deliverable D.A6.1, Specification of a Basic Architecture Reference Model, Version 1.0
[7] ATHENA (2004): Deliverable D.A1.1.1, First Version of State of the Art in Enterprise Modelling Techniques and Technologies to Support Enterprise Interoperability, Enterprise Modelling in the Context of Collaborative Enterprises
[8] Knothe, T., Jäkel, F., Schallock, B. (2004): ATHENA, IEM / MO²GO Tutorial: Theory and Case Studies. Fraunhofer IPK Berlin, http://www.ipk.fhg.de
[9] ATHENA, Advanced Technologies for interoperability of Heterogeneous Enterprise Networks and their Applications Project (2006), http://www.athenaip.org
[10] Java Platform, Enterprise Edition (Java EE), Enterprise JavaBeans Technology, http://java.sun.com/products/ejb/
[11] Java Web Services - SOAP with Attachments API for Java (SAAJ), http://java.sun.com/webservices/saaj/index.jsp
[12] Documentation for Sesame, http://www.openrdf.org/documentation.jsp
[13] Resource Description Framework (RDF) Schema Specification 1.0, W3C (2000), http://www.w3.org/TR/2000/CR-rdf-schema-20000327
IS Outsourcing Decisions: Can Value Modelling Be of Help? Hans Weigand Infolab, Tilburg University, P.O.Box 91053, 5000 LE Tilburg, The Netherlands [email protected]
Abstract. Value models are used to support discussions on business models as well as strategic decision-making for networked enterprises. The aim of IT strategy is to make strategic decisions on IT, such as outsourcing, in the light of the contribution of IT to the business as a whole. To facilitate discussions on the contribution of IT, it is necessary to abstract from enterprise software details and focus on the value of IT services. In this paper, we explore how the strategic value modeling approach c3-value can be of help in the decision making process. Keywords: Modelling methods, tools and frameworks for (networked) enterprises, Strategy and management aspects of interoperability, Requirements engineering for the interoperable enterprise
1 Introduction

Modern organizations in all sectors of industry, commerce and government are fundamentally dependent on their information systems and on the way these systems interoperate with the systems of their partners. These interoperability issues have only become more prominent with the recent IT outsourcing wave. During the 1990s, it was increasingly recognized that competitive advantages of IT are often short-lived, and, drawing inspiration from the Resource-Based Theory in organization theory, the emphasis shifted towards sustainable advantages based on core capabilities. Whether IT provides such a core capability has been doubted. When Eastman Kodak decided in 1989 to outsource its information systems completely, this had a significant impact, and although IT outsourcing has not always been a success, no IT manager can afford to ignore the question anymore. IT outsourcing, or more generally "IS sourcing", in the sense of "the organizational arrangement instituted for obtaining IS services and the management of resources and activities required for producing these services" [4]
involves many stakeholders. Therefore, there is a need for facilitating and structuring the decision process. To facilitate discussions on the contribution of IT, it is necessary to abstract from enterprise software details and focus on the value of IT services. In this paper, we explore whether and how the strategic value modeling approach c3-value can be of help. This method offers a value rather than a process perspective, is supported by graphical models, and incorporates insights from both the Porter-style competitive strategy literature and the Resource-Based Theory. According to the empirical study of Rivard [16], an examination of IT contributions to business performance should build upon the complementarity between the resource-based view and the competitive strategy view, as both sets of variables were found to influence performance. The outline of this paper is as follows. In section 2, we identify issues in IS outsourcing that any method aiming to support sourcing decisions should address; this is based on some key publications in the area. Section 3 gives a short summary of the c3-value method, with a redefinition of the relatively unexplored notion of value activity. In section 4, we describe an abstract IT outsourcing decision process and consider the possible contributions of c3-value in each step. Section 5 evaluates the results and provides suggestions for future research.
2 Requirements

In the Information Management literature, it is generally conceded that information technology in itself has no competitive value. In a conceptual analysis of IT and competitive advantage, Mata et al. [10] concluded that only IS management skills are likely to be a source of sustained competitive advantage, and this position has been supported by several empirical studies. Arguably, it is in line with the Resource-Based Theory (RBT) perspective [2,10], which states that resources per se do not create value; value is created by an organization's ability to utilize and mobilize these resources. This "ability" is also denoted with the term "competencies" [14]. These competencies take the form of a bundle of skills (bundles are hard to imitate). The word "capability" is used by Peppard and Ward to refer to the strategic application of competencies. If a certain firm has an IT capability, this may be because of various competencies. However, the word "capabilities" is often used as a synonym of competencies, and in this paper we follow this tradition. According to Peppard and Ward, IS management includes the definition of the role of IS in the organization and the translation of this into processes as well as long-term architectures and infrastructure on the one hand, and the development and exploitation of business solutions and technology maintenance on the other. On the basis of this definition, they are able to define 26 IS competencies. Examples are the ability to define information management policies for the organization, or the ability to develop and implement IS solutions that satisfy business needs. These competencies are viewed as competencies of the organization as a whole, and are embodied in processes. In some cases, these processes are well-defined (e.g. system development based on standard IS design methodologies); in other cases they are not defined at all. Employees contribute to these processes by means of their
skills, knowledge and attitudes. The latter are identified as "resources". The Feeny/Willcocks framework [5] identifies 9 IS core competencies, grouped under 3 faces: Design of IT architecture, Delivery of IT service, and Business and IT vision. A few remarks are in order. First, it should be clear that the RBT perspective is concerned with sustainable advantages of the company, but does not dismiss the use of short-term competitive advantages. The latter can be important as part of a first-mover strategy. Disruptive technology can provide such a short-term advantage. In this paper, we focus on long-term strategies. A second remark concerns the way resources are defined by what employees contribute through their individual knowledge. This seems overly restrictive. Another kind of resource is the systematic experiential knowledge built up over time, for example, in-depth customer knowledge based on intelligence analysis of customer behaviour. Willcocks et al. [17], following the theory of Stewart, call this kind of knowledge "structural capital", distinguished from human capital, customer capital as well as social capital. Mehta [12] describes how systematic knowledge management can contribute to essential capabilities. He proposes a so-called Knowledge-Based View (KBV) of the firm. Thirdly, it should be kept in mind that outsourcing can be done in many different ways [8,17]: single-vendor or multiple-vendor, and with varying degrees of intensity, among others. From the above, some basic requirements on an IS outsourcing decision process can be derived. One may expect it to indicate how to identify IS competencies and how they contribute to key resources. It should give particular attention to the role of knowledge capital in relation to these resources. And it should be realistic about the consequences of outsourcing in terms of additional transaction costs.
3 Value Modelling and c3-value

Value modelling has been introduced by Jaap Gordijn under the name e3-value. In this section, we first describe a variant of e3-value called c3-value that specifically supports strategic analysis. In the second part, we zoom in on the notion of value activity.

3.1 c3-value
e3-value [6] is a modeling approach originally aimed at supporting the exploration of new business networks. For these explorations, process details are not relevant. What is important is whether a collaboration can be set up that provides value to all participants. We briefly introduce the basic concepts (Fig. 1 shows an intuitive example). An actor is an economically independent entity. An actor is often, but not necessarily, a legal entity. A value object is something that is of economic value for at least one actor. Examples: cars, Internet access, a stream of music. A value port is used by an actor to provide or receive value objects to or from other actors. A value port has a direction, in (e.g., receive goods) or out (e.g.,
make a payment), indicating whether a value object flows into or out of the actor. A value interface consists of in and out ports that belong to the same actor and is used to model economic reciprocity. A value exchange is a pair of value ports of opposite directions belonging to different actors. It represents one or more potential trades of value objects between these value ports. A value activity is an operation that could be carried out in an economically profitable way for at least one actor. Given an e3-value model attributed with numbers, Net Value Sheets (NVS) can be generated that show the net cash flow for each actor involved. This gives a rough indication of whether the model is economically sustainable.
Fig. 1. Initial (simplified) e3-value model for Amazon
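The profitability check behind a Net Value Sheet can be illustrated in a few lines: given value exchanges annotated with amounts, the net cash flow of an actor is the sum of incoming minus outgoing money flows. The actors and numbers below are invented, and the non-monetary counter-flows (books, delivery) are deliberately omitted.

```python
from collections import defaultdict

# Money flows only: (payer, payee, amount). Each is one side of a
# reciprocal value exchange; the object flowing back is not priced here.
money_flows = [
    ("Customer", "Amazon", 25.0),
    ("Amazon", "Publisher", 15.0),
]

def net_value_sheet(flows):
    net = defaultdict(float)
    for payer, payee, amount in flows:
        net[payee] += amount    # money in
        net[payer] -= amount    # money out
    return dict(net)

print(net_value_sheet(money_flows))
# {'Amazon': 10.0, 'Customer': -25.0, 'Publisher': 15.0}
```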
c3-value is an extension of e3-value to support strategic analysis [17]. Why should a firm choose one business model rather than another? Following the strategic management literature and balancing the different perspectives, there are three generic answers:

• to distinguish itself from the competition,
• to (better) fulfil current or future customer needs,
• to exploit the resources, capabilities and partnerships that make up its competitive advantage.
Therefore, the strategic analysis process in c3-value contains three dimensions: competition analysis, customer analysis and capability analysis. These dimensions
are not exclusive but integrative: if a firm develops a new service to distinguish itself from its competition, this only makes sense when the service fulfils a customer need and the firm is able to deliver that service. So the strategy should always align the three dimensions, and it does not matter so much where we start. As all three dimensions are constantly changing and evolving, this alignment is not a one-shot action but a continuous effort. Following the three dimensions, c3-value distinguishes three value model views – the Customer Value Model, the Capability Resource Model and the Competitive Value Model – plus the Competition Analysis Tree (which is not a value model, but an aggregation of Competitive Value Models).

3.2 On the Notion of Value Activity
In the capabilities analysis, c3-value distinguishes resources and competencies, where competencies exploit resources. However, the ontological definitions of these concepts need to be made more precise and integrated with the e3-value ontology, as analyzed in [1]. We start from the e3-value concept of value activity, an activity that can be executed by at least one of the partners in an economically profitable way. In [1] the value activity is identified with the event concept in REA [11] and generalized into "an activity that adds value". It is reasonable to assume that operations can only be economically profitable if they add value. However, we cannot assume that any value-adding activity can be exploited as such on the market, as transaction costs may exceed the benefits. So strictly speaking, value activity (e3-value) is a specialization of value-adding activity. The problem with this restricted notion of value activity is that we typically only know whether some activity is profitable after the profitability analysis. For that reason, c3-value generalizes the e3-value definition of value activity to any activity that adds value. However, the goal of the profitability analysis is still to keep only those value activities that can be exploited on the market – which blocks endless decomposition of activities. Given this new definition, we can proceed to identify related concepts. A value-adding activity, or event in REA for that matter, adds value to something (transform in REA) while using other things. It is important to distinguish between type and role: we use the term "business object" for the "things" that play a role in the value activity; a business object can have a subject role in the activity (receiving value), and a business object can have an instrument role in the activity. For example, the value activity "acquire IT" uses instruments such as "IT acquisition skills", a vendor short-list and "finances", whereas its subject is the IT infrastructure of the company. Learning is a value activity that increases skills and can use various instruments: context-free information, e.g. a book, but most important from an RBT perspective is learning from experience. These skills are exploited in other value activities, but most prominently in the source activity itself (in the example: acquire IT). This means that there is a positive feedback loop between skills and activities. We identify a capability (or competence) with skills used in such a feedback loop; skills are only a capability when they are constantly improved by actual use. Whether it is a core capability can only be decided when comparing
with competitors. A resource can be defined as a business object that is exploited in some value activity. So a web site may be a resource for an e-commerce firm like Amazon, and the web site development skills a capability.
Fig. 2. Basic value activity pattern
Fig. 2 describes the basic pattern of a value activity, while Fig. 3 gives an example. It is called a pattern because it describes the resources and related value activities that one typically expects around a certain value activity. A value activity adds value to some value subject. The value activity itself uses certain resources on the one hand and skills on the other (both are represented as dashed rounded boxes; in addition, one may use different colours). Resources must be acquired, which is another value activity, and this requires other resources again, like money. The skills are acquired primarily from experience, which is expressed by the flow between the focal value activity and "acquire skills". The acquisition of skills can be supported by learning technology. The pattern shows one resource and one set of skills to support the value activity, but more resources can be distinguished (for example, raw product as one type and production machine or human personnel as other types), and we may distinguish between different skills. Evidently, this pattern is a generating pattern: starting from one value activity, we generate (at least) two new ones, acquire resource and acquire skills. Applying the pattern recursively to these value activities introduces yet other activities, such as the acquisition of skills in acquiring resources (i.e., procurement expertise). The generative property of the pattern can be used as a design heuristic during the development of value models, as sketched below. Note that the generative process (which is not decomposition!) can terminate for two different reasons: because we reach the boundaries of the actor, or because certain activities no longer seem relevant to expand (their economic value is too low).
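A minimal sketch of this design heuristic, assuming a simple value-decay rule in place of a real profitability estimate:

```python
def generate(activity, value, threshold=1.0, decay=0.4):
    """Recursively expand a value activity following the basic pattern.

    Expansion stops when the estimated economic value of a generated
    activity falls below the threshold (in a fuller model, also when
    the actor boundary is reached)."""
    if value < threshold:
        return []
    tree = [(activity, value)]
    for child in (f"acquire resource for '{activity}'",
                  f"acquire skills for '{activity}'"):
        tree.extend(generate(child, value * decay, threshold, decay))
    return tree

for name, v in generate("provide music", value=10.0):
    print(f"{v:5.1f}  {name}")
```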
Fig. 3. Example value activity (provide music) as an instantiation of the pattern
Besides the value activity pattern we distinguish a value transfer pattern that is used when the value-adding process is divided over multiple actors. A market transaction incurs new value activities: inbound logistics and outbound logistics for the custody transfer of the resource, as well as search and contracting activities.
4 IS Outsourcing Decision Process

Although it has always been conceded that investments in IT should be formally planned and aligned to corporate strategy, an early study by Ciborra [3] found that successful applications of IT are often due more to serendipity than to any formal planning process. This insight should be kept in mind as we now explore the structure of a decision process and how c3-value might contribute. We describe an abstract IS outsourcing decision process based on Simon's well-known general decision-making model, also used in [5]. Intelligence relates to problem identification. Design means identifying solution alternatives, and Choice means comparing the alternatives and selecting the best one. This step is followed by the implementation of the decision and subsequent evaluation. Fig. 4 depicts the main steps, typical pitfalls and a brief indication of how c3-value can provide support. Each step will be discussed below in more detail. It should be noted that the outsourcing decision process is better not considered in isolation, although in practice it often is. The fundamental question is not which IS functions should be outsourced, but how to optimize the contribution of IS to the core capabilities of the firm. In addressing this objective, some IS functions may better be outsourced, some IS functions need reinforcement, while there may also be successful IS contributions that should be exploited even more, or IS functions that have to adapt to the evolving needs of the business.
Fig. 4. IS outsourcing decision process: main steps, pitfalls and possible contributions of c3-value
4.1 Intelligence

The first question is why an organization might consider outsourcing. This is not the same as asking whether a particular IS should be outsourced or not, which is addressed later in the decision process. But why enter a decision process at all? At the time that IS outsourcing was seen as an innovation, this intelligence step amounted to a standard assessment of innovations as they propagate in the market. By now, the situation is different. Organizations should reassess their core competencies continuously. As indicated above, the c3-value method has been designed to support strategic decision-making, including the identification of resources and the capabilities to explore and evolve them. Some specific guidelines are also presented in [15] in the context of e3 competences. A less structured approach is described in [7]: here capabilities are simply collected from managers in a GDSS environment. In any case, the global capability model is worked out only to a certain level of detail and does not necessarily touch the IS functions yet.

4.2 Design
The next question is what an organization might consider to outsource, that is, the identification of alternatives. There are two ways to arrive at a first candidate list: either top-down, by further exploration of the global capability model, or bottom-up, by a systematic assessment of all the IS functions and their contribution to the enterprise resources. As a checklist one may use a general list of IS functions such as
provided by [13] or [5]. The general criterion for including a certain IS function in the short-list is that it does not seem to contribute to core capabilities. Although this is not in the primary scope of this article, we note that another short-list is also useful, namely of the core capabilities that appear not to be well served and that could benefit from more IS support, perhaps by means of insourcing.
Fig. 5. Example local capability model related to the value activity “host web site”
The shortlist of alternatives can be analyzed by means of c3-value. That is, for each function we draw an initial value model that identifies the value relationships with other components; a customer value model that analyzes the real and future needs of the business customer; a Vendor Analysis Tree that provides a systematic comparison with competitors – in this case other departments within the organization that could offer the same function, as well as outsourcing alternatives – and a local resource capability model that explicates the resources and capabilities behind the value offering. For example, if the resource is a certain web site, the initial value model might identify the value of this web site for marketing, as well as the dependencies of the web site on IT infrastructure and so forth (Fig. 5 depicts an example). The customer value model takes a closer look at the marketing function. In particular, this is useful for anticipating possible future needs; for example, the web site may support web presence only at the moment, whereas the marketing department may have a deeper need to target the right customer segment, and for that purpose will need a web site with more advanced customer tracking functions. The competition tree might include an external hosting service that offers the same function: the competition tree explicates the second-order values in which the offerings differ. At this point, the analysis is still qualitative; a quantitative analysis of costs and benefits is only made in the next decision step. It might turn out that the external hosting service excels in terms of availability, whereas the current internal service provides more adaptability.
4.3 Choice

The choice of what to do with the identified alternatives is typically based on a further quantitative analysis of each “promising” alternative, taking into account all the costs involved (including, for instance, possible switching costs and the “cost of relationship”). For this analysis, several business-economic approaches have been described in the literature, e.g. [9,13]. As there are different outsourcing strategies [8], the question is not only whether to outsource, but also how to outsource. The procedure will further include a ranking of the alternatives from which the most profitable cases are selected. A contribution of c3-value at this stage may be that it supports the conceptual modelling of the alternatives as well as (drawing on e3-value) a profitability analysis. The conceptual modelling is useful because outsourcing a certain resource has many implications that need to be discussed among the stakeholders involved. When a resource, be it material, like a web site, or more abstract, like an IT management policy, is outsourced to a vendor, the value offering is usually not changed; what does change is the learning cycle (acquire skills). Unless special arrangements are made about this with the vendor, it will no longer be possible to learn from the exploitation of the resource, or from other value activities around it (such as the acquisition of the resource). Not just the resource, but the whole cluster in which it is embedded is moved out. The reason is that value contributions cannot cross organizational boundaries directly; moving a resource to another organization therefore means that value contributions of this resource to value activities are either removed or explicitly included in a value exchange between the two organizations (but then the value contribution is no longer direct). If the organization outsources a certain resource but wants to maintain a certain level of exploitation skills, this cannot be done in the old way; a new design is required (for example, certain skills are kept up to date by monitoring the performance of the vendor from a distance, and by systematic market surveying). These skills are not identical to the skills learned in the pre-outsourcing situation, as the c3-value model will also make clear, but they may be close enough that the company will not have to invest too much if, after some years, the particular IS function has to be backsourced.

4.4 Implementation
The implementation of an outsourcing decision typically involves the development and subsequent management of SLAs with the selected vendor. It may also include a design of the collaboration process between systems, but the relationship does not necessarily involve interoperability at the system level. c3-value might be very useful for this step, as it is an excellent tool to model the essence of the relationship without going into process details. It can model the primary value offering as well as second-order value objects. These second-order values (quality of service) are preferably based on standard ontologies. For example, in the domain of IT service outsourcing, standard quality areas include Availability, Performance, Support and Security. The quality level should be assessed in at least two ways. One is to assign a certain performance indicator to
each area that measures its level, for example MTBF in the case of Availability. It is equally important to measure the maturity of the service, for which the COBIT maturity model can be used.

4.5 Evaluation
After the implementation, the organization must evaluate the outcomes of the outsourcing. Does the vendor provide satisfactory results? What are (perhaps unexpected) outcomes of the organizational change? According to recent studies [18], contracts are better made for a short term (2-3 years) only, to avoid lock-in – although this depends on the kind of outsourcing relationship. In all cases, data must be collected to support the end-of-period decisions. According to [8], firms must have a long-term perspective on the evolution of their IT outsourcing relationships. In the above, we stated that the monitoring of the contract and relationship must be a value activity in the new c3-value model. This value activity must be given an organizational position and realization.
5 Conclusion

IT outsourcing is a delicate process that involves many stakeholders and is much more than the design of an interoperable system, although the latter may be a result of it. In this paper, we have described an abstract decision process, and we have explored how the c3-value modelling method can be used to support the various steps. In the research literature, we have found several articles that support specific steps in the process, such as the calculation of agency costs, but not a method that supports the process as a whole in a systematic way. We can conclude from this exploratory study that c3-value might support several steps in the process and does provide a coherent framework, firmly grounded in the resource-based view of organizations. The actual evaluation of the usability of the method will take place in further action-research projects. Furthermore, we have developed a new definition of value activity (based on Porter’s notion of value-added activity) as well as a new concept, the value activity pattern, which structures the relationships between value activities, resources and capabilities. Value modelling used to be focused primarily on business interactions as value exchanges; equally interesting, however, is its application to the analysis of value chains and value creation cycles.
References

[1] Andersson B. et al (2006) Towards a Reference Ontology for Business Models. In: D.W. Embley, A. Olivé, S. Ram (Eds.): Conceptual Modeling – ER 2006, Tucson, AZ, USA, LNCS 4215, Springer, 482-496.
[2] Barney J.B. (1996) The Resource-based Theory of the Firm. Organization Science 7(5), 131-136.
[3] Ciborra C. (1994) The grassroots of IT and strategy. In: Ciborra C., Jelassi T. (Eds), Strategic Sourcing of Information Systems: A European Perspective, Wiley, Chichester, 3-24.
[4] Dibbern J., Goles T., Hirschheim R., Jayatilaka B. (2004) Information Systems Outsourcing: A Survey and Analysis of the Literature. ACM SIGMIS Database 35(4), 6-102.
[5] Feeny D., Willcocks L. (1998) Core IS Capabilities for Exploiting Information Technology. Sloan Management Review 39(3), 9-21.
[6] Gordijn J., Akkermans J.M., van Vliet J.C. (2000) Business Modeling is not Process Modeling. Conceptual Modeling for E-Business and the Web, LNCS 1921, Springer-Verlag, 40-51.
[7] Lin C., Hsu M-L. (2007) A GDSS for Ranking a Firm’s Core Capability Strategies. Journal of Computer Information Systems, Summer 2007, 111-130.
[8] Kishore R., Rao H.R., Nam K., Rajagopalan S., Chaudhury A. (2003) A Relationship Perspective on IT Outsourcing. Communications of the ACM 46(12), 87-92.
[9] Maltz A.B., Ellram L.M. (1997) Total cost of relationship: an analytical framework for the logistics outsourcing decision. Journal of Business Logistics 18(1), 45-65.
[10] Mata F.J., Fuerst W.L., Barney J. (1995) Information Technology and sustained competitive advantage: a resource-based analysis. MIS Quarterly 19, 487-505.
[11] McCarthy W.E. (1982) The REA Accounting Model: A Generalized Framework for Accounting Systems in a Shared Data Environment. The Accounting Review 57(3), 554-578.
[12] Mehta N. (2007) The value creation cycle: moving towards a framework for knowledge management implementation. Knowledge Management Research & Practice 5, 126-135.
[13] Ngwenyama O.K., Bryson N. (1999) Making the information system outsourcing decision: a transaction cost approach to analyzing outsourcing decision problems. European Journal of Operational Research 115(2), 351-367.
[14] Peppard J., Ward J. (2004) Beyond strategic information systems: towards an IS capability. Journal of Strategic Information Systems 13(2), 167-194.
[15] Pijpers V., Gordijn J. (2007) e3 competences: Understanding core competences of organizations. Proc. BUSITAL’07 (CAiSE workshop).
[16] Rivard S., Raymond L., Verreault D. (2006) Resource-based view and competitive strategy: an integrated model of the contribution of information technology to firm performance. Journal of Strategic Information Systems 15(1), 29-50.
[17] Weigand H., Johannesson P., Andersson B., Bergholtz M., Edirisuriya A., Ilayperuma Th. (2007) Strategic analysis using value modeling – the c3-value approach. Proc. HICSS ’07, IEEE Press.
[18] Willcocks L., Hindle J., Feeny D., Lacity M. (2004) IT and Business Process Outsourcing: The Knowledge Potential. Information Systems Management 21(3), 7-15.
Process Composition in Logistics: An Ontological Approach 15

A. De Nicola, M. Missikoff, L. Tininini

Istituto di Analisi dei Sistemi ed Informatica “A. Ruberti” – CNR, Rome, Italy
{denicola, missikoff, tininini}@iasi.cnr.it

15 This work is partially supported by the Tocai Project (http://www.dis.uniroma1.it/~tocai/), financed by the FIRB Programme of the Italian Ministry of University and Research (MIUR).
Abstract. Maintenance of complex engineering artifacts is a demanding task that often consumes significant resources. An even more critical situation is represented by the ad-hoc interventions that take place in the presence of a failure. Autonomic Logistics Services (ALS) is an emerging approach that aims, on the one hand, to minimize periodic maintenance interventions and, on the other, to apply advanced diagnostics to anticipate unexpected failures, minimizing ad-hoc interventions. For the latter, ALS requires dynamic service composition, typically consisting of interleaved diagnostic and repair operations. In this paper we propose an ontological framework supporting ALS and the dynamic composition of its ad-hoc maintenance programs. In particular we propose BPAL (Business Process Abstract Language) as the formal ontological foundation, derived from the (informal) BPMN proposed by the OMG.

Keywords: Ontology based methods and tools for interoperability, Business Process Management, Service oriented Architectures for interoperability
1 Introduction

The maintenance program for large engineering artifacts (e.g. a radar system or a helicopter) is a very critical factor for the overall success of a large project. While scheduled maintenance interventions are relatively easy to structure and plan, extraordinary interventions often take place in critical situations and require ad-hoc planning of operations. Ad-hoc operations are costly, and hence there is an increasing need to reduce extraordinary interventions. An economically efficient maintenance plan is gaining importance also because many enterprises are increasingly offering Contractor Logistics Support, i.e., a form of warranty that covers the entire life cycle of the sold product, with quality
parameters directly specified in the support contract, e.g., the minimum acceptable Mean Time Between Failures or Mean Time To Repair. This is particularly critical for complex integrated systems whose life cycle may span 10-20 years, or even more. The real challenge here is to minimise faults and failures, as they can produce serious damage and far more expensive maintenance interventions, while optimally identifying the sequence of scheduled maintenance events. Ideally, scheduled maintenance events should be reduced to a minimum while: (i) identifying when an unplanned intervention is required, before the actual failure takes place; (ii) alternatively, providing a timely and effective intervention in case of unpredictable failure events; (iii) in both cases, planning the intervention in an optimal fashion, taking into consideration the structure of the system/device, the diagnosis of the malfunction, the intervention capacity, the conditions in the field, and the specified quality parameters. In other words, an ideal approach should schedule interventions when they are really needed (i.e. just before the failure would occur), providing the right action, in the right place, involving the right people.

In this paper we propose an ontology-based modelling framework, called BPAL (Business Process Abstract Language), to address the problem of defining an ad-hoc maintenance strategy based on the structure of the system and of the diagnostic process, as well as on the conditions of the operational context. This effort is targeted at the implementation of Autonomic Logistics Services (ALSs), where (at least ideally) the maintenance program would be derived automatically and the majority of the common logistics and maintenance actions for a complex system would become automated, thus minimising manpower and human errors. BPAL has been primarily conceived to provide a formal semantics for BPMN, a graphical language for BP modelling that is gaining consensus in the business world. In particular, it provides a procedural semantics for translating BPAL abstract processes into an executable form and a declarative semantics to support automated analysis by an inference engine.

The rest of the paper is organized as follows. Section 2 introduces Autonomic Logistics Services with their main characteristics and objectives. In Section 3, we briefly introduce a central issue of the proposed solution, namely dynamic process composition (DPC). In Section 4, we illustrate the main components and issues related to the BPAL approach, with a specific focus on ALS activities. Then, in Section 5, the expected benefits of the semantic ALS approach are illustrated. Related works are presented in Section 6, while conclusions are finally discussed in Section 7.
2 Autonomic Logistics Services: Objectives and Operations

An ALS should provide an advanced level of logistic and maintenance support, according to three main lines of intervention:

- diagnostics: monitoring and analysing anomalies and faults, triggering and supporting a timely and effective corrective maintenance process, along with the related logistics flows and processes;
- prognostics: preventing possible (highly probable) anomalies and faults, by introducing a high flexibility in scheduled maintenance interventions;
- health management: assessing the system operational environment and, when necessary, suggesting corrective actions aimed at maintaining the system in the best operational conditions.
For instance, the data concerning a control radar system would be automatically downloaded into a data warehouse that would in turn mine the data for anomalies, in an effort to detect existing or impending faults. Additionally, unpredictable critical faults would be detected and isolated in real time during normal operation and, in order to enhance reliability and safety, would automatically trigger the ordering and tracking of spare parts, along with the alerting, diagnostic and logistic support of specialised personnel suited to performing the required maintenance intervention.

As in many other business contexts, the methods commonly used to represent maintenance and logistics processes are mainly informal, aimed at supporting the enterprise organization at the human-communication level rather than system automation and interoperability by means of shared formalised knowledge. On the other hand, the ambitious objectives of Autonomic Logistics Services necessarily imply a formal description of both the systemic part of the artifact to be maintained (e.g., the equipment, with its perceivable behaviour in regular and abnormal conditions, the spare parts, the part-of relationships between pieces of equipment, the several kinds of faults and failures) and the dynamic part of ALS (in particular the diagnostic and maintenance processes), which is increasingly based on Service Oriented Architectures, where the process logic is represented by standard Business Process (BP) models, e.g., using BPMN [8].

One side of the problem is organizational in nature. In fact, logistics is an area where two different communities, business people and engineering experts, need to cooperate. In order to close the gap between the ways business people and engineers conceive the diagnostic and maintenance BPs, our solution is based on a semantic approach, combining well-established ontology-based techniques (e.g., the OPAL ontology design patterns [2]) to represent the structural part of the system with an ontological framework 16 to represent the BP semantics, based on the Business Process Abstract Language (BPAL). We propose BPAL as a framework for the management of BP ontologies, primarily conceived to provide a formal semantics to BPMN, an informal BP modelling method gaining consensus in the business world. BPAL is an abstract language (no drawing symbols are provided) with a procedural semantics (allowing a translation to an executable form, BPEL) and a declarative semantics, to be processed by an inference engine.
16 The term “framework” is overloaded. Here we mean a language (lexicon and grammar), a set of axioms, an inference mechanism, and a collection of methods, tools, and best practices aimed at producing a valid BP model. Here we illustrate the BPAL framework at a descriptive level, since a formal treatment falls outside the scope of this paper.
3 Dynamic Process Composition

Dynamic Process Composition has been investigated for a long time in a wide variety of contexts, e.g. in automatic program synthesis. In general, this is a very hard problem, extensively elaborated in the past with limited practical results. Here we intend to address the problem in a specific context, starting from a number of pre-existing elements. In particular, we assume that DPC is based on the following elements:

- Top Generic Processes (TGPs). A set of process skeletons that represent the starting point of an ALS process specification.
- ALS activities. A set of generic activities, structured according to two hierarchy relations: decomposition and specialization.
- ALS operations. A set of elementary actions that can be performed either automatically by a machine, manually by a human expert, or synergically by both.
- A DB storing the history of previous significant interventions.
The process composition starts with the analysis of the data collected on the apparatus (possibly) requiring a maintenance intervention. The data analysis guides the following steps, which consist of a mix of diagnosis and repair actions. In so doing, the system tries to identify the top generic process, thus providing a first support to the expert. Then, the TGP is progressively refined to include ALS activities and, finally, ALS operations. The latter, when automated, are supported by e-services, and a SOA can therefore be used for the enactment of the ALS process. The underlying diagnosis-decision-action pattern is similar to patterns that can be found in Game Theory; for this reason, part of our future studies will be devoted to exploring solutions developed in this field. In this paper we focus on the basic mechanisms provided in BPAL to support the dynamic composition of BPs, as well as their expansion and refinement. Such mechanisms will be used for the dynamic composition of ALS processes.
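As a rough illustration of this first refinement step, consider the following Prolog sketch; the fact names are invented for the example and no such encoding is prescribed by the paper:

    % Hypothetical knowledge base: anomalies mapped to top generic
    % processes, and ALS activities attached to a TGP via decomposition.
    tgp(overheating, cooling_system_maintenance).
    pof(inspect_pump, cooling_system_maintenance).
    pof(replace_coolant, cooling_system_maintenance).

    % First refinement: pick the TGP for an anomaly and list its activities.
    compose(Anomaly, TGP, Activities) :-
        tgp(Anomaly, TGP),
        findall(A, pof(A, TGP), Activities).

A query such as compose(overheating, T, As) would return the skeleton process and its first-level activities; further levels (down to ALS operations) would be obtained by applying the same decomposition step recursively.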
4 BPAL Basic Components

BPAL is structured according to a number of modeling notions defined in accordance with the business culture (such as activity, decision, role), also corresponding to BPMN constructs. The set of symbols denoting such modeling notions constitutes the BPAL lexicon, while the corresponding concepts, expressed as atomic formulae (atoms), represent the core BP ontology. BPAL atoms can be combined to build an Abstract Diagram that, once validated with respect to the BPAL Axioms, corresponds to an Abstract Process. An isomorphism can be defined between an abstract process and a BPMN process, the former providing the formal semantics of the latter. More precisely, the main components of the BPAL framework are:
- BPAL Atoms, represented in the form of logical predicates, are the core of the BPAL ontological approach. Any business process modeling starts by instantiating one or more BPAL Atoms provided by the process ontology framework. Note that this is a second-order instantiation, since BPAL atoms represent meta-concepts used to create a BP ontology. Atoms represent unary or n-ary business concepts, plus some special atoms specifically introduced to support the modeling process. As an example, unary predicates can be used to represent an action (e.g. “check the component voltage”) or an actor (e.g. “maintenance service engineer”), while n-ary predicates can represent precedence relations among activities and decision steps, as well as messages exchanged between activities. Furthermore, special atoms are provided, enabling the user to express specialisation or part-of relations among concepts.
- BPAL Diagram: a set of BPAL Atoms constitutes a BPAL (abstract) diagram. Generally speaking, an abstract diagram is an intermediate product, which can be manipulated during the design process. A BPAL diagram is not required to satisfy all the axioms (see below). The operations assert and retract can be used to incrementally build an abstract diagram by introducing (and removing) atoms.
- BPAL Axioms, representing the constraints that a BPAL Diagram must satisfy to be validated as a BPAL Process. They are conceived starting from the guidelines for building a correct BPMN process. As there is no formal specification of BPMN processes, we provide here a “reasonable” solution, derived from the analysis of a number of publications and practical experiences of what a legal BPMN process should be. However, future official specifications can be incorporated in the framework by simply modifying the proposed axiomatization (see below). An example of a BPAL axiom is: “only decision points can have more than one immediate successor activity.” We also distinguish three kinds of Axioms: Diagramming, Domain and Application Axioms, with an increasing degree of specialization with respect to the particular domain of interest. A BPAL Diagram is a Draft if it satisfies all Diagramming Axioms: in some sense, a Draft Diagram has an intermediate level of refinement between a generic Diagram and a Process.
- BPAL (abstract) Process, which is a BPAL Diagram that has been validated with respect to the BPAL Axioms. The validation is achieved by supplying the abstract diagram and the BPAL axioms to a reasoner (we are currently experimenting with JTP [17] and SWI-Prolog [18]).
- BPAL Application Ontology, which is a collection of BPAL Processes cooperating in a given application.
In the following subsections each component of the BPAL Framework is elaborated in more detail.
4.1 BPAL Atoms
The BPAL atoms are predicates where functors represent ontological categories, while arguments are typed variables representing concepts in the Core Business Ontology (CBO). The CBO is built according to the OPAL methodology [2]. For instance, an activity variable can be instantiated with a process name in the CBO. Variables are characterized by a prefixed underscore and, in building a BP ontology, will be instantiated with concept names of the category indicated by the functor. A process ontology is built by instantiating the following unbound predicates with the constants (i.e., concept names) declared in the CBO.

Unary predicates (upre)
- act(_a) – a business activity, element of an abstract diagram.
- role(_x) – a business actor, involved with a given role in one or more activities.
- dec(_bexp) – a generic decision point. Its argument is a Boolean expression evaluated to {true, false}. It is used in the preliminary design phases when developing a BP with a stepwise refinement approach. In later phases, it will be substituted with one of the specific decision predicates (see below).
- adec(_bexp), odec(_bexp) – decision points representing a branching in the sequence flow, where the following paths will be executed in parallel or in alternative, respectively.
- cont(_obj) – an information structure, for instance a business document (e.g., purchaseOrder).
- cxt(_obj) – a context, represented by a collection of information structures.
Relational predicates
- prec(_act|_dec, _act|_dec) – a precedence relation between activities, decisions, or an activity and a decision.
- xdec(_bexp, _trueAct) – a decision where only one successor will receive the control, depending on the value of _bexp.
- iter(_startAct, _endAct, _bexp) – a subdiagram, having _startAct and _endAct as source and sink, respectively. It is repeated until the Boolean expression _bexp evaluates to true.
- perf(_role, _act) – a relation that indicates which role(s) is dedicated to which activities.
- msg(_obj, _sourceNode, _destNode) – a message, characterized by a content (_obj), a sending activity (_sourceNode), and a receiving activity (_destNode).
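For illustration, a small fragment of a maintenance BP ontology could instantiate these atoms as follows (the concept names are our own, chosen to echo the examples given earlier):

    perf(maintenance_engineer, check_component_voltage).
    prec(check_component_voltage, evaluate_measures).
    xdec(fault_confirmed, order_spare_part).
    msg(diagnostic_report, evaluate_measures, plan_intervention).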
Development predicates

The following predicates are used during the BP development process. They are part of the BPAL core ontology, but are not used to categorize business concepts (and therefore will not contribute to the generation of the executable image). Rather, they provide support in the development of draft BPs.

- pof(_upre, _upre) – Part-of relation that applies to any unary predicate. It allows for a top-down decomposition of concepts.
- isa(_upre, _upre) – Specialization relation that applies to any unary predicate. It allows one to build a hierarchy of BP concepts, supporting a top-down refinement.
A BPAL Process is fully refined only if none of its atoms can be further decomposed or specialised. Finally, we have two operations acting on a BP abstract diagram:
- Assert(BP_Atom). It allows a new atom to be included in the ontology;
- Retract(BP_Atom). It allows an existing atom to be removed from the ontology.
To improve readability, multiple operations of the same sort can be compacted into a single operation on multiple arguments, e.g. Assert([BP_Atom1, ..., BP_Atomn]).
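Using the development predicates, the refinement condition stated above can also be checked mechanically. The following SWI-Prolog sketch shows one possible encoding (our assumption; the paper does not fix one):

    :- dynamic pof/2, isa/2.

    % An atom is still refinable if something is declared as its part or
    % as its specialization; a diagram is fully refined when no atom is.
    refinable(X) :- pof(_, X).
    refinable(X) :- isa(_, X).
    fully_refined(Atoms) :- \+ (member(A, Atoms), refinable(A)).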
4.2 BPAL Diagrams and Processes

By using BPAL atoms it is possible to compose an abstract diagram first and then, after its validation, a BPAL process. An abstract diagram is a set of BPAL atoms respecting the (very simple) formation rules. Below we illustrate (Fig. 1) an abstract diagram; the presentation is supported by a concrete diagram, drawn according to a BPMN style. The node labels are concepts in the CBO.
Fig. 1. A simple BPMN Diagram
The corresponding BPAL abstract diagram is the following:

act(a), act(b), act(c), act(d), act(e);
prec(a,b), prec(a,c), prec(c,d), prec(b,d), prec(d,e).

Please note that the order of the atoms is immaterial in a BPAL diagram and that, in the punctuation, comma and semicolon are equivalent, while the full stop ends the abstract diagram.
4.3 BPAL Axioms
The BPAL framework is characterised by a number of axioms that must be satisfied by a BPAL Process. As anticipated, in BPAL we distinguish three categories of axioms; here, for the sake of brevity, we only consider diagramming axioms. They are conceived starting from the guidelines for building a correct BPMN process. As said earlier, there is neither a formal specification nor a widely accepted view of the formation rules for a BPMN process; therefore we provide a “reasonable” solution, derived from the analysis of a number of publications and practical experiences. In any case, the framework is sufficiently flexible to incorporate future revisions, and the proposed axiomatization can be updated as soon as an official specification becomes available. A complete treatment of the BPAL axiomatic theory is beyond the scope of this paper. Here we outline the main features of the proposed framework, showing how it can provide a good trade-off between completeness and conciseness. BPAL axioms address different features of a BPMN process formalization. Here we report just one axiom, to provide a first insight into the BPAL methodology.

Axiom 1 – Branching Axiom. If a node is followed by two or more immediate successor activities, then it must be a decision:

∀x ∈ CBO : S(x) = {y ∈ CBO : prec(x, y)}, |S(x)| > 1 → dec(x)
According to the Branching Axiom, the above reported diagram is an invalid process and needs to be transformed into the following diagram:

act(a), act(b), act(c), act(d), act(e), dec(k);
prec(a,k), prec(k,b), prec(k,c), prec(c,d), prec(b,d), prec(d,e).

This transformation is obtained by a number of updates on the original BPAL abstract diagram, sketchily summarised as follows:

assert([dec(k), prec(k,b), prec(k,c), prec(a,k)])
retract([prec(a,b), prec(a,c)])
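Since the authors report experiments with SWI-Prolog [18], such a check can be pictured concretely. In the sketch below (our encoding, not taken from the paper), the original diagram is stored as facts and the Branching Axiom violation is detected by a single rule:

    % The original (invalid) diagram as Prolog facts.
    act(a). act(b). act(c). act(d). act(e).
    prec(a,b). prec(a,c). prec(c,d). prec(b,d). prec(d,e).

    % X violates the Branching Axiom if it is a plain activity
    % (not a decision) with more than one immediate successor.
    violates_branching(X) :-
        act(X),
        findall(Y, prec(X, Y), Successors),
        length(Successors, N),
        N > 1.

The query violates_branching(X) returns X = a, precisely the node in front of which the decision point dec(k) is introduced in the corrected diagram.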
Please note that we now have a Generic BPAL abstract process. It is a process, since the Branching Axiom is no longer violated and the diagram is therefore validated. However, it is Generic, since there is a generic atom dec(k) that needs to be substituted with a specific atom (one of adec, odec, xdec). Such a substitution, assuming that in the following steps of the BP design we discover that we need an AND branching, will be achieved by the following design operations:

assert: adec(k)
retract: dec(k)
Fig. 2. A Generic BPAL Process in BPMN notation
Further steps of design refinement will involve other BPAL atoms, to specify roles, messages, etc.
5 The Expected Benefits of the Semantic ALS Approach

The method presented in this paper represents the core of the SALSA (Semantic Autonomic Logistics Services and Applications) environment, aiming at an extensive application of Semantic Web solutions in the context of Autonomic Logistics services. In SALSA, we envisage a federation of ontologies, as sketchily reported below:

- System Architecture Ontology (SAO), modelling devices and complex artefacts, with their components and operational characteristics;
- Failures and Malfunctioning Ontology (FMO), modelling the manifest behaviour (or absence of it) revealing abnormal situations;
- Monitoring and Diagnostics Ontology (MDO), representing the processes and activities aimed at identifying the existence of instances of the previous ontology;
- Maintenance and Repairing Ontology (MRO), modelling the transformations to be applied to the system architecture ontology in order to correct identified failures (i.e., removing instances of FMO).
SALSA includes a reasoner that supports a large part of the above operations. The above ontologies, being federated, have cross-references and refer to a unique name space. This is an important feature that opens the possibility of a high level of automation. For instance, we can link elements of SAO, FMO, and MDO to support the generation of MRO processes. In essence, we assume that the repairing process is dependent on the system architecture and the identified failures, discovered by means of a diagnostic activity.
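Purely as an illustration of such a cross-ontology link (all names below are invented; the SALSA ontologies themselves are OWL-based), a repair process could be proposed by joining elements of the three other ontologies:

    % SAO: a component of the system architecture.
    sao_component(radar_power_unit).
    % FMO: an observed failure affecting that component.
    fmo_failure(radar_power_unit, voltage_drop).
    % MDO: the diagnostic activity that identifies the failure.
    mdo_diagnosis(voltage_drop, check_component_voltage).

    % MRO: propose a repair process for a diagnosed failure.
    mro_repair(Component, replace(Component)) :-
        sao_component(Component),
        fmo_failure(Component, Failure),
        mdo_diagnosis(Failure, _DiagnosticActivity).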
6 Related Works

Several languages for BPs have been proposed in the literature. Such languages can be roughly gathered into three large groups.
Descriptive languages. They have been conceived within the business culture, but lack the systematic formalization necessary to be processed by an inference engine. We find in this group diagrammatic languages, such as EPC [12], IDEF [3, 4], and BPMN [8, 6]. UML Activity Diagrams [9] can also be listed here, even if originally conceived for other purposes. The BPs defined with these languages are mainly conceived for inter-human communication and are not directly executable by a computer.

Procedural languages. They are fully executable by a computer but are not sufficiently intuitive for humans, and lack a declarative semantics, necessary to be processed by a reasoning engine. Examples of these languages are BPEL [5] and XPDL [6, 16].

Formal languages. They are based on rigorous mathematical foundations, but are difficult to understand and are generally not accepted by business people. In this group we find several formalisms, e.g. those based on PSL [1, 13], Pi-Calculus [7], and Petri Nets [10].

Finally, there are the ontology-based process languages, such as OWL-S [14], WSMO [11], and WSDL-S [15]. This group of languages has a wider scope, aiming at modeling semantically rich processes in an ontological context, and was not conceived in direct connection with the business world.
7 Conclusions

BPAL has been conceived to gather the most relevant features of the above mentioned categories of BP languages. Since its constructs are derived from BPMN, it maintains a close correspondence with this standard, and it is therefore easy to provide a diagrammatic environment that satisfies the descriptive needs of some communities (e.g., business people). Many BPMN tools (e.g., Intalio BPMS Designer) allow the generation of executable code, typically BPEL, starting from the built diagram. The tight correspondence between the BPMN and BPAL constructs allows us to easily derive a correspondence between BPAL and BPEL constructs, and then provide the generation of an executable BPEL file starting from a complete BPAL process. Finally, BPAL has a formal background, since it is rooted in mathematical logic, and in particular Horn Logic, used to formally verify the consistency of a BPAL business process. With respect to semantic Web service proposals such as WSMO or OWL-S, BPAL adopts the opposite strategy: instead of proposing a “holistic” approach that must be embraced as an exclusive choice, BPAL proposes a progressive approach, complementary to existing commercial SOA solutions. It is complementary with respect to existing SOA solutions, since one may start with a “non-semantic” implementation and decide later on to add semantics. It is progressive, since one may decide to what extent to “inject” semantics into the SOA (i.e., what to semantically annotate) and then progressively proceed along the semantic enrichment process as the need emerges.
References

[1] Bock C, Gruninger M (2005) PSL: A Semantic Domain for Flow Models. Software and Systems Modeling Journal 4: 209-231.
[2] D’Antonio F, Missikoff M, Taglino F (2007) Formalizing the OPAL eBusiness ontology design patterns with OWL. I-ESA Conference 2007.
[3] IDEF. IDEF0 – Function Modeling Method. [http://www.idef.com/IDEF0.html]
[4] IDEF. IDEF3 – Process Description Capture Method. [http://www.idef.com/IDEF3.html]
[5] Khalaf R, Mukhi N, Curbera F, Weerawarana S (2005) The Business Process Execution Language for Web Services. In: Dumas M, van der Aalst W, ter Hofstede AHM (eds) Process-Aware Information Systems. Wiley-Interscience, 317-342.
[6] Mendling J, zur Muehlen M, Price A (2005) Standards for Workflow Definition and Execution. In: Dumas M, van der Aalst W, ter Hofstede AHM (eds) Process-Aware Information Systems. Wiley-Interscience, 281-316.
[7] Milner R (1999) Communicating and Mobile Systems: the Pi-Calculus. Cambridge University Press, ISBN 0-521-65869-1.
[8] OMG (2006) Business Process Modeling Notation Specification. Version 1.0, February 2006. [www.bpmn.org/Documents/OMG%20Final%20Adopted%20BPMN%2010%20Spec%2006-02-01.pdf]
[9] OMG (2007) Unified Modeling Language: Superstructure, version 2.1.1. [http://www.omg.org/docs/formal/07-02-03.pdf]
[10] Peterson JL (1977) Petri Nets. ACM Computing Surveys 9(3): 223-252.
[11] Roman D, Keller U, Lausen H, et al (2005) Web Service Modeling Ontology. Applied Ontology 1(1): 77-106.
[12] Scheer A-W, Thomas O, Adam O (2005) Process Modeling Using Event-Driven Process Chains. In: Dumas M, van der Aalst W, ter Hofstede AHM (eds) Process-Aware Information Systems. Wiley-Interscience, 119-145.
[13] Schlenoff C, Gruninger M, et al (2000) The Process Specification Language (PSL) Overview and Version 1.0 Specification. NIST.
[14] The OWL Services Coalition (2003) OWL-S: Semantic Markup for Web Services. [http://www.daml.org/services/owl-s/1.0/owl-s.pdf]
[15] W3C (2005) Web Service Semantics – WSDL-S. [http://www.w3.org/Submission/WSDL-S]
[16] WFMC (2005) Process Definition Interface – XML Process Definition Language, version 2.00. [http://www.wfmc.org/standards/docs/TC-1025_xpdl_2_2005-1003.pdf]
[17] Fikes R, Jenkins J, Frank G (2003) JTP: A System Architecture and Component Library for Hybrid Reasoning. Proceedings of the Seventh World Multiconference on Systemics, Cybernetics, and Informatics, Orlando, Florida, USA, July 27-30, 2003.
[18] Wielemaker J, Huang Z, van der Meij L (2007) SWI-Prolog and the Web. Journal of Theory and Practice of Logic Programming (to appear).
Interoperability of Information Systems in Crisis Management: Crisis Modeling and Metamodeling

Sébastien Truptil1, Frédérick Bénaben1, Pierre Couget2, Matthieu Lauras1, Vincent Chapurlat3, and Hervé Pingaud1

1 Ecole des Mines d’Albi-Carmaux, Campus Jarlard, Route de Teillet, 81000 Albi, France {truptil, benaben, lauras, pingaud}@enstimac.fr
2 Préfecture du Tarn, Place de la Préfecture, 81000 Albi, France [email protected]
3 Ecole des Mines d’Alès, Parc Scientifique Georges Besse, 30900 Nîmes, France [email protected]
Abstract. In a crisis situation (natural disaster, industrial accident, etc.), several partners have to act simultaneously to resolve the emergency. Their coordination in such a context is a crucial point, especially in the first moments of the crisis. Their interoperability (more precisely, the interoperability of their Information Systems) is a major component of the success of the network. The French ISyCri 17 project proposes to tackle this topic according to two aspects: (i) the responsiveness of the network (its ability to act rapidly and efficiently) and (ii) the flexibility of the obtained system of systems (its ability to evolve and follow the changing situation). This is thus a problem of agility of the partners’ ISs. This article presents the first results of this work: a metamodel of crisis situations and its ontological links with collaborative process design, as well as the treatment of a first case study, an NRBC 18 exercise.

Keywords: ontology, crisis management, interoperability, information system, collaborative process, system of systems, SOA, MDA.

17 Interoperability of Systems in Crisis situation.
18 Nuclear, Radiological, Bacteriological, Chemical.
1 Introduction

In a crisis situation, different actors have to act, generally simultaneously and in a hurry, to reach the shared goal of crisis reduction. These actors are heterogeneous by nature (mission, equipment, culture, etc.). To be efficient, they have to collaborate, or at least to coordinate their actions, in order to build a coherent
response. However, each performer remains autonomous in the deployment of its means and the achievement of its own objectives (even if a general entity is often in charge of the global authority). Generally, each actor owns its own information system (IS), adapted to its own needs, resources and processes. That is why we believe, as shown in [1], that the major issue for the integration of partners in a crisis context is the ability of these information systems to communicate, to exchange information, to share services and to coordinate their behaviours according to the global goal of crisis reduction. This observation is, for the authors, the keystone of the efficiency of crisis reduction. The point of the ISyCri project is to provide partner organizations involved in such a situation with a Mediation Information System (MIS) able to merge their respective heterogeneous and autonomous ISs into a global System of Systems (SoS). This MIS, as the linking support between ISs, has to meet two main requirements: (i) providing a fast and efficient link between ISs (in order to ensure responsiveness) and (ii) following the unavoidable evolutions of the crisis by remaining adapted and rightly dedicated to the specific needs of the crisis situation and to the – possibly changing – group of involved partners working on its resolution (in order to ensure flexibility). Thus the MIS design should deliver an agile result (agility can be seen as the union of responsiveness and flexibility).

The global principle of the ISyCri project and its contents are presented in the first section of this article. We then focus on the crisis modeling part, as a way to build the first step of MIS design. In order to meet its objectives, the ISyCri project plans to set up a global crisis metamodel. This metamodel can be seen as the combination of two ontologies: one for crisis characterization and one for reaction processes. The knowledge supplied by the instantiation of this metamodel can be used, first, to model the adequate MIS to react promptly to the situation and, second, to maintain the model of the crisis in order to follow its evolution. The proposed metamodel and the ontological objectives are shown in the second part of this paper. Thirdly, the authors present the instantiation of the metamodel on a particular case: a civilian training exercise in an NRBC context. This third part shows the adequacy of the proposed metamodel and the choices the authors made to create the dedicated crisis model. This is a way to test the metamodel and also a capitalization phase, offering formalized knowledge on crisis situations for the continuation of the project.

ISyCri is a project supported by the French Research Agency (ANR, for Agence Nationale de la Recherche 19) involving five main partners: two companies (THALES-Communication and EBM-WebSourcing) and three academic labs (DR/GI from Ecole des Mines d’Albi-Carmaux, IRIT from Université de Toulouse 1 and LGI2P from Ecole des Mines d’Alès). Institutional partners also provide their user point of view. The project runs from the end of May 2007 to the beginning of June 2009.
19 French national agency for research.
2 Overview of the ISyCri Project

The authors believe that the integration of partners is a crucial step on the way to success in crisis reduction. The authors’ point is to solve this issue of integration of partners by means of ISs interoperability. According to InterOp 20, interoperability is “the ability of a system or a product to work with other systems or products without special effort from the customer or user” [2]. For the authors, interoperability can be seen as the collaborative maturity level (of organizations) adequate for integration, which can be seen as the ultimate collaboration level (of the network).

20 InterOp is a European Network of Excellence (NoE) dedicated to Interoperability issues.

2.1 A Method of Mediation Information System Design
Ensuring partners’ ISs interoperability is not a trivial issue. The authors believe it is rational to tackle this topic on the basis of the partners’ existing ISs. Another approach would be to rebuild the partners’ ISs, but the authors believe such a radical approach cannot be considered realistic. The goal of ISyCri is thus to provide a method of MIS design. The two crucial needs of responsiveness and flexibility (i.e. agility) must be covered. Due to the nature of the MIS (an information system), we propose to develop our design approach on the MDA 21 precepts. The general view of this design method is shown in the next figure (using the “Y” of the MDA approach):

21 Model-Driven Architecture.
Fig. 1. Proposed MIS Design method in the MDA context (an overview).
Knowledge about the crisis context allows the specific situation to be modelled according to the crisis metamodel and the corresponding ontology. Using this characterization, the crisis ontology provides a deduced model of an adapted collaborative process: the CIM (which can be enriched and validated). This step
586
S. Truptil, F. Bénaben, P. Couget, M. Lauras, V. Chapurlat, H. Pingaud
is based on the adaptation of the results about the deduction of collaborative process model (in industrial context) from collaboration characterization (using ontology) presented in [3] and [4]. A model transformation mechanism use this CIM to build (in UML 22 or in a specific DSL 23) the logical view of the MIS: the PIM. This mechanism is directly inherited from [5] and propose a SOA 24 structure for the MIS. Concurrently, the ESB 25 targeted technologic platform is modeled: the PM. ESB technology has been chosen because of its obvious adequacy with SOA principles. Some partners of the ISyCri project are skilled in these technologies. Next, a projection (logic to technology) provides a MIS computable model: the PSM. This design method and ensure the exigency of responsiveness listed below, but concerning the requirement of flexibility a complementary study should be added (such as loops in the method). 2.2 Contents of the ISyCri Project
Obtaining such a MIS design framework implies to divide the whole ISyCri project into several tasks which provide the needed elements to support the MIS design method itself. Those tasks can be listed as follow: 2.
Ontology building: This first task will be developed in section 3. however to summarize, the authors aim at building a global crisis ontology providing a way to deduce adequate collaborative processes from crisis descriptions. This global crisis ontology is built by linking semantically two ontologies which are crisis ontology and response ontology. Crisis is organized in two parts: the studied system (including people, natural site, goods, etc.) and the crisis characterization (containing elements of crisis identification, such as type, gravity, trigger, etc.). Response ontology represents the treatment system deployed to reduce the crisis as well as the collaborative process executed. 6. Logical modeling of MIS: Extracting the embedded knowledge from collaborative process(es) model (CIM) in order to design the logical model (PIM) can be seen as a model transformation task. We already have some results on this field (in industrial collaboration situations) which can easily be extended in crisis context. This work is especially focused on SOA approach. 7. Technical Modeling of architecture and projection from logical view to technological view: We believe ESB can be a pertinent candidate for an adequate technological platform. It is essential to study and to model the structure of such a tool in order to provide the PM. Furthermore, mechanisms of projection of logical view (PIM) onto this technical view (PM) should also be established.
22 Unified Modeling Language. 23 Domain Specific Language. 24 Service-Oriented Architecture, a logic approach for IS conception based on clustering functions. 25 Enterprise Service Bus, a technological solution for service-based IS.
Interoperability of Information Systems in Crisis Management
587
8.
Study of the dynamic part: The keystone of the added-value of the ISyCri project is the ability of the MIS to follow changes of the crisis situation. This capacity can be carried by several levels: the process engine could be flexible enough to support evolutions of the dynamical model of the network. This is flexibility at the implementation level. In mind of the authors it is possible to bring flexibility at any MDA levels (PSM, Projection, PIM, CIM). An iterative approach can be used to bring a “loop” in the design process (on precise looping criteria). This study remains obviously a critical task of the ISyCri project. 9. Experimentation: Such a task is a generic part of this kind of project. It will be based on specific use-cases in order to check the described principles.
3 Crisis Characterization and Collaborative Process Deduction This section will present the theoretical aspects of collaborative process inference. The main principle is to use a crisis ontology (including descriptive part and dynamic part, which is the response process) in order to formalize the knowledge available at the moment (about the crisis situation). The result is a partial instantiation of the crisis ontology. The incomplete part (especially the dynamic view) have to be inferred from the existing knowledge using deduction mechanisms and connection rules specific to the generic crisis ontology. The point is to obtain a proposal of a collaborative process adapted and dedicated (by construction) to the specific crisis situation which have been characterized through the ontology. The obtained process could obviously need to be validated and/or completed by experts (crisis expert or member of the collaborative network) in order to become a pertinent dynamic suggestion. 3.1 Ontological Principle
The following picture (Fig. 2) presents the global structure of the crisis ontology the authors aim to use. Its design will be based on a crisis metamodel (in UML) presented in part 3.2.
Fig. 2. Ontological architecture of the deducing mechanism (sub-ontologies and links).
588
S. Truptil, F. Bénaben, P. Couget, M. Lauras, V. Chapurlat, H. Pingaud
ISyCri Ontology contains two main parts (crisis ontology and response ontology) which contains each one two sub-parts (studied system and crisis characterization for the crisis ontology and treatment system and process for the response ontology). These sub-ontologies of ISyCri ontology are connected with each others using semantic links or structural links inherited from the metamodel (links connect elements of ISyCri ontology, for instance, a start event is connected to a crisis through a trigger link). Once defined, such a metamodel covers the whole crisis representation but it cannot express the dynamic of such situations. We need to make it more expressive and absolutely non-ambiguous to have its instances able to provide the dedicated collaborative processes. One possible approach to meet this requirement is to use descriptive logic-based ontologies. We used the approach and definition given by Thomas Gruber [6] to get an ontology which has to be specified as clearly as possible, all concepts being possibly defined axiomatically. So, the main constraints under the building of the ontology are to use only explicit assumptions, excluding any implicit behavior of the system in order to provide a full and clean model usable by the inferential services we planed to use. Technically speaking, our UML metamodel was translated into an OWL-DL ontology. In comparison with OWL-Full or OWL-Lite (subsets of the same language), OWL-DL offers large expressivity possibilities and is the only one which ensures computational completeness and decidability: it is fully usable by computers program. 3.2 Crisis Metamodeling
This part presents the crisis metamodel the authors point to use in order to build the ISyCri ontology presented previously. This metamodel has been built on the basis of the capitalized knowledge about several studied crisis (especially civil and humanitarian crisis) and about risk management [7]. We choose UML language to design this metamodel. This is the first stabilized version. The way of building the ontology from the UML metal-model derivates from [8]. By this way, we ensure not to lose any information nor relation from the UML description. The structures of the ontologies are based on the UML metamodel where each UML class becomes an OWL class. In the same way, UML relationships between classes are OWL properties. The original metamodel was split into two separate parts. The first one describes crisis while the second one is focused on the resolution process. This approach eases the maintenance of the whole system, allowing the replacement of the resolution process ontology. The proposed metamodel may be cut in three main parts (which cover the four parts of the ISyCri ontology presented in Fig. 2 of part 3.1,): the studied system (corresponding to the studied system ontology), the crisis characterization (corresponding to the crisis characterization ontology) and the treatment system (corresponding to the response ontology, that is to say, the treatment system ontology and the collaborative process ontology).
Interoperability of Information Systems in Crisis Management
589
Fig. 3. A proposal for crisis metamodel.
x
The studied system: The studied system is defined as the sub-part of the world affected by the crisis. The components of this subsystem have been grouped in different categories such as goods, natural site, people and civil society. All those elements are considered as studied system components which can be concerned by the situation. Goods can be seen as each man-made entities (roads, bridges, buildings, houses, etc.). On the opposite, natural sites are the elements of the studied system which are not man-made, such as rivers, forests, etc. People concerns all the group of persons which are threatened by the crisis situation (people of a city, group of travelers, employees of a company, etc.). Civil society includes legal entities (media, intellectuals, etc.), associations and organizations which act in the crisis area.
x
The studied system contains also risks and dangers. A danger exists continually (on the studied system) and one or several risk(s) may concretize the exposure to this danger. For instance, an area like US WestCoast presents a danger of seismic instability while an earthquake occurrence is a risk attached to this danger. The crisis characterization:
590
S. Truptil, F. Bénaben, P. Couget, M. Lauras, V. Chapurlat, H. Pingaud
x
Crisis includes several elements: some (dynamic) are involved in its occurrence or its evolution while some others (static) are dedicated to its description. Crisis occurs due to one (or several) trigger(s) and, once appeared, is composed with three main components: (i) effect(s), (ii) complexity factor(s) and (iii) gravity factor(s). A trigger is a kind of event which starts the crisis. It is the realization of a risk. An Effect is the noticeable consequence of the studied crisis. It is also considered as an event and can produce other effects. It can be evaluated through indicators. A complexity factor is a danger which impacts directly the nature of the crisis and can affect its type (for instance, a sanitary crisis may evolve into a social crisis due to the “over-communication” through media). A gravity factor is a danger which impacts directly the gravity of the crisis (for instance, a strong wind and a dry weather could affect the gravity of a fire in a forest). The treatment system (including collaborative processes): In order to solve (or at least to reduce) the crisis situation, it is necessary to define a treatment system which aims to drive the situation to a stable and handled state. This treatment system includes actors (institutions or not, on the site or not), their resources, the services they provide, their procedures, their ISs, the MIS (named Collaborative IS on Fig. 4) and the collaborative process it should run. The bottom part of this package is dedicated to collaborative process description (it includes elements of process modeling) and is directly inspired from a metamodel of collaborative process described in [5].
4 Instantiation of an Example

In order to test the metamodel, we propose to instantiate it on a specific case study: an NRBC exercise (27 February 2004) managed by the Prefecture du Tarn26 in France.

4.1 Brief Description of the Studied Case
The scenario played out is the following: "At 10 AM on 27 February 2004, the police are informed of an accident between a tanker truck (carrying an unknown substance) and a wagon containing chemical products (materializing as a cloud). The policemen sent to the scene and the nearby railway station employees fall unconscious, while several children of the nearby kindergarten (outside when the accident happened) feel sick."
26 The Prefecture du Tarn is a French institution in charge of representing the government authority at a local scale (there are about one hundred prefectures in France).
4.2 Model of the Case Study Using the Crisis Metamodel
We used GME (Generic Modeling Environment, 2005) to describe and store the crisis metamodel. This software tool allows us to describe a model conforming to the designed metamodel (with specially chosen graphical icons). The following pictures (Fig. 4 and Fig. 5) present the studied system and crisis characterization packages of the instantiated model of the NRBC crisis.
Fig. 4. Studied system
This part of the model contains three natural sites: the Tarn river, the ground, and the local atmosphere. It also contains five goods: the railway station, the kindergarten, buildings and houses, the truck and the train. There are four kinds of people: the people of the city of Marssac, the people of the railway station (travelers and employees), the children of the school and their parents. The parents also constitute the civil society. Finally, there is one main danger, the area of hazardous material transport, with three identified risks associated with it: explosion, contamination, and panic.
Fig. 5. Crisis characterization
As for the crisis characterization, there is one crisis: NRBC_27_02_2004; one trigger: the accident between the truck and the wagon; one gravity factor: the fact that the truck is carrying a tank with an unknown substance; and two effects: a cloud of escaping chemical products and several sick people (policemen, employees of the station and children). This kind of model can be seen as the basic result showing how the ISyCri ontology instantiation will be performed. According to the work of Vatcharaphun Rajsiri (see [3]), the main principle is to find particularly strong links between components of the first ontology (the crisis ontology in our case) and the second ontology (the response ontology in our case). This "bridge" can then be used to start the deduction mechanism. For instance, in Rajsiri's work (concerning an ontology of industrial collaborations used to deduce industrial collaborative processes), a strong link exists between the objectives of the collaboration and the business services provided by partners. Defining the needed business services from the expected objectives allows the inference procedure to instantiate many complementary elements of the expected collaborative process. Similarly, in our crisis context, there are strong links between effects and services, or between risks and procedures, etc. It is this kind of transverse connection which could permit the filling in of the collaborative process part of the ontology (exactly as shown in [3] or as exposed in works on strategic alignment such as [9]). Furthermore, crisis characterization should be an iterative activity (in order to complete the representation and to follow the crisis evolution). It is therefore necessary to include a user-friendliness requirement for the characterization method (and the illustration tool which will be prototyped). This aspect is a crucial point of the remainder of the project due to the essential requirement of flexibility of the approach. The iterative aspect of crisis modeling is one component of this expected flexibility.
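To illustrate the kind of transverse connection mentioned above, the following minimal Java sketch maps observed crisis effects to candidate treatment services through a rule table. The rules shown are invented for illustration only; the actual ISyCri deduction rules are precisely what the project still has to determine.

import java.util.List;
import java.util.Map;

public class DeductionSketch {
    // Hypothetical "bridge" between the two ontologies:
    // effect type -> treatment services that could address it.
    static final Map<String, List<String>> RULES = Map.of(
        "chemical cloud", List.of("air quality monitoring", "area evacuation"),
        "sick people", List.of("medical triage", "hospital transport"));

    public static void main(String[] args) {
        // Effects taken from the NRBC example above.
        List<String> observedEffects = List.of("chemical cloud", "sick people");
        observedEffects.forEach(effect ->
            System.out.println(effect + " -> " + RULES.getOrDefault(effect, List.of())));
    }
}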
5 Future Work

As shown before, the use of semantic links between the two ontologies (the crisis ontology and the response ontology) is the first step of MIS determination.
The next step therefore consists in determining these semantic links. To that end, the authors use real situations: one civil crisis situation (the NRBC exercise), one humanitarian disaster situation (the Yogyakarta earthquake) and one terrorist attack situation. Through the study of these examples, the authors try to determine the deduction rules between the crisis characterization and the collaborative process. This step is based on similar work on the deduction of collaborative processes from the characterization of industrial situations (see [3]). Once these semantic links are known, the largest part of the work will be to bring flexibility into the MIS process engine.
6 Conclusion

This article presents the French ISyCri project, its contents, its objectives and its first orientations. ISyCri deals with the topic of the integration of partners trying to solve a critical situation. This integration matter is tackled through the angle of IS interoperability. The project proposes to include a mediator (MIS, for Mediation Information System) between partners in order to support the required interoperability functions (such as data transmission, application management and collaborative process orchestration). ISyCri ultimately aims to provide a design method for this MIS. Notably, the project is structured in three main parts: (i) crisis characterization and deduction of the collaborative process through ontology mechanisms, (ii) study of the technical architecture of the mediation information system and (iii) study of and experiments on the flexibility of the MIS (should the MIS be re-designed when the crisis changes? Should it be adaptable?). In a crisis context, the adaptability or flexibility of the MIS is obviously an unavoidable requirement. The real question is "how to include flexibility and possible evolution loops in the MIS design method?". The answer to this question is a crucial issue of the project, and a PhD thesis has been started on this topic. It seems that the ontology and the associated deduction process offer a strong basis to address this question. Indeed, the authors believe that maintaining a relevant situation model throughout the duration of the crisis can be the first step towards global flexibility. Upholding a correct vision of the situation is necessary to maintain the right support for its resolution. The metamodel presented in this article is the first result of this task.
References

[1] Bénaben F., Pignon J.-P., Hanachi C., Lorré J.-P., Chapurlat V.: Interopérabilité des systèmes en situation de crise. WISG'07, ANR & DGA Interdisciplinary Workshop on Global Security, Troyes, France (2007).
[2] Konstantas D., Bourrières J.-P., Léonard M., Boudjlida N.: Preface of Interoperability of Enterprise Software and Applications. Proceedings of INTEROP-ESA'05, Geneva. Springer-Verlag, pp. v-vi (2005).
[3] Rajsiri V., Lorré J.-P., Bénaben F., Pingaud H.: Cartography for designing collaborative processes. Proceedings of INTEROP-ESA'07, Madeira. Springer-Verlag, pp. 257-261 (2007).
[4] Rajsiri V., Lorré J.-P., Bénaben F., Pingaud H.: Cartography based methodology for collaborative process definition. In: Establishing the Foundation of Collaborative Networks, Proceedings of PRO-VE'07, Guimaraes. Springer, pp. 479-486 (2007).
[5] Touzi J., Bénaben F., Lorré J.-P., Pingaud H.: A Service Oriented Architecture approach for collaborative information system design. Proceedings of IESM'07, Beijing (2007).
[6] Gruber T.: Toward principles for the design of ontologies used for knowledge sharing. International Workshop on Formal Ontology, Padova (1993).
[7] Gourc D., HDR.
[8] Gasevic D., Djuric D., Devedzic V., Damjanovic V.: Converting UML to OWL ontologies. Proceedings of the 13th International World Wide Web Conference, New York (2004).
[9] Ralyté J., et al.: State of the Art: Exploration of Methods and Method Engineering Approaches. Deliverable DTG 6.1 of InterOp NoE (2005).
A Novel Pattern for Complex Event Processing in RFID Applications

Tao Ku1,2, Yun Long Zhu1, Kun Yuan Hu1 and Ci Xing Lv1

1 Shenyang Institute of Automation of the Chinese Academy of Sciences, 110016, Shenyang, China {kutao, ylzhu, hukunyuan, smale}@sia.cn
2 Graduate University of the Chinese Academy of Sciences, 100039, Beijing, China
Abstract. This study investigates Complex Event Processing (CEP) patterns for Radio-Frequency Identification (RFID) applications. RFID technology has brought tremendous benefits to business by incorporating RFID data into supply chain planning and business processing; however, this progress has also raised important problems for RFID data processing. How to mine significant information from RFID events is a challenge in RFID applications. In this paper, we take a CEP pattern-oriented approach to processing RFID data. A novel event pattern based on semantic operators is proposed. A formalized event hierarchy is used to model complex events with the event ontology, providing abstract hierarchical views that allow us to observe system activities at different levels. Several complex event patterns are proposed based on semantic event operators. An algorithm is presented to test the recognition performance of the patterns, and a rule-based method is used to efficiently recognize primitive and composite events. The results show that the advantage of the proposed complex event pattern approach is remarkable. It is concluded that the CEP pattern method can simplify and improve RFID applications. Keywords: Formal approaches and formal models of interoperability, Formal approaches to interoperability
1 Introduction

Over the past few years, a great deal of attention has been directed towards RFID applications. RFID technology can remotely store and retrieve the unique identification codes contained within specialized RFID tags by means of electromagnetic radiation. Advances in the field of RFID have brought tremendous benefits to business and industry, especially in incorporating RFID data into supply chain planning and business processes. This progress has also raised important data processing problems [5]. One of the most important aspects is how to deal with
RFID data in the form of continuous and potentially infinite streams. Quite a few academic projects are involved in research on Data Stream Processing (DSP); related projects include STREAM [17], Tapestry [18], Telegraph [19], and Aurora [10], among others. Most of these systems provide novel algorithms that use synopsis structures to support filtering and aggregation on data streams. However, little work has addressed the semantic and context information contained in the RFID event streams [8] flowing through all the layers of enterprise systems, or how to discover significant information and act upon it in real time as RFID events impact high-level management goals and business processes. It has been shown that data stream processing alone cannot satisfy the real-time requirements of enterprises. Recently, Complex Event Processing (CEP) [1, 2, 21], which facilitates RFID event processing, has received a lot of attention. CEP is an emerging technology for building and managing information systems. A major application of CEP is Business Event Management (BEM) in the event-driven real-time enterprise. CEP events are created by filtering real-time data and infusing it with defining detail such as timing or causal relationships discovered by correlating other events. In particular, efforts have been undertaken to integrate CEP into RFID data processing: in [11], several formal specifications and semantic operators for RFID events are introduced; in [13], a complex event language and workflow models are introduced to match complex event patterns for RFID; and [14, 15, 16] present middleware with a CEP infrastructure for processing RFID data. However, most efforts have focused only on raw RFID data processing, such as the elimination of duplicated data and the aggregation of data. In spite of the potential importance of RFID data processing with CEP, the following questions need to be answered: (1) what kinds of complex event patterns are required for RFID applications; (2) how to design the patterns and the pattern recognition algorithm; (3) how to map lower-level events onto high-level business processing logic. The goal of this paper is to improve the ability to process RFID data and to trigger high-level business logic by applying complex event patterns. To specifically test the ability of complex event patterns for RFID applications, we first classify RFID events into hierarchical events using an event ontology and propose a formal CEP pattern specification based on pattern operators. This paper also presents an algorithm to test the effects of complex event pattern recognition, together with performance comparisons that show the advantages of the proposed CEP approach through simulation studies. The paper is organized as follows: we first give a formal definition of hierarchical RFID events in Section 2; the complex event patterns and recognition algorithms are discussed in Section 3, together with an analysis of the proposed algorithm's performance; conclusions follow in Section 4.
2 The Event Hierarchy

In this section, we will formalize the semantics and specification of RFID events. In particular, we will discuss RFID event hierarchies: an abstraction hierarchy allowing us to view the system's activities at different levels. It lets
us focus on what we want to think about. For example, RFID product and package events are produced by readers at different locations on the package production line, each being an object recording of an activity. We do not have to think about low-level read and write events at the same time as we think about high-level operations such as stock-in and stock-out. To build hierarchical events we must first define the RFID event model [9]. As Figure 1 shows, we view the RFID event model as a hierarchical construct. All read and write events are viewed as the primitive event level; the relationships between events are viewed as the complex event level, which includes timing and distance relationships, among others. Events with business context are viewed as the application event level, which provides real-world business information as context and applies rules to trigger business logic processing. Each level provides precise definitions of the structure of the event data. Examples of event data are described below. Primitive event level: "At some time, RFID Tag X was observed at some Location". Complex event level: "At some time, items were put in pallet X at the Location L distribution center located at Street/town/country". Application event level: "a Purchase Order P contains a list of the ordered items". Below we give the formal definition of hierarchical events.

Fig. 1. A three-level hierarchy for viewing RFID events: the primitive event level, the complex event level, and the application event level, connected by pattern abstraction.
2.1 Formalization of Basic Events
An event is a record of some occurrence in the physical world. Each event, explicitly as part of its data or implicitly as part of its interpretation rule, has a type that indicates what happened, and schematized data that describes the details. Events can be extracted from RFID and activities, and any information that users and systems are interested in can be defined as an event. In this section, we will formalize the semantics and specification of RFID basic events [20].

Definition 1. Let an event be an $(m+1)$-tuple $E = (a_1, \ldots, a_m, t)$, and let $R = \{A_1, \ldots, A_m\}$ be a set of event attributes with domains $Dom(A_1), \ldots, Dom(A_m)$. Then an event instance is $e$, where $a_i \in Dom(A_i)$ and $t$ is a real number, the occurrence time of the event. An event sequence $S$ is a collection of events over $R \cup \{T\}$, where the domain of the attribute $T$ is the set of real numbers $\mathbb{R}$. The events in an event sequence are arranged in ascending order of their occurrence times.

Example 1. RFID readers produce large amounts of tag events. A tag event is generated by a reader when it has detected some RFID tag. Such a flow of tag events can be viewed as an event sequence. Each tag event has several attributes, such as reader ID and tag ID, indicating the logical location that sent the tag event and the content of the tag event, respectively. A tag event also has a type and an occurrence time associated with it. An example of a real tag event is:

$(\mathit{id}, \mathit{tag\text{-}type}, \mathit{reader\text{-}id}, \mathit{tag\text{-}id}, \mathit{time})$ (1)
Definition 2. Let $H$ be a set of event types. An event is then a pair $(e, t)$, where $e \in H$ is an event type and $t \in \mathbb{R}$ is the occurrence time of the event. An event sequence $S$ is an ordered collection of events, i.e.

$S = \langle (e_1, t_1), (e_2, t_2), \ldots, (e_n, t_n) \rangle$ (2)

where $e_i \in H$ for all $i = 1, \ldots, n$ and $t_i \le t_{i+1}$ for all $i = 1, \ldots, n-1$. The length of the sequence $S$ is denoted by $|S| = n$. A sequence that consists only of event types in temporal order, i.e. $S_T = \langle e_1, e_2, \ldots, e_n \rangle$ where $e_i \in H$ for all $i = 1, \ldots, n$, is called an event type sequence. An empty event sequence or an empty event type sequence is denoted by $\langle \rangle$.

Example 2. Let $H = \{A, B, C, D, E, F\}$ be the set of possible event types. Formally, an example of an event sequence consisting of events of types $e \in H$ can be expressed as

$S = \langle (A, 20), (B, 22), \ldots, (E, 40), (F, 55) \rangle$ (3)

The event type sequence corresponding to this event sequence is:

$S_T = \langle A, B, \ldots, E, F \rangle$ (4)

Real-life event sequences are often extremely long, and they are difficult to analyze as such. Therefore, we need a way of selecting shorter sequences suitable for our purposes. This leads us to the definition of an event subsequence.

Definition 3. Let $S$ be an event sequence and $S_T$ an event type sequence over a set $H$ of event types. A Boolean expression $\theta$ on event types and/or occurrence times of events is called a selection condition on the events of a sequence. An event subsequence of the sequence $S$ is an event sequence that satisfies $\theta$; the condition $\theta$ can contain restrictions on event types, on occurrence times of the events, or on both, i.e.

$S(\theta) = \langle (e_i, t_i) \mid (e_i, t_i) \in S \text{ satisfies } \theta \rangle$ (5)

An event type subsequence can either be an event type sequence that satisfies $\theta$ in the event sequence $S$, i.e.

$S_T(\theta) = \langle e_i \mid (e_i, t_i) \in S \text{ satisfies } \theta \rangle$ (6)

or an event type sequence that satisfies $\theta$ in the event type sequence $S_T$, i.e.

$S_T(\theta) = \langle e_i \mid e_i \in S_T \text{ satisfies } \theta \rangle$ (7)

Example 3. Consider a sequence of the events of type $A$; for example, this is

$S(e_i = A) = \langle (A, 30), (A, 50), (A, 60) \rangle$ (8)
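To make Definitions 1-3 concrete, the following minimal Java sketch models typed, time-stamped events kept in ascending time order, plus condition-based subsequence selection. All class and method names are illustrative; they do not come from the authors' prototype or any RFID standard.

import java.util.List;
import java.util.function.Predicate;

public class EventSequences {
    // An event with a type and an occurrence time (Definition 2).
    public record Event(String type, double time) {}

    // Definition 3: the subsequence of s whose events satisfy the condition.
    public static List<Event> subsequence(List<Event> s, Predicate<Event> condition) {
        return s.stream().filter(condition).toList();
    }

    public static void main(String[] args) {
        // An event sequence with ascending occurrence times, as in Example 2.
        List<Event> s = List.of(
            new Event("A", 20), new Event("B", 22), new Event("A", 30),
            new Event("E", 40), new Event("A", 50), new Event("F", 55));

        // Example 3: all events of type A.
        System.out.println(subsequence(s, e -> e.type().equals("A")));

        // A mixed condition on both event type and occurrence time.
        System.out.println(subsequence(s, e -> e.type().equals("A") && e.time() > 25));
    }
}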
2.2 RFID Event Ontology
The event ontology deals with the notion of reified events. It defines one main Event concept [22]. An event may have a location, a time, factors and a name, with the addition of sub-events to represent information about complex events in a structured and non-ambiguous way, as depicted in Figure 2. The RFID event ontology builds a hierarchy of events by providing a formal description of the kinds of inheritance relationships and how they are related. We can use terms defined in the RFID event ontology to describe high-level event patterns. Such hierarchies are needed because, for example, an RFID-tagged pallet may contain a number of RFID-tagged cases, each of which may contain a number of RFID-tagged items. Hierarchical events are typically necessary to model such containment directly. In the following discussion, we will formalize hierarchical events.
Fig. 2. The RFID event ontology. An Event has name, time, location, factor and sub-event properties; RFID Event is specialized into RFID primitive/simple events and RFID composite/complex events, the latter composed through control constructors such as sequence, causal and temporal relationships. Dotted lines indicate subclass relationships, while solid lines represent aggregation and association relationships between classes.
Definition (Primitive Event). Given a set $H$ of event types and the class of all possible sequences over $H$, a primitive event is defined as:

$Pr = \{ S_i \mid \langle e_i, t_i \rangle \in S_i,\ |S_i| = 1,\ \mathit{distance}(pr) = 0,\ t_i \neq 0 \}$

For each event $p \in Pr$, let $dom(p)$ denote the domain from which the values of $p$ are taken, and let $\mathit{distance}(p)$ return the distance of the event instance $p$.

Definition (Composite Event). Given a set $H$ of event types and the class of all possible sequences over $H$, a composite event is defined as:

$Ce = \{ S_i \times S_j \to \mathbb{R} \mid \langle e, t \rangle \in S_i \times S_j,\ |SI| \neq 1,\ \mathit{interval}(e_i, e_j) \neq 0 \}$

Composite events are composed of primitive events or other composite events by applying event operators; for example, $S_i \,\&\&\, S_j$, $S_i \,\|\, S_j$, and $S_i \to S_j$ can each be represented as a composite event. Here $\mathit{interval}(e_i, e_j)$ returns the interval between the two event instances $e_i$ and $e_j$, and the size of such a composite event, i.e. the number of event instances in the set, is denoted by $|SI|$. Informally, an instance of a composite event represents the primitive event occurrences that caused an occurrence of the composite event. Occurrences of primitive events happen at a specific point in time and are assumed to be atomic. Primitive events can be classified into domain events, explicit events and temporal events. Domain events are specific to the application domain. Explicit events are explicitly defined by an application and raised by the user; the parameters of an explicit event are also specified by the user. Temporal events correspond to absolute and relative temporal events; an absolute temporal event is an event associated with an absolute value of time.
3 Complex Event Pattern

In this section we will discuss complex event patterns based on hierarchical events. An event pattern is a template that matches partially ordered sets of events. The event pattern of a rule defines the types of events whose occurrence may trigger the rule, and its expressions can consist of filter expressions combined with pattern operators. Unlike a message, an important part of the semantics of an event lies in temporal and non-temporal relationships: the order in which constituent events happen, with or without temporal constraints, and causality, i.e. which events caused this one to occur and what the consequences of the current event will be. In the following discussion, we will describe fundamental complex event patterns.

3.1 Complex Event Pattern Definitions
The CEP event pattern language allows the user to describe patterns of events. However, the current pattern language has revealed design weaknesses in the semantic appropriateness and completeness of certain event operators [3, 4]. In this section, we will formalize complex event patterns with novel operators to improve the pattern semantics.

Definition 1 (Pattern Operators). The following basic operators can be used to specify patterns:
1. "each", written "*", which is used to control the repetition of pattern sub-expressions.
2. "$\to$", meaning "followed by", a temporal operator on event order.
3. "&&", meaning "and", a logical operator.
4. "||", meaning "or", a logical operator.
5. "!", meaning "not", a logical operator.
6. "time:within", "time:interval" and "time:at", temporal operators used to control the lifecycle of sub-expressions.

Definition 2 (Relationship Pattern). Let $E_A = \langle a_1, a_2, \ldots, t_a \rangle$ be an $A$ event, let $E_B = \langle b_1, b_2, \ldots, t_b \rangle$ be a $B$ event, and let $t$ be a real number, the occurrence time of the event. Relationship patterns are, for example:
1. $E_A* \to E_B$: matches when every event $A$ is followed by an event $B$.
2. $E_A* \to E_B*$: matches when every event $A$ is followed by every event $B$.
3. $E_A \,\&\&\, E_B$: matches when both an event $A$ and an event $B$ are found.
4. $E_A \,\|\, E_B$: looks for either an event $A$ or an event $B$.
5. $E_A \to E_B \,\&\&\, E_C \to E_D$: matches on any sequence $A$ followed by $B$, and $C$ followed by $D$.
6. $(E_A \to E_B) \,\&\&\, !E_C$: matches only when an event $A$ is encountered followed by an event $B$, but only if no event $C$ was encountered.
7. $E_A \to E_B$ where $time{:}interval(ms)$: matches if an event $A$ arrives followed by an event $B$ within the time window $ms$.

Definition 3 (Filter Pattern). This pattern subscribes to an event sequence, evaluates a specified logical condition based on event attributes and, if the condition is true, publishes the event to the destination event sequence, i.e.:

$(E_A(a_i) \,\|\, E_A(a_j))* \to (E_B \,\&\&\, !E_B(b_i = a_i \,\|\, b_i = a_j))*$ where $timer{:}within(ms)$

This pattern fires after every $A$ event with attribute $a_i$ or $a_j$ has arrived, followed within the window $ms$ by a $B$ event whose attribute $b_i$ is not the same as that of $A$.

Definition 4 (Type-Based Filtering Pattern). We can filter duplicate RFID data based on the type of events. If $E_A$ and $E_B$ are observed within the window $ms$ and $E_A \approx E_B$, then $E_A$ is marked as a duplicate event:

if $((E_A(a_i) \to E_B(b_i)$ where $timer{:}within(ms)) \,\&\&\, (\forall i = 1, \ldots, m:\ a_i = b_i))$ then $E_A \approx E_B$ and $E_A$ is a duplicate event.

Definition 5 (Content-Based Filtering Pattern). We can express a theft event based on content filtering of events. Let $E_A$ represent an item read event and let $E_B$ represent an administer event; then:

if $((E_A(a_i) \to\, !E_B(b_i)$ where $timer{:}within(ms)) \,\&\&\, a_i = \mathit{item} \,\&\&\, b_i = \mathit{administer})$ then $E_A$ indicates a theft.

Definition 6 (Map Pattern). This pattern involves causal, aggregation and temporal relationships; i.e., let $E_C$ be a virtual event caused by $E_A$ and $E_B$: $(E_A \,\&\&\, E_B) \to E_C$.

Example (RFID package event). If a time-distance-constrained periodic event $E_A$ is observed followed by a distinct event $E_B$, then it implies that the objects produced by $E_A$ are being packed into the object produced by $E_B$:

if $((E_A(a_i)* \to E_B(b_i)$ where $timer{:}within(ms)) \,\&\&\, (\forall i = 1, \ldots, m:\ a_i \neq b_i))$ then the objects of $E_A$ are packed into the object of $E_B$.
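As an illustration of how such a pattern could be executed, the sketch below implements the duplicate-detection logic of Definition 4 over a sliding time window. This is a minimal Java sketch under our own assumptions: the TagEvent record, its attribute set and the window handling are ours, not the authors' prototype or any RFID middleware API.

import java.util.ArrayDeque;
import java.util.Deque;

public class DuplicateFilter {
    // A tag event with the attributes compared in Definition 4.
    public record TagEvent(String tagType, String readerId, String tagId, long time) {}

    private final long windowMs;
    private final Deque<TagEvent> recent = new ArrayDeque<>();

    public DuplicateFilter(long windowMs) { this.windowMs = windowMs; }

    // Returns true if e duplicates an event seen within the time window,
    // i.e. an earlier event with all attributes equal (a_i = b_i for all i).
    public boolean isDuplicate(TagEvent e) {
        // Drop events that have fallen out of the time window.
        while (!recent.isEmpty() && e.time() - recent.peekFirst().time() > windowMs) {
            recent.removeFirst();
        }
        boolean dup = recent.stream().anyMatch(p ->
                p.tagType().equals(e.tagType())
                && p.readerId().equals(e.readerId())
                && p.tagId().equals(e.tagId()));
        recent.addLast(e);
        return dup;
    }

    public static void main(String[] args) {
        DuplicateFilter f = new DuplicateFilter(500);
        TagEvent a = new TagEvent("EPC", "reader-1", "tag-42", 0);
        TagEvent b = new TagEvent("EPC", "reader-1", "tag-42", 100);
        System.out.println(f.isDuplicate(a)); // false: first observation
        System.out.println(f.isDuplicate(b)); // true: same attributes within 500 ms
    }
}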
3.2 RFID Application Scenarios
One example of an RFID application environment is production line automation, an RFID reader-rich environment inside our test laboratory (Fig. 3). In this laboratory, RFID readers installed on both sides of the conveyor belt and on multi-layer shelves provide information about products. The simulation environment is used to keep track of every individual product, the stage of the production process, how many packages are leaving the warehouse, etc. The readers potentially produce a vast amount of data. However, information consumers prefer a high-level view of the primitive sensor data. Thus, the CEP used in this application scenario has to cope with high-volume data and be able to aggregate and transform it before dissemination. Composite event detection can help process the primitive events produced by the large number of sensors and provide a higher-level abstraction to users.
Fig. 3. The production line with multiple readers (A: RFID antenna 1; B: RFID antenna 2; C: LED monitor 1; D: LED monitor 2; E: multilayer shelf; F: shelf antenna; G: conveyor belt).
1. A user may subscribe to be notified when products are packaged on some type of pallet, or when products are removed from shelves.
2. A user may be interested in composite events describing which products come from which batch the pallet originated from.

3.3 Complex Event Pattern Recognition
In this section, we will discuss methods for the recognition of complex event patterns. Many algorithms have been proposed for complex event detection [6, 7]. Traditional graph-based event processing systems detect complex events in a bottom-up fashion, as shown in Figure 4, using an event graph to detect composite events. Each node in the event graph represents either a primitive event or a composite event. Primitive event nodes are the leaf nodes from which composite event nodes are constructed.
However, such a bottom-up event detection approach is inapplicable to the detection of RFID events. We improve the algorithm's performance by grouping rules into sets; a linked-list data structure is also used to group together sets of logically related events and rules. The proposed algorithm is shown in Figure 5.

Algorithm: Graph-based complex event pattern recognition.
Input: D, a primitive event set; R, a rule set; P, a pattern set.
Output: S_k, the set of complex event subsequences.
Method:
  S_1 <- the single elements in the event sequence S = <(e_1, t_1), ..., (e_n, t_n)>
  P_1 <- the single elements in the pattern sequence P = <p_1, ..., p_n>
  call Graph(D, R, S_1, P_1)
Procedure Graph(D, R, S_k, P_k)
(1)  S_k <- read from the event sequences
(2)  P_k <- read from the set of patterns
(3)  for each primitive event d_i in D_k do
(4)    for each pattern element p_j in P_k do
(5)      for each size-(k+1) graph g formed by the merge of d_i do
(6)        if g in R and g in P_j then
(7)          build the event hierarchy
(8)        else insert g into S_{k+1}
(9)  if S_{k+1} is not empty then
(10)   Graph(D, R, S_{k+1}, P_{k+1})
(11) return

Fig. 5. Complex event pattern recognition algorithm
3.4 Algorithm Performance Analysis
The algorithm tests were performed on a PC with a 1.4 GHz Pentium IV processor, running Windows 2003 as the operating system, with 1 GB of RAM. In our experiment, the prototype is implemented in the Java language; 50 event types were defined representing primitive or simple events, and 100 rules were implemented using the Rule Definition Language. The rules incorporate different complexities in their definitions, such as event expressions with simple, composite and event set constructions, and the use of different operators in condition expressions. Also, every rule produces one or more events.
2400
160 140
2000
120
Event
time (s)
Our method Traditional method
2800
Our method Traditional method
180
100
1600
80
1200
60
800
40
400 20
0
0 0
1000
2000
3000
4000
5000
6000
Primitive Event
Fig. 6. Comprision Event processing Time
0
2
4
6
8
10
Algorithm
Fig. 7. Amount of events produced in algorithm evaluation
Figure 6 presents a comparison of the event processing time taken to perform the evaluation of primitive events. The horizontal axis corresponds to the quantity of primitive events processed, and the vertical axis corresponds to the amount of time (in seconds) taken in the evaluation of the events. In general, our method outperforms the traditional method in terms of event processing capability because of our elaborated data structures and optimization strategies. Figure 7 shows the amount of events produced in the evaluation of each set of algorithms implemented in our method and in the traditional implementation. Through simulation studies, it is shown that as the number of events produced increases, the growth rate in event production has an exponential tendency, while the algorithm execution time tends to grow linearly.
4 Conclusions

In this paper, a novel semantics-based complex event processing pattern is discussed in detail by formalizing an event ontology and event patterns. The performance evaluation of pattern recognition shows that event pattern processing has the capability to react to the occurrence of complex event patterns rather than single events. It can be concluded that leveraging CEP in RFID applications can bring more sophisticated application functionality; therefore, we believe that CEP technology can provide a robust infrastructure layer to simplify RFID application deployment and scaling. Future work can be in the area of CEP query optimization techniques.
Acknowledgments

This work was supported by the Hi-Tech Research and Development Program of China under contract No. 20060104A1118, the National Science and Technology Key Project of China under contract No. 2006BAH02A09, and the National Natural Science Foundation Key Project of China under contract No. 2002CB312204.
References

[1] Perrochon L., Jhingran E., Kasriel S., Luckham D. C.: Enlisting Event Patterns for Cyber Battlefield Awareness (2006).
[2] Chen S.-K., Jeng J.-J., Chang H.: Complex Event Processing using Simple Rule-based Event Correlation Engines for Business Performance Management. In: Proceedings of the 8th IEEE International Conference on E-Commerce Technology on Enterprise Computing, E-Commerce, and E-Services (2006).
[3] Bai Y., Wang F., Liu P.: Efficiently Filtering RFID Data Streams. In: Proc. of the First International VLDB Workshop on Clean Databases (CleanDB'06) (2006).
[4] Maier D., Li J., Tucker P., Tufte K., Papadimos V.: Semantics of Data Streams and Operators (2005).
[5] Liu S., Wang F., Liu P.: Integrated RFID Data Modeling: An Approach for Querying Physical Objects in Pervasive Computing. In: CIKM'06, November 5-11, 2006, Arlington, Virginia, USA (2006).
[6] Chakravarthy S., Yang J. D., Yang S.: A formal framework for computing composite events over histories and logs. University of Florida, E470-CSE, Gainesville, FL 32611, Tech. Rep. UF-CIS TR-98-017, November 1998.
[7] Chakravarthy S., Krishnaprasad V., Anwar E., Kim S.-K.: Composite Events for Active Databases: Semantics, Contexts, and Detection. In: 20th International Conference on Very Large Data Bases (VLDB), 1994, pp. 606-617.
[8] Jiang Q., Adaikkalavan R., Chakravarthy S.: Towards an Integrated Model for Event and Stream Processing. TR CSE-2004-10, CSE Dept, Univ. of Texas at Arlington, 2004.
[9] Barga R. S., Goldstein J., Ali M., Hong M.: Consistent Streaming Through Time: A Vision for Event Stream Processing. In: 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.
[10] Abadi D. J., Carney D., Cetintemel U., et al.: Aurora: A New Model and Architecture for Data Stream Management. VLDB Journal, 12(2):120-139, 2003.
[11] Wang F., Liu S., Liu P., Bai Y.: Bridging Physical and Virtual Worlds: Complex Event Processing for RFID Data Streams. In: Proc. of the 10th International Conference on Extending Database Technology, Munich, Germany (2006).
[12] Gyllstrom D., Wu E., et al.: SASE: Complex Event Processing over Streams. In: Proceedings of the Third Biennial Conference on Innovative Data Systems Research (CIDR 2007), Asilomar, CA, January 2007.
[13] Zang C., Fan Y.: Complex event processing in enterprise information systems based on RFID. Enterprise Information Systems, 1(1):3-23, February 2007.
[14] Dong L., Wang D., Sheng H.: Design of RFID Middleware Based on Complex Event Processing. In: 2006 IEEE Conference on Cybernetics and Intelligent Systems, June 2006, pp. 1-6.
[15] Son B.-K., Lee J.-H., et al.: An Efficient Method to Create Business Level Events Using Complex Event Processing Based on RFID Standards. In: IFIP International Federation for Information Processing (2007).
[16] Christof B., Tao L., Stephan H., et al.: Integrating smart items with business processes: an experience report. In: Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS), 2005, pp. 227-235.
[17] Babcock B., Babu S., Datar M., Motwani R., Widom J.: Models and issues in data streams. In: Popa L. (ed.) Proc. of the 21st ACM SIGACT-SIGMOD-SIGART Symp. on Principles of Database Systems, Madison. ACM Press, 2002, pp. 1-16.
[18] Terry D., Goldberg D., Nichols D., Oki B.: Continuous queries over append-only databases. SIGMOD Record, 21(2):321-330, 1992.
[19] Avnur R., Hellerstein J.: Eddies: Continuously adaptive query processing. In: Chen W., Naughton J. F., Bernstein P. A. (eds.) Proc. of the 2000 ACM SIGMOD Int'l Conf. on Management of Data, Dallas. ACM Press, 2000, pp. 261-272.
[20] Ronkainen P.: Attribute similarity and event sequence similarity in data mining. PhLic thesis, Report C-1998-42, University of Helsinki, Department of Computer Science, Helsinki, Finland, October 1998.
[21] Luckham D. C.: The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Boston: Addison-Wesley, 2002.
[22] Liu Z., Ranganathan A., Riabov A.: Specifying and Enforcing High-Level Semantic Obligation Policies. In: Eighth IEEE International Workshop on Policies for Distributed Systems and Networks (POLICY'07), pp. 119-128.
Part VII
Architectures and Frameworks for Interoperability
Enterprise Architecture: A Service Interoperability Analysis Framework

J. Ullberg, R. Lagerström, P. Johnson

Industrial Information & Control Systems, KTH Royal Institute of Technology, Osquldas väg 12, SE-100 44 Stockholm, Sweden
{johanu, robertl, pj101}@ics.kth.se
Abstract. Enterprise architecture is a model-based approach to IT management used to promote good IT decision making. Thus, an enterprise architecture framework needs to support various forms of analysis. The creation of enterprise architecture models is costly and without intrinsic value, so it is desirable to create models that effectively support the sought-after analysis. This paper presents an extended influence diagram describing the theory of enterprise service interoperability. The theory is augmented with a metamodel containing the information needed to perform analysis of interoperability. A fictional example is provided to illustrate the employment of the metamodel and the theory in the context of IT decision making. Keywords: Measuring, validating, and verifying interoperability, Enterprise modeling for interoperability, Modelling methods, tools and frameworks for (networked) enterprises, Formal approaches and formal models of interoperability, Enterprise applications analysis and semantic elicitation
1 Introduction

Enterprise architecture is an approach to enterprise information systems management that relies on models of the information systems and their environment. Instead of building the enterprise information system using trial and error, a set of models is proposed to predict the behavior and effects of changes to the system. The enterprise architecture models allow reasoning about the consequences of various scenarios and thereby support decision making. In order to predict which enterprise architecture scenario is preferable, three things are needed. Firstly, models of the candidate scenarios need to be created. Secondly, it is necessary to define what is desirable, i.e. the goal, in this case high service interoperability. Thirdly, we need to understand the causal chains from scenario choice to goal. Suppose that scenario A features services with high availability,
affecting the interoperability positively, but also features several service descriptions that are not complete, affecting the interoperability negatively. Scenario B, however, is built on service orchestration descriptions of high compatibility with respect to the available services, promoting high interoperability of the system. Making a decision on which scenario to choose is often difficult, particularly without a formal analysis. In order to perform this kind of analysis, the enterprise architecture models need to contain the proper information. In the above example, where the decision maker is interested in service interoperability, the models need to answer questions regarding service availability, service description completeness, and service orchestration description compatibility. The kind of information contained in a model is given by its metamodel, so it is important that enterprise architecture metamodels are properly designed. In order to determine whether a metamodel is amenable to the analysis of a certain quality attribute, such as interoperability, a structured account of that analysis is helpful. We will use a notation called Extended Influence Diagrams (EID) [1] to formalize the analysis of interoperability. Figure 1 depicts the relation between an enterprise architecture scenario, modeled using a metamodel, the analysis of the scenario, the formal specification of the analysis through an extended influence diagram and, finally, the output: the interoperability level.
Fig. 1. The relation between metamodels, enterprise architecture scenarios, analysis, formal specification of analysis, and the result of the analysis
The main contribution of this paper is a metamodel that supports the creation of enterprise architecture models amenable to service interoperability analysis. Also introduced is a formalization of this analysis using extended influence diagrams. The remainder of this paper is organized as follows: extended influence diagrams are introduced in Section 2. Section 3 presents the framework for service interoperability analysis in the form of an extended influence diagram. Section 4
evaluates the usefulness of a number of common enterprise architecture metamodels. Section 5 proceeds to detail the content of the metamodel that supports service interoperability analysis. The applicability of the metamodel is demonstrated in Section 6. Finally, Section 7 concludes the paper.
2 Extended Influence Diagrams

Extended influence diagrams are graphic representations of decision problems coupled with a probabilistic inference engine. These diagrams may be used to formally specify enterprise architecture analysis [1]. The diagrams are an extension of influence diagrams [2][3], which in turn are an enhancement of Bayesian networks [4][5]. In extended influence diagrams, random variables associated with chance nodes may assume values, or states, from a finite domain (cf. Figure 2). A variable could, for example, be service availability. These variables are connected with each other through causal or definitional arcs. Causal arcs capture relations of the real world, such as "higher service availability increases the system interoperability". With the help of a conditional probability matrix for a certain variable A and knowledge of the current states of the causally influencing variables B and C, it is possible to infer the likelihood of node A assuming any of its states. Extended influence diagrams support probabilistic inference in the same manner as Bayesian networks do: given the value of one node, the values of related nodes can be calculated. For more comprehensive treatments of influence diagrams and extended influence diagrams, see [1], [2], [3], [4], [5], [6], [7] and [8].
Fig. 2. An extended influence diagram and a simple example. With a chosen scenario in the decision node, the chance nodes will assume different values, thereby influencing the utility node [9].
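As a small illustration of the inference step just described, the following Java sketch computes the distribution over a variable A's states from a conditional probability matrix and beliefs about its parents B and C. All states, probabilities and names are invented for the example, and the parents are assumed independent here, a simplification a full Bayesian network engine would not need.

public class CptInference {
    public static void main(String[] args) {
        // States for A, B, C: 0 = Low, 1 = High.
        // cpt[b][c][a] = P(A = a | B = b, C = c); each row sums to 1.
        double[][][] cpt = {
            {{0.9, 0.1}, {0.6, 0.4}},
            {{0.5, 0.5}, {0.1, 0.9}}
        };
        // Beliefs over the parents, e.g. P(B = High) = 0.8, P(C = High) = 0.3.
        double[] pB = {0.2, 0.8};
        double[] pC = {0.7, 0.3};

        // Marginalize out the parents: P(A=a) = sum over b,c of P(A=a|b,c) P(b) P(c).
        double[] pA = new double[2];
        for (int b = 0; b < 2; b++)
            for (int c = 0; c < 2; c++)
                for (int a = 0; a < 2; a++)
                    pA[a] += cpt[b][c][a] * pB[b] * pC[c];

        System.out.printf("P(A = Low) = %.3f, P(A = High) = %.3f%n", pA[0], pA[1]);
    }
}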
3 A Framework for Enterprise Service Interoperability Analysis

This section presents an extended influence diagram that captures theory from the field of service interoperability. The extended influence diagram is mainly influenced by [10][11][12][13][14]. Interoperability is the ability of two or more systems or components to exchange information and to use that information [15]. Adapted to the domain of services, enterprise service interoperability is the ability of services in an enterprise to exchange information and to use that information. Enterprise service interoperability is divided into run-time enterprise service interoperability and design-time enterprise service interoperability, cf. Figure 3.
Fig. 3. The extended influence diagram for enterprise service interoperability containing factors influencing service interoperability and thereby of interest when performing analysis.
3.1 Run-Time Enterprise Service Interoperability
Run-time interoperability is concerned with the interoperability of services that, in the scenario under evaluation, are supposed to be working together. It is divided into three subcategories. Firstly, there are properties that can be assessed within the scope of a single service: service run-time interoperability. Secondly, the quality of service pair interaction captures aspects that can be measured by pair-wise comparison of all service pairs supposed to interact. Finally, some properties must be analyzed in a wider scope than that of pairs, namely in the power set scope (excluding the empty set and the sets with only one member); this is denoted quality of interaction for power set.

3.1.1 Service Run-Time Interoperability. The measurable properties of each service that are of importance at run time are the inherent properties of the service, cf. the node quality of service in Figure 3; service bus compatibility, which is the service's compatibility with the communication medium, the service bus; and quality of service orchestration
description, consisting of properties relating to the orchestration descriptions, i.e. specifications detailing the interaction of services [11]. The node quality of service is defined by the nodes correctness and availability. Correctness is the ability of the service to perform the intended task correctly. Availability of a service can be measured in terms of mean time to failure and mean time to repair. From a run-time point of view, there are three properties of the service orchestration description that must be considered. The first is that the orchestration description calls the operations of the service in a syntactically correct manner: syntactic compatibility with respect to service. The second, behavioral semantic compatibility with respect to service, concerns the orchestration description's ability to act in conformity with the dynamics of the service; generally this means calling the operations of the service in a permissible order. Finally, denotational semantic compatibility with respect to service is concerned with the real world: the orchestration description and the service are denotationally equivalent if they refer to the same phenomenon in the real world [16], so that the service really executes what the orchestration description intended.

3.1.2 Quality of Service Pair Interaction. When studying services in pairs, there are two additional properties that can be assessed, namely protocol compatibility and syntactic compatibility. The first is a match of protocols, meaning that the services share at least one compatible protocol. The latter is a comparison of the provided and invoked operations of the services, e.g. the provided methods of the service provider must have the same syntax as the requests sent from the consumer for communication to be possible.

3.1.3 Quality of Interaction for Power Set. When studying even larger sets of services, two additional factors of importance can be added to the theory. Both factors are concerned with semantic compatibility and are similar to those of Section 3.1.1 regarding semantics. The first, behavioral semantic compatibility, captures the behavior of the interaction: two concepts (e.g. services) are equivalent with respect to their behavioral semantics if they display the same dynamics, i.e. if their interaction patterns are equivalent [16]. The second, denotational semantic compatibility, is concerned with the actual meaning of the services' operations, i.e. that they refer to the same operation in the real world.

3.2 Design-Time Enterprise Service Interoperability
Design-time service interoperability is the matter of analyzing the effort needed for possible future constellations of services to interoperate, regardless of their current relationship to each other. Design-time interoperability is, as in the case of run-time interoperability, divided into three categories covering aspects of a single service, pairs of services and the power set of services, cf. the node service design-time interoperability as well as the previously mentioned quality of service pair interaction and quality of interaction for power set in Figure 3.
What differs from run-time interoperability, however, is how these services, pairs, and sets are selected: rather than regarding only services that are supposed to work together in the currently assessed scenario, the focus now is on all services available in the enterprise, as well as all pairs and sets of services that could possibly work together in some future scenario. As seen from Figure 3, the properties of concern for the pairs and the power set are the same as for run-time interoperability, see Section 3.1.

3.2.1 Service Design-Time Interoperability. At design time, two of the properties measurable per service are common with those of interest at run time: the service bus compatibility and the quality of service, as described in Section 3.1. Other properties that are of interest at design time are quality of service description, covering aspects of the service descriptions, which contain for instance the behavior and abilities of the service [17], and service orchestration language compatibility, stating to which degree the service is compatible with the available languages for service orchestration. Finally, the node existence in service description repository corresponds to a validation that the description of the service is placed in a repository, a storage area used for discovery of services [18]. The quality of the service description is divided into five parts. The first, completeness, considers the service description coverage, i.e. that all abilities of the service are included in its description. Understandability means that the description can be easily understood. The third, syntactic correctness, ensures that the service description is syntactically correct with respect to the actual operations of the service. Behavioral semantic correctness implies that the dynamics in the description correspond to those of the service itself. Finally, denotational semantic correctness is the matter of the service description really describing the same action that the service performs.
4 Enterprise Architecture Frameworks for Analysis

From the requirement that enterprise architecture models support enterprise architecture analysis follows a specific requirement on enterprise architecture metamodels. Specifically, all entities and attributes that are required for a complete analysis, as specified in an extended influence diagram, must be found in the enterprise architecture metamodel in order for the corresponding model to be amenable to analysis; see Figure 4. A substantial number of enterprise architecture frameworks have been proposed in recent years, including the Zachman Framework [19], the Department of Defense Architecture Framework (DoDAF) [20], the Open Group Architecture Framework (TOGAF) [21], the Federal Enterprise Architecture (FEA) [22], the Generalised Enterprise Reference Architecture and Methodology (GERAM) [23], the Architektur integrierter Informationssysteme (ARIS) [24], the Metis Enterprise Architecture Framework (MEAF) [25], and more. When considering the suitability of the metamodels related to these frameworks for the enterprise architecture analysis considered in the preceding sections, we have found significant difficulties.
Firstly, a number of the metamodels are not detailed enough to provide the information required for the analysis. We are interested in information such as, for instance, the complexity of a service. This is information that would typically be represented as an attribute of an entity in a metamodel. Many metamodels, including the Zachman Framework, TOGAF and GERAM, do not systematically propose attributes, thereby underspecifying their metamodels with respect to the analysis proposed in the previous section. The frameworks that do specify attributes, e.g. DoDAF, FEA, and MEAF, contain few of the specific attributes required for the analysis described in Section 3. Finally, and perhaps most importantly, many of the frameworks do not contain the entities that would be required.
Fig. 4. The properties found in an extended influence diagram determine what entities and attributes should be present in an enterprise architecture metamodel.
5 The Metamodel for Enterprise Service Interoperability Analysis

In this section, we present the metamodel suggested for enterprise service interoperability analysis. The metamodel is constructed to satisfy the requirements of the preceding section, containing all of the entities and attributes necessary to conduct the interoperability analysis.

5.1 Entities in the Metamodel
Services are independent building blocks that collectively represent an application environment, much like the components of a software system. Services possess a number of qualities that components lack, e.g. complete autonomy from other
services, which allows a service to be responsible for its own domain; services are also typically limited in scope to supporting a specific business function or a group of related functions [10]. For communication among services, each service has a service interface. The service interface contains the protocols a service needs for communication, and it specifies which operations the service provides or invokes [11][26]. Services also have service descriptions. These are used for advertising and describing the service's capabilities, behavior, quality, and interface [17]. An example of a service description language is WSDL (Web Service Definition Language). Services use a service bus, often referred to as an enterprise service bus (ESB), as a communication medium. The service bus is a middleware-like solution to manage message and transaction traffic [12]. As the number of services and the number of versions of each service steadily increase, it has become critical to keep track of the service descriptions. Hence the need for a service description repository, i.e. a storage area containing all service descriptions and making the descriptions searchable. Each time a service requestor needs a service, the requestor can find the most appropriate service for its intentions in the repository. A standard repository solution is UDDI [18]. The service orchestration description is the specification that details and controls the orchestration of interacting services [11]. These descriptions are written in a service orchestration language, of which BPEL (Business Process Execution Language) is considered an industry standard.
Fig. 5. The enterprise architecture metamodel for service interoperability analysis with its entities, attributes, and relations.
5.2 Attributes of the Metamodel
For the purpose of service interoperability analysis, a metamodel without attributes would be inadequate: in an enterprise architecture model, many important concepts are best captured as entity attributes. As seen in Figure 5, some entities have attributes that correspond to nodes in the service interoperability extended influence diagram. The availability of a service, for instance, is of importance for interoperability (according to the extended influence diagram of Section 3). Consequently, the service entity of our model explicitly contains the attribute availability. Analogously, the service description entity contains the attribute completeness, also found as a node in the extended influence diagram. Other attributes in the metamodel directly related to nodes in the EID are service correctness and service description understandability.

There are variables in the extended influence diagram not directly related to a single attribute in the metamodel, e.g. service bus compatibility. This is represented in the metamodel by the attribute compatible service buses in the service entity and by the attribute type in the entity representing the service bus. If one of the compatible service buses listed for the service matches the service bus type, the service and the service bus are compatible. Syntactic compatibility, protocol compatibility, and existence of service description in repository are other attributes evaluated in a similar manner.

The values of two types of nodes in the extended influence diagram cannot be derived from the metamodel: the denotational and behavioral semantic compatibility and correctness nodes. It is well known that it is both practically and philosophically difficult to determine denotational equivalence. Although behavioral equivalence is possible to determine, it requires detailed dynamic models beyond the scope of the present work [16].
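As a hedged illustration of this look-up style of evaluation, the sketch below (reusing the dataclasses from the previous sketch) derives service bus compatibility and existence of service description in repository from the attributes; the function names are invented for the example.

```python
def bus_compatible(service: Service, bus: ServiceBus) -> bool:
    # Compatible if one of the bus types listed for the service
    # matches the type attribute of the service bus entity.
    return bus.type in service.compatible_service_buses

def description_in_repository(service: Service,
                              repo: ServiceDescriptionRepository) -> bool:
    # Existence of the service description in the repository is
    # evaluated in the same membership-test manner.
    return service.description in repo.descriptions

# Usage on invented data:
esb = ServiceBus(type="JMS")
svc = Service(name="calculate energy cost",
              compatible_service_buses=["JMS", "HTTP"])
print(bus_compatible(svc, esb))   # True
```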
6 Modeling and Analyzing Using the Metamodel – An Example

This section presents an example of an enterprise service interoperability analysis used as decision support for the Chief Information Officer (CIO) of LIAM Energy, a large power distribution company in Sweden. LIAM Energy has initiated an implementation of an Automatic Meter Reading (AMR) system. A pre-study revealed that a service-oriented solution would be the most appropriate and would provide the company with long-term business value. The CIO faces a choice among three suppliers of AMR software; since no supplier is able to deliver all the wanted functionality, a combination is desirable. The CIO will also face the task of integrating the meter reading system with the company's existing service-oriented ERP system for billing purposes. This integration is needed because new regulations require distribution companies to keep track of outages and only bill their customers for actual electricity usage.

Several possible scenarios must therefore be considered, and the CIO decides that a formal evaluation of the candidate scenarios is to be performed. Based on the metamodel of Section 5, information on the entities and their attributes is collected; see Figure 6 for one scenario containing three services from three different vendors. Examples of the information gathered are the semantic correctness of the service descriptions and the provided and invoked operations of the service interfaces. To find information on, for instance, provided operations, the code of the services was studied, while information on the availability of services was obtained by interviewing the developers of the services. All collected variable values were then translated into discrete states, such as Low, Medium, or High, and used as input to the enterprise service interoperability analysis employing Bayesian theory as described in Section 2.

When collecting information for the models, there is an issue of credibility. Low credibility may lead to large uncertainty in the analysis, making it difficult for the CIO to make a rational decision. For instance, studying the code to find the operations of a service is tedious work but, if done well, provides the CIO with highly credible information. Interviewing personnel, e.g. developers and architects, to find the availability of a service is, by contrast, less credible, being dependent on the experience of the personnel and the bias of the interviewer. Oftentimes it is very expensive to collect the information needed for a perfectly credible analysis. Since the analysis is based on the formalism of extended influence diagrams, this variation in credibility can be handled; the presented method of analysis thus provides the CIO with a degree of uncertainty in the result, shown in Figure 7 as bars indicating the range of values the result may assume.
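The following is a small, hypothetical sketch of this evidence-preparation step: a measured value is discretised into Low/Medium/High and paired with a credibility figure that reflects how the evidence was collected. The thresholds and credibility values are invented; in the paper the resulting states feed the Bayesian analysis of Section 2, and low credibility widens the uncertainty bars of Figure 7.

```python
def discretise(value: float, low: float, high: float) -> str:
    # Map a measured quantity onto the discrete states used as input.
    if value < low:
        return "Low"
    return "Medium" if value < high else "High"

# Availability measured as a fraction of uptime (invented figure).
availability_state = discretise(0.97, low=0.90, high=0.99)  # "Medium"

# Credibility depends on the collection method: code inspection is
# treated here as more credible than interviews (illustrative values).
evidence = {
    "availability": (availability_state, 0.6),  # from interviews
    "correctness": ("High", 0.9),               # from code inspection
}

for attribute, (state, credibility) in evidence.items():
    print(f"{attribute}: {state} (credibility {credibility:.0%})")
```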
Fig. 6. The enterprise architecture model of scenario A. In the model, the service calculate energy cost has been enlarged to visualize its attribute values: correctness being High with 90 % certainty and availability being Medium with 80 % certainty.
The final result of the analysis is shown in Figure 7. As can be seen from the figure, scenario 2 did not achieve any service interoperability at all, due to the choice of a service bus with which only one of the services was compatible. Although not detailed in this paper, the method allows for analysis of subcomponents, and it is therefore possible to discover that scenario 3 has decent run-time interoperability but almost no design-time interoperability. Further, it is possible to see that scenarios 1 and 4 both have near-perfect run-time interoperability and that scenario 4 scores somewhat higher on design-time interoperability, yielding the higher total score. The CIO can now make a rational decision, choosing the set of services providing the degree of interoperability needed by the enterprise.
Fig. 7. Comparison of the service interoperability of the different scenarios; the black I-bars indicate the uncertainty of the assessments.
7 Conclusion

This paper has presented an enterprise service interoperability analysis framework in the form of an extended influence diagram with attributes affecting service interoperability, together with an enterprise architecture metamodel supporting the analysis. The metamodel consists of entities with accompanying attributes that can be used to create enterprise architecture models from which it is possible to extract precisely the information needed for quantitative enterprise service interoperability analysis. An example was provided illustrating the use of the metamodel and the extended influence diagram for analysis.
Acknowledgements

The authors would like to thank Per Närman for his previous work on the topic of system quality analysis using enterprise architecture models [27].
References

[1] Johnson, P., et al.: Enterprise Architecture Analysis with Extended Influence Diagrams. In: Information Systems Frontiers, vol 9(2), Springer, The Netherlands (2007)
[2] Shachter, R.: Evaluating influence diagrams. Operations Research, 34(6), pp 871-882, Institute for Operations Research and the Management Sciences, Hanover, Maryland (1986)
[3] Howard, R.A., Matheson, J.E.: Influence Diagrams. Decision Analysis, vol 2(3), pp 127-143, Institute for Operations Research and the Management Sciences, Hanover, Maryland (2005)
[4] Neapolitan, R.: Learning Bayesian Networks. Prentice-Hall, Inc., Upper Saddle River, NJ, USA (2003)
[5] Jensen, F.V.: Bayesian Networks and Decision Graphs. Springer New York, Secaucus, NJ, USA (2001)
[6] Johnson, P., Lagerström, R., Närman, P.: Extended Influence Diagram Generation. In: Enterprise Interoperability II – New Challenges and Approaches, pp. 599-602, Springer, London (2007)
[7] Shachter, R.: Probabilistic inference and influence diagrams. Operations Research, 36(4), pp 36-40 (1988)
[8] Johnson, P., Ekstedt, M.: Enterprise Architecture – Models and Analyses for Information System Decision Making. Studentlitteratur, Lund, Sweden (2007)
[9] Lagerström, R.: Analyzing System Maintainability Using Enterprise Architecture Models. In: Proceedings of the 2nd Workshop on Trends in Enterprise Architecture Research (TEAR'07), pp. 31-39, St Gallen, Switzerland (2007)
[10] Erl, T.: Service-Oriented Architecture: A Field Guide to Integrating XML and Web Services. Prentice Hall, New Jersey (2004)
[11] Erl, T.: Service-Oriented Architecture: Concepts, Technology, and Design. Prentice Hall, New Jersey (2005)
[12] Marks, E., Bell, M.: Service-Oriented Architecture: A Planning and Implementing Guide for Business and Technology. John Wiley & Sons, New Jersey (2006)
[13] Kasunic, M., Anderson, W.: Measuring Systems Interoperability: Challenges and Opportunities. Technical Note, CMU/SEI-2004-TN-003, Software Engineering Institute, Carnegie Mellon University, Pittsburgh (2004)
[14] Linthicum, D.: Enterprise Application Integration. Addison-Wesley, New Jersey (2000)
[15] IEEE: Standard Glossary of Software Engineering Terminology. Std 610.12-1990. The Institute of Electrical and Electronics Engineers, New York (1990)
[16] Saeed, J.: Semantics. Second Edition, Blackwell Publishing, Oxford, UK (2003)
[17] Papazoglou, M., Georgakopoulos, D.: Service-Oriented Computing. In: Communications of the ACM, Vol. 46 No. 10 (2003)
[18] Gottschalk, K., Graham, S., Kreger, H., Snell, J.: Introduction to Web services architecture. In: IBM Systems Journal, Vol. 41 No. 2 (2002)
[19] Zachman, J.A.: A Framework for Information Systems Architecture. IBM Systems Journal, IBM, vol 26(3), pp 454-470 (1987)
[20] Department of Defense Architecture Framework Working Group: DoD Architecture Framework, version 1.0. Department of Defense, USA (2004)
[21] The Open Group: The Open Group Architecture Framework, version 8 Enterprise Edition. Reading, UK (2005), http://www.opengroup.org/togaf/
[22] Office of Management and Budget: FEA Consolidated Reference Model Document, Version 2.1. OMB, USA (2006)
[23] IFAC-IFIP Task Force on Architectures for Enterprise Integration: GERAM: Generalised Enterprise Reference Architecture and Methodology, version 1.6 (1999)
[24] Scheer, A.W.: Business Process Engineering – Reference Models for Industrial Enterprises, 2nd Edition. Springer Verlag, Heidelberg, Germany (1994)
[25] Troux Technologies: Metis Architect – Datasheet. http://www.troux.com (2007)
[26] Papazoglou, M.: Service-Oriented Computing: Concepts, Characteristics and Directions. In: Proceedings of the Fourth International Conference on Web Information Systems Engineering (WISE'03), IEEE (2003)
[27] Närman, P., Johnson, P., Nordström, L.: Enterprise Architecture: A Framework Supporting System Quality Analysis. In: Proceedings of the 11th International IEEE EDOC Conference, Annapolis, USA (2007)
Logical Foundations for the Infrastructure of the Information Market

Michael Heather1, David Livingstone2, Nick Rossiter2

1 Ambrose Solicitors, St Bede's Chambers, Jarrow NE32 5JB, UK, [email protected]
2 School of Computing, Engineering and Information Sciences, Northumbria University, NE1 8ST, UK, WWW home page: http://computing.unn.ac.uk/staff/CGNR1/, {david.livingstone,nick.rossiter}@unn.ac.uk
Abstract. The European knowledge-based economy has a complex product to market to the rest of the world. Techniques of the past, based on the closed-world assumption, have proved useful in many types of local information systems. However, theory and practice suggest that this approach may be inadequate for the infrastructure required. In databases, the relational model, through SQL, has maintained wide dominance in business data processing. However, interoperability between different databases, even when based on the relational model, is proving a major problem. Predicate logic (consistent and complete to first order) has many advantages for practical application. Interoperability, however, requires higher-order logic, as the arguments themselves are relations and functions. Higher-order logic in the context of set theory behaves less satisfactorily according to Gödel's theorems, as such logic cannot satisfy all three of soundness, completeness and effectiveness. This may be a fundamental reason why interoperability is proving so difficult. This paper looks at the underlying problems and suggests that they may be avoided by the use of categorial higher-order logic. Cartesian categories are complete, consistent and decidable. They can be employed as an engineering technique to construct a general architecture of interoperability.

Keywords: Formal approaches and formal models of interoperability, Interoperability infrastructure and IT platforms, Interoperability-related standards, legal and security issues, Architectures and platforms for interoperability, Interoperable knowledge management
1 Introduction

Information systems are basic building materials for the knowledge-based economy of the 2000 Lisbon strategy. The Lisbon agenda of March 2000 set out a strategy resulting in the eEurope 2005 Action Plan to build the knowledge-based economy of the single market [17]. The fundamental basis of knowledge is information, which has to be handled appropriately both by the technology and by the law. What was appropriate for the physical media of the last millennium cannot simply be carried over to the new digital media. Modern information systems operate at every level: from data held in a single-purpose fixed device, through common PCs at home or mobile computing with the business systems of an SME, to databases intra-acting locally and at national level, inter-acting between nation states, and even then open to wider global systems outside of Europe. This interoperability requires global coherence, which mirrors the interoperation of the EU itself.

The reported slow progress with the Lisbon agenda is reflected in a similarly tardy development of interoperable information systems; the latter is perhaps one of the causes of the former. The report of Wim Kok, quoted in [5], recommended that the agenda be re-focused on growth and employment to remedy the small progress over the first five years in member states. Like national employment, the focus needs to be on the details of operation of local information systems. Particular attention needs to be paid to the delivery of the Lisbon agenda:

In order to achieve these objectives, the Union must do more to mobilise all the resources at national and Community levels so that their synergies ... can be put to more effective use ([6] at p.9)

There is a problem with logic. Successful local systems are first order but they need to participate in higher-order activity. The quest for synergy between levels runs into problems arising from semantic interoperability.

There are two fundamental types of data: images and text. Images can be structured, as in graphs, or unstructured, as in photographs. Text can be structured, as in relational databases, or 'unstructured', as in natural language [16]. There is also process data, represented as transactions in many current systems. Music as performed is an interesting example of process data whose permanent form is expressed as graphics with notes on musical staves and as text with expressions in italics such as andante.

Syntactical interoperability is already achieved for digital and analogue data, either where the data is unstructured or where the data has a natural structure representable in a form that holds the informational content, for instance as ordered pixels or natural language text. These may be universally recognisable, as in pictures or other interpretable materials where a common understanding readily exists. For instance, an English text is interoperable throughout the world only in its native state or by translation into some other natural language. Consequently there are not too many problems with the interoperability of natural data: we simply need operations at the syntactical level, which Google does well for text and for well-established image formats like jpeg, tiff and gif.

However, these forms of information are essentially raw data. When value is added to the data through the application of analytical methods, we obtain structured data, for which inter-communication is problematical. In the past these problems were minimised because the data was used mainly in a local setting, that is, intra-communication was needed where a standard was common to the locality. Interoperability itself is concerned with the inter-communication of data at
different and therefore usually heterogeneous localities. Table 1 summarises these definitions. The importance of exactitude in the transmission of data around the single market, and in commercial inter-communication with the rest of the globe, needs to be emphasised.

Table 1. Natural and Structured Data Types
Type | Structure | Examples | Applications
Images | Structured | Graphics | Business graphics
Images | Natural ('Unstructured') | Photographs | Publishing
Text | Structured | Relational databases | Business data
Text | Natural ('Unstructured') | The web | Google
Intermediate Data | Meta-structure added to unstructured data | Semi-structured data (as in XML), Semantic Web, CAD/CAM | Fitting spare parts, Engineering Drawing with instructions
2 Exactness

The concept of exactness is important in commerce. Customers always want what is ordered, whether it is good for their services or not, and not some approximation of the order. Of course in the real world there are always experimental errors, but these can normally be controlled in a local environment and in practical terms can be minimised according to how much the customer wants to pay.

Commerce has always had an international dimension but until very recently it has tended to be locally based. Communications between local bases have been point-to-point, with transportation by land, sea or air and conversations by mail, telephone or wireless broadcasting. The world consisted of a large number of mainly autonomous locally-based entities with simple inter-connection, where the main effort is intra-activity at the local level, fit for the purpose and with merchantable quality. A satisfactory theory can rely on linear models, linear logic, etc [8]. Local equates with classical and of the same type. Exceptionally large systems, even if of the same type, may lie outside local operating conditions. For example, a large database cannot be maintained by one person, and a very large database cannot be copied because in the meantime it has changed. However, the distinction is usually qualitative rather than quantitative.
Non-locality on the other hand may still be composed of what approximates to a set of localities of the same type. It then operates as a classical organisation. The US legal system is able to operate this way, with a federal law coordinating different local state laws. The early days of the EU imposed the same laws on the member states, which at that time were few in number. But the character of Europe is diversity and variety, in language, culture, customs and style, with quite different ways of working in manufacturing, commodities and the provision of services. When the number of member states of Europe became enlarged, imposing the same laws became impossible and there was a move to harmonisation. The position is comparable with information systems in different states. Co-ordinating systems of the same type is only a first-order activity, but for heterogeneous systems higher-order operations are needed.

Legal systems are archetypal general information systems. Intensionally the legal systems of the member states of Europe are identical and feed into the overarching European law in the European Court of Justice. But each local legal system is extensionally different. We shall see here how the theory of interoperating business information follows the same practice. The topos in Figure 4 could represent the legal systems of Europe just as easily as information systems.

Exact operations with systems of the same type make it easy to assume that the same conditions will apply for systems of different types, but in reality the results can be radically different and unpredictable, even leading to dangerous results, and are therefore a subject for rigorous risk management. These may be derived from theory or experimental results, preferably both.

The theory in this respect is shown by the Austrian mathematician Kurt Gödel. Gödel was able to show in his doctoral thesis of 1929 on the completeness theorem [9] that first-order predicate logic is complete. The significance is that consistent logical propositions may be applied as a model of first-order systems. Thus a digital computer operating with a von Neumann architecture gives satisfactory results within first-order limits. Most of the work in applied mathematics in the last two centuries has relied on clever theories to keep within the first-order limits. However, there are applications which have defied such analytical methods, like turbulence, where it has been necessary to resort to more qualitative techniques as found in chaos theory. These do not provide exact results. This can also be explained by Gödel's theory of undecidability, which is perhaps even better known [10]. Gödel's theorem shows that both intensional and extensional systems which rely on axiom and number are undecidable.27 Traditional mathematical modelling, which relies on set theory, cannot therefore be applied directly to higher-order behaviour. An example of undecidability in the case of a computational machine relying on the Church-Turing thesis is the halting problem.

A commercial example can be found in the implementation of Codd's relational model [1] as it is utilised in modified form in much of current data processing. As we shall see in the next section, in its pure form the relational model works well for atomic data because it is within Gödel's principle of first-order predicate logic and is therefore complete. That is, the relational model and its corresponding calculus give exact results for atomic data. While the commercial version of the relational model, SQL, is a vast improvement on earlier data models, it has compromised some of the relational model's features and is neither complete nor decidable. The relational model is sometimes trumpeted as an example of the effectiveness of logic in computer science [11], but this holds only for theoretical computer science: implementations of the model show a divergence between theory and practice. Of course real-world data does not consist of homogeneous independent items making up atoms. This has resulted in various techniques such as normalisation, with a series of normal forms (first normal form, second, third, etc) which attempt to squeeze real-world phenomena into a collection of first-order relations that behave optimally with regard to update and search operations. Hence data lacking any naturally regular atomic form may be squeezed by normalisation into such a structure in a consistent manner.

27 There is the question of even how to define completeness, consistency, decidability, soundness and effectiveness. The literature itself is not consistent and we will therefore leave aside what these words mean in a set-theoretic context, relying below on the corresponding categorial concepts as definitive.
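As a brief illustration of this first-order reading, the sketch below models a relation as a finite set of tuples and evaluates a query as a membership test; the query is decidable precisely because it stays within first-order logic over a finite relation. The relation and its data are invented.

```python
# Each tuple is a true proposition for the relation's predicate:
# (invoice_id, customer_id, amount).
Invoice = frozenset({
    ("1370137", "7130713", 250.00),
    ("1370138", "7130713", 120.50),
})

def holds(invoice_id: str, customer_id: str, amount: float) -> bool:
    # Query evaluation as a membership test: under the closed-world
    # assumption, any tuple not in the relation is false.
    return (invoice_id, customer_id, amount) in Invoice

print(holds("1370137", "7130713", 250.00))  # True
print(holds("9999999", "7130713", 250.00))  # False under the CWA
```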
3 Practical Examples of Interoperability Problems

The relational model predominates in much of commerce today as the format for structured information. Yet there are very significant interoperability problems between one relational database and another. Some of these can be attributed to problems with the underlying SQL standard, as described below.

3.1 Variants of SQL
Vendors of SQL DBMS support different variants of SQL, all differing in varying degrees from the versions of the SQL International Standard,28 either having additional features and/or omitting features.

Table 2. Effects of Variants of SQL on Interoperability

Features | Achievements | Problem in interoperability
Full facilities | Not achieved by MySQL | Very difficult between MySQL and other DBMS
Hierarchies, manipulation | Peculiar to Oracle | Difficult between DB2 and Oracle in network/hierarchical structures
Recursive union, assembling networks | Peculiar to DB2 | Difficult between DB2 and Oracle in network/hierarchical structures
Implementation of integer type | Oracle treats as numeric(38) | Difficult between Oracle and other DBMS in formatting and rounding numbers
Dates | Different logical formats | Difficult between all systems in reliable data format recognition

28 Information Technology – Database Languages – SQL, ISO/IEC 9075:2003 (2003).
Some features described in the standard are labelled implementation-dependent, meaning they are independent of the standard. Others are implementation-defined, meaning that the manner in which the feature is achieved is at the discretion of the implementer. Therefore it is not always possible to guarantee a semantically valid transfer of data from one SQL DBMS to another, since the recipient DBMS may treat the received data in a different way to that in which the sending DBMS would have treated it, had it carried out ostensibly the same operations. This situation arises because the standards are not based completely on scientific or mathematical principles. Standards are also influenced by the software vendors, who are looking for pragmatic and strategic ways in which their products can be promoted. Examples of problems at the data level are shown in Table 2.

Therefore, if two databases are to be interoperable, it is much simpler if they are both managed by the same DBMS package, because then the only problems in this context are those arising from the consistent application of one variant of SQL. For this and other reasons, such as reduced DBMS maintenance and licence fees, multiple-vendor SQL DBMS installations are in practice almost unheard of, except where they arise through force of circumstance, for example the merger of two previously independent companies.

3.2 SQL versus the Relational Model
The relational model is based on two mathematical theories: first-order predicate calculus and relations ([1] at p.v). Interestingly, the relations permitted are not completely general (at p.467-477). In particular, a collection of n-ary relations of assorted degrees is strongly encouraged, where n is an integer giving the degree of a particular relation. Both a single universal relation and a collection of binary relations are strongly discouraged, as the former loses flexibility in logical navigation and the latter is cumbersome and unnatural. Further, if the collection of n-ary relations is constrained to be in first normal form with all values atomic, then the predictable regular form greatly simplifies the query language.

No version of SQL implements the full relational model, either as specified by Codd or as evolved from Codd's model by others, for example [3]. Some differences between SQL and the relational model are summarised in Table 3. The different structures of set and bag, for the relational model and SQL respectively, are of particular interest. Sets, bags and other container types such as sequences of tuples are specifically defined and cannot be used interchangeably. However, there are means of carrying out conversions from one kind of container type to either of the other two, making it possible to achieve mathematical exactitude in defining and manipulating the different kinds of tuple containers.

Current work at Northumbria University in the Open Database Project shows the need for rigorous definition at the local level in prototyping languages like Raquel [14], to produce an open source implementation of the features specified in the Third Manifesto [3]. The aim is to keep as close as possible to the philosophy of the pure relational database model, including object classes as data types orthogonal to relations, an open architecture satisfying this philosophy, a design for the architecture and an implementation of that design. Interoperability is facilitated by the use of pure relational languages such as Raquel, together with conversion techniques for mapping between different container types.

Table 3. Differences between SQL and the Relational Model
Feature in SQL | Feature in relational model | Consequence for SQL
Default structure is bag | Structure is set | Duplicate rows permitted, inconsistency in updates
Row identifier | No identifiers | Physical bias to extension
Rows may be sequenced | No sequencing of rows | Data is apparently ordered
Set operations such as union based on column position | Set operations such as union based on column name | Set operations are based on physical, not logical, ordering of columns
Duplicate column names allowed in output | No duplicates allowed | Confusing output
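The set-versus-bag difference in the first row of Table 3 can be made concrete with a short sketch; the data is invented, and a Python list and set stand in for an SQL bag and a relational set respectively.

```python
rows_a = [("Sui Generis S.A.", 13713), ("Acme Ltd", 10115)]
rows_b = [("Sui Generis S.A.", 13713)]

bag_union = rows_a + rows_b             # bag semantics: duplicates retained
set_union = set(rows_a) | set(rows_b)   # set semantics: duplicates removed

print(len(bag_union))  # 3 -- the behaviour of SQL's UNION ALL
print(len(set_union))  # 2 -- union in the pure relational model
```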
3.3 Closed World Assumption
The definition of a relation should be a logical predicate such that each tuple in the relation corresponds to a logical proposition that is true for that predicate. By the Closed World Assumption (CWA), any tuple not in the relation represents a false proposition. This is an attempt to satisfy Gödel's decidability principle: tuples in the relation are true and tuples outside the relation are false [2]. However, it is not possible for a relational DBMS to guarantee that all the tuples in a database represent true propositions, only that they all consistently meet a set of integrity constraints that partially represent the real-world logical predicate. This is because the typing system is based on set-inclusion principles rather than constructive ones. Therefore decidability may be a problem with all relational systems.

Nulls are another example of where database approaches based on the CWA run into difficulties. ([1] at p.383-387) suggests that the relational model should permit nulls as markers, with two interpretations: missing-but-applicable and missing-and-inapplicable. Nulls are not data values. A four-valued logic is then employed to manipulate such data, with the outcomes true, false or two types of maybe. SQL claims to have a three-valued logic, since logical variables may take the values true, false or null. However, it is not clear that null can safely be equated with maybe, and a number of problems arise, as shown in Table 4. From the Gödel perspective,
nulls make a system undecidable. It is therefore not surprising that practical implementations such as SQL have many problems in handling nulls. Some more recent versions of the relational model do not permit nulls at all, for example [3]. It is likely that the handling of nulls would be facilitated by the use of metadata to describe the reason for each null. The questionable CWA also raises problems in proving that the result of a query is logically valid, giving rise to undecidability and a lack of completeness. Compounding the problem is that many users look only for plausible results, often on small volumes of data.

Table 4. Problems with Handling of Nulls by SQL
Case | Result | Problem
Creation of nulls, whether the value is missing-but-applicable or missing-and-inapplicable | No distinction | Semantic simplification
Use of null to represent maybe in the Boolean type | Three values for Boolean logic | Contrary to normal view of Boolean logic as binary valued
Comparing a null value with a null value | maybe with join/restrict, true in set operations | Difference in outcome between set operations and other operations such as join
Split table into a set of sub-tables using restrict; union resulting sub-tables | No guarantee that this will be the original table | Restrict only returns rows where the comparison returns true; hence those returning null are ignored and lost
Aggregation operators applied to columns containing some nulls | Count includes them; others ignore them | Arbitrary application
Aggregation operators applied to columns containing all nulls | Count returns zero; others return null | Arbitrary application
Second order distributivity (e.g. fuzzy sets) | Logical equivalences are not true | Inconsistent treatment of nulls
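As an illustration of the restrict-and-union row of Table 4, the following sketch emulates SQL's three-valued logic, with None standing in for null: restricting a table on a comparison and unioning the two restrictions silently loses the rows whose comparison evaluates to maybe. The data is invented.

```python
NULL = None

def le3(a, b):
    # Three-valued "<=": any comparison involving NULL yields NULL (maybe).
    if a is NULL or b is NULL:
        return NULL
    return a <= b

rows = [("r1", 10), ("r2", NULL), ("r3", 20)]

# Restrict keeps only rows where the predicate is strictly true,
# exactly as SQL's WHERE clause does.
low  = [r for r in rows if le3(r[1], 10) is True]
high = [r for r in rows if le3(r[1], 10) is False]

# The union of the two restrictions is NOT the original table:
# ("r2", NULL) evaluated to maybe and was dropped by both.
print(low + high)   # [('r1', 10), ('r3', 20)] -- r2 is lost
```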
It is perhaps worth raising the question as to whether object-oriented databases would overcome some of the disadvantages above. The answer is no: objects suffer from all the problems of sets, and methods are implemented through an enriched type system, similar to sketches or perhaps 3-categories. The object-oriented approach needs to be founded in category theory to be complete and decidable. Codd said ([1] at p.22): "One of the main reasons that object-oriented DBMS and prototype products are not going to replace the relational model and associated DBMS products is their systems appear to omit support for predicate logic. It will take brilliant logicians to invent a tool as powerful as predicate logic. Even then such an invention is not an overnight task. Once invented it may take more than a decade to be accepted by logicians. Thus features that capture more of the meaning of data, which is important, should be added to the relational model, instead of being proposed as replacements."

Intuitionistic logic offers a more convincing way forward. Compared to first-order Boolean predicate logic, intuitionistic logic is more naturally applicable to open systems, is constructive and avoids the problem of impredication.
4 Interoperability and Categories

In category theory [15], alternative meanings of decidability, completeness, satisfiability, soundness and consistency, all used by Gödel, converge. They come together in the composition diagram in Figure 1(a). The negation of these terms, or the cases where they fail wholly or in part, are all subsumed in the diagram of punctured composition [7] in Figure 1(b). At one level the composition diagram of Figure 1(a) is a formal categorial representation of Gödel's result that first-order predicate logic is complete. This diagram therefore satisfies the local intra-operability of a single system and first-order interoperability between simple systems. The difference is that moving to higher orders, such as axiomatic number systems, is undecidable. However, that limitation does not apply to a process view of the arrow. Composition is still satisfied by the diagram even where the arrows are of different types or from different levels. We have shown in [19] that free interchange between four levels can satisfy any realisable system. The conditions for interoperability come from adjointness [18] between the two composition triangles in Figure 2.
Fig. 1. Commuting Diagrams for a) Composition, b) Punctured Composition
Critical details of these two triangles are the values η and ε, respectively the unit and counit of the adjunction [13], in Figure 3. F is the functor that carries the data across from the left system to the right system; G is the underlying functor giving the rules for that transmission. f and g respectively represent dynamic data in the left and right systems. L is an object in category L and R an object in category R. Note that this defines information flow in one direction only. For two-way communication there has to be a self-adjointness of both left and right systems. As explained above in the example of legal systems, this is a process of harmonisation. The systems do not need to be identical; that is, the unit and counit of the one adjunction may be other than those of the other.
Fig. 2. Cartesian closed adjointness. Each circle superimposes clockwise and anticlockwise closed arrows as identity functors, indicated by the contravariant arrow heads (top and bottom representing initial and terminal objects respectively). The left (L) and right (R) categories are themselves opposites, as shown by the arrowhead directions.
4.1 Architecture for Interoperability
Fig. 3. Roles in Adjointness of a) η, b) ε
The architecture for interoperability between more than two systems is then a composition diagram of the form shown in Figure 4. Fundamental category theory shows that for physical existence the real world operates as a cartesian closed category; all the categories drawn above are therefore cartesian closed. The theory also shows that any such operation involves only two categories (L, R) and a context category C: a left system communicates with a right system in the context of all other systems. All other systems therefore form a single context category. More precisely this is a topos T, as in Figure 4. Remember that interoperability is really a global character where everything is connected to everything else; we are not dealing with discrete systems. The context category described above is only a view and is really the limit of the topos, so C → T, and of course L and R are themselves subobjects of T. This is the essence of interoperability, where category theory, and nothing less than category theory, can give the required insight to construct exact and decidable interoperating information systems. Category theory has long been associated with graph theory; a graph-based approach to interoperability such as that of [20], using metamodels, is broadly in the same direction as our work.
Fig. 4. Architecture for Interoperability: Topos T involving categories L,R and Context Category C
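The compositional machinery behind Figures 1 to 4 can be illustrated mechanically. The sketch below, over invented toy data, checks that a functor F between two small finite categories preserves composition, F(g∘f) = F(g)∘F(f); it is a diagram chase reduced to a table look-up, not a rendering of the topos construction itself.

```python
# Morphisms of a toy left category L (identities omitted): name -> (source, target).
L_mor = {"f": ("A", "B"), "g": ("B", "C"), "g.f": ("A", "C")}
L_comp = {("g", "f"): "g.f"}          # composition table of L

# The functor F maps morphisms of L to morphisms of a right category R.
F_mor = {"f": "u", "g": "v", "g.f": "v.u"}
R_comp = {("v", "u"): "v.u"}          # composition table of R

def preserves_composition() -> bool:
    # For every composable pair (g, f) in L, F(g.f) must equal F(g).F(f);
    # checking this is decidable because it is a finite table look-up.
    return all(F_mor[gf] == R_comp[(F_mor[g], F_mor[f])]
               for (g, f), gf in L_comp.items())

print(preserves_composition())   # True for this toy functor
```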
5 Conclusions

From the work of Gödel, first-order predicate systems are complete, consistent and decidable. Much of the attention in defining a relational data model has focused on keeping to a strict first-order system. The treatment of issues such as normalisation, nulls and recursion by workers developing a pure relational model [3, 1] is designed to avoid the need to handle higher-order logic in set theory. Indeed the relational model in its proper form is classified as one of the outstanding successes of logic in computer science [11]. The more casual treatment of such factors in SQL has led to systems which are no longer consistent and decidable, giving many problems in interoperability. The kinds of problems with SQL standardisation are covered in [4], where underlying weaknesses in standards are thought to occur early in the standardisation chain, such as through a weakness in the standards idea or the standards process.

Interoperability is essentially a higher-order problem. For higher-order systems we need composability to achieve the same rigour as found in first-order predicate systems. Composability is a cornerstone of category theory, and an architecture based on the topos has been proposed for achieving interoperability while meeting Gödel's requirements.
Acknowledgements

We are grateful to Hugh Darwen for his comments on the SQL standard and the relational model, and to the contributors at the AREIN workshop, Madeira, March 2007, where a preliminary version of this work was first presented [12].
References

[1] Codd, E F, The Relational Model for Database Management, Addison-Wesley (1990).
[2] Date, C J, Gödel, Russell, Codd: A Recursive Golden Crowd, http://www.dcs.warwick.ac.uk/~hugh/TTM/goedel.pdf, 6pp, July 17th (2006).
[3] Date, C J, & Darwen, Hugh, Databases, Types and the Relational Model: The Third Manifesto, 3rd ed, Addison Wesley (2006).
[4] Egyedi, T, Experts on Causes of Incompatibility between Standard-Compliant Products, in: Enterprise Interoperability: New Challenges and Approaches, Doumeingts, G, Müller, J, Morel, G, & Vallespir, B, (edd), Springer 553-563 (2007).
[5] Euractiv Network, Relaunch of the Lisbon Strategy, http://www.euractiv.com/en/innovation/growth-jobs-relaunch-lisbon-strategy/article131891 (2005).
[6] European Commission, Internal Guidelines, in: Integrated Guidelines for Growth and Jobs 2005-2008, 2005/0057, http://ec.europa.eu/growthandjobs/pdf/COM2005_141_en.pdf (2005).
[7] Freyd, P, & Scedrov, A, Categories, Allegories, North-Holland (1990).
[8] Girard, Jean-Yves, Une extension de l'interprétation de Gödel à l'analyse, et son application à l'élimination des coupures dans l'analyse et la théorie des types, Studies in Logic and the Foundations of Mathematics, North-Holland 63-92 (1971).
[9] Gödel, Kurt, Über die Vollständigkeit des Logikkalküls, Doctoral Thesis, D1.736 33pp, University of Vienna (1929). Reprinted in Feferman, S, ed, Gödel Collected Works, volume 1 (1986).
[10] Gödel, Kurt, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I, Monatshefte für Mathematik und Physik 38 173-198 (1931); translated in Jean van Heijenoort, From Frege to Gödel: A Source Book on Mathematical Logic, Harvard 596-616 (1967).
[11] Halpern, Joseph Y, Harper, Robert, Immerman, Neil, Kolaitis, Phokion G, Vardi, Moshe Y, Vianu, Victor, On the unusual effectiveness of logic in computer science, Bulletin of Symbolic Logic 7(2) 213-236 (2001).
[12] Heather, Michael, Livingstone, David, & Rossiter, Nick, Higher Order Logic and Interoperability in Global Enterprise, AREIN, I-ESA, 26-30 March, 12pp (2007).
[13] Lawvere, F W, Adjointness in Foundations, Dialectica 23 281-296 (1969).
[14] Livingstone, David, Open Database Project, CEIS, Northumbria University, http://computing.unn.ac.uk/openDBproject/ (2007).
[15] Mac Lane, S, Categories for the Working Mathematician, 2nd ed, Springer (1998).
[16] Probst, G, Raub, S, & Romhardt, K, Managing Knowledge: Building-Blocks for Success, Wiley (2000).
[17] Rodrigues, Maria Joao, The Debate Over Europe and the Lisbon Strategy for Growth and Jobs, 2005.10.21, http://www.mariajoaorodrigues.eu/files/The_Debate_over_Europe_and_LS.doc (2005).
[18] Rossiter, Nick, & Heather, Michael, Conditions for Interoperability, 7th ICEIS, Florida, USA, 25-28 May 2005, 92-99 (2005).
[19] Rossiter, Nick, Heather, Michael, & Nelson, David, A Natural Basis for Interoperability, in: Enterprise Interoperability: New Challenges and Approaches, Doumeingts, G, Müller, J, Morel, G, & Vallespir, B, (edd), Springer 417-426 (2007).
[20] Ziemann, J, Ohren, O, Jaekel, F-W, Kahl, T, & Knothe, T, Achieving Enterprise Model Interoperability Applying a Common Enterprise Metamodel, in: Enterprise Interoperability: New Challenges and Approaches, Doumeingts, G, Müller, J, Morel, G, & Vallespir, B, (edd), Springer 199-208 (2007).
Meeting the Interoperability Challenges of eTransactions among Heterogeneous Business Partners: The Advantages of Hybrid Architectural Approaches for the Integrating Middleware

G. Gionis, D. Askounis, S. Koussouris, F. Lampathaki

National Technical University of Athens, Athens, Greece
{gionis, askous, skoussouris, flamp}@epu.ntua.gr
Abstract. The escalating economic and societal demands of today, along with continuous advancements in ICT, push enterprises and organisations to move towards networked paradigms and to leverage electronic transactions in everyday practice. Although technical solutions providing the necessary organisational, semantic, and technical interoperability means to enable e-transactions have been rigorously justified during the last years, their adoption and application in everyday business practice by enterprises and organisations still remains limited. Specific characteristics of the existing solutions, such as inflexible workflows, predefined formats and content for the exchanged documents, hard-coded business and legal rules, use of proprietary technologies and inability to be readily deployed and validated for their efficiency, act as the main inhibitors for potential users. The present work discusses the needed characteristics of centralized and decentralized architectures for e-transactions among business partners, identifies the weak points of each pattern and proposes a hybrid architectural approach that brings together the best features of both paradigms. Furthermore, specific insights, methodologies and underlying technologies are proposed with the objective of supporting the effectiveness of the architecture and its components in integrating processes, data and services, achieving fully electronic transactions among businesses and governments in many European countries.

Keywords: electronic transactions, architectures, interoperability, ERP
1 Introduction

Today, escalating economic and societal demands, along with continuous advancements in Information and Communication Technologies (ICT), set a growing agenda of pursuits for enterprises and organisations, and challenge the capabilities of their underlying technical infrastructures to support them effectively in their struggle to gain ground on their competitors and establish their position in the market. In this world, novel business practices such as the paradigm of electronic transactions (eTransactions) are constantly gaining momentum. Just as Porter [17] stated in 2001 that the question for enterprises is not whether to move to the Internet but when and how to do it in order to create new value, today the silver-bullet decision for businesses and organisations no longer lies in the dilemma of whether or not to adopt ICT-enabled eTransactional practices, but in how to integrate them as quickly and effectively as possible into their operation to achieve the biggest possible benefits.

However, specific characteristics of the proposed solutions, such as inflexible workflows, predefined formats and content for the exchanged data, predefined business and legal rules, traffic of sensitive business information through third-party systems, use of proprietary technologies and inability to be readily deployed and validated as to their efficiency, act as the main inhibitors for potential users. This renders current solutions costly, rigid and difficult to adapt to meet the requirements of evolving enterprises [21]. Many of these drawbacks can be largely attributed to the adopted architectural patterns – namely fully centralized (server-based) or purely decentralized (peer-to-peer) – and the specificities that these patterns bring about in terms of rigidity, operational philosophy, technological maturity and cost, which contradict fundamentals of the eTransactions business perspective as it is viewed by enterprises and organisations.

In this paper we begin by outlining, in Section 2, the framework of what constitutes ICT-enabled eTransactions from the enterprises' viewpoint. In Section 3 we present a view of the underlying state of the art in enabling technologies and platforms, discuss the advantages and disadvantages of the two main architectural paradigms and establish the characteristics of a hybrid architecture bringing together the best-of-breed features from both worlds. In Section 4 we present the features of this hybrid architectural approach and roadmap the creation of a corresponding platform based on it.
2 Interoperability Challenges of eTransactions: The Enterprises' Viewpoint

Enterprises and organisations realize the entire spectrum of their everyday transactions through a series of consecutive cycles of requesting, receiving, filing, issuing and sending business documents (order forms, invoices, payment responses, etc) that take place in parallel with the corresponding physical operations carried out during the transactions (shipping products, receiving products, rendering services, etc). In the back-end of each stakeholder there are a number of internal processes whose business logic defines the flow of these send-receive activities. It is through these send-receive endpoints that the two stakeholders communicate with each other.

In simple cases the business logic of the internal processes is entirely confined within the interior of each stakeholder, producing a straightforward workflow. In real-world situations, however, this is hardly the case, especially when long successive cycles of sent and received business documents are required. In such cases the internal business logic inevitably intervenes at execution time in the way send-receive endpoints are sequenced, making "a priori" knowledge of the complete workflow virtually impossible. This is illustrated in the following example, taken from [23], where during a transaction the invoicing procedure between seller B and buyer A may be supplemented either by a Credit Note or a Debit Note. Such a case arises very often when there are discrepancies between the items stated in the Invoice and the items actually received (prices, volume, etc). There are two options:

1. A pays for the items received (the amount is different from the one stated in the Invoice), then sends a response to B for the amount paid, and B issues a corresponding credit to A (Credit Note).
2. A sends a note to B (Debit Note) for the items received, B re-issues the Invoice, A pays the proper amount according to the new Invoice and sends a response to B for the amount paid.
Table 1: Fulfilling the Invoicing Procedure Either with a Credit Note or a Debit Note

Invoicing between seller B and buyer A with a Credit Note:
B: issues an Invoice; B: sends the Invoice to A; A: receives the Invoice; A: arranges for payment (internal process); A: issues a Response; A: sends the Response to B; B: receives the Response; B: issues a Credit Note; B: sends the Credit Note to A; A: receives the Credit Note.

Invoicing between seller B and buyer A with a Debit Note:
B: issues an Invoice; B: sends the Invoice to A; A: receives the Invoice; A: issues a Debit Note; A: sends the Debit Note to B; B: receives the Debit Note; B: re-issues the Invoice; B: sends the Invoice to A; A: receives the Invoice; A: arranges for payment (internal process); A: issues a Response; A: sends the Response to B; B: receives the Response.
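A minimal sketch of the two alternatives of Table 1 as explicit step sequences follows. It is illustrative only, but it makes the point of the example concrete: which sequence runs is decided by A's internal logic at execution time, so the complete workflow cannot be known a priori.

```python
CREDIT_NOTE_FLOW = [
    ("B", "send", "Invoice"), ("A", "receive", "Invoice"),
    ("A", "internal", "arrange payment"),
    ("A", "send", "Response"), ("B", "receive", "Response"),
    ("B", "send", "Credit Note"), ("A", "receive", "Credit Note"),
]

DEBIT_NOTE_FLOW = [
    ("B", "send", "Invoice"), ("A", "receive", "Invoice"),
    ("A", "send", "Debit Note"), ("B", "receive", "Debit Note"),
    ("B", "send", "Invoice"), ("A", "receive", "Invoice"),
    ("A", "internal", "arrange payment"),
    ("A", "send", "Response"), ("B", "receive", "Response"),
]

def run(discrepancy_accepted: bool):
    # A's run-time decision selects the branch; the choice is not
    # known when the collaboration is set up.
    flow = CREDIT_NOTE_FLOW if discrepancy_accepted else DEBIT_NOTE_FLOW
    for actor, action, document in flow:
        print(f"{actor}: {action} {document}")

run(discrepancy_accepted=True)
```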
All business documents affiliated with business transactions are produced from data that exist within the enterprises' technical infrastructures. The heterogeneity of the stakeholders' infrastructures therefore entails a corresponding heterogeneity in the way business documents are represented by each partner, in terms of the semantics and format of their content. The following table illustrates how a segment of the same data in an invoice sent from partner A to partner B is represented differently when the invoice is stored inside each of the stakeholders' systems.
Table 2: Different Representations of the same Segment of Data within the Stakeholders' Infrastructures

System of Partner A | System of Partner B
Company Name: Sui Generis S.A. | Partner Name: Sui Generis S.A.
Billing Address: 7 Panepistimiou str, Athens, Greece | Street Name: Panepistimiou; Street Number: 7; City: Athens; Country: Greece
Postal Code: 13713 | Zip Code: 13713
Customer ID: 7130713 | Partner ID: 7130713
Invoice Number: 1370137 | Invoice ID: 1370137
Consequently, for any two potential business partners to engage in eTransactions they need to find an ICT-based, systematic way of addressing the following challenges:

- Defining and executing workflows of service endpoints that correspond to the issue, send, receive, etc functionalities of underlying systems that have been externalised as web services – the process integration and dynamic service composition challenges.
- Transforming business documents between different formats based on a mapping between the semantics used by the underlying systems – the data integration challenge.

Given the current technological state of the art, the above challenges are feasible to address on a one-to-one basis between two stakeholders, as sketched below. However, if an enterprise or organisation aspires to realise the "Business Perspective" of eTransactions, it would have to apply this one-to-one approach with every single one of its business partners (a one-to-many approach), which is practically impossible due to its exponential complexity and corresponding cost. Following the definitions of interoperability in its organisational, semantic and technical aspects, this paper focuses on solving the problem at the latter two levels (i.e. semantic and technical) by introducing an architecture with the ability to integrate existing systems, defining and executing dynamic service workflows of externalised (web service) functionalities and transforming business documents among different formats based on underlying models of the processes and data of these systems.
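As a hedged illustration of the data integration challenge, the sketch below transforms the invoice segment of Table 2 from partner A's representation to partner B's. The field names come from Table 2; the mapping and the address-parsing logic are invented for the example.

```python
invoice_a = {
    "Company Name": "Sui Generis S.A.",
    "Billing Address": "7 Panepistimiou str, Athens, Greece",
    "Postal Code": "13713",
    "Customer ID": "7130713",
    "Invoice Number": "1370137",
}

def split_address(addr: str) -> dict:
    # Partner B stores the address as separate fields.
    number_street, city, country = [p.strip() for p in addr.split(",")]
    number, street = number_street.split(" ", 1)
    return {"Street Name": street.replace(" str", ""),
            "Street Number": number, "City": city, "Country": country}

def a_to_b(doc: dict) -> dict:
    # Field-by-field mapping between the two partners' semantics.
    out = {"Partner Name": doc["Company Name"],
           "Zip Code": doc["Postal Code"],
           "Partner ID": doc["Customer ID"],
           "Invoice ID": doc["Invoice Number"]}
    out.update(split_address(doc["Billing Address"]))
    return out

print(a_to_b(invoice_a))
```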
3 Discussion on the Underlying State of the Art

Interoperability as a driver for eTransactions constitutes a challenging field where scientific advances are very often accompanied by significant benefits for the adopting stakeholders – according to Gartner, ICT interoperability products alone constitute a thriving market that surpasses 200 M€ per year worldwide. The underlying state of the art is therefore quite advanced, ranging from methodologies, frameworks and standards for process, data and service integration to commercial off-the-shelf (COTS) solutions and integrated platforms for eTransactions.

3.1 Products and Platforms for eTransactions
Several approaches have emerged with the objective of helping to integrate business partners and providing them with capabilities for eTransactions [26]. Below we provide a short overview of the foremost examples of such commercial, open source and research community middleware.

Microsoft BizTalk Server [3] is a business process management server that enables companies to automate and optimize business processes. It includes tools to design, develop, deploy, and manage business processes. The environment also provides the ability to design, build, and execute dynamic business interactions that span applications, platforms, and organizations. The current version of BizTalk supports specifications (XLANG) for modeling business processes and incorporates an orchestration engine for executing and monitoring processes.

Oracle Application Server [16] is a new release of the core service-oriented architecture platform underlying Oracle Fusion Middleware. It is designed to address three challenges: service-oriented development of applications, event-driven business process optimization, and a unified workplace with pervasive multichannel access.

SAP NetWeaver Platform [18] is an application builder platform for integrating business processes across various systems, databases and sources. It is a service-oriented application and integration platform that is designed to interoperate with various proprietary technologies and commercial platforms like Microsoft .NET, Sun Java EE, and IBM WebSphere.

Freebxml [7] is an initiative that aims to foster the development and adoption of ebXML-related technology through software and experience sharing. In contrast to other initiatives around ebXML, Freebxml does not define specifications but instead provides implementations of already existing ebXML specifications.

3.2 Peer-to-Peer and Server-based Approaches: Advantages and Disadvantages
Up until now, middleware integration platforms have mostly been realized on the basis of centralized, client/server models [1]. Different flavours of client/server constellations exist today, ranging from centralized hub-and-spoke systems to point-to-point links between two trading partners [20]. In all cases a central instance mediates between one or several users, provides the necessary functionality for seeking, negotiating and binding services, and finally orchestrates the collaboration processes and handles message exchanges.

However, when it comes to business integration of enterprise services across the boundaries of firms, central systems cannot be considered an ideal solution for coping with "mission critical" issues such as increasing workload, scalability and risk of failure. In central systems all services are inevitably assembled and executed on the central server's runtime, consuming a significant amount of its computation power, which may diminish the performance of the overall system. Additionally, such systems by design present reduced scalability, since any changes in the server are propagated to and affect all the interconnected clients. Finally, the existence of a single, central point of failure makes this architectural approach even less appropriate for business transactions.

On the other hand, decentralized, peer-to-peer (P2P) architectures do not present such problems. P2P networks typically do not have the notion of clients or servers, but only equal peer nodes that concurrently act as both clients and servers to the other nodes in the network. Instead of a central instance being responsible for the setup and control of business relationships, P2P transactional systems enable users to dynamically and autonomously negotiate and establish an agreement for the automatic execution of business processes. This approach therefore provides, in theory, clear advantages in many respects for business transactions. From an economic point of view, the cost and risk of ownership and maintenance of a central server and the corresponding infrastructure are avoided. The improved scalability and the ability to deal with transient populations of users can be considered a further important argument for deploying distributed eTransactional systems. Furthermore, two trading partners do not have to exchange documents via a third party, with the risk of having sensitive data intercepted, but have the benefit of direct, unmediated and potentially synchronous communication.

However, P2P-based architectures also suffer from a number of drawbacks. Such environments consist of numerous autonomously acting peers that require sophisticated and reliable mechanisms for partner retrieval to cope with quickly changing network topologies, whereas a central server could easily maintain one single catalogue indexing all currently connected users. A second issue is the assurance of user authentication and access control, where, as opposed to a centralized architecture, there is no single entity that can act as a neutral mediator maintaining certificates of the respective participants' identities. Apart from these, the negotiation and establishment of collaboration protocol agreements (CPAs) is more complex than in the case of a central system, since there is no central instance available that could provide standardized guidelines. Finally, the progress and state of the conducted transactions must be maintained ("mirrored") by all the engaged peers, since the orchestration of services is performed in a decentralized way. Further challenges, such as the introduction of methods for authentication, non-repudiation, logging, time-stamping and the maintenance of all transaction information, are still left to be investigated.

3.3 Best of Breed of the Two Approaches
According to [27], the different models for semantic interoperability are classified along two fundamental dimensions: 1) choosing one of two possible ways to set up integration mappings, one in which each service schema is mapped to every other (any-to-any) and another in which each one is mapped to a single schema (any-to-one), and 2) choosing whether the integration logic is executed in a single,
distinguished node (centralized) or the execution is distributed among multiple, functionally equivalent nodes (decentralized). Using this criterion, if we classify the aforementioned advantages and disadvantages of each architectural approach, we end up with the following table, which illustrates the strong points (✓) and weaknesses (✗) of each architecture in integrating the mappings and logic of their interconnected users.

Table 3: Classifying Advantages and Disadvantages of Server-Based and P2P Architectures According to the Different Categorisations of Semantic Interoperability

                 Model for Mapping Integration    Model for Logic Integration
                 Any-to-One      Any-to-Any       Centralized     Decentralized
  Server-Based   ✓                                ✗
  P2P                            ✗                                ✓
According to the table, in a server-based system the advantage lies in the fact that users are required to map their processes, data and services in an Any-to-One way (each partner maps only against the server). However, the server assumes the entire burden of executing the necessary logic, i.e. service workflows and document transformations, in a centralized way, which is the main disadvantage due to the high workloads created. In a P2P platform, on the other hand, the main disadvantage is that business partners are required to perform and maintain an Any-to-Any mapping of their processes, data and services, which leads to a very high level of complexity (each partner maps against every other partner in the environment); for ten partners this means up to 90 directed mappings, compared to just 10 mappings against a central instance. The advantage, though, is the decentralized execution of the necessary logic, i.e. every two partners can define mutual service workflows and document transformation mappings. Among all the possible combinations in the above table, the ideal situation would be an architecture where business partners integrate their mappings in an Any-to-One way (each partner mapping against one central instance) in order to keep the level of complexity low, and then proceed to the execution of the integrated logic in a decentralized way (any two partners being able to engage in a transaction directly, without any third-party mediation). These two points are the fundamental design principles of the architecture we propose in the following section of this paper.
4 Roadmapping Hybrid Architectural Approaches for eTransactions

4.1 Architectural Overview
According to Gartner [8], monolithic, centralized architectures that focus only within the enterprise, and not on business partners and customers, are worthless.
Newer architectural models need to put us on track towards realising the loosely coupled enterprise and push for an increasing capability to collaborate among all kinds of applications. Along the same lines, the Enterprise Interoperability Research Roadmap [6] defines the concept of an interoperability service utility to denote an overall system that provides enterprise interoperability to its users as a utility-like capability, in a "plug-n-play" way. The proposed architecture embraces this general philosophy of extroverted, easy-to-use applications by specifying an environment with the ability to promote peer-to-peer eTransactions between business partners through centralized support: a hybrid architecture. The architecture facilitates the direct exchange of business documents between the partners through the provision, by a central instance (server), of customized service workflows and data mapping schemas.
Fig. 1: Architectural Overview
The architecture comprises three distinct components: the repository, the server and the adapters. Together, these components constitute an integration platform that enables stakeholders of a heterogeneous nature to engage in and sustain networked enterprise paradigms based on dynamically composed, end-to-end integrated services. The Repository stores semantic information about the characteristics of the platform users in the form of models of the processes, the corresponding service endpoints and the data models each user can support based on its underlying infrastructure. Additionally, such a repository may even store higher-level models of business and legal rules, depicting the business practices followed and the underlying framework guidelines according to the business nature of the engaged partner or the geographic location where the transaction is executed [12]. The Server acts as the integrating component among the users, providing them with the necessary capabilities to engage in eTransactions directly among
themselves. Specifically, for every transaction between two stakeholders the server makes use of the information in the repository in order to synthesize a specific workflow based on the process models and the existing service endpoints supported by the partners, provides specific data mappings based on the partners' underlying data models to enable the transformation of business documents between different system formats, and may also provide the necessary monitoring and management of the transaction flow where required. The Adapter, the only component in the architecture that depends to some extent on the configuration of the users' infrastructures, provides the technical capabilities for the direct exchange of information among the users. The adapters, based on the input from the server, execute dynamic workflows, perform document transformations and interconnect to the users' internal infrastructures by calling the supported externalised service endpoints.

4.2 Technology Viewpoint
In order to support the server's integrating capabilities, the repository must incorporate a series of semantic models of processes and data, specifically:

- Generic process models in the form of XML descriptions, e.g. XMI-based representations of UMM models [14], [15], BPMN collaboration models [5], or ebXML BPSS [13]. At the moment, BPMN appears to be the preferred specification, since there is already a considerable amount of work on how to transform such models into BPEL [4], an XML-based executable description of service workflows (a minimal sketch of such a workflow follows this list).
- Generic business documents in an XML-based format. For this purpose the readily available UN/CEFACT Core Components [24] and Naming and Design Rules [25] may be used.
- XML descriptions of the users' profiles and the mutual agreements that apply during their transactions, e.g. ebXML Collaboration Protocol Profiles and Agreements.
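To make the BPMN-to-BPEL transformation step concrete, the fragment below sketches what such an executable workflow description could look like. It is a minimal illustration only: the partner links, message types and operation names (buyer, seller, submitOrder, processOrder) are hypothetical assumptions, not taken from the cited specifications.

<process name="OrderingCollaboration"
         targetNamespace="http://example.org/processes/ordering"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
         xmlns:tns="http://example.org/processes/ordering">
  <partnerLinks>
    <!-- one partnerLink per party taking part in the transaction -->
    <partnerLink name="buyer" partnerLinkType="tns:BuyerLT" myRole="SellerSide"/>
    <partnerLink name="seller" partnerLinkType="tns:SellerLT" partnerRole="SellerRole"/>
  </partnerLinks>
  <variables>
    <variable name="order" messageType="tns:OrderMsg"/>
    <variable name="orderResponse" messageType="tns:OrderResponseMsg"/>
  </variables>
  <sequence>
    <!-- receive the business document that starts the transaction -->
    <receive partnerLink="buyer" operation="submitOrder"
             variable="order" createInstance="yes"/>
    <!-- call the partner's externalised service endpoint -->
    <invoke partnerLink="seller" operation="processOrder"
            inputVariable="order" outputVariable="orderResponse"/>
    <!-- return the response to the initiating party -->
    <reply partnerLink="buyer" operation="submitOrder" variable="orderResponse"/>
  </sequence>
</process>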
In order to create such a diverse semantic repository a multidisciplinary modelling methodology that will provide a holistic view upon processes, data, services and rules in a specific business domain is required. Although several successful approaches do exist, such as in [19], [11], they address only one or two aspects of the problem, namely processes or processes and services. The modelling approach we propose comprises three dedicated modeling levels for business processes, business documents and business and legal rules. The business process modelling level comprises two distinct categories of process models – private and public – each one of which captures a specific description view of a business process. Private process view shows all activities which are performed within the process, like internal decisions or internal administrative work. Public process view only shows activities that are useful to understand the relevant process outputs and communication with external entities. The significant process logic has to be indicated as well in this view. Activities of the external entity (i.e. the other collaborating partner) are not described.
At the business document level, document models are created through syntax-independent blocks of data, i.e. by utilising the UN/CEFACT Core Components Technical Specification [24], which already provides a considerable number of readily available data components. The overall business document structure can be modelled either according to existing data components (UN/CEFACT or UBL core components of business documents) or from scratch (which is the case especially for documents in governmental transactions). The business and legal rules level must contain models with the ability to reference both process and document models. To this end, a specific rules metamodel needs to be created, taking into consideration the specifics of the business domain to be facilitated, in order to define a vocabulary of all the entities referenced by the rules. The metamodel must incorporate concepts such as the events and conditions that trigger a rule, the action that the rule carries out during its execution, the application fields that the rule affects (i.e. external process and document models) and, finally, the originating legal structures in the case of legal rules [10]. In the proposed architectural approach, the server is a component layered between the repository and the adapters, with the objective of providing a series of supportive services that eventually enable business partners to engage in electronic transactions without ever assuming the burden of delivering the entire transactional load, as in a pure server-based approach. To fulfil its purpose it must comprise mechanisms for:

- Generating executable service workflows for any transaction between two specific partners. This is achieved by retrieving the corresponding public process models from the repository, matching their service endpoints in order to define a common collaboration model and, finally, transforming this into an executable sequence of services. From a technological perspective, the final service workflow can be produced by converting a collaborative BPMN process model to its corresponding BPEL script.
- Synthesizing generic business documents that fulfil both partners' data requirements, based on their underlying business document models in the repository, and providing dedicated XML mappings to transform the generic business documents into the specific business documents to be stored in each partner's system.
Every adapter on the client side of the proposed architecture assumes the burden of conducting the transaction by executing the workflows and applying the transformation schemas provided by the server. To achieve this functionality it requires a dedicated component capable of executing the workflows by calling the prescribed services (in our case a BPEL engine suffices, since the execution script is in BPEL) and a simple function for transforming XML documents from one format to another based on a predefined transformation schema.
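As an illustration of such a predefined transformation schema, the following XSLT sketch maps a generic business document to a partner-specific format. The element names and namespaces (gen:Invoice, erp:Bill) are hypothetical placeholders, not part of the UN/CEFACT or UBL vocabularies discussed above.

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:gen="http://example.org/generic/documents"
                xmlns:erp="http://example.org/partnerA/billing">
  <!-- rewrite the generic invoice produced by the server into partner A's local format -->
  <xsl:template match="/gen:Invoice">
    <erp:Bill>
      <erp:Number><xsl:value-of select="gen:ID"/></erp:Number>
      <erp:Total currency="{gen:Amount/@currencyID}">
        <xsl:value-of select="gen:Amount"/>
      </erp:Total>
    </erp:Bill>
  </xsl:template>
</xsl:stylesheet>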
5 Conclusions and Future Work

In this paper we have outlined the ICT challenges of the ICT-enabled eTransactions paradigm from the enterprise's viewpoint. Furthermore, we
presented an in-depth view of the underlying state of the art in terms of enabling technologies, COTS middleware and research initiatives for service, process and data integration. Additionally, we discussed the advantages and disadvantages of the two most common architectural paradigms, namely centralized and decentralized, and established the characteristics of a hybrid architecture bringing together the best-of-breed features from both worlds. We then analysed the features of this hybrid architecture and roadmapped the creation of a corresponding platform by introducing specific insights, methodologies and underlying technology to support the effectiveness of the architecture and its components in integrating processes, data and services. Next steps in this work include the specification, at the design and technology levels, of additional features of the proposed architecture, such as the capability to provide online contract negotiation between the partners, advanced authentication of new users of the environment, non-repudiation of the conducted actions, and logging and time-stamping of the exchanged business documents, in order to constitute a full-scale collaboration platform capable of achieving the objective of organisational interoperability.
References

[1] S. Androutsellis-Theotokis, V. Karakoidas, G. Gousios, D. Spinellis, Y. Charalabidis: "Building an e-Business Platform: An Experience Report", in P. Cunningham, M. Cunningham (eds.), "Innovation and the Knowledge Economy: Issues, Applications, Case Studies", European Commission eBusiness Yearly Edition, IOS Publishing, 2005, Part 1, Volume 2, pp. 199-206
[2] ATHENA project, Deliverable D.A1.1.1 First Version of State of the Art in Enterprise Modelling Techniques and Technologies to Support Enterprise Interoperability, available online at www.athena-ip.org, 2007
[3] BizTalk Server, available online at http://www.BizTalk.org, 2003
[4] BPEL4WS specification, version 2.0, available online at http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpel, 2005
[5] BPMN specification, available online at http://www.bpmn.org
[6] Enterprise Interoperability Research Roadmap, available online at http://cordis.europa.eu/ist/ict-ent-net/ei-roadmap_en.htm, pages 19-24, 2006
[7] freebXML, available online at http://www.freebxml.org/, 2007
[8] Gartner, Enterprise Architecture Special Report: Overview, available online at http://www.gartner.com/pages/story.php.id.2230.s.8.jsp, 2007
[9] GENESIS project, Deliverable D3.1 Analysis of the Data Modelling State of the Art, available online at www.genesis-ist.eu, 2007
[10] George Gionis, Yannis Charalabidis, Katerina Sourouni, Dimitris Askounis: Enabling Cross-Border Interoperability: Modelling Legal Rules for Electronic Transactions in the European Union, in the proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2007), Madeira, March 28-30, 2007
[11] Gong, R., Li, Q., Ning, K., Chen, Y.L., O'Sullivan, D.: Business process collaboration using semantic interoperability: Review and framework, The Semantic Web - ASWC 2006, Lecture Notes in Computer Science 4185: 191-204, 2006
[12] Charalabidis, Y., Askounis, D.: "Interoperability Registries in eGovernment: Developing a Semantically Rich Repository for Electronic Services and Documents of the new Public Administration", HICSS, January 7-10, 2008, Hawaii
[13] OASIS, ebXML - BPSS v1.10, available online at http://www.untmg.org, 2003
[14] OMG, XML Metadata Interchange (XMI) Specification, Version 1.4, available online at http://www.omg.org/cgi-bin/doc?formal/02-01-01, 2001
[15] OMG, Meta-Object Facility (MOF), Version 1.4, available online at http://www.omg.org/technology/documents/formal/mof.htm
[16] Oracle Application Server, available online at http://www.oracle.com/appserver/index.html, 2004
[17] M. Porter, Strategy and the Internet, Harvard Business Review, pp. 63-78, March 2001
[18] SAP NetWeaver, available online at http://www.sap.com/platform/netweaver/index.epx, 2007
[19] Seng, J.-L., Lin, W.: An ontology-assisted analysis in aligning business process with e-commerce standards, Management & Data Systems 107 (3-4): 415-437, 2007
[20] A. Svirskas, B. Roberts: An architecture based on ebXML and Peer-to-Peer technologies and its application for dynamic virtual enterprises of European SMEs, in the proceedings of XML Europe 2004, Amsterdam, The Netherlands, 2004
[21] D. P. Truex, R. Baskerville, H. Klein: Growing Systems in Emergent Organizations, Communications of the ACM, vol. 42, no. 8, pp. 117-123, 1999
[22] Tsalgatidou, A., Pilioura, T.: An Overview of Standards and Related Technology in Web Services, International Journal of Distributed and Parallel Databases, Special Issue on E-Services, 12(2), pp. 135-162, Sep 2002
[23] Universal Business Language version 2.0, available online at http://docs.oasis-open.org/ubl/os-UBL-2.0/UBL-2.0.html, June 2007
[24] UN/CEFACT, Core Components Technical Specification, available online at http://www.unece.org/, 2006
[25] UN/CEFACT, XML Naming and Design Rules Version 2.0, available online at http://www.untmg.org, 2006
[26] Y. Charalabidis, V. Karakoidas, S. Theotokis, D. Spinellis: Enabling B2B Transactions over the Internet through Application Interconnection, in P. Cunningham, M. Cunningham (eds.), "eAdoption and the Knowledge Economy: Issues, Applications, Case Studies", European Commission eBusiness Yearly Edition, IOS Publishing, December 2004
[27] G. Vetere, M. Lenzerini: Models for semantic interoperability in service oriented architectures, IBM Systems Journal, Vol. 44, No. 4, p. 894, 2005
A Model-driven, Agent-based Approach for a Rapid Integration of Interoperable Services *

Ingo Zinnikus, Christian Hahn, Klaus Fischer
DFKI GmbH, Saarbrücken (Germany)
{ingo.zinnikus, christian.hahn, klaus.fischer}@dfki.de

* The work published in this paper is (partly) funded by the E.C. through the ATHENA IP. It does not represent the view of the E.C. or the ATHENA consortium, and the authors are solely responsible for the paper's content.
Abstract. In cross-organisational business interactions, integrating different partners raises interoperability problems especially on the technical level. The internal processes and interfaces of the participating partners are often pre-existing and have to be taken as given. This imposes restrictions on the possible solutions for the problems which occur when partner processes are integrated. In this paper, we describe a solution which supports rapid prototyping by combining a model-driven framework for cross-organisational business processes with an agent-based approach for flexible process execution. We show how the W3C recommendation for Semantic Web service descriptions can be combined with the model-driven approach for rapid service integration. Keywords: Model Driven Architectures for interoperability, Agent based approaches to interoperability, Service oriented Architectures for interoperability, Semantic-web based interoperability approaches
1 Introduction

Flexible and rapid integration of collaborating partners into executable business processes requires methods and tools for resolving interoperability problems which arise from heterogeneous IT environments. The European project ATHENA (Advanced Technologies for interoperability of Heterogeneous Enterprise Networks and their Applications) provides a comprehensive set of methodologies and tools to address interoperability problems of enterprise applications in order to realize seamless business interaction across organizational boundaries. For improving the effectiveness, timeliness and competitiveness of the IT solutions needed, service-oriented architectures (SOA) are currently considered
the most appropriate approach for the flexible integration of partners. SOA enables partners to offer the functionality of their systems via a public service interface (e.g. using WSDL [1]) and to hide the sensitive parts behind it. A second important advantage of SOA is the possibility of loose coupling of partners: new partners can enter the system with little effort, whereas obsolete partners are able to leave it easily. Since, in our view, agents are an abstraction of services, we use an agent-based approach for modelling and executing the collaborative business interaction. Due to the rather static nature of many business scenarios, a rather fixed execution platform such as BPEL4WS [2] is often used. However, even in traditional collaborative settings, points of choice occur where the partner that delivers a specific task can be selected at execution time, thus allowing a certain flexibility. Especially in the case where additional smaller non-OEM manufacturers providing e.g. vehicle parts like radios or tires are integrated into the sales process, the system needs to become robust against temporarily unavailable partners. We argue that the application of agents enables flexible business process execution. Since existing services have to be integrated, the agents are situated in a service-oriented and, specifically, a Web service environment. Two tasks which involve interoperability problems have to be tackled:

- integrating services which are "fixed", i.e. known in advance when the process is specified;
- provisioning services at design time which are additionally required or could be beneficial for improving the overall result or reducing costs.
When integrating partners into a collaborative process, interoperability problems occur on a syntactical as well as on a semantical level [3]. For solving the interoperability problems related to service and process integration, the Semantic Web initiative proposed to harness formalized knowledge representation, i.e. ontologies, for aligning heterogeneous data models. Several proposals for semantically enhanced service descriptions were submitted for standardisation, namely OWL-S [4], WSMO [5] and SAWSDL [6], the last of which has just reached the status of a proposed recommendation within the W3C. SAWSDL extends the de-facto standard for service description (WSDL) by annotating elements in a service description and providing schema mappings for the transformation of data. The annotation of elements can be used for service discovery, whereas the mapping information can be used for the invocation of a service. It is this latter feature which makes SAWSDL an interesting candidate for service integration because, based on a well-established standard (WSDL), a service provider can supply its partners not only with a syntactical description of its service interface via a WSDL file, but additionally with the information required for ad-hoc invocation of its service. In this paper, we use a model-driven approach for the integration of existing services and show how SAWSDL descriptions of partner services can be used to accelerate the integration process. The paper is organized as follows. In Section 2 we will sketch the business case of our pilot application which we use as motivational background, containing static interactions with dynamic features. Section 3 is devoted to our technical approach.
Here, we present the approach developed in ATHENA and used within our pilot for the integration of cross-organizational processes. We describe how the W3C recommendation SAWSDL can be used for facilitating service integration in Section 4, discuss related work in Section 5 and conclude the paper by taking a look at the lessons learned in Section 6.
2 Scenario

In 2002, due to new EU legislation, the market for car distribution changed fundamentally. Instead of dealers being limited to selling only one brand, selling vehicles of different brands under one roof became possible. Dealers can now reach a broader audience and improve their business relations for more competitiveness. As a consequence, many so-called multi-brand dealers have appeared. Today, multi-brand dealers are confronted with a large set of problems. Rather than having to use the IT system of one specific car manufacturer, multi-brand dealers are now faced with a number of different IT systems from their different manufacturers. One specific problem is the integration of configuration, customization and ordering functionality for a variety of brands into the IT landscape of a multi-brand dealer. We describe an integrated scenario where multi-brand dealers use services provided by the different car and non-OEM manufacturers and plug them into an integrated dealer system.
Fig. 1. Overview of the architecture of the solution.
The desired to-be scenario with its general architecture is depicted in Figure 1. The systems of the different car and non-OEM manufacturers are integrated via an integrator component. This integrator enables the dealer to access the software of the manufacturers in a uniform manner. The paper will focus on the manufacturer integration (Section 3) and present the model-driven, agent-based integration approach for cross-organizational process modelling. For the service integrator, the generated process models are executed as software agents on Jack [7], an agent platform based on the BDI (belief-desire-intention) agent theory [8]. In the following, we will describe this approach in detail.
3 Agent-based Modelling and Execution of Inter-organisational Processes

Business process modelling and execution in this collaborative environment requires a set of methodologies and tools which support the transition from an analysis to an execution level and integrate the process with a pre-existing IT infrastructure. In business-driven scenarios, a top-down approach is often applied [9]: a human-comprehensible analysis model which is used for communication among analysts is enriched with process and data details, which yields a design model. For generating run-time artifacts, the design model is enriched with technical details, leading to a technical model which in turn is transformed into executable code. We follow the approach outlined in Kahl et al. [10], where business processes are modelled on different abstraction levels and transformed down from a business layer to a technical and an execution level. An agent-based approach is applied for modelling and transforming the technical layer to the execution layer. In [10], a platform-independent metamodel for service-oriented architectures (PIM4SOA, an ATHENA result [11]) was used for the technical level. In this section, we concentrate on the technical level. The scenario as described in Section 2 involves a complex interaction between the partners. The design of such a scenario implies a number of problems which have to be solved:

- the different partners (may) expect different atomic protocol steps (service granularity);
- changing the protocol and integrating a new partner should be possible in a rapid manner (scalability);
- the execution of the message exchange should be flexible, i.e. in case a partner is unavailable or busy, the protocol should nevertheless proceed;
- the partners expect and provide different data structures.
These are typical interoperability problems occurring in cross-organisational scenarios, which in our case have to be tackled with solutions for agents and SOAs. A core idea in the ATHENA project was to bring together different approaches and combine them into a new framework: a modelling approach for designing collaborative processes, a model-driven development framework for SOAs and an agent-based approach for flexible execution. It turned out that these approaches fit together nicely, as e.g. the PIM4SOA metamodel and the agents' metamodel bear a striking resemblance to each other. Hence, the first problem is solved by specifying a collaborative protocol which allows adapting to different service granularities. Scalability is addressed by applying a model-driven approach: the protocol is specified on a platform-independent level, so that a change in the protocol can be made on this level and code generated automatically. Flexibility is achieved by applying a BDI agent-based approach. BDI agents provide flexible behaviour for exception-handling in a natural way (compared to
e.g. BPEL4WS, where specifying code for faults often leads to complicated, nested code). Finally, the mediation of the data is tackled with transformations which are specified at design-time and applied at run-time to the exchanged messages. For integrating consortial partners and their services, the partners define the shared process together. The common information/data model can also be defined together. Roughly speaking, two alternatives are possible (analogous to the local-as-view vs. global-as-view distinction, cf. [12]). In the first, the common data structure is defined independently from the local data model of each partner. Each partner then defines a (local) mapping from the common data model to the local model. The main advantage of this approach is scalability, i.e. the number of mappings which have to be specified is reduced from m*n (in the case of m partners interacting directly with n partners) to m+n [13]; for example, five consumers interacting directly with four providers would need 20 mappings, but only 9 via a common model. The mapping in turn can be executed (at run-time) either by the consumer of the service or by the partner service itself. The first solution is the one preferred by Semantic Web service descriptions, e.g. SAWSDL, where the service provider describes the mapping to e.g. XML Schema and the mapping is used by a service consumer who invokes the service. The second solution means that the service consumer always sends the same message (e.g. a SOAP message) to a partner service and does not care about the local data model. This is reasonable if specifying as well as testing the mapping is tedious and the mapping is subject to frequent changes. In a global-as-view approach, the common data model is defined as a view on the local data models of the partners. A disadvantage of this approach is that the integration of a new partner requires changing the common data model.

PIM4SOA: A Platform-Independent Model for SOAs
In this section, we give a summary of a metamodel for service-oriented architectures which allows modelling service-oriented processes in a model-driven manner. Processes are specified at design-time, transformed to and executed on a specific agent platform. External partner services are provided as Web services and integrated into the specified process. The PIM4SOA is a visual platform-independent model (PIM) which specifies services in a technology-independent manner. It represents an integrated view of SOAs in which different components can be deployed on different execution platforms. The PIM4SOA model helps us to align relevant aspects of enterprise and technical IT models, such as process, organisation and product models. The PIM4SOA metamodel defines modelling concepts that can be used to model four different aspects or views of a SOA: Services (see Figure 2) are an abstraction and an encapsulation of the functionality provided by an autonomous entity. Service architectures are composed of functions provided by a system or a set of systems to achieve a shared goal. The service concepts of the PIM4SOA metamodel have been heavily based on the Web Services Architecture as proposed by the W3C [14].
Fig. 2. PIM4SOA Service Aspect.
Information is related to the messages or structures exchanged, processed and stored by software systems or software components. The information concepts of the PIM4SOA metamodel have been based on the structural constructs for class modelling in UML 2.0 [15]. Processes describe sequencing of work in terms of actions, control flows, information flows, interactions, protocols, etc. The process concepts of the PIM4SOA metamodel have been founded on ongoing standardization work for the Business Process Definition Metamodel (BPDM) [16]. Non-functional aspects can be applied to services, information and processes. Concepts for describing non-functional aspects of SOAs have been based on the UML Profile for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms [17]. Via model-to-model transformations, PIM4SOA models can be transformed into underlying platform-specific models (PSM) such as XSD, Jack BDI-agents or BPEL.
Fig. 3. PIM4SOA Model for Pilot (part).
Transforming the metamodel to an agent platform and integrating partner services
The business protocol between dealer (dealer software), integrator and manufacturers is specified as a PIM4SOA model (see Figure 3). In order to execute collaborative processes specified on the PIM level, the first step consists of transforming PIM4SOA models to agent models that can be directly executed by specific agent execution platforms. In our case, the Jack Intelligent Agents framework is used for the execution of BDI-style agents. The constructs of the PIM4SOA metamodel are mapped to BDI agents represented by the Jack metamodel (JackMM); for detailed information on JackMM we refer to [18]. In this service-oriented setting, the partners provide and exhibit services. Partner (manufacturer etc.) services are described as WSDL interfaces. The WSDL files are used to generate integration stubs for the integrator. We use a model-driven approach for mapping WSDL concepts to agent concepts, thereby integrating agents into a SOA and supporting rapid prototyping. The partner models are transformed to a Jack agent model with the model-to-model transformation developed in ATHENA. The following sketch outlines the metamodel mappings (see Figure 4).
Fig. 4. PIM4SOA and WSDLMM to JackMM transformation.
A ServiceProvider (i.e. ServiceIntegratorProvider in Figure 3) is assigned to a Team (which is an extension of an Agent). The name of the ServiceProvider coincides with the name of the Team, its roles are the roles the Team performs. Furthermore, the team makes use of the roles specified as bound roles in the CollaborationUse (i.e. Dealer and Manufacturer), in which it participates. For each of these roles, we additionally introduce an atomic Team. The Process of the ServiceProvider is mapped to the TeamPlan of the non-atomic Team. This TeamPlan defines how a combined service is orchestrated by making use of the services the atomic Teams (i.e. ManufacturerTeam and DealerTeam in Figure 5) provide. Finally, Messages that are sent by the roles we already have transformed are mapped to Events in JackMM.
Fig. 5. Jack Model generated from PIM4SOA (part).
The process integrator and the manufacturers are modelled as Web services. Their interfaces are described by WSDL descriptions, publishing the platform as a Web service. In the pilot, only the process integrator is executed by Jack agents which
are wrapped by a Web service, whereas the manufacturers and other partner services are pure Web services. For integrating Web services into the Jack agent platform, we map a service as described by a WSDL file to the agent concept Capability, which can be conceived of as a module. A Capability provides access to the Web services via automatically generated stubs (using Apache Axis). A Capability comprises plans for invoking the operations declared in the WSDL (it encapsulates and corresponds to commands such as invoke and reply in BPEL4WS); a sketch of such a WSDL interface is given after Fig. 6. By executing the model transformations we automatically derive the JackMM model illustrated in Figure 5 (for more details, cf. [18]). This Jack model can in turn be automatically transformed into Jack code (e.g. in Figure 6), which can be modified with the Jack development environment if necessary. It should be stressed that these model transformations and the respective code generation can be done automatically if (i) the PIM4SOA model is defined properly and (ii) the WSDL descriptions are available. The only interventions necessary for a system designer are the insertion of the proper XSLT transformations and the assignment of the Capabilities to the agents/teams responsible for a specific Web service invocation.
Fig. 6. Generated Jack Team Plan for ServiceIntegrator.
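The following WSDL fragment sketches the kind of partner interface consumed by this mapping; every operation of the portType would yield a plan inside the generated Capability. The service, message and operation names are illustrative assumptions, not taken from the pilot.

<definitions name="ManufacturerService"
             targetNamespace="http://example.org/manufacturer"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:tns="http://example.org/manufacturer">
  <message name="ConfigurationRequest">
    <part name="configuration" type="xsd:string"/>
  </message>
  <message name="ConfigurationResponse">
    <part name="offer" type="xsd:string"/>
  </message>
  <portType name="ConfigurationPort">
    <!-- this operation becomes a plan in the generated Capability -->
    <operation name="configureVehicle">
      <input message="tns:ConfigurationRequest"/>
      <output message="tns:ConfigurationResponse"/>
    </operation>
  </portType>
</definitions>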
4 Interoperable Services: Improving the Integration Process

The integration of partners as described in the previous section is based on the assumptions that partners provide their service descriptions in a WSDL format and that the mapping between heterogeneous data formats is specified especially for integrating the partner service at a pre-defined place in the process. However, a more flexible way of integrating is required if a SOA is to tap its full potential. Therefore, a service description which supports flexible integration has to contain additional mapping information for mediating different data structures. In our automotive scenario, there are a number of standards which can form the basis for the global information model (e.g. the STAR standard for the automotive retail industry, www.starstandard.org). The concepts of the standard and their relations to each other are either integrated into the common data model or used as annotations of the data model. If the local data model of a partner differs from the common model, the local partner is responsible for defining a mapping from the common model to the local model. If we assume that partners or other external services use the same vocabulary for their service descriptions (or their annotations), the concepts can be used to annotate service descriptions and to specify a mapping from the global data structure to the partner services and vice versa. This assumption of a shared vocabulary among actors is reasonable, since in our scenario product data is under strong standardization pressure. As mentioned in the introduction, the Semantic Web standard SAWSDL is a suitable candidate for improving the integration process described in the previous section, since it is open enough to allow for annotation with arbitrary "models", i.e. ontologies, embodied in the global data structure. Furthermore, SAWSDL contains references to a liftingSchemaMapping and a loweringSchemaMapping. A liftingSchemaMapping takes as input XML data (that adheres to a given XML schema) and produces semantic data (that adheres to a semantic model, in our case the global data model) as output. The application of a loweringSchemaMapping has the reverse effect. Both mappings can be used to facilitate the integration steps described in the previous section. Partners annotate their WSDL with mappings to and from the global data structure and produce a corresponding SAWSDL description. The transformation to Jack models can still be done according to the model-driven approach. The XSLT transformations which were necessary for each integration task can now be isolated and embedded into the service description. This allows reusing the service description at different steps inside and outside of the collaboration. Annotating an existing WSDL description of a service for integration is an additional effort for a partner; however, the advantage is the reusability of the service description if the collaborative process changes.
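A minimal sketch of such an annotated schema element is given below. The concept URI and mapping locations are hypothetical placeholders, while the sawsdl attributes themselves are those defined by the SAWSDL recommendation [6].

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sawsdl="http://www.w3.org/ns/sawsdl"
           targetNamespace="http://example.org/partnerA/schema">
  <!-- modelReference points into the shared vocabulary; the lifting mapping
       translates local XML data into the global data model, the lowering
       mapping performs the reverse direction -->
  <xs:element name="VehicleOrder"
              sawsdl:modelReference="http://example.org/globalmodel#Order"
              sawsdl:liftingSchemaMapping="http://example.org/mappings/order-lift.xslt"
              sawsdl:loweringSchemaMapping="http://example.org/mappings/order-lower.xslt"/>
</xs:schema>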
5 Related Work

Apart from the wealth of literature about business process modelling, enterprise application integration and SOAs, the relation between agents and SOAs has already been investigated. [19] covers several important aspects; [20] propose the application of agents for workflows in general. [21] and [22] present a technical and conceptual integration of an agent platform and Web services. However, the model-driven approach and the strong consideration of problems related to cross-organisational settings have not been investigated in this context. Furthermore, our focus on tightly integrating BDI-style agents fits much better to a model-driven, process-centric setting than the Web service gateway to a JADE agent platform considered by e.g. [21].
6 Conclusions and Summary

From a research transfer point of view, the following lessons could be learned:

- Evidently, a model-based approach is a step in the right direction, as design-time tasks are separated from run-time tasks, which allows performing them graphically. Moreover, it is easier to react to changes of the different interacting partners, as only the models have to be adapted, not the run-time environment.
- A model-driven, agent-based approach offers additional flexibility and advantages (in general and in the scenario discussed) when agents are tightly integrated into a service-oriented framework.
- The new Semantic Web service standard SAWSDL supports the integration of services into an existing business process.
In this paper, we presented a pilot developed within the EU project ATHENA in the area of multi-brand automotive dealers. For its realization, several integration problems on different levels had to be solved. We described a solution which supports rapid prototyping by combining a model-driven framework for cross-organisational service-oriented architectures with an agent-based approach for flexible process execution. The model-driven approach can be extended using Semantic Web service descriptions for service integration.
References

[1] E. Christensen, F. Curbera, G. Meredith, S. Weerawarana: Web Services Description Language (WSDL) 1.1. W3C, 15 March 2001.
[2] Business Process Execution Language for Web Services, version 1.1. Technical report, OASIS, 5 May 2003.
[3] Omelayenko, B., Fensel, D. (2001): A two-layered integration approach for product information in B2B e-commerce. In Madria, K. and Pernul, G., editors, Proceedings of the Second International Conference on Electronic Commerce and Web Technologies (EC WEB-2001), number 2115 in LNCS, pages 226-239. Springer-Verlag.
[4] Martin, D., Burstein, M., Hobbs, J., Lassila, O., McDermott, D., McIlraith, S., Narayanan, S., Paolucci, M., Parsia, B., Payne, T., Sirin, E., Srinivasan, N., Sycara, K. (2004): OWL-S: Semantic Markup for Web Services. W3C Member Submission, 22 November 2004. Available from http://www.w3.org/Submission/OWL-S/.
[5] de Bruijn, J., Bussler, C., Domingue, J., Fensel, D., Hepp, M., Keller, U., Kifer, M., König-Ries, B., Kopecky, J., Lara, R., Lausen, H., Oren, E., Polleres, A., Roman, D., Scicluna, J., Stollberg, M. (2005): Web Service Modeling Ontology (WSMO). W3C Member Submission, 3 June 2005. Available from http://www.w3.org/Submission/WSMO/.
[6] Farrell, J., Lausen, H. (eds.) (2007): Semantic Annotations for WSDL and XML Schema. W3C Proposed Recommendation, 5 July 2007. Available from http://www.w3.org/TR/sawsdl/.
[7] JACK Intelligent Agents, The Agent Oriented Software Group (AOS), http://www.agent-software.com/shared/home/, 2006.
[8] Rao, A.S., Georgeff, M.P. (1991): Modeling Rational Agents within a BDI-Architecture. In: Allen, J., Fikes, R., Sandewall, E., eds.: 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR91), Morgan Kaufmann Publishers Inc.: San Mateo, CA, USA (1991) 473-484.
[9] Koehler, J., Hauser, R., Küster, J., Ryndina, K., Vanhatalo, J., Wahler, M. (2006): The Role of Visual Modeling and Model Transformations in Business-driven Development. Proceedings of GT-VMT 2006, pages 1-12, 2006.
[10] Kahl, T., Zinnikus, I., Roser, S., Hahn, C., Ziemann, J., Müller, J.P., Fischer, K. (2007): Architecture for the Design and Agent-based Implementation of Cross-organizational Business Processes. 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2007).
[11] Benguria, G., Larrucea, X., Elvesæter, B., Neple, T., Beardsmore, A., Friess, M. (2006): A Platform Independent Model for Service Oriented Architectures. 2nd International Conference on Interoperability of Enterprise Software and Applications (I-ESA 2006).
[12] Lenzerini, M. (2002): Data integration: a theoretical perspective. In Proceedings of the Twenty-First ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (Madison, Wisconsin, June 03-05, 2002). PODS '02. ACM Press.
[13] Fensel, D., Ding, Y., Omelayenko, B., Schulten, E., Botquin, G., Brown, M., Flett, A. (2001): Product data integration for B2B e-commerce. IEEE Intelligent Systems, 16(4):54-59.
[14] W3C, Web Services Architecture, World Wide Web Consortium (W3C), W3C Working Group Note, 11 February 2004. http://www.w3.org/TR/2004/NOTE-ws-arch-20040211/
[15] OMG, UML 2.0 Superstructure Specification, Object Management Group (OMG), Document ptc/03-08-02, August 2003. http://www.omg.org/docs/ptc/03-08-02.pdf
[16] IBM, Adaptive, Borland, Data Access Technologies, EDS, and 88 Solutions, "Business Process Definition Metamodel - Revised Submission to BEI RFP bei/2003-01-06", Object Management Group (OMG), Document bei/04-08-03, August 2004. http://www.omg.org/docs/bei/04-08-03.pdf
[17] OMG, UML Profile for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms, Object Management Group (OMG), Document ptc/04-09-01, September 2004. http://www.omg.org/docs/ptc/04-09-01.pdf
[18] Hahn, C., Madrigal-Mora, C., Fischer, K., Elvesæter, B., Berre, A.J., Zinnikus, I. (2006): Meta-models, Models, and Model Transformations: Towards Interoperable Agents. MATES 2006.
[19] Singh, M., Huhns, M. (2005): Service-Oriented Computing: Semantics, Processes, Agents. John Wiley & Sons, Chichester, West Sussex, UK (2005).
[20] Vidal, J.M., Buhler, P., Stahl, C. (2004): Multiagent systems with workflows. IEEE Internet Computing, 8(1):76-82, January/February 2004.
[21] Greenwood, D., Calisti, M. (2004): Engineering Web Service - Agent Integration. IEEE Systems, Cybernetics and Man Conference, 10-13 October 2004, The Hague, Netherlands.
[22] Dickinson, I., Wooldridge, M. (2005): Agents are not (just) web services: Considering BDI agents and web services. AAMAS 2005 Workshop on Service-Oriented Computing and Agent-Based Engineering (SOCABE).
BSMDR: A B/S UI Framework Based on MDR

Panxiang Zhang, Song He, Qing Wang, Huiyou Chang
Department of Computer Science, ZhongShan University, GuangZhou 510275, China
{percy_zhang, hesong0326}@hotmail.com, [email protected], [email protected]
Abstract. The MDR (Model-Driven Runtime) environment is able to execute a PIM for a specific purpose, such as generating a system UI framework; this precisely models the system UI and improves UI development efficiency and maintainability. Model-driven UI systems such as FUSE or Vesuf target GUIs or general Web UIs rather than the B/S UIs of management information systems (MIS). In this paper, we describe a UI framework for B/S MIS based on a Model-Driven Runtime. First, we introduce the modelling process of the UI Requirements Analysis Model in the requirements analysis stage, comprising the task model and the domain model. The paper then shows how BSMDR transforms such models into a Platform Independent Model, comprising the Object Model, Layout Model, Content Model, Presentation Model, Interaction Model and Mapping Model. Finally, we focus on the design and implementation of the BSMDR framework and demonstrate our approach with an example. Long-term application shows that BSMDR can generate most presentation-layer pages and greatly facilitates user-interface development. Keywords: MDA, MDR, B/S UI Framework
1 Introduction

1.1 Background
MDA is a software development framework presented by the OMG [1] (Object Management Group) in July 2001. It was proposed in order to adapt to the shifting changes of software requirements and development technology, and to raise the software development process to a higher level of abstraction: the analysis model. A model can precisely describe a system (or part of a system) with a modelling language that can be easily understood by a computer [2]. MDA defines three models, CIM, PIM and PSM, covering requirements analysis, system design and architecture, including the technical details of the system.
The CIM (computation independent model) is independent from software and is used to describe the business logic. The PIM (platform independent model) describes the system's architecture, and the PSM (platform specific model) describes the model based on a specific software development environment. The essence of MDA is to abstract a PIM which is independent from the realization technology and can precisely describe the system; transformation tools then convert the PIM into a technology-specific PSM and finally generate code. MDR (Model-Driven Runtime), proposed by Jorg Pleumann and Stefan Haustein in 2003, abandons the PIM-to-PSM conversion and the PSM-to-code generation of MDA and is able to dynamically analyze models and generate code at runtime [3]. In an MDR environment, we only need to provide the PIM and an API for storing and reading it; the MDR runtime then analyses the models and code is generated dynamically. MDR hides the PSM and the PSM code generation inside its model analysis and execution process. Different MDRs can be provided for different platforms, which lets different platforms share the same PIM and thus achieves MDA's original purpose.

1.2 MDR and the Limitations of MDA
MDR resolves several limitations of MDA [1]:

1. Iteration is not required in an MDR-based system: the PIM generates code directly through the interpretation and execution of the MDR, so the complex issues brought by MDA's multi-step transformations do not arise.
2. An MDR-based system abandons the PSM model, the model transformations and the code generation process, which makes the system easier to maintain and debug.
3. An MDR-based system provides the storage and reading of the PIM, and the interpretation and execution of models is dynamic, so we can maintain the PIM directly and the MDR will dynamically interpret and execute the models.

Although MDR systems abandon the PSM, the model transformations and the code generation process, which has some consequences, MDR still retains many key features of MDA:

1. The MDR model is platform-independent. MDR preserves the PIM feature of MDA, so the PIM can still be shared across different platforms.
2. MDR is still PIM-driven. In an MDR-based software development process, the PIM is defined first to drive the entire development process. The difference is that in an MDA system we must define the PSM, the PIM-to-PSM transformation and the PSM code generation, while in MDR systems the MDR completes these processes dynamically.
1.3 Our Work
The essence of MDR is still model-driven. There are many model-driven interface systems, such as FUSE [4] and Vesuf [5]. But all of these target graphical user interfaces or common Web interfaces rather than B/S information system user interfaces. A B/S information system UI is more suitable for model-driven development than a WUI [6], a GUI or other UI frameworks [7,8] because:

1. A WUI is information-presentation oriented, while a B/S information system is usually designed task-oriented, and a B/S information system's user interaction model is simpler and more fixed than a WUI's.
2. A B/S information system uses the client browser as its interaction platform. End users with different types of browsers require a rapid model-driven development prototype.
3. A B/S information system UI doesn't contain complex business logic but only page layout and presentation code, which facilitates page code generation.
In this paper, we study the deficiencies of current B/S information system UI models and propose a new B/S information system UI model, consisting of an Interface Requirements Analysis Model and a Platform Independent Model. The Interface Requirements Analysis Model is created in the requirements analysis phase by domain modelling experts. The PIM is created in the design and development phase to drive the generation of the interface code. Finally, the paper focuses on the design and implementation of the MDR-based B/S information system UI framework.
2 UI Requirements Analysis Model

The interface requirements analysis model, built by domain modelling experts, is used in the interface requirements analysis phase. This model consists of two parts, the task model and the domain model. The task model, represented by a UML use case diagram, describes the users, the interfaces, the use cases and the relationships among them. The domain model, represented by a UML class diagram, describes the concept objects of the interface and the data source relationships among those objects.

2.1 Task Model

The task model describes the tasks in which end users directly participate in the B/S information system. At a high level, it specifies the interfaces of the B/S information system and the use cases included in each interface. The use cases reflect the original tasks performed by users in the information system, such as creating and editing an In-Store Sheet.
The task model is represented by a UML use case diagram, as illustrated in the following figure. The task model is stored as XMI, compatible with UML (a tentative fragment is sketched after Fig. 1).
Fig. 1. Example of Task Model
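As an illustration of this storage form, the fragment below sketches how a task model like that of Fig. 1 might be serialized as XMI. The actor and use case names are hypothetical, and the exact tags depend on the UML/XMI versions used by the modelling tool.

<XMI xmi.version="1.2" xmlns:UML="org.omg.xmi.namespace.UML">
  <XMI.content>
    <UML:Model name="TaskModel">
      <UML:Namespace.ownedElement>
        <!-- the end user participating in the information system -->
        <UML:Actor name="Storekeeper"/>
        <!-- one use case per original task, e.g. editing an In-Store Sheet -->
        <UML:UseCase name="CreateInStoreSheet"/>
        <UML:UseCase name="EditInStoreSheet"/>
      </UML:Namespace.ownedElement>
    </UML:Model>
  </XMI.content>
</XMI>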
2.2 Domain Model
The domain model describes the concept classes of the user interface system, i.e. objects in the real world, and stores the objects or entities handled by the user interface system. It shows only abstract objects; the MDR cannot use it directly until the system transforms the domain model into the object model of the PIM. The domain model is represented by a UML class diagram and stored as XMI, compatible with UML.
Fig. 2. Example of Domain Model
3 Modeling of PIM

The Platform Independent Model includes the Object Model, Layout Model, Content Model, Presentation Model, Interaction Model and Mapping Model. The Object Model is a more detailed domain model, describing objects, the properties of objects and the properties of those properties. The Layout Model describes the layout and the
division of frames in the page. The Presentation Model describes the page style of the generated pages, and this model is open to users for customization of the UI. The Interaction Model describes interaction and navigation on the page. The Mapping Model describes the mapping relationships among the other models of the PIM.

The Content Model models the interface content and corresponds to a division of the Layout Model. It includes static text, dynamic text, images, navigation, page references and information components:

<Content>          := (<ContentItem>*)
<ContentItem>      := (ID, <StaticText>|<DynamicText>|<Image>|<Navigation>|<PageReference>|<InfoComponent>)
<DynamicText>      := (<SrcType>, <SrcName>)
<SrcType>          := (PAGE|SESSION|REQUEST)
<Image>            := (<URL> [,Length] [,Width])
<Navigation>       := (NavigationGroupID, (<NavigationItem>)*)
<NavigationItem>   := (<ID>, URL, <LinkNavigation>|<ButtonNavigation>|<ImageNavigation>)
<LinkNavigation>   := (<LinkText>)
<ButtonNavigation> := (<ButtonText>)
<ImageNavigation>  := (<ImageURL>)
<PageReference>    := (ReferenceURL)
<InfoComponent>    := (<ID>, <FormComponent>|<ListComponent>)
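The following instance sketches how a content model bound to one layout division might be serialized. Since parts of the grammar above had to be reconstructed, the element names here are equally tentative.

<Content ID="C-02">
  <StaticText>Purchase Order No.</StaticText>
  <!-- dynamic text filled from the SESSION data source -->
  <DynamicText SrcType="SESSION">orderNo</DynamicText>
  <Navigation NavigationGroupID="NG-1">
    <ButtonNavigation URL="saveOrder.jsp">
      <ButtonText>Save</ButtonText>
    </ButtonNavigation>
  </Navigation>
</Content>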
B/S information systems are relatively simple in terms of interaction, which is generally accomplished through navigation. The definition of the interface framework therefore includes two interaction modes, link interaction and function interaction. Because the user interface of this framework is based on HTML/JSP/JavaScript, all interaction can be implemented with JavaScript; the function interaction referred to in this paper is a JavaScript interaction function.

3.1 Object Model
The Object Model is a more detailed domain model. Generally, its classes correspond to the classes of the domain model. Besides the class name, property names and data source names, the object model includes the following data (a sketch of a possible XML serialization follows the list):

- Database table name. Usually a class corresponds to one database table; if a class spans more than one table, the class records only the primary table, and secondary tables are defined in the attributes.
- Page name, the name under which an attribute is displayed on the page.
- Reserved field, an attribute kept for future extension of the system.
- Whether the reserved attribute is used or not.
- Validation. The system provides simple verification; more complicated logic checks need to be coded during development.
- Maximum text length, for checking whether a string has a legal length.
- Required value. "Yes" means that on submission the data cannot be empty or NULL, which simplifies data checking.
- Default value, a pre-defined value for the data.
- Data source. Data sources are divided into four kinds: no data source, predefined data source, SESSION data source and REQUEST data source.
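A minimal sketch of such an XML serialization is shown below; the element and attribute names are hypothetical, chosen only to mirror the fields listed above.

<Class name="InStoreSheet" table="T_IN_STORE_SHEET" pageName="In-Store Sheet">
  <!-- required field with simple length validation -->
  <Property name="sheetNo" pageName="Sheet No." maxLength="20" required="true"/>
  <!-- value pre-filled from the SESSION data source -->
  <Property name="operator" pageName="Operator" required="true"
            srcType="SESSION" srcName="currentUser"/>
  <!-- optional field with a default value -->
  <Property name="remark" pageName="Remark" maxLength="200"
            required="false" defaultValue=""/>
</Class>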
Since there is no corresponding UML model to describe the object model, we use XML to store it.

3.2 Layout Model
The layout model describes the layout of the B/S information system interface. Its main content is the division of the interface: before division, the entire interface is a whole; a division can be divided into sub-divisions, and a sub-division can again be divided into sub-sub-divisions. Next, an example explains a division using the formal symbols. The division includes five leaf divisions and two non-leaf divisions; the leaf divisions are numbered 2, 4, 5, 6 and 7, and the non-leaf divisions 1 and 3. Divisions 6 and 7 evenly split division 3, taking 50% each.
Fig. 3. Example of Layout Model
In this paper the layout of the interface uses the symbols of Fig. 3. The formal description distinguishes five kinds of symbols: root division, horizontal division, vertical division, leaf division and derivatives. Fig. 4 shows the page layout that results from Fig. 3.
Fig. 4. Result Layout of Fig. 3
3.3 Presentation Model
The Presentation Model defines how the Content Model is presented on the page. For static text, dynamic text, images and page navigation groups the presentation is simple and corresponds directly to HTML tags, so we focus on the information components (form component and list component) of the Presentation Model. A form component includes an information group ID, a title and a column count: the ID uniquely identifies the information group, the title is displayed at the head of the form, and the column count is the number of columns laid out on the page. The parts of its definition that survive the original typesetting are:

<PresentationModel> := (<InfoGroup>*, <ListComponent>*, <MappingRelation>*)
<InfoGroup> := (<InfoGroupID>, <PageName>, <Title>, <ColumnCount>)
<Ctrl> := (<CtrlID>, …, …, …, …, …[, …][, …])
<CtrlType> := (…|<SingleLineTextBox>|<MultiLineTextBox>|…|<SpecialCtrl>)
<MappingRelation> := (…, …)
Generally, the list component shows the results of queries. Its presentation model includes the following attributes: list ID, title and the number of items per pagination. It also includes the mapping relation between the list component and the attributes of the Object Model, i.e. the column in which each attribute is located. Its logical definition is analogous to the BNFs above.

3.4 Mapping Model
The Mapping Model describes the relationships among the Object Model, Layout Model, Content Model, Presentation Model and Interaction Model. The mapping between the Layout Model and the Content Model uses the layout number and the content model ID; through it the framework knows in which layout division of the interface each piece of content is placed. The mapping between the Content Model and the Interaction Model uses the navigation number and the interaction model ID; interaction functions exist only in the navigation elements of the Content Model, and through this mapping the framework can generate the interaction code. The mapping between the Content Model and the Presentation Model uses the content ID and the presentation model ID, so the framework can render predefined content with its predefined presentation. The mapping between the Presentation Model and the Object Model is implicit and needs neither an additional mapping model nor extra XML storage.
Fig. 5. Example of Mapping Model
4 Design and Realization

4.1 Architecture, Development Process and Execution Process
Fig. 6. System Architecture
Fig. 6 shows the entire architecture of our system, which includes the following parts:

- System participants: domain experts, design/development specialists and customers.
- UI models: the UI Requirements Analysis Model (UIRAM) and the Platform Independent Model (PIM), as described in the preceding sections.
- System components: the MDR, the UIRAM component, the PIM component, the XMI/XML access component, the presentation model editing tools and the JSP pages. The MDR is the core of the system; it reads the UI PIM and generates page code according to the requests of the JSP pages. The UIRAM component abstracts the requirements analysis model, configures it and initializes the PIM. The PIM component abstracts the PIM and reads, caches and persists it.
- Model storage and cache: the UIRAM is stored in XMI files and the PIM in XML files, while the cache is used to accelerate MDR analysis and execution.
- UML tools and other external tools.
Development with BSMDR is divided into the following five phases.
Fig. 7. Development Process
1. The domain expert uses UML tools to create the UIRAM (including the task model and the domain model) and defines the functions and conceptual objects of the interface.
2. The PIMs are initialized from the UIRAM created in step 1.
3. Design/development specialists revise and review the PIMs with tools or edit them directly.
4. Design/development specialists write the JSP page code; the JSP pages only need to include the code that invokes the MDR to generate the UI framework, plus some special controls.
5. Users can revise the display style of the pages by editing the Presentation Model.
The entire process above is model-driven. Only the JSP page development phase requires writing a small amount of interface code; all other phases directly create or modify models.
Fig. 8. UI Running process
The execution of the MDR and the generation of page code are also model-centric (a server-side sketch follows the steps):

1. The user requests a JSP page from the server.
2. The servlet processes the JSP page and sends a request to the MDR component using the code the developer wrote in the JSP.
3. On receiving a JSP request, the MDR asks the PIM component for the PIM models required by the current request.
4. The PIM component fetches the models from the cache or from the XML files and returns them to the MDR.
5. The MDR generates the page code from the models obtained in step 4 using its code generation algorithms.
6. The servlet takes the page code from step 5, inserts it into the JSP page and finally produces the resulting HTML page.
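A minimal server-side sketch of steps 2-6 follows. generatePageCode is the MDR method named in the class diagram of Fig. 9 (Section 4.2); the servlet wiring, the page request parameter and the way the MDR component is obtained are assumptions made for illustration.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Core MDR operation from Fig. 9; the full interface is sketched in Section 4.2.
interface MDRInterface {
    String generatePageCode(String pageName);
}

// Illustrative servlet covering steps 2-6 of the running process.
public class MDRPageServlet extends HttpServlet {
    private MDRInterface mdr;

    @Override
    public void init() {
        // Assumption: the MDR component is published in the servlet context
        // at application startup; its concrete class is not shown here.
        mdr = (MDRInterface) getServletContext().getAttribute("mdr");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Step 2: the servlet forwards the page request to the MDR component.
        String pageName = req.getParameter("page");   // hypothetical parameter
        // Steps 3-5 run inside the MDR: it asks the PIM component for the
        // models of the requested page (served from the cache or parsed from
        // XML) and applies its code generation algorithms to them.
        String pageHtml = mdr.generatePageCode(pageName);
        // Step 6: the generated code is written into the resulting HTML page.
        resp.setContentType("text/html");
        resp.getWriter().println(pageHtml);
    }
}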
4.2 Design, Class Diagram and Interface Design

Fig. 9. Class Diagram
Figure 9 shows the system class diagram. MDRInterface includes the following four methods: generatePageCode generates the HTML code of a page from the pageName parameter; generateContextHTML generates the HTML code of a content model element from the content model itself; generateDivHTML generates table-based HTML layout code; and setCodeGeneratorPlugIn sets the plugin used for MDR code generation. The plugin is mainly used to customize the style of the generated page code and to minimize the size of the models. In a B/S information system large parts of the UI are identical, for example form colors, control styles and button styles, so it is helpful to use a code generation plugin to generate uniform interface code without the tedious work of configuring presentation models. The definition of the code generation plugin is shown in Fig. 9.
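Expressed in Java, the interface just described might look as follows; the four method names and the class names ContextObject, PageDiv and CodeGenerationPlugin come from the class diagram, whereas the parameter and return types and the plugin callback signature are assumptions, since the paper gives only the names and their purpose.

// Sketch of MDRInterface and the code generation plugin from Fig. 9
// (method names from the class diagram; types are assumptions).
interface MDRInterface {
    // Generates the complete HTML code of the page named pageName.
    String generatePageCode(String pageName);

    // Generates the HTML code of one content model element.
    String generateContextHTML(ContextObject content);

    // Generates the table-based HTML code of a layout division.
    String generateDivHTML(PageDiv div);

    // Installs the plugin used to customize the style of the generated code.
    void setCodeGeneratorPlugIn(CodeGenerationPlugin plugin);
}

// Hook through which a plugin imposes a unified interface style, e.g. common
// form colors and button styles (hypothetical callback signature).
interface CodeGenerationPlugin {
    String decorate(String controlKind, String defaultHtml);
}

// Model classes named in the class diagram (bodies elided).
class ContextObject { }
class PageDiv { }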
In the class diagram, besides MDRInterface and CodeGenerationPlugin, PIMModel manages the PIM and its related operations; PIMModelPersistence manages the persistence of the PIM, including reading models from XML and saving models to XML; RequirementModel manages the UIRAM and its related operations; and RequirementModelReader reads models from UML XMI files. DataObject, MappingObject, ContextObject, PageDiv, PresentationModel, InteractionObject, TaskModel and DomainObject are shown in the diagram as well.

4.3 Case Study
Through the UI requirements analysis stage we obtain the interface requirements analysis model. We now introduce an example to explain how the B/S information system generates code from the PIM. First we define a layout model, in which we specify how many divisions we need. Then, for each division block, we define the data presented in that block.
Fig. 10. Layout/Content/Object Model
Next, the Object Model instance is configured; each attribute can be edited as Fig. 11 shows.
Fig. 11. Object Model in detail
The list component in Fig. 12 is an independent component of the Content Model and is configured for result display; the details of the component can be further configured in the Presentation Model.
Fig. 12. A list Component
For the Interaction Model we only need to give the image name and the JavaScript function name of the top buttons, as Fig. 13 shows. After all these settings we obtain the following result page.
Fig. 13. Result Page
5 Conclusions

The BSMDR UI framework greatly enhances the efficiency of UI development for B/S information systems. First, it moves the focus of UI development to a macro level: in every phase of development, developers concentrate on models rather than on pages or code. Second, it reduces the amount of code, since page code is generated automatically and only MDR invocation code or special control code has to be written. Third, the processes of developing the UI and developing the system are efficiently joined together, since the task model and the domain model can be shared by both sides. All of this improves development efficiency.

The BSMDR UI framework also improves the maintainability of the information system. Customers can directly revise the Presentation Model to change the display style of the UI, which reduces the maintenance work of design/development specialists. The code generation plugin reduces duplicated code and enhances the flexibility of the code generation process, and the UI framework can be updated by revising the UI models, which is easier to maintain. In this paper, through the analysis and modeling of the UI requirements analysis model and the PIM, we have designed and implemented a UI framework for B/S information systems in which UI code is generated by dynamically analyzing and executing the models through the MDR component, as Section 4.2 describes.

Several items still need further improvement. The Interaction Model of BSMDR is simple: it fulfills most system requirements, but it also restricts the interaction capability and application field of the system. We have not yet performed much work on cross-platform issues, especially the cross-platform application of the BSMDR framework. Modeling tools are not obligatory in our system, but efficiency is greatly enhanced when they are provided. Finally, we used BNF to describe the PIM in this paper; how to describe it in UML is an important topic for further research.
References

[1] Object Management Group. http://www.omg.org/mda/, 2001.
[2] Anneke Kleppe, Jos Warmer, Wim Bast. MDA Explained: The Model Driven Architecture: Practice and Promise. Addison-Wesley, 2003.
[3] Jörg Pleumann, Stefan Haustein. A model-driven runtime environment for Web applications. UML Conference 2003, LNCS, Springer-Verlag, 2003.
[4] Frank Lonczewski. Providing user support for interactive applications with FUSE. ACM Press, 1997.
[5] Lars Braubach, Alexander Pokahr, Daniel Moldt, Andreas Bartelt, Winfried Lamersdorf. Tool-Supported Interpreter-Based User Interface Architecture for Ubiquitous Computing. DSV-IS 2002, LNCS 2545, Springer-Verlag, Berlin Heidelberg, 2002: 89-103.
[6] Piero Fraternali, Paolo Paolini. Model-driven development of Web applications: the AutoWeb system. ACM Transactions, 2000: 323-382.
[7] Jürgen Falb, Thomas Röck, Edin Arnautovic. Fully-automatic generation of user interfaces for multiple devices from a high-level model based on communicative acts. Proceedings of the 40th Hawaii International Conference on System Sciences, 2007.
[8] Christian Janssen, Anette Weisbecker, Jürgen Ziegler. Generating User Interfaces from Data Models and Dialogue Net Specifications. INTERCHI 1993.
A Proposal for Goal Modelling Using a UML Profile

Reyes Grangel1, Ricardo Chalmeta1, Cristina Campos1, Ruth Sommar2, Jean-Pierre Bourey3

1 Grupo de Investigación en Integración y Re-Ingeniería de Sistemas (IRIS), Dept. de Llenguatges i Sistemes Informàtics, Universitat Jaume I, 12071 Castelló, Spain {grangel, rchalmet, camposc}@uji.es
2 Combitech, SE-164 84 Stockholm, Sweden [email protected]
3 Laboratoire de Génie Industriel de Lille, Ecole Centrale de Lille, 59561 Villeneuve d'Ascq Cedex, France [email protected]
Abstract. UML has become the standard object-oriented language for modelling systems in the domain of Software Engineering. More and more relationships are being established between this domain and the Enterprise Modelling context, and the number of advantages of using UML as a knowledge representation language is also growing. Some recent research works, such as the MDE approaches, suggest it would be interesting to provide concrete methods and mechanisms that facilitate the much-needed link between enterprise models and the requirements defined to develop the computer system. UML is a good candidate to connect these two levels, that is to say, the CIM level and the PIM level from an MDA perspective. In this paper, we present a Proposal for Enterprise Knowledge Modelling based on UML, which focuses on representing enterprise knowledge. The Proposal is developed at the CIM level and presents different models for capturing the software requirements of a Knowledge Management System. In particular, the metamodel concerning the goal dimension and the UML Profile implemented from it are shown. Finally, the resulting Goal Diagram is explained by means of an example.

Keywords: Enterprise Modelling, Knowledge Representation, Goal Modelling, Model Driven Architecture, UML Profile
1 Introduction

Enterprise Modelling refers to the externalisation and expression of enterprise knowledge [1], which provides a holistic view of an enterprise and considers all its dimensions, i.e. processes, decisions, information, behaviour, resources and so forth [2]. Nowadays, there are a great number of languages, standards,
methodologies and their corresponding tools, such as GRAI [3], IEM [4], MEML [5], IDEF [6], etc., available for use. On the other hand, the Unified Modeling Language (UML) has become a standard language for object-oriented modelling and has been used successfully for modelling software systems in very different domains [7]. However, UML is a general-purpose modelling language that can also be useful for modelling other types of systems such as, for example, an enterprise [8, 9]. Other works, such as [10], point out the possibility of using UML as a language for Enterprise Modelling, while in [11] it is explained how and under what conditions this can be performed. However, the benefits of model-driven approaches and the new UML2 specification suggest the need to provide more practical examples for Enterprise Modelling with UML based on these recent works [12], and especially for Enterprise Knowledge Modelling. Some work, such as [13], has been carried out in this line, but that proposal is not enterprise-oriented and thus does not take into account the different dimensions of the enterprise for modelling [14]. The main weakness of Enterprise Modelling is the lack of strong links between enterprise models and software generation [15]. One solution, as pointed out in [16], is that the role of enterprise models should be that of facilitating the design, analysis and operation of the enterprise according to models, i.e. it should be driven by models (model-driven). In this context, UML is a good candidate to establish the necessary links between enterprise models and system models in general (and, more particularly, requirements engineering) using the extension mechanism of UML Profiles. Taking this context into account, the aim of the research presented in this paper was to consider the possibility of using UML for Enterprise Modelling with two objectives: first, to provide an extension of UML (one of the modelling languages most commonly used by engineers to develop software) focused on representing enterprise goals and, second, to establish the basis for connecting enterprise goals and system models. To achieve this, the capacity of UML2 to extend the language to a specific domain was used, and a UML2 Profile for Enterprise Goal Modelling was implemented. The paper is organised as follows. Section 2 outlines two approaches related to the aim of establishing connections between enterprise and system models. In section 3, the Proposal for Enterprise Knowledge Modelling using UML is described. Section 4 presents one of the metamodels and UML Profiles implemented in this Proposal, in particular the one related to the goal dimension. Finally, section 5 outlines the main conclusions.
2 Linking Enterprise Models and System Models

Linking enterprise models in general and, more particularly, enterprise goals and strategies to the first step in software development, that is to say, requirements elicitation, is one of the recent research trends bridging the domains of Enterprise Modelling and Software Engineering. This section offers a brief summary of two initiatives developed to connect the enterprise and system models.
2.1 MDA
Model-driven approaches are a good candidate for remedying the shortcomings of Enterprise Modelling with respect to generating code from enterprise models. Model Driven Engineering (MDE) or Model Driven Development (MDD) approaches are a new paradigm in the context of Software Engineering. Such a perspective attempts to improve the software development process by focusing on models as the primary artifacts and on transformations, which map information from one model onto another, as the primary operation carried out on models. As a result, they may have important consequences for the way information systems are built and maintained [17, 18]. As an example, the Model Driven Architecture (MDA) [19] defined by the OMG [7] is intended to promote the use of models and their transformations as a fundamental way of designing and implementing different kinds of systems. The main purpose of this approach is to separate the functional specification of a system from the details of its implementation on a specific platform. This architecture therefore defines a hierarchy of models from three points of view [15, 19, 20], namely:
- Computation Independent Model (CIM): used to represent domain and system requirements. It is based on business models and shows the enterprise from a holistic point of view, independent of computation.
- Platform Independent Model (PIM): used to model system functionality without defining how and on what platform it will be implemented; it focuses on information and takes a computational point of view.
- Platform Specific Model (PSM): the PIM is transformed into a platform-dependent model according to the platform selected for use; it focuses on the technological point of view.
Nowadays, the model-driven approach is followed by numerous projects, such as MODELWARE [21], ATHENA [22] and INTEROP [23] in the European Union, as well as by the MDA [19] defined by the OMG. MDA is an emerging paradigm: a lot of work is being carried out within the OMG framework in relation to PIMs, PSMs and so forth, but the characterisation of CIMs, and of the features that an enterprise model must satisfy in order to be considered a CIM and to generate appropriate software, is still in progress [15]. The main problem involved in enterprise modelling at the CIM level is how to accomplish a clear definition of the various aspects that the actors want to take into account. The domain and purpose of modelling, together with the aspects that must be highlighted, should be defined, and then the most suitable Enterprise Modelling Language (EML) has to be chosen [24]. Therefore, the number of issues that can be modelled at the CIM level raises the complexity of CIM models and their transformations, especially when the final aim is to capture enterprise knowledge.
2.2 GORA Methods
At the same time, other areas of research have emerged that recognise the importance of guaranteeing the quality of requirements by means of goals, especially Goal Oriented Requirements Analysis (GORA) methods, which aim at bridging the gap between stakeholders' needs and requirements specifications [25]. These methods mainly use progressive top-down approaches [26, 27, 28]: they start with the definition of the customers' needs and, by refining and breaking the needs down into more concrete goals, elicit the system requirements. The result is generally structured as a directed AND-OR graph, whose upper parts show the needs and whose lower parts show the requirements (a minimal sketch of such a graph is given at the end of this section). These approaches can be combined or interlaced with use case modelling techniques [29, 30, 31, 25] in order to obtain a clear connection between the goal-oriented methods and the requirements elicitation processes. For example, [25] proposes such an approach, which allows collaborative tasks to be supported and goals to be decomposed from multiple perspectives. All these methods and techniques are devoted, but not limited, to information systems and software engineering and can be used in a broader context such as Enterprise Modelling. As pointed out by [32], in this context Enterprise Modelling can be used in connection with requirements engineering and goal-oriented approaches. In this way, it is possible to establish links between the goals of the enterprise defined at several levels of granularity (for example, the strategic, tactical and operative levels) and the system requirements to be implemented in order to reach these goals. However, one weakness of these approaches is that they generally use different formalisms at the enterprise level for expressing strategic goals and at the IT system development level; for example, a specific formalism is developed in [32] for describing a Strategic Dependency model. In the following sections, a Proposal based on the definition of a UML Profile, which allows for the development of an integrated approach based on a unique formalism, is presented.
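As a minimal illustration of the AND-OR structure produced by GORA-style refinement, the following Java sketch models a goal graph; the class shape, field names and the example goals are our own illustration and are not taken from any of the cited methods.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a directed AND-OR goal graph as produced by GORA-style
// refinement; the representation is illustrative, not any cited formalism.
class Goal {
    enum Refinement { AND, OR }   // how the sub-goals jointly satisfy this goal

    final String description;
    final Refinement refinement;
    final List<Goal> subGoals = new ArrayList<>();

    Goal(String description, Refinement refinement) {
        this.description = description;
        this.refinement = refinement;
    }

    Goal refine(Goal subGoal) {   // returns this so refinements can be chained
        subGoals.add(subGoal);
        return this;
    }

    // Leaves of the graph correspond to concrete system requirements.
    boolean isRequirement() {
        return subGoals.isEmpty();
    }
}

class GoalGraphExample {
    public static void main(String[] args) {
        // Upper part of the graph: a customer need (hypothetical example);
        // lower parts: the requirements elicited by breaking it down.
        Goal need = new Goal("Shorten order processing time", Goal.Refinement.AND);
        need.refine(new Goal("Automate order entry", Goal.Refinement.AND))
            .refine(new Goal("Notify customers of order status", Goal.Refinement.AND));
        System.out.println(need.subGoals.size() + " sub-goals elicited");
    }
}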
3 Proposal for Enterprise Knowledge Modelling

The study presented in this paper focuses on Enterprise Goal Modelling. It belongs to a wider research project [33] which aims at modelling enterprise knowledge at the CIM level; the result of applying it in an enterprise is a graphical model, the Enterprise Knowledge Map. In general terms, the Proposal is based on MDE and, more particularly, on the MDA defined by the OMG. According to this approach, the process of developing a computer system is based on separating the functional characteristics of the system from the details of its specification on a specific platform. The Proposal therefore defines a framework for modelling enterprise knowledge at the CIM level using two levels of abstraction, which are required due to the great complexity of this level (see Table 1):
1. CIM-Knowledge: this corresponds to the top level of the model at the CIM level; the enterprise is represented from a holistic point of view, thus providing a general vision of the enterprise focused on representing enterprise knowledge, which is later detailed locally at successive lower levels.
2. CIM-Business: here, the vision of enterprise knowledge is detailed by means of a representation of its business, according to three types of models: the Organisational, the Structure and the Behaviour Model.

Table 1. Proposal for Enterprise Knowledge Modelling.
The Proposal takes the MDE premise as a fundamental concept, together with the following principles: it is focused on Enterprise Modelling, since it takes into account the dimensions of the enterprise and previous work leading to initiatives such as UEML [35] (Unified Enterprise Modelling Language, first developed by the UEML Thematic Network [34] and currently being worked on by the INTEROP NoE [23]) and POP* [33] (an acronym for the enterprise dimensions Process, Organisation, Product and so on, represented by a star, proposed by ATHENA IP [22]); and it is a user-oriented modelling framework, since it is intended to be used at the CIM level by domain experts.

From a technological point of view, this Proposal was implemented using the capacity of UML2 to extend a metamodel, that is to say, by defining a UML2 Profile for each aspect of the enterprise to be taken into account. A summary of the Proposal showing its levels of abstraction, metamodels and the profiles that were developed, as well as the models and diagrams proposed for each level, can be seen in Table 1. The Proposal was developed following these steps:

1. Definition of the models and diagrams that can be used to obtain the Enterprise Knowledge Map; the models and diagrams defined within the Proposal are presented in Table 1.
2. Definition of the metamodels shown in Table 1, so that the elements used for Enterprise Knowledge Modelling can be represented at the conceptual level.
3. Definition of the UML Profile for Enterprise Knowledge Modelling, following these steps for each of the profiles detailed in Table 1:
- Definition of the stereotypes, tagged values and constraints of the profile.
- Extension of the metaclasses of the UML2 Metamodel.
- Detailed description of the profile.
4. Implementation of the Profile using a UML tool (for example, IBM Rational Software Modeler or MagicDraw UML 12.0).
5. Validation of the Profile by means of a real case study.
In the next section, one of the profiles that make up the UML Profile for Enterprise Knowledge Modelling is presented as an example of how the goal dimension is taken directly into account in this Proposal, since there are some implicit concepts related to GORA concepts inside the other models of the Proposal. Therefore, the main steps outlined above are shown in the next section for the goal dimension, that is to say, the suggested Goal Metamodel, the ’UML Profile for GM’ that is implemented, and an example to illustrate the Goal Diagram.
4 UML Profile for GM

The Goal Metamodel was defined with the objective of representing, at the conceptual level, the elements related to the goal dimension in enterprises. Based on [36], the main elements that can be represented at the conceptual level are shown in Table 2.

Table 2. Conceptual elements to be represented in the goal dimension.
Fig. 1. Goal Metamodel: an excerpt from the Organisational Metamodel.
Figure 1 depicts an excerpt from the Organisational Metamodel (the Goal Metamodel), showing only the constructs needed to represent the enterprise goals defined in Table 2, which are the following (a Java rendering of these constructs is sketched after the list):

- Objective: represents any target that the enterprise wants to achieve; it can be defined at different hierarchical levels, such as the strategic, tactical and operative levels. At the strategic level, this construct is also used to represent the enterprise's mission and vision. For this class, the following properties are defined:
  - type: specifies the category of the objective, one of those defined in the enumeration "ObjectiveType": mission, vision, strategic, tactical or operative.
  - isLeaf: indicates whether it is possible to divide the objective into further sub-objectives.
  - level: indicates the hierarchical level on which the objective is defined, one of those defined in the enumeration "LevelType": collaborative, strategic, tactical or operative.
- Strategy: represents how the enterprise wants to achieve the objectives proposed at the strategic level.
- Plan: represents the organisation of the work at different hierarchical levels in order to accomplish the objectives and strategy defined in the enterprise. For this class, the following properties are defined:
  - type: specifies the kind of plan, one of the types defined in the enumeration "PlanType": business, action or initiative.
  - period: specifies the interval of time for which the plan is defined.
- Variable: represents any factor that is able to influence the execution of the plans defined in the organisation. For this class, the following property is defined:
  - type: specifies one of the categories defined in the enumeration "VariableType": values, strengths, weaknesses, opportunities, threats, keySuccess, policies or attitudes.
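As a rough illustration of these constructs, the sketch below renders the metamodel's enumerations and the listed properties in Java; the enumeration literals and property names follow the text above, while the class shapes and types (for example String for period) are our own assumptions, since the Proposal defines the constructs as UML stereotypes rather than code.

import java.util.ArrayList;
import java.util.List;

// Illustrative Java rendering of the Goal Metamodel constructs of Fig. 1.
class GoalMetamodelSketch {
    enum ObjectiveType { MISSION, VISION, STRATEGIC, TACTICAL, OPERATIVE }
    enum LevelType { COLLABORATIVE, STRATEGIC, TACTICAL, OPERATIVE }
    enum PlanType { BUSINESS, ACTION, INITIATIVE }
    enum VariableType {
        VALUES, STRENGTHS, WEAKNESSES, OPPORTUNITIES,
        THREATS, KEY_SUCCESS, POLICIES, ATTITUDES
    }

    static class Objective {
        String name;
        ObjectiveType type;
        boolean isLeaf;            // can it still be divided into sub-objectives?
        LevelType level;           // hierarchical level on which it is defined
        List<Objective> subObjectives = new ArrayList<>();
    }

    static class Strategy {
        String name;               // how the strategic objectives are achieved
        List<Objective> achieves = new ArrayList<>();
    }

    static class Plan {
        String name;
        PlanType type;
        String period;             // interval of time the plan is defined for
    }

    static class Variable {
        String name;
        VariableType type;         // e.g. STRENGTHS or THREATS (SWOT-like factors)
    }
}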
Figure 2 shows the diagram of the "UML Profile for GM", which was developed from the Goal Metamodel shown in Figure 1 and implemented using MagicDraw UML 12.0.
Fig. 2. Diagram of the “UML Profile for GM”.
Finally, Figure 3 shows the Goal Diagram for a real case applied to an audit enterprise, in which some of the requirements needed for the computer system can be observed; this diagram can be mapped onto use cases at the system level. The diagram presented in Figure 3 is located at the CIM-Business level and shows the goal structure defined by the audit enterprise in order to fit its strategic plan. The Goal Diagram is included within the Organisation Model (see Table 1), since it is linked to the following diagrams developed in this real case:
- Organisational Structure Diagram: shows the human resources and skills needed to accomplish the goals depicted in the Goal Diagram.
- Analysis Diagram: focuses on representing the Performance Measurement System defined by the audit enterprise to measure the degree to which the goals depicted in the Goal Diagram are being accomplished.
- Business Rules Diagram: shows some of the Business Rules defined by the audit enterprise, for example those related to values such as 'Professional ethics' and 'Moral integrity' defined in the Goal Diagram.
Moreover, the issues represented in the Goal Diagram are the origin of several diagrams included in the Structure and Behaviour Model (see Table 1). For example, the Tactical Objective defined in the Goal Diagram, 'To develop a software to support the auditing process', is linked to a Process Diagram, which shows the IDEF0 diagram of this process. It is also connected to the Resource Diagram, which describes the requirements needed to implement the computer system related to the accomplishment of this objective.
Fig. 3. Goal Diagram for an audit enterprise.
5 Conclusion

The Proposal for Goal Modelling presented in this paper is a first attempt to establish links between enterprise and system models. This Proposal is part of a wider research project aimed at defining a set of UML profiles for bridging the
Enterprise Modelling and the System Development domains. Combining the main advantages of using a common basic formalism (i.e. UML) with its adaptation to specific concerns and viewpoints through the definition of UML Profiles and, lastly, with an MDA approach makes it easier to define the links between models at both the enterprise level and the system level.
Acknowledgments

This work was funded by CICYT DPI2006-14708 and the EC within the 6th FP, INTEROP NoE (IST-2003-508011). The authors are indebted to TG2 [23].
References

[1] Vernadat, F.B.: Enterprise Modeling and Integration: Principles and Applications. Chapman and Hall (1996)
[2] Doumeingts, G., Chen, D.: Interoperability development for enterprise applications and software. In Cunningham, P., Cunningham, M., Fatelnig, P., eds.: Building the Knowledge Economy: Issues, Applications, Case Studies. eBusiness, IOS Press, Amsterdam (2003)
[3] Doumeingts, G., Vallespir, B., Chen, D.: Decisional modelling GRAI grid. In: International Handbook on Information Systems. Springer-Verlag (1998) 313–337
[4] Spur, G., Mertins, K., Jochem, R.: Integrated Enterprise Modelling. Beuth Verlag GmbH (1996)
[5] Krogstie, J.: Extended Enterprise MEthodology, Final version 1-12-d-2002-01-0. Technical report, EXTERNAL (IST-1999-10091) (2002)
[6] IDEF: Integrated DEFinition methods. http://www.idef.com/ (2008)
[7] OMG: Object Management Group. http://www.omg.org/ (2008)
[8] Eriksson, H.E., Penker, M.: Business Modeling with UML: Business Patterns at Work. J. Wiley (2000)
[9] Marshall, C.: Enterprise Modeling with UML. Designing Successful Software Through Business Analysis. Addison Wesley (2000)
[10] Panetto, H.: UML Semantics Representation of Enterprise Modelling Constructs. In: ICEIMT (2002) 381–387
[11] Berio, G., Petit, M.: Enterprise Modelling and the UML: (sometimes) a conflict without a case. In: Proc. of the 10th ISPE Int. Conf. on Concurrent Engineering: Research and Applications (2003) 26–30
[12] Grangel, R., Bourey, J.P., Chalmeta, R., Bigand, M.: UML for Enterprise Modelling: basis for a Model-Driven Approach. In Doumeingts, G., Müller, J., Morel, G., Vallespir, B., eds.: Enterprise Interoperability. New Challenges and Approaches, Springer, London (2007) 91–102
[13] Abdullah, M.S., Kimble, C., Paige, R., Benest, I., Evans, A.: Developing a UML Profile for Modelling Knowledge-Based Systems. In: Model Driven Architecture. Volume 3599 of LNCS, Springer, Heidelberg (2005) 220–233
[14] IFIP-IFAC: Generalised enterprise reference architecture and methodology (GERAM). Technical Report Version 1.6.3 (1999) http://www.cit.gu.edu.au/~bernus/taskforce/geram/versions
[15] Grangel, R., Chalmeta, R., Campos, C., Coltell, O.: Enterprise Modelling, an overview focused on software generation. In Panetto, H., ed.: Interoperability of ESA Workshops of the INTEROP-ESA International Conference EI2N, WSI, ISIDI and IEHENA 2005, Hermes Science Publishing (2005) 65–76
[16] Fox, M.S., Gruninger, M.: Enterprise Modelling. AI Magazine 19 (1998) 109–121
[17] Berre, A.J., Hahn, A., Akehurst, D., Bezivin, J., Tsalgatidou, A., Vermaut, F., Kutvonen, L., Linington, P.F.: Deliverable D9.1: State-of-the-art for interoperability architecture approaches, model driven and dynamic, federated enterprise interoperability architectures and interoperability for non-functional aspects. Technical report, INTEROP NoE (IST-2003-508011) D9 (2004)
[18] Aagedal, J.Ø., Bézivin, J., Linington, P.F.: Model-Driven Development. In Malenfant, J., Østvold, B.M., eds.: ECOOP 2004 Workshop Reader. Volume 3344 of LNCS, Springer, Heidelberg (2005) 148–157
[19] OMG: MDA Guide Version 1.0.1. Document number omg/2003-06-01 (2003)
[20] Berrisford, G.: Why IT veterans are sceptical about MDA. In: 2nd European Workshop on MDA with an emphasis on Methodologies and Transformations, Kent, Computing Laboratory, University of Kent (2004) 125–135
[21] MODELWARE: Modeling solution for software systems Project (IST-2004-511731). http://www.modelware-ist.org/ (2008)
[22] ATHENA: Advanced Technologies for interoperability of Heterogeneous Enterprise Networks and their Applications IP (IST-2001-507849). http://www.athena-ip.org (2008)
[23] INTEROP: Interoperability Research for Networked Enterprises Applications and Software NoE (IST-2003-508011). http://www.interop-noe.org (2008)
[24] Grangel, R., Chalmeta, R., Campos, C.: Requirements for Establishing a Conceptual Knowledge Framework in Virtual Enterprises. In Abramowicz, W., Mayr, H.C., eds.: Technologies for Business Information Systems, Springer (2007) 159–172
[25] Kaiya, H., Saeki, M.: Weaving Multiple Viewpoint Specifications in Goal Oriented Requirements Analysis. In: APSEC'04: Proceedings of the 11th Asia-Pacific Software Engineering Conference, IEEE Computer Society (2004) 418–427
[26] Antón, A.I.: Goal-Based Requirements Analysis. In: IEEE International Conference on Requirements Engineering (ICRE'96) (1996) 136–144
[27] van Lamsweerde, A.: Goal-Oriented Requirements Engineering: A Guided Tour. In: RE'01: Proceedings of the 5th IEEE International Symposium on Requirements Engineering, IEEE Computer Society (2001) 249
[28] Kaiya, H., Horai, H., Saeki, M.: AGORA: Attributed Goal-Oriented Requirements Analysis Method. In: 10th Anniversary Joint IEEE International Requirements Engineering Conference (RE'02), IEEE Computer Society (2002) 13
[29] Rolland, C., Souveyet, C., Achour, C.B.: Guiding goal modelling using scenarios. IEEE Transactions on Software Engineering 24 (1998) 1055–1071
[30] Antón, A.I., Carter, R.A., Dagnino, A., Dempster, J.H., Siege, D.F.: Deriving goals from a use-case based requirements specification. Requirements Engineering Journal 6 (2001) 63–73
[31] Santander, V.F.A., Castro, J.: Deriving Use Cases from Organizational Modeling. In: RE'02: Proceedings of the 10th Anniversary IEEE Joint International Conference on Requirements Engineering, IEEE Computer Society (2002) 32–42
[32] Yu, E.S.K., Liu, L., Li, Y.: Modelling Strategic Actor Relationships to Support Intellectual Property Management. In: ER'01: Proceedings of the 20th International Conference on Conceptual Modeling, Springer-Verlag (2001) 164–178
[33] Grangel, R., Chalmeta, R., Schuster, S., Peña, I.: Exchange of Business Process Models using the POP* Meta-model. In Bussler, C., Haller, A., eds.: BPM 2005. Volume 3812 of LNCS, Springer, Heidelberg (2006) 233–244
[34] UEML: Unified Enterprise Modelling Language Thematic Network (IST-2001-34229). http://www.ueml.org (2008)
[35] Opdahl, A.L., Henderson-Sellers, B.: Template-Based Definition of Information Systems and Enterprise Modelling Constructs. In: Ontologies and Business System Analysis. Idea Group Publishing (2005)
[36] Williams, T.: The Purdue Enterprise Reference Architecture. In: Proc. of the Workshop on Design of Information Infrastructure Systems for Manufacturing, Elsevier (1993)
Index of Contributors
Abecker, Andreas .......... 381
Åhlfeldt, Rose-Mharie .......... 41
Anderl, Reiner .......... 471, 533
Apostolou, Dimitris .......... 381
Askounis, Dimitrios .......... 159, 639
Badr, Y. .......... 301
Barthe, A-M. .......... 437
Bastiaans, Joris .......... 183
Bastida, L. .......... 221
Bénaben, Frédérick .......... 145, 437, 583
Beneventano, Domenico .......... 329
Benguria, Gorka .......... 29
Berreteaga, A. .......... 221
Biennier, F. .......... 301
Bourey, Jean-Pierre .......... 679
Bucko, Jozef .......... 135
Campos, Cristina .......... 679
Cañadas, I. .......... 221
Chalmeta, Ricardo .......... 679
Chang, Huiyou .......... 665
Chapurlat, Vincent .......... 583
Charalabidis, Yannis .......... 289
Chungoora, N. .......... 411
Couget, Pierre .......... 583
Dahlem, Nikolai .......... 329
De Labey, Sven .......... 233
De Nicola, A. .......... 571
Delina, Radoslav .......... 135
Dirgahayu, Teduh .......... 261
El Haoum, Sabina .......... 329
Errasti, A. .......... 521
Eskeli, J. .......... 499
Esper, A. .......... 301
Fernando, Terrence .......... 99
Figay, Nicolas .......... 423
Fischer, Klaus .......... 651
Folmer, Erwin .......... 183
Gautier, Gilles .......... 99
Ghodous, Parisa .......... 423
Gionis, George .......... 159, 639
Gocev, Pavel .......... 397
Gramza, M. .......... 499
Grangel, Reyes .......... 679
Hahn, Axel .......... 329
Hahn, Christian .......... 651
Harding, Jenny .......... 381
He, Song .......... 665
Heather, Michael .......... 625
Hiel, Marcel .......... 197
Hu, Kun Yuan .......... 595
Imache, R. .......... 3
Izza, S. .......... 3, 353
Jäkel, Frank-Walter .......... 511
Jankovic, Marija .......... 547
Johnson, P. .......... 611
Jun, Wei .......... 275
Kääriäinen, J. .......... 55, 85
Kalaboukas, Kostas .......... 159
Knothe, Thomas .......... 511, 547
Kokovic, Zoran .......... 547
Kommeren, R. .......... 499
Koumpis, Adamantios .......... 209
Koussouris, Sotirios .......... 159, 639
Ku, Tao .......... 595
Index of Keywords

agent .......... 343
Agent based approaches to interoperability .......... 197, 651
Agility .......... 3
Agility Evaluation .......... 3
architectures .......... 639
Architectures and platforms for interoperability .......... 71, 171, 197, 233, 625
ATL prototype .......... 145
B/S UI Framework .......... 665
Business .......... 3
Business Aspects of Interoperability .......... 113
business interoperability .......... 329
Business models for interoperable products and services .......... 29, 247
Business Process Management .......... 571
Business Process Reengineering in interoperable scenarios .......... 113, 301, 521
business schema representation .......... 329
collaboration .......... 135
collaborative business processes .......... 183
collaborative process .......... 145, 583
context .......... 343
crisis management .......... 583
cross-enterprise business processes .......... 183
cultural diversity in collaborated team work .......... 471
Decentralized and evolutionary approaches to interoperability .......... 125, 461
design methodologies .......... 183
Design methodologies for interoperable systems .......... 125, 171, 233, 315, 367
different organisational structures .......... 471
eGovernment Interoperability .......... 289
eGovernment Ontology .......... 289
electronic transactions .......... 639
Engineering interoperable systems .......... 15, 233, 499
Enterprise application Integration for interoperability .......... 315, 461, 485
Enterprise applications analysis and semantic elicitation .......... 611
Enterprise Information System .......... 3
enterprise interoperability .......... 381
Enterprise modeling for interoperability .......... 113, 159, 247, 261, 289, 301, 611
Enterprise Modelling .......... 679
Enterprise modelling for interoperability .......... 521
Enterprise Modelling in the Context of Interoperability .......... 511
ERP .......... 639
Experiments and case studies in interoperability .......... 521
Federation .......... 423
Formal approaches and formal models of interoperability .......... 595, 611, 625
Formal approaches to interoperability .......... 595
Frameworks .......... 511
Fuzzy Logic .......... 3
Global Engineering .......... 471
Goal Modelling .......... 679
healthcare informatics .......... 41
identity management .......... 135
Industrial case studies and demonstrators of interoperability .......... 55, 85, 499
Inference .......... 397
Information security .......... 41
information sharing .......... 135
information system .......... 583
Information System .......... 353
Information Technology .......... 3
Integration .......... 353
Intelligent infrastructure and automated methods for business system integration .......... 221
interaction .......... 183
interoperability .......... 411, 583, 639
Interoperability .......... 3
Interoperability best practice and success stories .......... 55, 85
Interoperability for Enterprise Application Integration .......... 171
Interoperability for integrated product and process modeling .......... 171
Interoperability for knowledge creation, transfer, and management .......... 315
Interoperability for knowledge sharing .......... 315
Interoperability in SME .......... 511
Interoperability infrastructure and IT platforms .......... 625
Interoperability issues in electronic markets .......... 15
Interoperability of E-Business solutions .......... 125, 247
Interoperability of Enterprise Application .......... 423
Interoperability performance analysis .......... 15
Interoperability-related standards, legal and security issues .......... 625
Interoperable enterprise architecture .......... 15, 71, 301
Interoperable inter-enterprise workflow systems .......... 71, 125
Interoperable knowledge management .......... 625
interoperable systems .......... 183
Knowledge Base .......... 397
Knowledge Integration .......... 471
knowledge management .......... 381
Knowledge Representation .......... 679
knowledge services .......... 381
Knowledge transfer and knowledge exchange .......... 99
Managing challenges and solutions of interoperability .......... 29, 197
Manufacturing .......... 397
manufacturing knowledge sharing .......... 411
Matching .......... 353
MDA .......... 583, 665
MDR .......... 665
Measuring, validating, and verifying interoperability .......... 611
messages .......... 183
meta-model .......... 145, 329
Meta-data and meta-models for interoperability .......... 159
Metadata for interoperability .......... 289
Model Driven Architecture .......... 679
Model Driven Architectures for interoperability .......... 367, 451, 485, 651
Model Driven Interoperability .......... 423
model-driven architecture .......... 183
Modeling cross-enterprise business processes .......... 113, 159
modeling language .......... 533
Modelling .......... 397
Modelling cross-enterprise business processes .......... 521
Modelling methods .......... 113
Modelling methods, tools and frameworks for (networked) enterprises .......... 221, 261, 559, 611
morphism .......... 145
object orientation .......... 533
Ontologies and Semantic Web for interoperability .......... 315
ontology .......... 343, 583
Ontology .......... 353, 397
Ontology based methods and tools for interoperability .......... 171, 367, 437, 571
Open and interoperable platforms supporting collaborative businesses .......... 437
OWL-S .......... 353
patient privacy .......... 41
patient safety .......... 41
POIRE .......... 3
process modeling .......... 533
Product Design .......... 397
product development .......... 471
Quality and performance management of interoperable business processes .......... 451
quality of service .......... 275
Rational Unified Process .......... 209
Requirements engineering for the interoperable enterprise .......... 15, 559
security .......... 135
Security issues in interoperability .......... 301
Self-organisation .......... 343
semantic interoperability .......... 343
Semantic Preservation .......... 423
semantic web .......... 381
Semantic Web .......... 397
semantics .......... 411
Semantics .......... 353
Semantic-web based interoperability approaches .......... 651
service .......... 343
Service .......... 353
service level agreement .......... 275
service matchmaking .......... 275
Service oriented Architectures for interoperability .......... 71, 197, 221, 233, 261, 301, 451, 485, 571, 651
Service Oriented Computing .......... 209
Similarity .......... 353
Simulation .......... 397
SOA .......... 583
Socio-technical impact of interoperability .......... 99, 461
standards .......... 183
Strategy and management aspects of interoperability .......... 29, 559
Support for cross-enterprise co-operative Work .......... 71
system of systems .......... 583
The human factor in interoperability .......... 85, 99
tools .......... 183
tools and frameworks for (networked) enterprises .......... 113
Tools for interoperability .......... 55, 85, 125, 437, 499
transformation .......... 145
trust .......... 135, 381
trusted scenario .......... 135
UML .......... 533
UML Profile .......... 679
virtual organisation .......... 381
Web services .......... 275
Web Services .......... 209
web-based platform .......... 135