Workflow Handbook 2005
Edited by Layna Fischer
Published in association with the Workflow Management Coalition
Future Strategies Inc., Book Division, Lighthouse Point, Florida
Workflow Handbook 2005
Copyright © 2005 by Future Strategies Inc.
ISBN 0-9703509-8-8

All brand names and product names mentioned in this book are trademarks or service marks of their respective companies. Any omission or misuse should not be regarded as intent to infringe on the property of others. The Publisher recognizes and respects all marks used by companies, manufacturers and developers as a means to distinguish their products. The “WfMC” logo and “Workflow Management Coalition” are service marks of the Workflow Management Coalition, http://www.wfmc.org.

Neither the editor, the Workflow Management Coalition, nor Future Strategies Inc. accepts any responsibility or liability for loss or damage occasioned to any person or property through using the material, instructions, methods, or ideas contained herein, or acting or refraining from acting as a result of such use. The authors and the publisher expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on the authors or Publisher to correct any errors or defects in the software.

Published by Future Strategies Inc., Book Division
2436 North Federal Highway #374
Lighthouse Point FL 33064 USA
954.782.3376 fax 954.782.6365
[email protected]
Cover design by Pearl & Associates

All rights reserved. Manufactured in the United States of America. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or information storage and retrieval systems—without written permission of the publisher, except in the case of brief quotations embodied in critical articles and reviews.

Publisher’s Cataloging-in-Publication Data
Library of Congress Catalog Card No. 2005902352
Workflow Handbook 2005 / Layna Fischer (editor)
p. cm. Includes bibliographical references, glossary, appendices and index.
ISBN 0-9703509-8-8
1. Business Process Management. 2. Workflow Management. 3. Technological Innovation. 4. Information Technology. 5. Total Quality Management. 6. Organizational Change. 7. Management Information Systems. 8. Office Practice Automation. 9. Business Process Technology. 10. Electronic Commerce. 11. Process Analysis.
Fischer, Layna
TABLE OF CONTENTS

FOREWORD 7
Jon Pyke, Chair WfMC, United Kingdom

INTRODUCTION 9
Layna Fischer, General Manager Workflow Management Coalition, United States
SECTION 1—THE WORLD OF WORKFLOW

WORKFLOW IN THE WORLD OF BPM: ARE THEY THE SAME? 17
Charlie Plesums, WfMC Fellow, United States

BPM—TOO MUCH BP, NOT ENOUGH OF THE M 23
Derek Miers, Enix Consulting, United Kingdom

INTEGRATED FUNCTION AND WORKFLOW 31
Chris Lawrence, Old Mutual, South Africa

BUSINESS ACTIVITY MONITORING AND SIMULATION 53
Joseph M. DeFee, CACI, and Paul Harmon, Business Process Trends, United States

BUSINESS PROCESS IMPROVEMENT THROUGH OPTIMIZATION OF ITS STRUCTURAL PROPERTIES 75
Vladimír Modrák, Technical University of Košice, Slovakia
ENHANCING AND EXTENDING ERP PERFORMANCE WITH AN AUTOMATED WORKFLOW SYSTEM 91
Robert J. Kearney, Image Integration Systems, Inc., USA

NARROWING THE SEMANTIC GAP BETWEEN BUSINESS PROCESS ANALYSIS AND BUSINESS PROCESS EXECUTION 103
Dr. Setrag Khoshafian, Pegasystems Inc., USA

USING SOA AND WEB SERVICES TO IMPROVE BUSINESS PROCESS FLOW 113
Zachary B. Wheeler, Roberta Bortolotti, SDDM Technology, United States

WORKFLOW AND BUSINESS RULES—A COMMON APPROACH 129
Heinz Lienhard and Urs-Martin Künzi, ivyTeam-SORECO Group, Switzerland

STATE OF BPM ADOPTION IN ASIA 141
Ken Loke, Bizmann System (S) Pte Ltd., and Dr. Pallab Saha, Institute of Systems Science, National University of Singapore
SECTION 2—WORKFLOW STANDARDS

BUSINESS PROCESS METAMODELS AND SERVICES 159
Jean-Jacques Dubray, Attachmate, United States

WORKFLOW AND SERVICE-ORIENTED ARCHITECTURE (SOA) 179
Arnaud Bezancon, Advantys, France

A COMPARISON OF XML INTERCHANGE FORMATS FOR BUSINESS PROCESS MODELLING 185
Jan Mendling and Gustaf Neumann, Vienna University of Economics and Business Administration; and Markus Nüttgens, Hamburg University of Economics and Politics

HOW TO MEASURE THE CONTROL-FLOW COMPLEXITY OF WEB PROCESSES AND WORKFLOWS 199
Jorge Cardoso, Department of Mathematics and Engineering, University of Madeira, Portugal

AN EXAMPLE OF USING BPMN TO MODEL A BPEL PROCESS 213
Dr. Stephen A. White, IBM Corp., United States

A SIMPLE AND EFFICIENT ALGORITHM FOR VERIFYING WORKFLOW GRAPHS 233
Sinnakkrishnan Perumal and Ambuj Mahanti, Indian Institute of Management Calcutta, India

ASAP/Wf-XML 2.0 COOKBOOK—UPDATED 257
Keith D. Swenson, Fujitsu Software Corporation, United States
SECTION 3—DIRECTORIES AND APPENDICES

WfMC Structure and Membership Information 281
Appendix—Membership Directory 283
Appendix—Officers and Fellows 289
Appendix—Author Biographies 307
Index 317
Other Resources 320
Foreword
Jon Pyke, WfMC Chair, United Kingdom

Thank you for supporting the work of the Workflow Management Coalition. It never ceases to amaze me just how much progress can be made in a 12-month period. My concern during 2004 was that the ever-increasing number of standards bodies in our industry were doing a great job of confusing the marketplace, holding back growth, and putting us in danger of becoming just another over-hyped trendy sector. Well, things have certainly changed, and changed for the better. I am no longer concerned for the future of Workflow and Business Process Management technology. Over the past year the discussions over which standard fits where have abated, and we now have a clear understanding of direction.

The increasing use of the term BPM is, however, causing confusion in the minds of many—does it mean Business Process Management? Business Process Modeling? Or Business Process Measurement? It is clear that all of these terms are valid, so the challenge now is to ensure that those responsible for the development of products, associated standards and the promotion of three-letter acronyms articulate the message crisply and clearly and ensure they say what they mean—and that’s part of the job of this publication.

The development of standards is now moving at considerable pace, and this is especially true of the XPDL standard, also known within WfMC as Interface 2. Soon to see its second major version released, XPDL is recognized as a key component of the standards landscape. The working group responsible for the standard, led by Robert Shapiro, has made significant advances, especially in mapping the specification onto the work of other standards groups such as BPMI. The growing list of XPDL supporters and implementations from both the vendor and user community can be found on our website.

Several times during 2004 the WfMC, led by Keith Swenson, WfMC TC Chair, assembled a live demonstration of products that implemented the Wf-XML 2.0 web services protocol. Wf-XML is a protocol for process engines that makes it easy to link engines together for interoperability. Wf-XML is built upon OASIS ASAP, so each event was simultaneously a demonstration of ASAP interoperability. The demonstrations showed six different implementations of the new web services protocol and the exchange of data across the Internet. These demos took place both in front of live audiences and online, with observers around the world joining via webcast, each time at sold-out capacity. One live demonstration led by Keith Swenson took place in Pisa, Italy in October 2004, where we hosted a local BPM Workshop open to the first 100 attendees, with another 600 online. Participants who implemented the protocol included ADVANTYS, Fujitsu, HandySoft and TIBCO, who demonstrated the scenarios of Customer, Retailer and Manufacturer. ASAP and Wf-XML are designed to be used by non-programmers. This is the key.
More details on the demonstrations are available elsewhere in this book in the chapter ASAP/Wf-XML 2.0 Cookbook—Updated by Keith Swenson.

Significant milestones for WfMC in 2004 have been the continuing adoption of our standards and specifications by government and big business worldwide. Notably, the United Kingdom e-Gov National Workflow project issued an official workflow standards guide, assisted greatly by David Hollingsworth, TC Chair of the Workflow Management Coalition. More information on this workflow guide is available on their website, www.workflowNP.org.uk.

The value of a standards reference model has been proved in other areas of technology, most notably by the ISO seven-layer reference model for computer communications; the WfMC Reference Model plays a comparable role for workflow.

The members of the Workflow Management Coalition hope you enjoy our Workflow Handbook 2005 and find it useful as you explore workflow and business process management and their many diverse benefits. Our thanks go to everybody who contributed to this important body of work, and to Layna Fischer, WfMC General Manager, for her role as chief editor and publisher.

Jon Pyke, Chair, WfMC
Introduction
Layna Fischer, General Manager, Workflow Management Coalition

Welcome to the Workflow Handbook 2005. This edition offers you:

• SECTION 1: The World of Workflow covers a wide spectrum of viewpoints and discussions by experts in their respective fields. Papers range from an examination of workflow in the world of BPM to Web Services workflow architectures, Business Process Management technology and business rules.

• SECTION 2: Workflow Standards deals with a comparative analysis of XML standards, with a visionary look into the future of the service-oriented architecture. Several examples provide important step-by-step instructions for generating processes, such as using the BPMN specification to model a BPEL process. The ASAP/Wf-XML 2.0 Cookbook has been updated following several successful live demonstrations of the protocol.

• SECTION 3: Directories and Appendices—an explanation of the structure of the Workflow Management Coalition and references comprise the last section, including a membership directory.
Section 1—The World of Workflow

WORKFLOW IN THE WORLD OF BPM: ARE THEY THE SAME? 17
Charlie Plesums, WfMC Fellow, United States
This introductory chapter describes how workflow management systems are no longer just a simple inventory of work to be processed, or a simple routing system, but have become sophisticated process management tools. System tools have emerged to help analyze and design complex new business processes. Other tools, the invocation engines, run the process as defined. Specifically these engines invoke transactions on systems both internally and across many organizations—suppliers, partners, and customers. Business Process Management—BPM—is born.
BPM—TOO MUCH BP, NOT ENOUGH OF THE M 23
Derek Miers, Enix Consulting, United Kingdom
The problem with many BPM deployments is that they often overlook the reason why this technology is needed in the first place—to support the achievement of business objectives. The re-emergence of business processes as a core discipline in modern business management is fairly clear. But in order to really derive the maximum benefit from BPM initiatives, firms need to manage the people interface more carefully.
INTEGRATED FUNCTION AND WORKFLOW 31
Chris Lawrence, Old Mutual, South Africa
Mr. Lawrence discusses designing and building computer systems based on the process modeling methodology called ‘integrated function and workflow’ (IFW). A key claim of the approach presented in this chapter is that it keeps the business model and the solution model aligned because they are one and the same model. The subprocess concept and construct is an important factor in that alignment—which is effectively the alignment between what and how.
BUSINESS ACTIVITY MONITORING AND SIMULATION 53
Joseph M. DeFee, CACI, and Paul Harmon, Business Process Trends, United States
Gartner suggests that BAM will become a major corporate concern in the next few years. Most large organizations will at least explore the possibility of improving business process management by creating systems that provide a broad overview of a process and that can provide near-real-time information and advice to process managers. A variety of techniques will be used. The authors believe that simulation-based BAM will prove to be the most powerful and flexible approach to BAM and will increasingly be relied on by those with the more complex processes.
BUSINESS PROCESS IMPROVEMENT THROUGH OPTIMIZATION OF ITS STRUCTURAL PROPERTIES 75
Vladimír Modrák, Technical University of Košice, Slovakia
With the growing requirements for the improvement of business activities within organizations, aspects of change and new concepts of process structures are becoming a topical problem. These aspects are equally important from the standpoint of the objectives of the first of the two phases of workflow management (WfM). The processes of change have been addressed mostly at the level of administrative business processes (BP). Apart from its main intention of presenting a practicable approach to the measurement of the structural complexity of business processes, the chapter also outlines some conceptual aspects of the effectiveness of creating practical tools for business process redesign, consisting of modeling and subsequently analyzing the structural attributes of processes.
ENHANCING AND EXTENDING ERP PERFORMANCE WITH AN AUTOMATED WORKFLOW SYSTEM 91
Robert J. Kearney, Image Integration Systems, Inc., USA
ERP systems are most commonly and correctly perceived and utilized as transaction processing machines. In that role they excel. Workflow systems, integrated with the ERP system, can function as the data delivery mechanism for ERP transactional processing. Conversely, ERP transactional processing is but one of the many activities in the workflow. Mr. Kearney describes how the integrated result provides capabilities that have been missing with ERP alone: standardization and automation of entire business processes, effective involvement and interaction with the business experts, and the creation and capture of all relevant business process information. The improved business processes enable the promised economies of scale from centralized ERP processing.
NARROWING THE SEMANTIC GAP BETWEEN BUSINESS PROCESS ANALYSIS AND BUSINESS PROCESS EXECUTION 103
Dr. Setrag Khoshafian, Pegasystems Inc., USA
The business process management (BPM) industry is growing rapidly, surpassing the expectations of even its most ardent supporters. Like most new technologies, BPM is enduring its own growing pains thanks to convergence, consolidation, and accelerated adoption. One of the critical areas of convergence that has not received sufficient attention is the semantic gap and interoperability challenges between business process analysis (BPA) tools and intelligent BPM engines. This interoperability challenge is further aggravated by the lack of robust business rules modeling tools. Business rules are now regarded as essential components of a next generation BPM (intelligent or smart BPM). Even though there are various BPM standardization efforts, the semantic gaps between BPA and run-time intelligent BPM engines are considerable. This paper addresses these semantic gaps and identifies solutions for continuous and iterative development of complex intelligent BPM applications.
USING SOA AND WEB SERVICES TO IMPROVE BUSINESS PROCESS FLOW 113
Zachary B. Wheeler, Roberta Bortolotti, SDDM Technology, United States
Case study: the District of Columbia’s Oracle database, presentation layer, and SOAP/XML. Tasked with analyzing how to improve and automate the current business process of license issuance for the Department of Consumer and Regulatory Affairs (DCRA) of the District of Columbia, the SDDM Technology team developed business logic and data access tiers. The solution to the District of Columbia business license problem was provided by using Service-Oriented Architecture and Web Services, taking advantage of a wide variety of available technologies. A Web services SOA allows business people in the DC government to consider using an existing application in a new way or offering it to a partner in a new way, thus potentially increasing the transactions between agencies.
WORKFLOW AND BUSINESS RULES—A COMMON APPROACH 129
Heinz Lienhard and Urs-Martin Künzi, ivyTeam-SORECO Group, Switzerland
The authors propose a BPM approach for addressing processes, Web Services and the use of business rules by the processes, starting from graphical models. Transparent, easy-to-manage and mathematically sound solutions are obtained in a coherent way. The authors show that practical experience with business rule management within BPM will have a beneficial influence on the further development of BPM technology. What is already possible now will become very easy in the future, e.g. fully integrated calls to rule management from process elements (like “event starts,” “process triggers,” and “decisions”—gateways in BPMN). Rule inference may also become a natural part of these systems.
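The integration the abstract describes, with process elements calling out to centrally managed business rules at decision points, can be sketched as follows. This is a minimal illustration, not the authors' product: the rule format (ordered condition/result pairs) and all names and thresholds are assumptions introduced for the example.

```python
# Hedged sketch of a decision gateway delegating to managed business
# rules, in the spirit of the integration described above. The rule
# format (condition/result pairs evaluated in order) is an illustrative
# assumption, not the authors' design.

def evaluate(rules, case):
    """Return the result of the first rule whose condition matches."""
    for condition, result in rules:
        if condition(case):
            return result
    return "manual-review"   # default route when no rule fires

# Illustrative rule set for a credit-decision gateway.
credit_rules = [
    (lambda c: c["amount"] <= 1000,                       "auto-approve"),
    (lambda c: c["score"] >= 700 and c["amount"] <= 5000, "auto-approve"),
    (lambda c: c["score"] < 500,                          "reject"),
]

print(evaluate(credit_rules, {"amount": 800,  "score": 640}))  # auto-approve
print(evaluate(credit_rules, {"amount": 4000, "score": 450}))  # reject
print(evaluate(credit_rules, {"amount": 9000, "score": 650}))  # manual-review
```

Because the rules live in one place, the process model stays stable while the decision logic is changed by the rule owners, which is the maintainability benefit the authors emphasize.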
STATE OF BPM ADOPTION IN ASIA 141
Ken Loke, Bizmann System (S) Pte Ltd., and Dr. Pallab Saha, Institute of Systems Science, National University of Singapore
There will be exponential growth in the adoption of BPM technologies within ASEAN companies, say the authors. The way businesses and the marketplace are evolving will fuel this adoption. Academicians, consultants and solutions vendors are working together to provide viable deliverables in various forms for ASEAN companies to adopt BPM. Companies have recognized the need to equip their workforce for process excellence. Intense competition and time-to-market factors put further pressure on companies to optimize their current processes. The authors show how observing corporate governance, in differing degrees, is something that most enterprises are taking seriously.
Section 2—Workflow Standards

BUSINESS PROCESS METAMODELS AND SERVICES 159
Jean-Jacques Dubray, Attachmate, United States
The software industry has long searched for a computing model where business or work processes would be explicit and where customers could change the business processes without significant coding projects. Programming languages like WS-BPEL, service orientation and web service technologies represent a major architectural advance toward a new generation of business process engines that can integrate with a wide variety of business functions and across business boundaries, going far beyond the original concepts of business process orchestration that were defined in the late nineties and have hardly evolved since then. Mr. Dubray contends that this new generation of process engines is expected to manage end-to-end business processes while being far more flexible, far more business-savvy and far more integrated with all aspects of IT, as the business vision of the past 20 years laid out. These concepts are poised to revolutionize software engineering and the way we build business applications.
WORKFLOW AND SERVICE-ORIENTED ARCHITECTURE (SOA) 179
Arnaud Bezancon, Advantys, France
Service-Oriented Architecture is clearly the solution for organising information systems, responding on various levels to new development and communication challenges in applications. The work involved in system migration, and choosing the appropriate moment to effect this migration, are the main obstacles to rapid implementation in companies. Bezancon maintains that workflow, BPM and SOA are therefore not competitors, but the proliferation of marketing and techniques surrounding process automation is such that solutions are particularly difficult to understand from the client company’s point of view. He predicts that in this particular context, those solutions presenting tools which are easiest to implement and use will almost certainly have the highest rate of success.
A COMPARISON OF XML INTERCHANGE FORMATS FOR BUSINESS PROCESS MODELLING 185
Jan Mendling and Gustaf Neumann, Vienna University of Economics and Business Administration; and Markus Nüttgens, Hamburg University of Economics and Politics
This paper addresses the heterogeneity of business process metamodels and related interchange formats. It first presents different approaches to interchange format design and the effects of interchange format specification. The authors derive the superset of metamodel concepts from 15 currently available XML-based specifications for business process modeling. These concepts are used as a framework for comparing the 15 specifications. The authors aim to contribute to a better comparison of heterogeneous approaches towards BPM, with the hope that this may finally result in a BPM reference metamodel and a related general interchange format for BPM.
HOW TO MEASURE THE CONTROL-FLOW COMPLEXITY OF WEB PROCESSES AND WORKFLOWS 199
Jorge Cardoso, Department of Mathematics and Engineering, University of Madeira, Portugal
The major goal of this chapter is to describe a measurement to analyze the control-flow complexity of Web processes and workflows. The measurement is to be used at design-time to evaluate the complexity of a process design before implementation. In a competitive e-commerce and e-business market, organizations want Web processes and workflows to be simple, modular, easy to understand, easy to maintain and easy to re-engineer. To achieve these objectives, one can calculate the complexity of processes. The complexity of processes is intuitively connected to effects such as readability, understandability, effort, testability, reliability and maintainability. While these characteristics are fundamental in the context of processes, Prof. Cardoso illustrates that no methods currently exist that quantitatively evaluate the complexity of processes.
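As a flavor of the kind of design-time metric the chapter describes, Cardoso's control-flow complexity (CFC) sums a weight for each split node in the process: an XOR-split contributes its fan-out, an OR-split contributes 2^n - 1 (the non-empty subsets of its branches), and an AND-split contributes 1. The sketch below applies those published weights; the process encoding (a dict of node to kind and successors) is an assumption made for the example, not a format from the chapter.

```python
# Hedged sketch of a control-flow complexity (CFC) style metric for a
# workflow graph. The weights follow Cardoso's proposal:
#   XOR-split -> fan-out n     (n mutually exclusive branches)
#   OR-split  -> 2**n - 1      (any non-empty subset of branches)
#   AND-split -> 1             (a single parallel continuation)
# The process representation (node -> (kind, successors)) is an
# illustrative assumption.

def cfc(process):
    total = 0
    for kind, successors in process.values():
        n = len(successors)
        if n < 2:                 # only split nodes contribute
            continue
        if kind == "xor":
            total += n
        elif kind == "or":
            total += 2 ** n - 1
        elif kind == "and":
            total += 1
    return total

loan_process = {
    "receive":  ("task", ["check"]),
    "check":    ("xor",  ["approve", "reject", "escalate"]),  # +3
    "approve":  ("and",  ["notify", "archive"]),              # +1
    "escalate": ("or",   ["legal", "audit"]),                 # +3
    "reject":   ("task", ["notify"]),
    "notify":   ("task", []),
    "legal":    ("task", []),
    "audit":    ("task", []),
    "archive":  ("task", []),
}
print(cfc(loan_process))  # 3 + 1 + 3 = 7
```

A higher CFC flags a design that will be harder to understand and test, which is exactly the comparison the chapter wants designers to make before implementation.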
AN EXAMPLE OF USING BPMN TO MODEL A BPEL PROCESS 213
Dr. Stephen A. White, IBM Corp., United States
The Business Process Modeling Notation (BPMN) has been developed to enable business users to develop readily understandable graphical representations of business processes. BPMN is also supported with appropriate graphical object properties that enable the generation of executable BPEL. Thus, BPMN creates a standardized bridge across the gap between business process design and process implementation. This paper presents a simple, yet instructive, example of how a BPMN diagram can be used to generate a BPEL process.
A SIMPLE AND EFFICIENT ALGORITHM FOR VERIFYING WORKFLOW GRAPHS 233
Sinnakkrishnan Perumal and Ambuj Mahanti, Indian Institute of Management Calcutta, India
The main contribution of this chapter is a new workflow verification algorithm proposed to verify structural conflict errors in workflow graphs. The algorithm is presented along with a visual step-by-step trace, correctness and completeness proofs, and complexity proofs. Workflow verification issues are solved in a simple and elegant manner by the proposed algorithm. The authors show how the algorithm is much easier to understand, as it uses search-based techniques like Depth-First Search, and has significant advantages in terms of time complexity when compared to other workflow verification algorithms available in the literature.
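To make the search-based flavor concrete: the chapter's algorithm targets structural conflicts such as deadlock and lack of synchronization, but even the two most basic conditions any verifier checks (every node is reachable from the start, and every node can reach the end) fall out of a depth-first search. The sketch below shows only those two checks, on an adjacency-list encoding assumed for the example; it is not the authors' algorithm.

```python
# Hedged sketch of DFS-based structural checks on a workflow graph.
# The chapter's algorithm verifies deadlock and lack-of-synchronization
# conflicts; shown here are only the two basic reachability conditions,
# on an assumed adjacency-list encoding.

def dfs(graph, start):
    """Return the set of nodes reachable from start."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def verify(graph, start, end):
    """Check every node is reachable from start and can reach end."""
    reachable = dfs(graph, start)
    # Reverse the edges so "can reach end" becomes reachability from end.
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    co_reachable = dfs(reverse, end)
    nodes = set(graph) | {n for dsts in graph.values() for n in dsts}
    return nodes <= reachable and nodes <= co_reachable

g = {"start": ["a"], "a": ["b", "c"], "b": ["end"], "c": ["end"]}
print(verify(g, "start", "end"))  # True
g["orphan"] = []                  # a task no path reaches
print(verify(g, "start", "end"))  # False
```

Both passes are linear in the size of the graph, which hints at why DFS-based verification can beat heavier algorithms on time complexity.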
ASAP/Wf-XML 2.0 COOKBOOK—UPDATED 257
Keith D. Swenson, Fujitsu Software Corporation, United States
Wf-XML is a protocol for process engines that makes it easy to link engines together for interoperability. Wf-XML 2.0 is an updated version of this protocol, built on top of the Asynchronous Service Access Protocol (ASAP), which is in turn built on the Simple Object Access Protocol (SOAP). This chapter is for those who have a process engine of some sort and wish to implement a Wf-XML interface. At first this may seem like a daunting task, because the specifications are thick and formal. But, as you will see, the basic capability can be implemented quickly and easily. This article takes you through the basics of what you need to know in order to quickly set up a foundation and demonstrate the most essential functions. The rest of the functionality can rest on this foundation. The approach is to do a small part of the implementation in order to understand how your particular process engine will fit with the protocol.
Section 3—Directory and Appendices

• The Authors’ Appendix provides the contact details and biographies of the valuable contributors to this book. Each is a recognized expert in his or her respective field. You may contact them if you wish to pursue a discussion on their particular topics.

• The chapters on the WfMC Structure and Membership describe the Coalition’s background, achievements and membership structure, and set out the contractual rights and obligations between members and the Coalition.

• WfMC Membership Directory: WfMC members in good standing as of February 2005 are listed here. Full Members have the membership benefit of optionally including details on their products or services.

The WfMC invites you to delve into the information presented in whatever manner suits your reading or research style and knowledge level. Our thanks and acknowledgements extend not only to the authors whose works are published in this Handbook, but also to the many more whose contributions could not be published due to lack of space.

Layna Fischer, Editor and Publisher
General Manager, WfMC
Section 1—The World of Workflow

Workflow in the World of BPM: Are They the Same?
Charlie Plesums, WfMC Fellow, United States

WORKFLOW
“In the Middle Ages, monks sat at tables carefully copying the scriptures. The father superior would make the assignments, perhaps giving the illuminated first page of a section to the most skilled artist, perhaps assigning the proofreading tasks to the elderly scholar with trembling hands.”1 Little has changed in centuries—the process is established, work is assigned and tracked, performance is checked, and results are delivered. Often it is done through the manual effort of supervisors (even today). But automated tools have emerged to assist.

1 “Introduction to Workflow,” Workflow Handbook 2002, page 19.

WORKFLOW MANAGEMENT
Document imaging emerged as a practical technology in the mid-1980s. When the images were stored in a computer system, there was no longer any paper to drop in somebody’s in-box—no natural way to assign and track the work. The initial simple workflow management tools routed the work to the person (or sequence of people) who needed to process the documents. The early workflow management systems were intimately associated with the content—the document—that was being moved or routed. The workflow management tools evolved to assign priorities to different types of work, to balance the distribution of work among multiple resources (people) who could handle the work, to support interruptions in the work (I’ll finish it in the morning), and to handle reassignment (I’m sick and am leaving now). Not all work was associated with a document, so most workflow management systems were adapted (if necessary) to process work that had no documents, images, or attachments—pure process, without content. For example, renewing an insurance policy or changing a credit limit. Not all processing steps required a person. Therefore most current workflow management systems support “straight-through” processing or robotics. For example:

• As part of a larger process, one person may enter data from an order or application. When the entry is done and the necessary data is available, the workflow system may automatically request a credit report from a system in another company. When the report is received, the manual or automated process continues.

• The entire process may be automated. An address change may be detected (perhaps on the payment stub), so it is automatically sent to an OCR process. If the address recognized from the form is a valid address in the postal database, the change is made, and a confirmation
letter is sent to the customer. A person is only involved in the case of an error—if the data cannot automatically be verified.

Thus workflow management systems are no longer just a simple inventory of work to be processed, or a simple routing system, but have become sophisticated process management tools. They originally assigned documents to people for processing, so most products have particular strength in assigning longer tasks to people, with one or many attached documents, but the viable workflow management systems have gone far beyond their historical origins.
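The straight-through address-change example above boils down to one routing decision: automate when the data verifies, and involve a person only on failure. A minimal sketch of that decision follows; the function names, the postal-database lookup, and the queue shape are illustrative assumptions, not part of any product described in the chapter.

```python
# Hedged sketch of the straight-through address-change example from the
# text: work is routed to a person only when automatic verification
# fails. All names and data shapes here are illustrative assumptions.

def handle_address_change(ocr_text, postal_db, work_queue):
    """Route an OCR'd address change: automate if valid, else queue it."""
    address = ocr_text.strip().upper()
    if address in postal_db:          # stand-in for a postal-database check
        return ("updated", f"confirmation letter sent for {address}")
    # Verification failed: a person must look at this piece of work.
    work_queue.append(address)
    return ("queued", "routed to manual review")

postal_db = {"12 MAIN ST, SPRINGFIELD", "9 OAK AVE, RIVERTON"}
queue = []
print(handle_address_change("12 Main St, Springfield", postal_db, queue))
print(handle_address_change("1 Nowhere Rd", postal_db, queue))
print(queue)  # ['1 NOWHERE RD']
```

The point of the pattern is the exception path: the bulk of the volume flows through untouched, and human time is spent only on the cases automation cannot verify.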
BUSINESS PROCESS MANAGEMENT Business has become more complex in recent years. Many processes extend outside the organization—even outside the enterprise. Some of the steps that traditionally were handled internally are now being “outsourced” to “business partner” companies or individual contractors. Suppliers of products and services can be more economical and predictable when they are integrated into the larger process. System tools have emerged to help analyze and design these complex new business processes. Other tools, the invocation engines, run the process as defined. Specifically these engines invoke transactions on systems both internally and across many organizations—suppliers, partners, and customers. Business Process Management—BPM—is born. The new BPM tools have been defined “top down” rather than simply evolving (like most workflow management tools). The techniques used to define the process are more rigorous—some people take pride in being able to describe the mathematical foundation behind the various techniques. Graphical maps, modeling, and simulation are common tools to define the process. In production, the invocation engines will use the latest technology (including the Internet) to pass data and invoke processes on local and remote systems within an organization, between different organizations within an enterprise, and between enterprises.
ARE BPM AND WORKFLOW THE SAME? Both Business Process Management and Workflow allow a process to be defined, tested, and used. BPM originally focused on computer transactions— the large number of rapid business processes most often handled entirely by machine. Workflow originally focused on content that required human judgment or processing, often distributed among large numbers of people, with each process taking a relatively long time, thus being subject to interruption. Both BPM and Workflow can handle the entire range of business processes. The question is how well each product works in each situation. BPM is focused on defining the overall business process, and managing and tracking that process wherever it goes, often through multiple organizations, different computer systems, multiple locations, and even different enterprises. However, each transaction is normally quick—the request to get a credit report may need to be queued until morning, but when run, the response is immediate. There may be only one system that performs a particular transaction, and when it is run, it is rarely interrupted. Since processing is fast, most can be processed on a first-in, first-out basis. BPM is optimized for processes that are automatic, not involving human interaction. Workflow is focused on managing the process also, but often has components of uncertainty and delay such as those associated with people. We
WORKFLOW IN THE WORLD OF BUSINESS PROCESS MANAGEMENT may have to send the "work" to someone for approval, but that person may be interrupted (suspending the work while they take a phone call, go to a meeting, or even return from vacation). There may be many (even hundreds) of different processors (people or systems) that could handle that particular piece of work, and any one processor may be able to process many different types of work (but not the same combination as the next processor). The total work must be equitably distributed among the many processors. Some of the processing may take many minutes or hours, so more important work (the bigger deal or the better customer or the older work) may be given higher priority—be assigned first. The workflow is often modified as the person seeks additional information (such as a lab analysis or a legal opinion) for this particular case. Quality needs to be checked—not just is the system operating reliably, but are the people making mistakes. (Beginners may have 100 percent of their work checked, but an expert may only have a few percent checked.) Most workflow products can invoke programs automatically (without human intervention) like the BPM engines, but are optimized for the multiple processors and delays in the process. But many of the workflow products on the market today aren't (yet) as focused on the cross-enterprise or inter-enterprise processes as the BPM tools. We could conclude that both workflow management products and BPM products manage a business process, so in that sense they are the same. These products may be very different internally—and thus the strengths and weaknesses of those products in a particular application can be very different. •
Many workflow vendors have integrated BPM products into their workflow packages, but the degree of integration is sometimes limited—the product still is primarily a workflow tool.
•
Many BPM vendors have discovered the special needs of workflow, and have either integrated a workflow product into their BPM packages, or have built-in rudimentary workflow functions.
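The allocation behaviour described above (many processors, each qualified for a different mix of work types, with the most important work assigned first) can be reduced to a small sketch. This is only an illustration; the names and data structures are invented for the example and are not drawn from any product mentioned in this chapter.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkItem:
    priority: int                       # lower number = more urgent (bigger deal, older work)
    case_id: str = field(compare=False)
    work_type: str = field(compare=False)

def assign_work(queue, processors):
    """Assign each processor the most urgent item it is qualified to handle.

    `processors` maps a processor name to the set of work types it can
    process; each processor may handle a different combination, as
    described above.
    """
    assignments = {}
    pending = list(queue)
    heapq.heapify(pending)
    deferred = []
    for name, skills in processors.items():
        while pending:
            item = heapq.heappop(pending)
            if item.work_type in skills:
                assignments[name] = item
                break
            deferred.append(item)       # this processor cannot handle it; requeue
        for item in deferred:
            heapq.heappush(pending, item)
        deferred.clear()
    return assignments
```

A dedicated workflow engine does far more than this (suspension, reassignment, escalation), but the core idea is the same: a shared prioritised pool matched against per-processor skills.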
EMBEDDED WORKFLOW
Many computer systems, such as those that process orders or administer insurance policies, have recognized the value of workflow. Therefore many of these systems have embedded workflow among the functions they provide. The good news is that this is a fast and convenient way to get started in workflow. The bad news is that it may offer only a minimal set of workflow functionality—just enough to perform that specific application in an unsophisticated way. The workflow functions of a dedicated enterprise workflow system are likely to be much more robust than those embedded within a specific application. One person rarely works with a single application all day, every day—some of their time may be spent substituting for a supervisor, or quality checking other people’s work. Some companies mix assignments between “back office mail processing” and “front office call center” work. Some time will be spent in training or even developing new work management procedures. Work may arrive from the telephone, internal or external email, the paper inbox, or the workflow system. People do not do well balancing work from multiple sources—while working on email, they will likely respond to all the email, even if more important work is waiting in the workflow system or inbox. This leads to an argument for a separate enterprise (or desktop) workflow system that balances the work between multiple business functions and systems.
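The argument for a single enterprise work list that balances telephone, email, paper, and workflow sources can be sketched as a simple merge. The channel names and tuple layout below are invented for illustration only.

```python
from datetime import datetime

def unified_worklist(sources):
    """Merge work from several channels into one prioritised list.

    `sources` maps a channel name (e.g. "email", "paper inbox", "workflow")
    to a list of (priority, received_at) tuples, where a lower priority
    number is more urgent.  Sorting across channels stops a user from
    clearing all of one channel while older, more important work waits
    in another.
    """
    merged = [(priority, received_at, channel)
              for channel, items in sources.items()
              for priority, received_at in items]
    return sorted(merged)
```

In a real enterprise work manager the ordering would also reflect SLA deadlines and skills, but the essential point is the single, cross-channel view.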
STANDARDS
The Workflow Management Coalition reference model, below, defines five components of workflow—the five interfaces to the workflow enactment service (what BPM calls the invocation engine). The standards associated with these interfaces are listed in the standards section on the WfMC website at www.wfmc.org. Understanding the five components helps distinguish between BPM and Workflow systems.
[Figure: WfMC Reference Model. The Workflow Enactment Service and its Workflow Engine(s) connect through Interface 1 to the Process Definition Tool, Interface 2 to the Workflow Client Application, Interface 3 to Invoked Applications, Interface 4 to Other Workflow Enactment Services, and Interface 5 to Administration & Monitoring.]
WfMC Interface 1 is the process definition. This is the starting point for BPM and the particular BPM component that has received a lot of analysis and development. Workflow products vary in the techniques and tools used to define the process—some use third-party tools (if any) to analyze the process, while others use custom programs or simply manually generated tables to define the business process for the processing engine.
WfMC Interface 2 is the client application—the program that invokes the workflow process. Workflow tools have a strong history of human interface, so may be stronger in this aspect; BPM is traditionally focused on processes
that require less human interaction, and may have a stronger programming interface. Some BPM products have no human interface at all—any human interaction must be custom programmed.
WfMC Interface 3 is the interface to the programs invoked by the business process. The first workflow systems presented documents to people for processing in their traditional way (none of the processes to be performed by the people were automatically invoked). Workflow vendors quickly learned that they needed at least a minimal interface to expedite the process—start the right program for the user, even if the data all needed to be entered manually. Eventually most workflow tools evolved, and can invoke local and remote processes, but that interface must often be customized—it is an “add on” to many systems. On the other hand, BPM is built to automatically invoke existing systems, using consistent, modern interfaces. BPM products generally excel here, except when dealing with archaic legacy systems that don’t support the modern interfaces and tools.
WfMC Interface 4 allows one workflow system to talk to another workflow system… to initiate work on another system (perhaps a different vendor’s system in a different enterprise), to track the status (progress) of that work, to cancel the work if necessary, and to inquire or be notified when the work is complete. BPM invocation engines generally don’t talk to other engines, but would invoke a transaction at another enterprise, and that transaction might, in turn, initiate a local process. Either approach gets the job done.
WfMC Interface 5 is for administration and monitoring of the system, including logging, monitoring the performance and backlogs, and adjusting the resources. Both BPM and Workflow have these functions. But this is an area that deserves attention.
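To make the five interfaces concrete, they can be pictured as the surface of a single enactment-service class. The sketch below is purely illustrative: the method names are invented for this example and are not the WfMC's standard API bindings.

```python
from abc import ABC, abstractmethod

class WorkflowEnactmentService(ABC):
    """Illustrative surface of a WfMC-style enactment service.

    Each group of methods corresponds to one numbered interface in the
    reference model; the names are hypothetical, not standard bindings.
    """

    # Interface 1: process definition import
    @abstractmethod
    def load_process_definition(self, definition: str) -> str: ...

    # Interface 2: workflow client application
    @abstractmethod
    def create_instance(self, process_id: str, data: dict) -> str: ...

    @abstractmethod
    def get_worklist(self, participant: str) -> list: ...

    # Interface 3: invoked applications
    @abstractmethod
    def invoke_application(self, activity_id: str, app_name: str) -> None: ...

    # Interface 4: interoperability with other enactment services
    @abstractmethod
    def start_remote_instance(self, service_url: str, process_id: str) -> str: ...

    # Interface 5: administration and monitoring
    @abstractmethod
    def get_audit_trail(self, instance_id: str) -> list: ...
```

Comparing a BPM product and a workflow product against a checklist like this quickly shows where each is strong: BPM engines tend to be rich on Interfaces 1 and 3, workflow products on Interfaces 2 and 4.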
CAUTION
One organization decided that their systems needed to be modernized across the entire enterprise, using new technology to link the home office and regional offices to their vendors and customers. They selected a BPM product, installed all the system components, and trained all the staff. The first application dealt with processing documents, from the customer through several regional offices, to home office specialists. Their BPM product didn’t naturally link to documents, but that could be added. Their BPM product didn’t have a user interface for the regional and home office staff to query the status of their work queues, but that could be built. Their BPM product didn’t have the ability to suspend work, allow a supervisor to override the assignment of work, or automatically prioritize work (some work was more urgent than other work). All of these requirements could be added to the platform provided by the BPM product. But this process could easily have been implemented by many traditional workflow systems, with little or no customization. The BPM solution may have been the best overall solution for the company, but workflow tools would have been much better suited to this first application. In another environment the problem might have minimal human interaction, electronic data rather than documents, and processes that involve a more complex organizational structure. A workflow tool may accomplish the job, but BPM products might be easier to implement and perform better for such a process.
There is no simple answer. This certainly doesn’t mean that Workflow is better than BPM, although it probably would have been for the one business process in the first case. Nor does it mean that BPM is better than Workflow, even though it might have been so in the second case. It does mean that the business requirements need to be honestly identified, and both BPM and Workflow products need to be considered.
CONCLUSION
The key issue is the business process, and how that process works for the business users, partners, suppliers, and customers. BPM and Workflow are both technologies that manage the definition, implementation, and operation of business processes. One is not good and the other bad, but they come from different origins, and thus have different strengths. The key is to look beyond the product name, and find the functions that will best serve the business.
BPM—Too Much BP, Not Enough of the M
Derek Miers, Enix Consulting, United Kingdom
INTRODUCTION
In the 1990s, I used to say, “The problem with workflow is that you don’t want it (work) to flow—you want it to get done.” The point was that, rather than having cases of work ping-ponging all over your business, the emphasis should be on getting work done. Generally, the reason companies were deploying workflow technology was to drive operational efficiency—to save cost while improving consistency and quality. Workflow also promised a way of dealing with the cultural malaise that had infected organizations over a certain size. But the reality of deployment in many large financial services businesses was that they just got into a bigger mess faster. Sure, many tasks were automated and streamlined to remove delays from the process. But the culture of the organization was seldom fundamentally changed. The body politic of the firm just absorbed the technological change and carried on as before. What firms were often missing was a methodology for dealing with the day-to-day grind of management—production, and how to get the most out of the available resources. Even now, most firms still have only a superficial appreciation of how much work their employees are capable of handling. With a workflow implementation, the problem shifted. The visible backlogs of bulging in-trays were now hidden within the shared queues on the system. Individuals were driven (handed work) by the system but were seldom measured or managed in terms of what they could realistically achieve. Deployments took little account of the skill levels of the individuals. Some could achieve work at a tremendous rate, while others struggled to achieve the norm. At the team level, there was no sense of driving performance or the achievement of goals or business targets. Out-of-the-box, management information was simply unavailable in workflow products.
While the system logged the history of all work (audit trail), it was left to the customer to write suitable programs that provided managers with effective information—i.e., while ‘management information’ was often promised, it was seldom delivered.
BPM TODAY
And it is not much different today. All that has really changed are the terms. Now it’s called Business Process Management (BPM) rather than workflow. But in the same way that workflow implementations often missed the point, so do many BPM deployments—too much emphasis on the BP and not enough on the M. The problem with many BPM deployments is that they often overlook the reason why this technology is needed in the first place—to support the achievement of business objectives. They set out to deliver the ability to ensure that work is ‘done’—consistently, on time, and correctly. Yet they miss
one of the key ingredients to corporate success—the day-to-day management of the people involved. This is not about the life-cycle management of the process itself, which is still important. It is about the management of the people who work within the process—what their collective efforts can achieve, where they are struggling, how much work is coming down the pipe, and what they have to get out the door today, tomorrow, this week, or by the end of the month. The capability to steer the detailed operations of the business by driving its business processes (through BPM) is one thing, but developing an effective production management discipline is another. Technological support for this aspect is usually left to manual spreadsheets—an afterthought, developed by the managers themselves. While this may be good enough for a couple of small teams, it just doesn’t scale to 300 teams of 20 people. Indeed, most white-collar businesses have no accurate idea of what sort of work throughput is possible with the resources they already have. Executives continually hear the cry for more staff, yet they don’t really know how much work the current employees can deliver. Our research in major financial services firms has shown that as much as 30-40 percent additional productivity is possible when a disciplined production management approach is employed (over and above the benefits possible from the core workflow/BPM implementation). This research was based on a series of detailed interviews with key executives within the core business and IT management functions of major financial services firms in the UK and Europe. Firms ranged in size from 120 to several thousand employees. The primary focus of the analysis was to understand the critical success factors associated with major BPM deployments—how these affected productivity and the goals of executive management.
In all cases, the businesses concerned had already deployed business process support environments. We really wanted to understand the issues that affected how one firm’s BPM implementation was more successful than that of another. What did they do differently? What ‘best practices’ were developed? How was it driven and managed? We found that senior managers often bought into the idea of a BPM deployment based on not only increased productivity and better consistency, but also on the Holy Grail—better ‘Management Information.’ They really wanted the ability to look into the end-to-end profitability of a product or service, drilling down to assess, compare, and contrast the impact of one channel over another, or how individual departments were performing against Service Level Agreements (SLAs). They wanted the ability to track and monitor their business, identifying teams that were failing or quickly spotting bottlenecks that were impacting performance. They also wanted the ability to ensure adequate adherence to new regulatory requirements.
A CASE STUDY
Halifax plc (now merged with the Bank of Scotland to create HBOS) is an internationally famous financial services brand. Responding to the challenges of the modern high street, their re-organization involved taking the back office functions out of their many branches and creating centralized administration centers. To support this exercise, process support systems were implemented to drive work from the front office operations in the high street directly into the back office.
The initial emphasis was on the BPM technology implementation, its features and usability at the front end. Significant benefits derived from the initial consolidation project with around a 15 percent reduction in costs. While some new management information was made available as a result, there was little change in the way that first line managers operated across the business. However, through a subsequent project, focused on the introduction of production management disciplines, the company transformed the whole culture of management. Rather than looking at team performance from a historical point of view (as was the norm), managers now predict what work is possible with the resources at their disposal. Previously, managers looked at what level of achievement they had delivered and then justified why they couldn’t handle any more. To support this new project, the core BPM system was extended with an innovative Management Information application that also took over the allocation and distribution of work, marrying the needs of the case and business priorities back to the skills of the employees in the individual teams. First line managers are now driven to understand how much work they have in the system and what is likely to arrive. In turn, this has allowed them to think more deeply about the performance of the individual team members, assessing their skills and personal development in a more holistic way. Individuals are assigned work within their capabilities and monitored against performance—in terms of task completion but also qualitatively. Managers are held accountable against weekly plans and asked to predict productivity over the ensuing 12 weeks. A league table is maintained, and individual team leaders are now incentivised to (over)achieve realistic performance targets through the staff under their control. The end result is a further 20 percent productivity improvement over the previous year alone.
Week-on-week output is still rising and the costs of doing business are being driven ever lower. With over 2000 full-time staff in the back office alone, that 20 percent improvement equates to 400 man-years—a big impact on the bottom line of the business. Moreover, the company has achieved a real transformation in management culture, building a virtuous circle of corporate performance, team working, and personal development. But none of this would have been possible without an alignment of incentives with the enhanced technology support environment. The product used to extend the BPM environment, Work Manager from eg Solutions in the UK (www.eguk.co.uk), provided a further layer of sophistication over and above the core capabilities of the BPM Engine. It allowed the business to develop a single view of work in the business, integrating the core BPM environment with other workflow systems and applications.
We have since discovered that this same application is relatively widely used in the UK financial services market, working alongside leading BPM products such as Staffware, FileNet, AWD, and eiStream (now Global 360). In some implementations, Work Manager is integrated with two or three different BPM environments, providing a single view of all work in the business and allowing management to track work across processes rather than just functional silos.
In the words of the Head of Retail Processing for the bank (talking about the BPM deployment), “You end up with brilliant processes, but the people involved can’t necessarily handle them. The organizational culture is left a mile behind, and the people side suffers. We had the systems and process part working well, but the behavior and people side slipped. The real issue was developing a new set of management behaviors.”
DON’T FORGET THE PEOPLE
There is an important lesson here. In a great many BPM projects, there is plenty of emphasis on the Business Process, but the wider holistic aspects of people, culture, and production management are often overlooked, yet they are just as important. As far as functionality goes, it is not a huge difference, but the lack of a ‘manage & control’ and ‘understand & improve’ culture makes a big difference to performance and bottom line profitability. BPM cannot be considered complete without it. All of the over-performing companies we surveyed sought to create a culture of continuous improvement. At the core of that objective were two common strategies:
• The application of ‘production management’ techniques, and
• The extraction of quality ‘management information.’
From the executive point of view, the overall objective was generally sublime operational efficiency—reducing the level of resources required to deliver value. They also wanted more meaningful information to support decision-making and better adherence/compliance with corporate policies and procedures.
PRODUCTION MANAGEMENT & REGULATORY COMPLIANCE
Production Management is really all about seven things: Measure, Plan, Communicate, Allocate, Monitor, Analyze, and Improve. While all this just sounds like plain, old-fashioned common sense to the seasoned manager, it has almost been lost in the hullabaloo created around the Business Process end of BPM. Having understood what their people are capable of and planned accordingly, team leaders need to track and monitor how well they do against targets. People need to know where they fit. When they understand this, they are far more motivated, especially if they can clearly grasp the basis for their individual targets and see that their achievements are fairly reflected. One mortgage firm we spoke with had achieved a particularly dramatic rise in productivity. With a new set of work practices and service level targets in place for six months, daily throughput rose by an average of 221 percent (in the number of applications processed), and the number of actual completions carried out increased by a massive 429 percent. These changes have translated into a more profitable company, with lending increasing from £220 million in the whole of 2001 to £620 million of completions in the last nine months of 2002 alone. In the words of another senior manager (from one of the world’s leading multinational insurance groups): “Our people now understand that a service level agreement is not a target that we aspire to, but a benchmark below which we will not fall. Introducing SLAs is an opportunity to make rapid and
lasting cultural changes within an organization, whilst bringing people with you. Everyone knows what he or she has to achieve. They now understand what ‘good’ looks like.” Of course, there is more to it than saying managers need to monitor their employees. What came out in our interviews again and again was the need to align the people to the work in hand and then monitor them individually. This implies assessing an individual’s skills and competencies against those required by the task. However, understanding people’s expertise and proficiencies is almost an art in itself. While this is relatively easy with a small team, when you consider the scale required for several thousand customer service staff and their associated back office support, a technological foundation is a must. On the other hand, the cost of getting this alignment wrong is usually not considered until it is too late. In the UK mortgage market during the late 90s, many firms ‘oversold’ endowment-related insurance policies that paid off the mortgage at term. Across the industry, the costs associated with handling related complaints have now run into hundreds of millions of pounds. If firms had delivered that work to suitably qualified personnel in the first place, they could well have avoided the current cost repercussions that are having a serious impact on business and product profitability. Sure, some of the issues I am highlighting with this example are related to initial product design and a ‘responsible’ sales process, but the whole financial services industry is now governed by a much stricter regulatory regime. Firms have to meet stringent new targets on handling customer complaints within a given deadline—i.e. managers need to know when they are unlikely to achieve this. But more importantly, organizations must now ensure that the individuals handling sensitive parts of the sales and administrative processes are qualified to undertake the work.
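The Measure, Plan, and Monitor steps of the cycle described in this section can be reduced to a small sketch: predict what a team can deliver from measured individual rates, then track actual output against that plan. The names and figures are illustrative only.

```python
def team_capacity(measured_rates, hours_per_day):
    """Measure/Plan: expected daily throughput from measured individual rates.

    `measured_rates` maps each team member to items per hour, taken from
    actual history rather than assumed; `hours_per_day` is productive time
    per person per day.
    """
    return sum(rate * hours_per_day for rate in measured_rates.values())

def variance_against_plan(planned, actual):
    """Monitor/Analyze: how far actual output deviates from the plan,
    expressed as a fraction of the plan."""
    return (actual - planned) / planned
```

The point is not the arithmetic, which any manager can do, but doing it systematically for every team, every week, from measured rather than assumed rates.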
Stricter regulatory regimes are not limited to the financial services industry. With the introduction of the Sarbanes-Oxley legislation in the US, most large firms need to ensure compliance in all sorts of ways. Firms are also struggling with the new focus on transparency and compliance. The widely held perception is that greater control of the process will ensure regulatory compliance. While the underlying BPM technology provides an effective audit trail (logging the history of all work items), seemingly small errors in the way a case is handled can have a massive impact on the brand. But the reality is that routing a work item through the business, using rules to get it to the right role (job title), is just the first step. Modern financial markets demand transparency and accountability—in what we do, how we act, and the decisions we make—through every level of the business. With this transparency comes increased risk to the public ‘trust’ in the brand. But transparency is only about finding out afterwards—management via the rear-view mirror. What is needed is a more proactive approach: a methodology that ensures employees have the right capabilities and training to undertake the work in hand. And to realistically achieve this goal, a more holistic view of the business is required.
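The distinction between routing by role alone and routing by qualification can be sketched in a few lines. The example is hypothetical; the names, roles, and product types are invented for illustration.

```python
def route_to_qualified(work_item, roles, accreditations):
    """Route a work item only to staff in the right role AND accredited for it.

    `roles` maps a person to their job title; `accreditations` maps a person
    to the product types they are qualified to handle.  Routing on role alone
    is the 'first step' described above; the accreditation check is what a
    compliance-aware allocation adds.
    """
    candidates = [person for person, role in roles.items()
                  if role == work_item["role"]
                  and work_item["product"] in accreditations.get(person, set())]
    if not candidates:
        raise LookupError(
            f"no qualified {work_item['role']} for {work_item['product']}")
    return candidates[0]
```

Raising an error when no qualified person exists is the proactive signal: the shortfall is visible before the work is mishandled, not afterwards in the audit trail.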
MANAGEMENT INFORMATION
As the use of procedural rules to route work moves all firms towards commoditization, differentiation will increasingly be based on how the processes are managed. Managing the life cycle of the process itself is certainly important, but we found that managing the groups of people interacting with those BPM systems is of equal importance. And managing people in such an environment requires that decisions are based upon good information. Virtually every BPM solution one looks at promotes the idea that, by using their technology, customers will get better management information. The audit trail provides all the information you’ll need—a complete history of the work item including who handled it when, what information was changed, etc. But because these products are focused on the needs of the business process life cycle, businesses are left to reflect on their own special needs for Management Information. With the right information gathering and reporting regime, the business can more easily sense important changes in the market and customer behavior, changing the process if necessary. But all that process flexibility will not do you much good unless you optimize the resources at your disposal. Based on a better understanding of operational capacity, managers can make more informed decisions, adjusting resource levels in different departments; effectively load balancing the business in line with peaks and troughs of demand. While this sounds a little like a return to the days of Bigger People Reductions (BPR), during our interviews we found that, with more accurate Management Information, a greater sense of reality prevailed. In some instances, this led to redeploying or recruiting extra staff to manage the work within the desired service and quality levels. Employees are now focused on targets set against several factors including effectiveness, quality, and product knowledge.
With the core BPM environment, all that was really monitored was throughput. Firms seeking this sort of sophistication have three options—build your own (i.e., write suitable programs on top of the BPM package), deploy a package with the core Production Management and Management Information components, or plug in a high-end ‘Business Intelligence’ application such as Hyperion or Comshare. While many BPM vendors claim to provide the requisite Management Information, most leave it to the customer to write suitable programs to reflect their own special reporting needs. Some BPM tools do provide their own built-in ‘analytics’ capabilities, capturing average cycle times of processes and activities, or how long work items wait before moving on to the next activity. This information is useful for finding process bottlenecks, but often does little for the day-to-day grind of extracting management information to support production or supervision at the team and individual level. When it comes to building your own management information, the problem is that you first have to understand all the key business issues and problems. Moreover, a deep appreciation is needed of the relationship between day-to-day operations and effective reporting. You also need to be aware of the technological implementation issues and then develop your own methodology for deployment amongst the workforce. Indeed, developing an
appropriate implementation methodology that embeds the cultural change is a major part of the battle. With BI solutions, the emphasis is largely on financial reporting. Products tend to focus on the provision of an executive dashboard with a 360-degree view of all metrics. This information is largely underpinned by operational metrics, but its purpose is primarily overall financial performance. Again, support for the day-to-day needs of production management is largely lacking. When considering packaged implementation, we found only one product that seemed to cover the whole spectrum. Work Manager from eg Solutions provided a pretty comprehensive approach. Their customer references pointed to the product’s superior work allocation features, providing integrated management information with features for monitoring work throughput at the individual and team levels. This has left managers with more time to address process bottlenecks rather than simply detecting them. Managers were also more able to focus on value-added functions, so time saved on routine tasks such as work allocation was channeled into addressing customer needs.
CONCLUSION
The re-emergence of business processes as a core discipline in modern business management is fairly clear. But in order to really derive the maximum benefit from BPM initiatives, firms need to manage the people interface more carefully. Through a focus on this area, successful firms have derived as much as 40 percent additional productivity improvement over and above that achieved by initial process automation using a BPM engine. In order to achieve these sorts of benefits, firms need to adopt a wider, holistic set of principles surrounding the BPM initiative. They need to institutionalize this as a way of thinking: thinking that delivers real, tangible results and long-term benefits.
Integrated Function and Workflow
Chris Lawrence, Old Mutual, South Africa
I want to talk about designing and building computer systems based on the process modelling methodology previously outlined in this book. I shall call them ‘integrated function and workflow’ (IFW) systems. The significance of each of the words in this description should become clear as the rest of this book unfolds, so I won’t try to steal my own thunder by attempting a summary explanation now. It is possible to design computer systems in which the concepts of process, subprocess, task, and so on are themselves the basic building blocks. The immediate advantage of this is that the eventual system design matches the business design. This simple correspondence brings enormous benefits, with implications on a number of levels. Some of the implications can be quite profound. But before explaining how to design a computer system from a business process model, I need to introduce a specialised piece of software which makes it possible. This is the ‘workflow engine’ or ‘process engine’. But before I do that, I must talk a bit about current ‘workflow systems’. I am not claiming that any component calling itself a ‘workflow engine’ will do the job. But nor am I claiming that the kind of workflow engine the approach needs is radically different from any workflow engine currently available. Most if not all of the features which an IFW system needs in its workflow engine would be found somewhere in today’s ‘workflow technology’. I will not be describing workflow engines, or any particular workflow engine, at the technical level. But I will cover the essential business functionality of the kind of workflow engine an IFW system needs. I want to do this in enough depth to make it quite clear which architectural component is responsible for which feature of any eventual solution.
Workflow systems
Software systems configured as workflow engines have been around for a while, and there are many ‘workflow systems’ on the market. There is an entire industry dedicated to designing and perfecting workflow systems, and to deriving and maintaining standards in workflow technology. The standards include protocols allowing workflow systems and workflow-enabled applications to talk to each other. To an extent, however, this industry has a particular kind of ‘workflow system’ in mind. This is primarily a separate application operating alongside (and therefore ‘external’ to) other business applications, and not necessarily communicating with them. This isn’t to say it rules out the type of workflow which is fully integrated into the business application, but that the paradigm case is of functional separation.
Excerpted with permission from the forthcoming book Make work make sense: An introduction to business process architecture by Chris Lawrence
There are a number of reasons for this, some good, some not so good. One good reason is that a business or organization may have a number of system applications, for example a core administration system, an accounting package, and, say, a human resources/payroll system. They may then have a single external workflow system which manages work done on (i.e. user interactions with) all these systems. If an entire business process spans a number of systems, the advantage of a functionally separate workflow system is that the entire process can be implemented in the one workflow implementation. This brings consistency and comprehensiveness. A less good reason is historical. Workflow is a relatively new technology—newer than a lot of the mainframe systems which dominate many of our banks, insurance companies and government departments. So there was obviously a ready market for workflow systems designed to operate in parallel with, or ‘on top of’, and therefore external to, those legacy systems. Turning now to what these packaged ‘workflow’ applications do, I think it is fair to say that their principal functionality focuses on the routing of links to electronic (scanned or text) documents and the management of work related to those documents. As a result there are now many reasonably sophisticated administration operations which see themselves as functioning as ‘paperless offices’. But their business processes are actually implemented in a mixture of procedure manuals, independent electronic models, people’s heads, and these ‘workflow’ packages. Their business processes are only partially implemented in the routing of links to electronic documents. This is because sometimes the two are the same and sometimes they are not.
There is another type of workflow which is a lot more ‘internal’. Many application systems have a ‘workflow component’ of greater or lesser sophistication. This workflow component often consists of little more than control over the ‘status’ (or ‘status code’) of a particular master record, for example an insurance policy master record. This will exercise a degree of ‘workflow control’ over (say) new business administration, but the same control may not be extended to other areas: claims, investment instructions and so on. This second type of workflow is often called ‘embedded’ workflow, for fairly obvious reasons.

This book is principally concerned with a third type of workflow, which I shall call ‘intrinsic’ rather than embedded or external workflow. Architecturally, both embedded and external workflow are afterthoughts. Embedded workflow is an afterthought built into the data and functional design of an administration system which is normally functionally complete and coherent without it. External workflow is even more of an afterthought, one that is literally layered onto a pre-existing application, regardless of how much ‘integration’ is then done to make the two layers communicate with each other.

Perhaps the best way to characterise intrinsic workflow is to turn the IT clock back and imagine what might have happened if ‘workflow’ systems had appeared first, and record-keeping systems only afterwards. Systems might then have been designed with the primary objective of modelling, supporting, controlling, directing and recording the work passing through the business, and only the secondary objective of carrying out the data processing, record keeping and document generation which was the work itself.

Although the intrinsic workflow design pattern did not apply in the past, it will be increasingly important in the future. Even where external workflow makes sense as a unifying layer over disparate legacy systems, intrinsic workflow can serve as a model to which de facto enterprise (compromise?) architectures will aspire. Specification and interoperability standards will be as necessary in this world as in the current one.

Where appropriate in this book I will employ the standards and terminology of the Workflow Management Coalition (WfMC), which is where the words ‘process’, ‘subprocess’ and ‘task’ originate. (WfMC uses ‘task’ as an alternative to ‘activity’.) Having said that, I am using ‘subprocess’ in a slightly specialised sense; the reason for this I hope to make plain later on, when we get to incremental automation. Further comparisons between external and ‘intrinsic’ workflow are drawn in a later section entitled “IFW and ‘classic’ workflow compared”.
Workflow engine

The WfMC literature includes a detailed survey of features displayed by a workflow engine or, more broadly, a ‘workflow enactment service’. I do not want to replicate those descriptions here. Instead I want to discuss in general business terms the features which a workflow engine would need in order to animate and articulate the kind of systems this book is about. It is unlikely that anything here will contradict a standard like that of the WfMC. These features are summarised below.

Data

The workflow engine will need its own database to support a number of specific data structures. I will go through these in turn.

Process model

These will be primarily the concepts already introduced, plus some supporting entities:

• Process (instance)
• Process type
  • Eg order handling; insurance claim.
• Subprocess (instance)
• Subprocess type
  • Eg take order; check order.
• Task (instance)
• Task type
  • Eg manual take order; automatic check order; automatic follow up.
• Task class
  • Eg manual; automatic etc (others may be needed later).

The data model diagram below shows the relationships between these entities.
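Since the data model diagram is not reproduced here, the type/instance entities just listed can be sketched in code. This is purely illustrative: the class and field names are my assumptions, not WfMC terminology, and a real engine would hold these as database tables rather than in-memory objects.

```python
from dataclasses import dataclass, field

# Type-level entities: the reusable process definitions.
@dataclass
class TaskType:
    name: str          # eg 'manual take order'
    task_class: str    # eg 'manual' or 'automatic'

@dataclass
class SubprocessType:
    name: str                            # eg 'take order'
    task_types: list = field(default_factory=list)

@dataclass
class ProcessType:
    name: str                            # eg 'order handling'
    subprocess_types: list = field(default_factory=list)

# Instance-level entity: one per case passing through the business.
@dataclass
class ProcessInstance:
    process_type: ProcessType
    subject_entity: str                  # eg 'order 123'

# Assemble the order-handling process type from the text's examples.
take_order = SubprocessType('take order',
                            [TaskType('manual take order', 'manual')])
check_order = SubprocessType('check order',
                             [TaskType('automatic check order', 'automatic')])
order_handling = ProcessType('order handling', [take_order, check_order])

order_123 = ProcessInstance(order_handling, 'order 123')
print(order_123.process_type.name)  # the process type this case follows
```

The one-to-many containment (process type contains subprocess types, which contain task types) mirrors the relationships the missing diagram would show.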
The individual instances of the subject entity will be the individual cases, eg order 123 going through the order handling process. There will also need to be rules, attributes, and possibly additional entities, to allow processes of different types to be assembled from the appropriate subprocesses and tasks, in the right sequences, and with the correct routing.

Users and access

There will be more to say about tasks later on, including the features of manual versus automatic tasks. For now we shall just say that the main functionality of the systems discussed here will be at task level, so this is where information about different users’ access rights will need to be applied. Different users will have access to different task types. The data model therefore needs to be extended:
(In practice access may also need to operate at levels lower than task type. An example is where user A may be able to authorise requests captured by user B, and user B may be able to authorise requests captured by user A; but neither user A nor user B can authorise requests they themselves captured.)

Workflow information

‘Workflow information’ here means information about where all the current process instances are at any time, plus historic information about what task in what subprocess was performed when, and for what subject entity. In the case of manual tasks, the historic information would also include what user was involved at what point. ‘Current state’ information would be, for example, that subject entity 123 is in process ABC and has a current workflow status of ‘awaiting X’, where X is the subprocess (or task within the subprocess) which that process instance has reached.
Historic information would be, for example:

| Process instance | Process | Subprocess | Task | User | Date/time started | Date/time finished | Task result* |
|---|---|---|---|---|---|---|---|
| Order 123 | Order | Take order | Manual take order | Fred | 10-1-2001 09:00:00 | 10-1-2001 09:30:00 | Next subprocess |
| Order 123 | Order | Check order | Auto check order | N/A | 10-1-2001 09:31:00 | 10-1-2001 09:31:02 | Next task |
| Order 123 | Order | Check order | Manual correct errors | Fred | 10-1-2001 10:00:00 | 10-1-2001 10:15:00 | Next task |
| Order 123 | Order | Check order | Auto check order | N/A | 10-1-2001 10:16:00 | 10-1-2001 10:16:02 | Next subprocess |
| Order 123 | Order | Check credit rating | Auto credit check | N/A | 10-1-2001 10:20:00 | 10-1-2001 10:20:02 | Next subprocess |
| …etc | | | | | | | |
* Task result: If, as a result of what has taken place in the task, the next move is to the next subprocess, then we shall say the task result is ‘next subprocess’. If instead the next move is to another task in the same subprocess, we shall say the result is ‘next task’. In practice the result ‘next task’ may need to be supplemented with an indication of which ‘next task’ to route to, as there could be more than one choice, but we’ll keep it simple for now. There may also be other possible task results later, but for the moment we shall only consider these two. So, for example, the only possible task result for ‘Manual take order’ is ‘next subprocess’, as there is nowhere else for the order to go. Then if the order passes all the validation rules in ‘Automatic check’, the task result is again ‘next subprocess’. But if, as a result of errors, the order has to be routed to the ‘Manual correct errors’ task, then the task result is ‘next task’.

System information

We said above that in an actual IFW system the functionality itself is all contained at task level. In fact even that wasn’t strictly true. Just as a process is effectively a container for one or more subprocesses, and a subprocess is a container for one or more tasks, so a task is a container for one or more programs, and it is those programs which are the real functionality level. To prevent any misconceptions, the expression ‘one or more programs’ is only so as not to make assumptions about the implementation environment. Each task would normally be implemented by one ‘logical’ program, even if this consists of a number of different components, subroutines and so on.
The same logical program could however easily implement more than one task. The data model is therefore further extended:
Functionality

The workflow engine will also need to ‘do’ a number of things, as well as ‘understand’ its database.

Handle tasks

A fundamental thing the workflow engine will have to do is handle task instances, both manual and automatic. Both classes have slightly different requirements depending on whether or not the task is the first in the process. Explaining how the workflow engine handles the various categories of tasks will involve explaining the generic features of the tasks themselves. This makes up a big part of the design of an IFW system.

Manual task at the start of a process

A manual task at the start of a process would normally be the way the relevant process begins, for example entering an order or an insurance claim. In fact most manual tasks in IFW systems, and in particular manual tasks which start processes, are like traditional on-line data entry programs whose job is to capture data and create and/or update records in a production database. And just as traditional data entry programs need to be made available to certain users and not others depending on the users’ access rights, so manual tasks which start processes will normally be accessible from some sort of menu customised in accordance with the logged-in user’s access rights. In an IFW system this is typically a function of the workflow engine itself. So the workflow engine would be able to present each logged-in user with the list of processes that user is allowed to start: a ‘process menu’.

Then, by the time the relevant manual task has completed, and as a result has captured enough data to start the process, the workflow engine would have created an instance of the correct process in its database. It would then follow the routing rules for the process of that type to determine what the next task should be. Following the order process example: since the subprocess ‘take order’ has only that one manual task, the routing will be to the next subprocess, in which the first task is the automatic check task.

Having said all this, it is also possible for a manual task at the start of a process to behave like one after the start of a process (see below) and not be started from the process menu. There will be an example of this in the case study.

Automatic task at the start of a process

This would normally arise where a process is not only triggered by another process but also uses data captured by that other process. The first task of the triggered process can therefore be an automatic task which operates on the data captured by the process which triggered it. There is an example of this in one of the main processes in the case study.

Automatic task after the start of a process

The automatic ‘check order’ task is an example of an automatic task after the start of a process. The initial manual task will have completed, and the workflow status of the relevant subject entity will have been set to ‘awaiting check order’. The workflow engine makes the next task available, which is ‘Automatic check’. ‘Make available’ means effectively ‘place on the job queue’.
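The ‘process menu’ and process-instance creation just described can be sketched minimally. Everything here is an illustrative assumption: the access table, the function names, and the choice of ‘awaiting check order’ as the status set by the routing rules all follow the order example in the text, not any real product’s API.

```python
# Which process types each user may start (the basis of the 'process menu').
PROCESS_START_ACCESS = {
    'fred':  ['order handling'],
    'admin': ['order handling', 'insurance claim'],
}

process_instances = []  # the engine's own database, simplified to a list

def process_menu(user):
    """Return the list of process types this user is allowed to start."""
    return PROCESS_START_ACCESS.get(user, [])

def start_process(user, process_type, subject_entity):
    """Create a process instance once the initial manual task has captured
    enough data, then apply the routing rules for that process type."""
    if process_type not in process_menu(user):
        raise PermissionError(f'{user} may not start {process_type}')
    instance = {
        'process_type': process_type,
        'subject_entity': subject_entity,
        # Routing for the order example: next subprocess is 'check order'.
        'status': 'awaiting check order',
    }
    process_instances.append(instance)
    return instance

order = start_process('fred', 'order handling', 'order 123')
print(order['status'])  # awaiting check order
```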
The workflow engine therefore needs scheduler logic to decide what task instance gets processed next. This will use data including priority or frequency parameters held at the task type and/or task instance level. A fairly simple type of logic is that the workflow engine processes task instances in strict order of ‘effective date/time’. Each task instance gets created with an effective date/time, and the scheduler processes any task whose effective date/time is less than or equal to the current date/time, in strict order of effective date/time.

Task instances can be created to run at a particular time in the future by setting their effective date/time in the future. One common way of doing this for automatic follow-up functionality is by setting a ‘time constraint’. The ‘rule’ would typically operate at task (type) level, eg to set the effective date/time of all instances of that task type to a week after the current date/time. The constraint itself (the future effective date/time) would be at task instance level. So for example a task type of ‘auto follow up’ might have a time constraint of seven days. An instance of ‘auto follow up’ created at (say) 10:00:00 am on 3 March 2003 would have its effective date/time set to 10:00:00 am on 10 March 2003, and the workflow engine would run the task at (or as soon as possible after) that time.
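The time-constraint mechanism just described is simple enough to sketch directly. The constraint table and function name are my assumptions; the seven-day figure is the ‘auto follow up’ example from the text.

```python
from datetime import datetime, timedelta

# Time constraints held at task *type* level, in days. Post-dated task
# types get a positive constraint; 'normal' task types default to zero.
TIME_CONSTRAINT_DAYS = {
    'auto follow up': 7,
    'auto check order': 0,
}

def effective_datetime(task_type, created_at):
    """Set the effective date/time of a new task *instance* by applying
    its task type's time constraint to the creation date/time."""
    days = TIME_CONSTRAINT_DAYS.get(task_type, 0)
    return created_at + timedelta(days=days)

created = datetime(2003, 3, 3, 10, 0, 0)
print(effective_datetime('auto follow up', created))  # 2003-03-10 10:00:00
```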
The more ‘normal’ task types which are not post-dated in this way would therefore have a default time constraint of zero. It is possible to implement other types of constraints, for example conditional constraints (only action if such and such is true). It is also possible to implement more complicated priority algorithms in the workflow engine if required.

An automatic task is like a batch program processing a file of data with only one logical record in the file. In outline, the responsibilities of the workflow engine in this context are as follows:

• Decide when it is the turn of this automatic task instance to run.
• When its turn comes, call the appropriate (logical) program and cause it to be supplied with the appropriate data. The ‘appropriate data’ will be identified from the subject entity.
• When the program has processed that instance, act on the task result: if ‘next subprocess’, then route to the next subprocess; if ‘next task’, then route to the appropriate next task in the same subprocess. (In practice ‘route to’ means creating a record on the appropriate workflow table identifying the next task instance—either the first task of the next subprocess or the relevant next task in the same subprocess—and therefore placing the next task instance on the job queue.)
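The steps above, combined with the effective date/time ordering described earlier, can be sketched as a minimal scheduler loop. The queue structure, function names and the lambda standing in for a ‘logical program’ are all illustrative assumptions.

```python
import heapq
from datetime import datetime

# Min-heap of (effective date/time, task type, subject entity):
# the scheduler processes instances in strict effective date/time order.
job_queue = []

def schedule(effective, task_type, subject_entity):
    heapq.heappush(job_queue, (effective, task_type, subject_entity))

def run_due_tasks(now, programs):
    """Run every queued automatic task whose effective date/time is less
    than or equal to the current date/time, and collect the task results."""
    results = []
    while job_queue and job_queue[0][0] <= now:
        effective, task_type, subject = heapq.heappop(job_queue)
        # Call the appropriate logical program for this task type,
        # supplying it with data identified from the subject entity.
        task_result = programs[task_type](subject)
        results.append((task_type, subject, task_result))
        # Acting on the result would mean creating the next task instance:
        # 'next subprocess' -> first task of the next subprocess;
        # 'next task' -> the relevant next task in the same subprocess.
    return results

programs = {'auto check order': lambda subject: 'next subprocess'}
schedule(datetime(2001, 1, 10, 9, 31), 'auto check order', 'order 123')
done = run_due_tasks(datetime(2001, 1, 10, 9, 32), programs)
print(done)  # [('auto check order', 'order 123', 'next subprocess')]
```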
Manual task after the start of a process

Take the example of the ‘Manual correct errors’ task in the ‘Check order’ subprocess. The ‘Automatic check’ task has run and found errors in the captured data, and returns a result of ‘next task’. The only possible next task is ‘Manual correct errors’. Just as in the case of an automatic task after the start of a process, a task instance record will need to be put on the appropriate workflow table. This will identify the subject entity instance (either directly or via identification of subprocess and process), the task type, and the task class. In this case the task class is ‘manual’, so a user has to action it.

Which user? Any user with the appropriate access rights, as controlled by the task access table. How will these users access the task instance, and how do they know it is there to be actioned? This brings us to another important workflow feature: the ‘in tray’. This is another familiar way in which workflow systems interact with users (and vice versa). Whereas the ‘process menu’ enables users to start processes (of types they are allowed to start), the ‘in tray’ presents users with instances of (manual) tasks from processes which have already been started. The instances will be of task types the users are allowed to perform, and they may well be from processes someone else (or something else) has started.

An example of a manual task after the start of a process is ‘manual underwriting’. Someone else (eg a new business clerk) may have started a new business process for application number 123. When application number 123 reaches the underwriting subprocess, the business rules decide it needs to be underwritten manually, by an underwriter of level 2. The routing within the process will then ensure that a task instance for application number 123 appears in the in tray of one or more underwriters of level 2.

Different workflow systems distribute work in different ways, and they may be flexible in how they do this. In one paradigm the task instance for application number 123 will appear in the in tray of all level 2 underwriters; as soon as one of them ‘picks’ application number 123 off his in tray, it is no longer available to any of the others. In another paradigm there could be a work distribution algorithm which allocates work across users in accordance with skill levels, current workloads etc. A variant of the distribution algorithm paradigm is where there is no in tray as such: the workflow engine not only allocates manual task instances between individuals, it also controls the order in which the individuals process the tasks. In this paradigm the user is only given one task instance at a time. See also ‘Distribute work’ below.

It is worth mentioning that there are two overall logical categories of manual tasks which are ‘waiting’ to be performed. One is of tasks which can be done immediately, for example the manual underwriting case. The other is of manual tasks which can only be performed when a particular external event occurs, for example when a reply is received from a customer. Manual task instances also sometimes need to be post-dated, ie created so they appear on users’ work queues (in trays) not immediately but at some time in the future. As with automatic tasks, this is done by applying a ‘time constraint’ at the task type level.

Distribute work

Work allocation algorithms have already been mentioned (‘Manual task after the start of a process’ above).
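The first in-tray paradigm described above (the task instance appears in every qualifying user’s in tray until one of them picks it) can be sketched as follows. The task access table and the function names are illustrative assumptions following the manual underwriting example.

```python
# Task access table: which users may perform each manual task type.
TASK_ACCESS = {
    'manual underwriting': ['alice', 'bob'],   # the level-2 underwriters
    'manual correct errors': ['fred'],
}

# Unpicked manual task instances: (task type, subject entity).
pending_tasks = [('manual underwriting', 'application 123')]

def in_tray(user):
    """All unpicked task instances of types this user may perform."""
    return [t for t in pending_tasks if user in TASK_ACCESS[t[0]]]

def pick(user, task):
    """Claim a task instance; it then disappears from everyone's in tray."""
    if task not in in_tray(user):
        raise LookupError('task not available to this user')
    pending_tasks.remove(task)
    return task

# Both underwriters see the same instance until one of them picks it.
assert in_tray('alice') == in_tray('bob')
picked = pick('alice', ('manual underwriting', 'application 123'))
print(in_tray('bob'))  # [] -- no longer available to bob
```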
Workflow systems will often have functionality to allow managers and supervisors to manipulate in trays, reallocate work and alter task access rights, to ease bottlenecks and generally spread work appropriately across the available resources.

Record data

Another important job the workflow engine has is to record information about tasks as they happen. Information about subprocesses and processes can normally be derived from task-level data. Example data items are:

• Task type
  • Eg manual underwriting; automatic check order
• Task class
  • Manual or automatic
• Date + time started
• Date + time completed
• User
  • (Or, more generically, ‘resource’)
• Task result
  • Next task; next subprocess;…
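Recording these items as each task completes can be sketched as a simple append to the engine’s history table. The field names are my assumptions; the sample values are the first row of the historic-information table shown earlier.

```python
from datetime import datetime

task_history = []  # the engine's historic workflow information

def record_task(task_type, task_class, started, finished, user, result):
    """Record the task-level data items as a task completes. Subprocess-
    and process-level information can be derived from these records."""
    task_history.append({
        'task_type': task_type,    # eg 'manual take order'
        'task_class': task_class,  # 'manual' or 'automatic'
        'started': started,
        'finished': finished,
        'user': user,              # or, more generically, 'resource'
        'result': result,          # 'next task' / 'next subprocess'
    })

record_task('manual take order', 'manual',
            datetime(2001, 1, 10, 9, 0), datetime(2001, 1, 10, 9, 30),
            'Fred', 'next subprocess')
print(len(task_history))  # 1
```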
Display data

It may not necessarily be the responsibility of the workflow engine itself to display data, but it will need to make the data available for query and display. This covers both historic data (work which has happened) and data about work still to be done. This data will need to be accessed for a number of reasons, including work management (how much work is there ‘in the system’; what has been allocated to whom) and customer service (contact management; history of work done for particular customers; and so on).

Incremental automation

We can now draw a number of threads together to introduce another powerful feature of the architectural approach. The example will be the order process discussed earlier. We need to imagine a context where the system has been implemented initially with a relatively low level of automation, but with an intention to enhance the system incrementally thereafter. This ‘minimal’ principle can apply to all processes in the system, and to all subprocesses and tasks in each process. But to illustrate what I mean I shall focus on one particular subprocess, that of ‘check credit rating’. For convenience the relevant process diagram is repeated below.

[Process diagram, not reproduced: subprocesses Take order, Check order, Check credit rating, Match against stock, Authorise order, Despatch order. Within Check credit rating: tasks Automatic credit check (exit ‘pass 1 or 2’), Manual credit check (exits ‘approved (rule 3)’, ‘meets criteria of 3, 4 or 5’, ‘written to customer (rule 4 or 5)’), Automatic follow up, and Manual record documents.]
I now want to consider how the ‘Check credit rating’ subprocess may be developed incrementally, by going through a number of iterations. For reasons of illustration, the example will be fairly extreme.
There were some fairly intricate business rules involved in the ‘check credit rating’ subprocess. Incremental automation will to a large extent mean progressive implementation of those rules.

Iteration 1

In the first iteration the subprocess would consist of just two tasks, both of which would have minimal functionality:

[Process diagram, not reproduced: as before, but Check credit rating now contains only the two tasks Automatic credit check and Manual credit check.]
Automatic credit check

This task would do literally nothing. Every case would pass straight through it.

Manual credit check

This task would consist of a screen identifying the order, the customer etc, and a yes/no button signifying that the order either passes the credit check or it doesn’t. If yes, the process continues. If no, the order is cancelled, which terminates the process; the user would need to telephone or write to the customer to tell him the bad news. Other than the identity of the order and the customer, all the information the user would need to take into account before making the yes/no decision would be external to the system: on paper, in other systems etc. All the business rules described earlier would need to be applied ‘manually’.

The reader will note that the ‘Manual credit check’ task (however minimal) is a necessary component, because the credit check has to be performed, and control has to be exercised at this point in the process. In this iteration (in this example) the ‘Automatic credit check’ is theoretically redundant, as it does nothing. But it was put there so that it could be built on in the future.

Iteration 2

In the next iteration the subprocess might consist of the same two tasks, but the functionality of both could be enhanced.

Automatic credit check

The first business rule mentioned was: if the customer has sent cash with the order, then pass credit check. Assuming the ‘cash received’ amount was captured in the initial subprocess (‘Take order’), this rule could be programmed into the automatic task:

If cash received >= order total, then Task result = next subprocess; otherwise Task result = next task.

Manual credit check

In the first iteration all instances would have gone to ‘Manual credit check’, and would therefore need to be worked on by a user. In the second iteration, the only ones which need to be touched by a user are those which did not pass business rule 1 above. The ‘Manual credit check’ screen could be left as it was in the first iteration. Or it could be made slightly more sophisticated: as well as showing the identity of the order and the customer, and allowing the user to make a yes/no decision, it might help to show how close the instance came to passing, for example by showing the cash received and the order total, and maybe the difference between the two.

The very simple increase in automation between iterations 1 and 2 demonstrates the principle behind its economic justification. In iteration 1, 100 percent of instances go to ‘Manual credit check’. Let’s say for the sake of argument that in iteration 2 only 70 percent of instances go to ‘Manual credit check’. If the cost of changing the system from iteration 1 to iteration 2 is less than the saving made by automating 30 percent of instances (over a particular payback period), then the enhancement is justified. The key thing is to know in advance what the saving ought to be, which is why measurement is important.

Iteration 3 etc

In subsequent iterations further business rules could be implemented which progressively reduce the proportion of instances needing to be handled manually. In view of the suggestion in iteration 2 to show how close the instance came to passing, rule 1 could be altered as follows:

If (cash received + tolerance) >= order total, then Task result = next subprocess; otherwise Task result = next task.

This would effectively automate the manual judgment in cases where the credit exposure is negligible.
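The three iterations of the ‘Automatic credit check’ routing rule can be sketched side by side. The function names, the order representation and the tolerance value are illustrative assumptions; only the rules themselves come from the text.

```python
def credit_check_iteration_1(order):
    """Iteration 1: does literally nothing; every case passes straight
    through to the manual task."""
    return 'next task'

def credit_check_iteration_2(order):
    """Iteration 2: raw rule 1 implemented -- if the customer has sent
    enough cash with the order, pass the credit check."""
    if order['cash_received'] >= order['order_total']:
        return 'next subprocess'
    return 'next task'

def credit_check_iteration_3(order, tolerance=5.00):
    """Iteration 3: the altered rule, automating the manual judgment
    where the credit exposure is negligible. Tolerance value is assumed."""
    if order['cash_received'] + tolerance >= order['order_total']:
        return 'next subprocess'
    return 'next task'

order = {'cash_received': 97.50, 'order_total': 100.00}
print(credit_check_iteration_1(order))  # next task
print(credit_check_iteration_2(order))  # next task (cash short of total)
print(credit_check_iteration_3(order))  # next subprocess (within tolerance)
```

Each iteration only changes the implemented rule inside the task; the process model around it, and the ‘Manual credit check’ fallback, stay in place.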
The second rule was: if the customer has not sent cash with the order, and the amount of the order is less than or equal to the total current unused credit, and the customer is not in arrears, then pass credit check. It will be evident that this rule would be straightforward to implement arithmetically, assuming the relevant data items are available.

A rule like this one, however, may call for more sophisticated treatment: if the customer has not sent cash with the order, and the amount of the order is less than or equal to the total current unused credit, but the customer is currently in arrears, then write to the customer to request payment before accepting the order. We shall assume all the following data items are available to the system:

• cash received
• order total
• total current unused credit
• total unpaid invoices greater than x days old
• customer name and address

It should then be possible for the system to generate a letter to the customer. In the diagram below, the ‘Manual credit check’ task would be responsible for generating the letter, for example by having a ‘generate letter’ button, and providing an opportunity for a user to check and/or amend the letter. An alternative design would give the ‘Automatic credit check’ task the job of producing the letter, which might then go out immediately with no human intervention.

[Process diagram, not reproduced: the Check credit rating subprocess, showing Automatic credit check (exit ‘pass 1 or 2’), Manual credit check (exits ‘approved (rule 3)’, ‘meets criteria of 3, 4 or 5’, ‘written to customer (rule 4 or 5)’), Automatic follow up, and Manual record documents.]
And so on. By now the principle should be clear: this kind of architecture allows for targeted, incremental automation. Development effort can be concentrated on one or more processes, on one or more subprocesses, or on one or more tasks, depending on where automation makes the most economic sense. To a large extent, incremental automation means incremental implementation of business rules. (The automation does not have to be progressive: we could also literally get the rules wrong, or not quite right, the first time, and adjust them the next time. The architecture allows for ‘learning’ as much as it allows for incremental automation.)
‘Raw’ and ‘implemented’ business rules

We mentioned before that it was possible to distinguish between a ‘raw’ business rule and an ‘implemented’ business rule. We introduced rule 1 like this: if the customer has sent cash with the order, then pass credit check. This is what I mean by a ‘raw’ business rule. It is expressed purely in business terms, and it could be (and very often will be) something which a person might follow.

We then redescribed rule 1 like this: if cash received >= order total, then Task result = next subprocess; otherwise Task result = next task. We also said it could be programmed into the automatic task ‘Automatic credit check’. This is what I mean by an ‘implemented’ business rule. It is one which is now coded somewhere and somehow into the system, so it can be used to generate automatic behaviour.

The words ‘somewhere and somehow’ are important. The rule could be ‘hard coded’, or programmed so as to be available to one or more system components (for example the component which implements the automatic task ‘Automatic credit check’). It could be implemented in data terms: for example, if the minimum premium on policy type X is £50, this might be implemented by setting a ‘minimum premium’ field on a ‘policy type’ record to £50 for policy type X. Or it could be implemented in workflow and routing. After all, the order process flow described in words (first take the order; then check the order; then check the customer’s credit rating; and so on) is only another set of (raw) business rules. They just happen to be rules implemented in the process model component of the system. Very often a raw business rule will be implemented by a combination of two or all three of these (program logic, data, process model).

As far as implementing rules in program logic is concerned, the statement about a rule being ‘programmed so as to be available to one or more system components’ needs some explanation.
It is generally desirable to reuse system components, and business rules are no exception. For a rule implemented in program logic, this means the same rule being run, or called, or applied, by more than one task. In data terms, therefore, one task can apply many rules; but equally one rule can be applied by many tasks. So, to extend part of a diagram we have used before:
Before leaving this subject it is worth mentioning that, as with workflow and business process management, there is a growing industry of academic research, standards and interoperability relating to the concept of a business rule, and to how types of business rule are to be categorised and logically analysed. See for example www.businessrulesgroup.org. I do not wish to stray into this heady atmosphere. I would however like to mention a recent review of business rules by CJ Date². Date’s approach is to reduce business rules as much as possible to a subset of data (‘…business rules really ought to be part of the data model’). This is in pursuit of the ideal of ‘declarative’ rather than ‘procedural’ programming.

I do not wish to argue the rights and wrongs of any particular way of implementing business rules in computer system design. The important objective is to achieve the correct results in an efficient and maintainable manner. From a theoretical standpoint, however, I do question an attempt to reduce all business rules to rules about well-formed data, integrity constraints, and data transformation. This is related to comments we made very early on (see ‘Process, data and work: Process and data’ above).

It may be that all computer systems transform data. There may also be significant categories of computer systems for which it makes sense to view their prime function as that of transforming data. From the viewpoint of software engineering, it may be true that for systems which are primarily mechanisms for transforming data the declarative approach to implementing rules is the best on offer. However, the systems I am talking about in this book are not primarily mechanisms for transforming data; their transformation of data is a means to an end. They are primarily mechanisms for getting work done, and the business rules are primarily rules about how that work is to be done.
It may be useful to reduce some of those rules to rules about well-formed data, integrity constraints, and data transformation. But not all of them. The important word is ‘useful’. A rule which says an underwriter of level 2 or above must approve all life assurance proposals above £500,000 sum assured is not primarily a rule about well-formed data. It may be possible to express it as a rule about well-formed data, for example:

IF Proposal.SumAssured > £500,000 THEN
  IF Proposal.Underwriter is level 2 or above THEN
    Proposal.Status can be ‘Approved’
  END IF
END IF
2 CJ Date: What Not How: The Business Rules Approach to Application Development, Addison-Wesley, 2000.
But expressing it like this risks losing sight of the dynamic features and implications, which are to do with work, routing and communication. If there isn’t an underwriter of level 2 or above, where does the proposal go? How do we make sure it doesn’t get forgotten about?

Perhaps it might be useful at this stage to embark on a brief review of the ‘systems I am talking about in this book’. I shall do this under the familiar aspects of data and functionality.

High-level architecture

The diagram below is intended as a high-level view of an IFW-architected system.
The diagram is intended to bring out several key points. Process model underpins entire design The process model (a finite set of processes, broken down into subprocesses and tasks, and interacting in specific ways) is what the workflow engine operates on and animates. But the process model is also the organising principle of the application system itself. However the application system is physically configured, it is logically partitioned into components which are primarily linked to tasks. Programs are not assembled into batch runs or (generally) called by on-line menus. They are linked to tasks (which are in turn linked to subprocesses
and processes) and are initiated by the workflow engine to process one instance of work at a time. This is a significant difference between IFW architecture and the currently more common ‘layered’ approach, in which a generic workflow solution operates in parallel with a totally differently architected administration system. The latter would typically not be structured around the business process model. (See ‘Integrated function and workflow: Workflow systems’ above.) Below is an equivalent high-level view of a ‘layered’ approach.
Workflow and application functionality separate
Although the process model in an IFW system both ‘instructs’ the workflow engine and ‘structures’ the application components, the workflow and application functionality each keeps to its own domain. The workflow engine almost operates as a ‘higher-order’ operating system for the application functionality.

Workflow and application data separate
Just as the workflow and application functionality are separate, so are the workflow and application logical databases. Workflow data is all about production instances (new business process for policy 123; order check subprocess for order number 456; etc). The application data is the familiar production database of entities and attributes: customer tables, policy tables, transaction files. There clearly need to be areas of overlap, in the key or index data: policy numbers, agent numbers, order numbers—anything which is needed to identify instances of work. Subject entities in fact. It is this shared key data which supports the powerful reporting and analytical potential.

Synthesising information
The application database knows all about customers, orders, policy applications, names and addresses, credit limits. The workflow database knows all about business processes currently being performed and (more significant from a management information viewpoint) processes which have been performed in the past. These two domains of data can be synthesised to provide insight into, for example:
• Which customers (buying which products) generate the most complaints (require the most service attention, etc);
• Which financial products take longest (and cost the most) to put into force (to underwrite, to arrange payment, etc);
• Which areas of the business have the highest levels of manual work (and might therefore repay investment in greater levels of automation, after taking into account the differential costs of labour, etc);
• Who did what on what case; who spends most of his time on what product; whose work needs to be checked and redone most often;
• …and so on.

The potential of the architecture is limited only by imagination and economic common sense. Some at least of that imagination and common sense should be applied to the distribution of work.

Distribution of work
As long as there are appropriate ways of distributing system functionality, the architecture also provides scope for distributing work (or ‘work’) among people and other resources around the world and along the relevant value chain. The value chain can include geographically separate components of the same business, but also customers, agents, partners and suppliers, including business process outsource (BPO) providers. So for example the initial data entry of a switch instruction on a unit-linked investment contract could be done over the internet by a broker, or the investor himself, or by a company employee at a sales branch. The same principle could apply to placing a sales order. Authorisation tasks in the same process could be handled at a head office, by a GUI screen on a local area network or intranet. Since the workflow component is ring-fenced it can conform to appropriate interoperability standards (WfMC, BPMI, etc) and communicate with other workflow architectures. Batch routines in legacy systems can create instances of processes (eg policy or pension maturities).
Accounting journal entries generated at instance level from an IFW system can be aggregated and/or summarised to feed on a daily/weekly/monthly basis into a batch-architected general ledger package.
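As an illustration of the synthesis described above (for instance, ‘which financial products take longest to put into force’), here is a toy sketch of joining the two logical databases on their shared policy-number key. All table contents, names and figures are invented for the example.

```python
from datetime import date
from collections import defaultdict

# Hypothetical extract from the workflow database: completed process
# instances, each carrying the shared key (policy number).
workflow_history = [
    # (policy_no, process, started, completed)
    ("P123", "new business", date(2005, 1, 3), date(2005, 1, 20)),
    ("P456", "new business", date(2005, 1, 5), date(2005, 1, 9)),
    ("P789", "new business", date(2005, 1, 7), date(2005, 1, 28)),
]

# Hypothetical extract from the application database: the same key
# identifies the product entity each policy belongs to.
application_db = {
    "P123": "whole life", "P456": "term", "P789": "whole life",
}

def avg_days_to_put_in_force(history, products):
    """Join workflow durations to application products via the shared key."""
    totals, counts = defaultdict(int), defaultdict(int)
    for policy_no, _process, started, completed in history:
        product = products[policy_no]
        totals[product] += (completed - started).days
        counts[product] += 1
    return {p: totals[p] / counts[p] for p in totals}

print(avg_days_to_put_in_force(workflow_history, application_db))
```

Neither database alone can answer the question; the join on the subject entity’s key is what makes the management information possible.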
There is just as much potential in the realms of distribution and interoperability as there is in targeted and incremental automation. How best to exploit that potential will depend on what the business or organization that owns the processes is aiming to achieve. So we shall soon do just that—look at what a particular business or organization may want to achieve, and how it might achieve it. But before that I want to round off this Theory section by looking at some remaining differences between the ‘IFW’ architectural approach and classic ‘extrinsic’ workflow.

IFW and ‘classic’ workflow compared
I do not want to delve into the mathematical theory underpinning workflow management systems. I shall however take as my example ‘generic external workflow system’ not any particular proprietary product but a ‘model’ as discussed in the theoretical literature. The main comparative features will be as applied to business applications and implementation projects.

The first point I want to make is that in pure workflow terms it is generally possible to translate the two paradigms into each other. The main difference apparent in the diagrammatic representation of a classic workflow process and an IFW process is that classic workflow doesn’t have the IFW-style ‘subprocess’. Since in an implemented workflow solution a subprocess is ultimately only a ‘container’ for one or more tasks, this should not matter—in an implemented solution. I shall take as an example a completed ‘Order process’. In IFW format (and simplified BPMN notation) this would appear something like this:

[Figure: the Order process in IFW format. Subprocesses, each containing automatic and manual tasks: Take order (Manual take order); Check order (Automatic check; Manual correct errors, pass 1 or 2); Check credit rating (Automatic credit check; Manual credit check); Match against stock (Automatic match; Manual match); Authorise order (Automatic authorise; Manual authorise 1 and Manual authorise 2, routed by rules 3, 4 and 5: approved, or written to customer); Despatch order (Manual despatch; Automatic follow up; Manual record documents).]
The same process expressed in ‘classic’ workflow notation would appear something like this:
[Figure: the same Order process expressed in ‘classic’ workflow notation, as a flat network of automatic and manual tasks without subprocess containers.]
The point is not that one is ‘right’ and the other is ‘wrong’. As an implementation the classic schema is just as ‘right’, especially if the automatic and manual tasks do exactly the same in both cases. But the classic schema is normally (although not necessarily) associated with a context where the ‘manual’ tasks are the focus of attention, and are either ‘truly manual’, or are tasks associated with separate administration systems. The automatic tasks are normally concerned with routing, and therefore route the work (request) automatically by the application of coded rules. Those rules typically only have a limited amount of data available to them. There is something ‘producer-centric’ about this paradigm, as the accent is on getting the most out of human resources, and limiting the overhead of routing between them. But, as I said, this is not a necessary feature of the ‘classic’ workflow schema. One could imagine a workflow system configured in the classic format, but with no linked ‘administration’ system. And then more and more functionality and data storage is built into that workflow system so that in the end the workflow system and the administration system are one. The result would be an IFW system, even though it originated as a workflow ‘skeleton’ designed in the classic format. If this were done, it would have been without the ‘subprocess’ construct. What then is the point of the ‘subprocess’?
The subprocess does not serve a crucial system-architectural function in the implemented solution. The solution will work without it. But it serves a business-architectural function. This is best brought out by the following observations, which are all different aspects of the same thing:
• The subprocess is concerned with the what rather than the how. At the subprocess level what is important is that a transition has occurred between one business status and another. How it happened is not relevant. This accords with the customer-centric viewpoint. The customer is not interested in how his claim got approved, just that it did get approved.
• Continuing the theme of the what versus the how, a process model which stops at the subprocess level is almost by definition a model of the business at the logical level.
• The subprocess level can be important for measurement. Measurement at subprocess level can be both ‘natural’ and familiar. Yes, it might be possible and important to count the number of cases waiting for initial assessment, waiting for medical evidence requested by the underwriter, or waiting for a final underwriting decision, but it is also important to provide subtotals at subprocess level: how many cases are ‘in underwriting’?
• In terms of solution design the subprocess level very often corresponds to particular domains of data and functionality—which span across the component tasks.

The partitioning referred to in the last bullet above is not as evident from a ‘classic’ workflow schema. In the example above the only indications that ‘Auto authorise’, ‘Manual authorise 1’ and ‘Manual authorise 2’ are linked are (i) the word ‘authorise’; and (ii) the fact that they are in a loop ‘controlled’ by ‘Auto authorise’. Reason (i) is completely arbitrary. Reason (ii) is also fairly arbitrary, in that ‘Manual authorise 1’ and ‘Manual authorise 2’ (in either classic or IFW schemas) do not have to route back through ‘Auto authorise’ to make ‘Auto authorise’ the only entry and exit point. Both manual authorisation tasks could quite legitimately route direct to ‘Manual despatch’. If so this would link both ‘Manual authorise 1’ and ‘Manual authorise 2’ closer to ‘Manual despatch’ than to each other.
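The subprocess-level subtotal mentioned above (‘how many cases are in underwriting?’) can be sketched very simply. The task and subprocess names below follow the Order process example; the snapshot of live work items is invented.

```python
from collections import Counter

# Each task in the process model belongs to a subprocess 'container'.
task_to_subprocess = {
    "Automatic authorise": "Authorise order",
    "Manual authorise 1": "Authorise order",
    "Manual authorise 2": "Authorise order",
    "Automatic credit check": "Check credit rating",
    "Manual credit check": "Check credit rating",
}

# Hypothetical snapshot of the workflow database: the task each live
# work item is currently waiting at.
live_work_items = ["Manual authorise 1", "Manual authorise 2",
                   "Manual credit check", "Manual authorise 1"]

# Task-level counts are available, but rolling them up to subprocess
# level gives the 'natural' business subtotal.
by_subprocess = Counter(task_to_subprocess[t] for t in live_work_items)
print(by_subprocess["Authorise order"])      # orders 'in authorisation'
print(by_subprocess["Check credit rating"])  # orders 'in credit check'
```

The roll-up is trivial precisely because the subprocess is part of the model; without it, the grouping would have to be reconstructed from task names or routing loops, which the text argues is arbitrary.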
But as soon as one considers the business content the link between (say) ‘Auto authorise’, ‘Manual authorise 1’ and ‘Manual authorise 2’ is evident. The sort of data items processed automatically in ‘Auto authorise’ (cash amount of order; customer’s outstanding credit; etc) will typically be the ones needing to be displayed and considered ‘manually’ by ‘Manual authorise 1’ and ‘Manual authorise 2’. I do not want to labour the point. A key claim of the approach presented in this book is that it keeps the business model and the solution model aligned because they are one and the same model. The subprocess concept and construct is an important factor in that alignment—which is effectively the alignment between what and how.
Business Activity Monitoring and Simulation
Joseph M. DeFee, CACI and Paul Harmon, Business Process Trends, United States

1. MANAGING A BUSINESS IN REAL TIME
Companies have always depended on processes. Historical processes may not have been as well-analyzed as they are today, but there have always been business procedures designed to turn inputs into outputs in an efficient manner. Just as there have been processes that defined how materials flowed from their arrival to assembly and then to shipping, there have always been communication and control systems that attempted to monitor the process flows and deal with events that threatened to upset the expected flow.

Consider one example of how processes have historically been managed. Imagine a small hospital of 30 years ago. Like hospitals today, this hospital had a Customer Lifecycle Process that managed customers from admission through treatment to discharge. (See Figure 1.) The first subprocess or activity was probably Admissions. As patients came through the front door they were documented. The patient’s medical history was determined, credit was established, and the patient was assigned to a specific ward for treatment. Over the course of time, the hospital had established expectations. In a normal week, roughly the same number of patients entered as were discharged, maintaining a predictable need for doctors, nurses, medicines, and beds.

[Figure 1. A manual patient-lifecycle process with reports delivered by phone. (Bold lines indicate flow of patients. Dashed lines indicate flow of communication and control information.) A Hospital Administrator communicates by phone with the Activity Supervisor for each activity, from Admission through to Outprocessing, as the actual process activities are executed in real time.]
Consider what happened when a serious local infection manifested, or when a major fire or traffic accident resulted in a large number of patients arriving at once. When a sudden, unexpectedly large number of patients arrived in the lobby, the supervisor in charge of admissions picked up the phone and called the administrator to alert her that there was a problem. The administrator would normally ask the nature of the emergency, and then consider
possible actions. If the emergency was an accident, it was easier to deal with, in the sense that a few phone calls could probably determine the extent of the accident and the number of patients that would be arriving at the hospital in the course of the next hour or two. Calls to other supervisors would result in still other calls to change shifts and make more doctors and nurses available. Still other phone calls would result in shifts of bed assignments to assure that the emergency trauma ward had enough beds available. In the course of an hour or so, the hospital would adjust its activities and assign new resources to assure that the patient lifecycle process would continue to function effectively.

A harder problem would be an increase in flu patients. In this case, instead of getting a large, but known, increase in admissions over the course of a few hours, the hospital would need to deal with a number of unknowns. Admissions would begin to increase slowly, and then, as the flu spread, the increase would grow daily. Many variables would determine the overall course of the flu. Severity, the susceptibility of particular groups—like older people or youngsters—whether school was in session, the availability of flu shots, and many other things could limit the spread, or control the duration of hospital flu patients’ stays. Complicating matters, the flu might infect doctors and nurses, making it harder to smoothly adjust staffing schedules. Although most flu epidemics pass without serious consequences, there have been especially virulent epidemics, like the one following World War I, which killed millions of people. The alert hospital administrator had to try to plan for a variety of different scenarios, and then adjust her actions as she acquired more data on the development of the flu in the hospital’s community, and in the nation as a whole. Different industries have different kinds of problems.
Most, however, have processes that are designed to run within set parameters, and those processes have communication and control systems in place to handle exceptional periods. Most exceptions are easy to understand and deal with, while some are much more challenging, involving, as they do, more complex interactions among variables over a longer period of time. Historically, companies have relied on smart, experienced managers to gather appropriate data, interpret it correctly, and take decisions to minimize the effect of the changed circumstances on the daily functioning of company processes. As companies have become larger and processes have been dispersed over wider geographical areas, managing large business processes has become more difficult. In the past 30 years, most companies have installed computers and used them to collect data, and, in some cases, to automate processes. Thus, for example, our hospital admission office now enters new patient data via a computer terminal and can often access data from customer databases to determine a new patient’s medical history and credit. Since most supervisors have access to computers, it is often possible for ward supervisors to check the admissions database to determine how many new patients will be arriving in their ward in the next hour. Similarly, it is possible for an administrator to check historical data and generate a report that describes how many patients were admitted during the flu season last year or during the last 10 flu epidemics. In essence, computers, that were originally installed to facilitate or automate the flow of patients, parts, or assemblies through a process, can also be used to facilitate monitoring and communication, and some can even support
managers who have to make decisions to maintain the efficiency of a process in unusual circumstances.

During the last few decades, most companies have also become more sophisticated in their management of processes. To counteract a tendency toward departmental functions that don’t communicate as efficiently as they might, most companies have designated managers who are responsible for large-scale business processes. In product-oriented companies, these managers are often termed line managers. In other cases, managers are assigned to coordinate processes that cross functional lines, like our patient lifecycle process. To support these managers, who are often responsible for managing processes that occur at several different locations and over long periods of time, software vendors are working to create tools that pull together all of the relevant information, highlight problems, anticipate problems, and assist in making decisions to assure rapid, successful adjustments of the process flow.
2. BUSINESS ACTIVITY MONITORING
In 2002, the Gartner Group coined the term Business Activity Monitoring (BAM) to refer to software products aimed at “providing real-time access to critical business performance indicators to improve the speed and effectiveness of business operations.”[1] In the past year the term BAM has become quite popular. Before using the term, however, it’s important to emphasize that BAM is a misleading term. The emphasis should have been on Business PROCESS Monitoring. Unfortunately, Gartner already used the acronym BPM to refer to Business Process MANAGEMENT, so Gartner apparently used “Activity” to get a unique, new acronym. BAM has caught on, and we’ll use it throughout this paper, but readers should remember that the emphasis in BAM is on pulling together information about large-scale processes, rather than on monitoring small-scale activities.

BAM and Other Decision Support Technologies
As we have already suggested, using computers to help smooth the flow of items through a process is nothing new. For many years, software designers have built triggers and alerts into software applications. Thus, for example, if admissions exceed some set number, an administrative terminal may sound an alert. One only has to think of an operator at a power plant to understand how a computer can provide an operator with a wide variety of alerts and even provide diagnostic information to assist the operator in his or her job performance. Similarly, marketing groups have analyzed data from sales for decades to determine shifts in customer preferences. In the past decade, many companies have invested in large Data Warehouses, which consolidate data from many smaller databases, and Business Intelligence (BI) applications that use special algorithms to search massive amounts of data, looking for patterns that humans might overlook. The results of these efforts usually find their way to senior managers who set strategy or design new products.
IT groups have also used ERP and EAI tools to analyze the flow of data between application components. The emphasis, in the case of IT, has been on fast, efficient data processing and smoothly functioning middleware and not on drawing any broader meaning from the data. Still, it’s easy to imagine how transaction data, relabeled, and provided within the broader context of
a business process model, could help business process managers understand how a process is working.

BAM proposes something that falls in between the immediate feedback that alert signals and triggers can provide operators and supervisors, and the long-term trend reports that database reports and BI can provide senior managers. (See Figure 2.) BAM aims at providing a process manager with a broad overview of a major business process. In the case of our example, it seeks to provide a hospital administrator with an overview of the current status of the patient lifecycle process. Or, it seeks to provide a factory administrator with an overview of how an entire production line is functioning.

[Figure 2. One way of organizing the monitoring and decision support systems in use today. The diagram plots who information is delivered to (operator or supervisor; process manager; senior staff manager) against the latency of information delivery (no delay; little delay; long delay): alarm and process control systems serve operators and supervisors with no delay, BAM systems serve process managers with little delay, and strategic planning systems serve senior staff managers after a long delay.]
Figure 2 provides one way of summarizing the range of monitoring and decision support systems in use today. Process control systems that provide information to operators and provide alerts to supervisors are mostly real-time systems that report to employees and supervisors who are very close to a specific activity. Similarly, systems that gather data, analyze it over hours, days, or weeks and report to senior staff managers are mostly designed to aid in future planning. BAM systems are newer and less widely deployed. They aim to fill the middle ground between activity-specific and strategic planning systems by providing business process managers with near-real-time information about an entire process. Properly done, they allow the process manager to initiate changes in specific activities that keep the entire process running smoothly.

The Functions Required for an Effective BAM System
A BAM system cannot simply provide the administrator with the kinds of raw data or the signals that it provides plant operators or IT managers, or the administrator would be overwhelmed with inputs. Instead, someone must design a filtering system that draws data from a wide range of sources
and then massages it so that only truly significant data reaches the administrator. On the other hand, the BAM system can’t spend too long in massaging the data, or it will be out of date, like the BI systems that provide trend data to strategists, and only useful for future planning. A good BAM system should provide the administrator with enough information to enable good decisions, and it should provide the information in something close to real time so that decisions, when needed, can be taken in time to actually affect the ongoing performance of the process flow.

Any effective BAM system requires a collection of modules. (See Figure 3.) There are different ways the various modules can perform their functions, but all of the functions must be present if the BAM system is to perform as its strongest advocates suggest it will. The first set of modules must convert data about actual events into digital information. In most cases this can be done by simply monitoring databases and transaction events that occur as software is used to automate a process. Thus, the same data the administration clerk enters into the computer, as he signs in a new patient, can feed a monitoring system that keeps track of the rate of patients entering the hospital.

[Figure 3. Modules required for a serious BAM system. Level 1: digitalized information available (software components and databases that automate portions of the hospital admission system, drawing on the actual people and process activities being executed in real time). Level 2: a process model of hospital admission, to provide context. Level 3: a decision support program, some program for analyzing the data on changes and then determining appropriate actions the manager might take. Level 4: a business dashboard, a user-friendly interface that presents information on the process in close to real time, provides alerts, and recommends actions when necessary.]
The second level or set of modules required for BAM must provide some context for the digital data being accumulated. The BAM system may depend on an explicit model of the actual business process, as illustrated in Figure 3,
or it may simply depend on a series of equations that establish relationships between data sources. One way or another, however, the system must be able to organize the data to reflect the process it is monitoring. The analysis of the relevance of the data, the generation of information about trends, and intelligent action suggestions all depend on an analysis of the process and the relationships between process elements.

Using its understanding of the process, the BAM system must apply some kind of logic to the data to identify problems, diagnose them, and recommend managerial actions. For example, the BAM system might apply a set of business rules. One rule might state that whenever patient admissions increase by more than 10% of the expected rate for a given period, a signal should be set. Another rule might state that whenever a signal is set as a result of a 10% increase, the patients’ ward assignments should be analyzed to determine if the increase in patients was random, or if there was a significant increase of patients for a particular ward. Still another rule might say that whenever patients were assigned to a given ward in excess of some historical number, the rule should post a suggestion on the administrator’s BAM monitor that specific changes be made in the staffing of the ward. Rule-based systems can be used to accomplish a number of different tasks. For our purposes here, they simply provide an example of one way the process data can be analyzed so as to generate action recommendations.

Finally, any BAM system needs some way of presenting the information to an administrator. Most vendors speak of these monitor displays as “dashboards.” The term is meant to reflect the fact that the displays often have dials, gauges or other graphic devices that alert senior managers to changing conditions. Equally important, however, is a context for the information presented.
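The chain of admissions rules described above can be sketched as follows. This is a toy illustration with invented ward names, counts, and thresholds, not a real BAM product.

```python
def admissions_alerts(admissions_by_ward, expected_by_ward, historical_max):
    """Apply a small chain of BAM-style business rules.

    Rule 1: if total admissions exceed the expected rate by more than
    10%, set a signal. Rule 2: once the signal is set, drill down into
    ward assignments. Rule 3: for any ward above its historical maximum,
    post a staffing suggestion for the administrator's dashboard.
    """
    alerts = []
    total = sum(admissions_by_ward.values())
    expected = sum(expected_by_ward.values())
    if total > expected * 1.10:                         # rule 1
        alerts.append("admissions more than 10% above expected rate")
        for ward, count in admissions_by_ward.items():  # rule 2
            if count > historical_max.get(ward, float("inf")):
                # rule 3: recommend a staffing change for the hot ward
                alerts.append(f"review staffing for ward '{ward}'")
    return alerts

print(admissions_alerts(
    {"general": 40, "respiratory": 25},   # today's admissions
    {"general": 38, "respiratory": 15},   # expected rate
    {"general": 45, "respiratory": 20},   # historical maxima
))
```

With these invented figures, the overall rise trips rule 1, and the drill-down finds that the increase is concentrated in one ward rather than random.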
If the manager is simply monitoring the admissions process, then several gauges might be adequate to let the manager know what is happening. If the manager is managing the entire customer lifecycle process, however, the manager is probably going to need some general picture of the process as a whole to pinpoint the problem area. For example, admissions might be normal, but discharges might drop, creating a shortage of beds for the new patients. Or, abnormal delays in obtaining lab test results may delay procedures that result in longer patient stays. So, some kind of graphic should probably help the manager pinpoint the area of concern. Then, within each area, data ought to be summarized. Finally, if alerts are displayed, information about the nature of the problem and possible corrective actions should also be presented.

3. SUPPORTING MANAGERIAL DECISION MAKING
At this point, let’s focus on what we termed Level 3 in Figure 3, the specific approaches that are available to analyze data and generate actionable recommendations to process managers. There are, in essence, three more or less independent techniques that a BAM system might use to analyze data in order to make recommendations to a manager: Rule-Based Systems, Business Intelligence-Based Systems or Simulation Systems. (See Figure 4.)
[Figure 4. Sources and nature of decision analysis. Data about current activities, and data about historical activities, feed three kinds of analysis whose findings are presented to the administrator: current data analyzed with business rules to identify actions; current data compared with historical data by a BI engine to identify patterns; and current data analyzed by a simulation, whose model uses current and historical data, to identify future consequences.]
Rule-Based Systems
The most straightforward approach, as we suggested earlier, is to use a set of rules to analyze existing data. Whenever the current data triggers a rule, it fires, either generating a recommendation or triggering still other rules that ultimately lead to a recommendation. There are different types of rule-based systems. The simplest are rule systems that are embedded in database management programs. The more complex rely on an inference engine that processes rules held in a rule repository.

BI-Based Systems
An alternative approach is to rely on historical data and Business Intelligence (BI) techniques. In this case, the BAM system might compare current data to historical data in an effort to identify a pattern. Such an approach might, for example, identify a slight increase in the use of certain medicines, correlate that with the season and a slight rise in admissions, and detect the onset of a flu epidemic before the doctors recognize that they are facing an epidemic. Using the same approach, the system might suggest to the administrator shifts in staffing and drug orders to bring today’s activities in line with the staffing and drug order patterns that were relied upon during the last three years’ epidemics.

BI is an umbrella term used to refer to a broad collection of software tools and analytic techniques that can be used to analyze large quantities of data. The data used by BI systems is usually stored in a data warehouse. A data warehouse consists of the data storage and accompanying data integration architecture designed specifically to support data analysis for BI. The data
BUSINESS ACTIVITY MONITORING AND SIMULATION
warehouse integrates operational data from various parts of the organization. Unlike operational databases, which typically include only current data, a data warehouse incorporates historical information, enabling analysis of business performance over time. Data warehousing is considered an essential, enabling component of most BI and analytic applications. BI usually relies on pattern-matching algorithms derived from Artificial Intelligence (AI) research or, in some cases, on specially designed rule-based systems.

Simulation-Based Systems
Simulation systems rely on a process model and a set of assumptions about how work flows through the process. The assumptions are often based on knowledge of historical flows, but can be combined with current data. In essence, a simulation system projects future states of affairs. Thus, a trend that may be too small to attract attention today may, if unchecked, result in major problems in a few days or months. A simulation system can repeatedly run simulations with the latest data and alert managers to potential problems before they occur. It can also be used to determine how proposed changes will affect the process in the future.

Mixed Systems
In many cases, BAM products will combine various analytic techniques. Thus, simulation systems could also employ rules to facilitate certain kinds of analysis. Similarly, rule systems might also employ BI and other techniques. Table 1 provides a summary of some of the advantages and disadvantages associated with each analytic approach.

Rule-Based Systems That Analyze Current Data
Advantages:
- Can be provided for very specific tasks and can operate independently of other systems.
- Can be easily defined and tested.
- A well-understood approach.
Disadvantages:
- Can become complex if the range of variables is extensive.
- Can become complex to test and maintain over time as the business changes.
- Difficult to graphically validate relationships to processes.

BI Systems That Use Historical Data
Advantages:
- Have powerful algorithms for analyzing trends and patterns.
- Can pull together data from EAI and ERP systems and from best-of-breed applications.
Disadvantages:
- Work best when used in conjunction with large amounts of data.
- Must create a data warehouse as a precondition to using BAM capabilities.
- Aren't designed to use a process context for reporting.
- Have not been designed to operate in real time.

Simulation Systems That Project Consequences
Advantages:
- Provide the capability to model highly complex and dynamic processes.
- Provide better insight into the future predicted state of the business, based on validated process flows.
- Can provide unique insight into longer-range situations.
- Notice gradual changes not so readily identified by other techniques.
- Can take advantage of both BI and rule-based approaches to provide even better simulation results at run time.
Disadvantages:
- Simulation technology is not so well understood at the user level.
- Development of complex simulations requires specialized knowledge.
- Requires an initial analysis and specification of a business process model.

Table 1. Comparison of Analysis Methods
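The rule-based approach summarized in Table 1 can be sketched in a few lines of code. The following is a minimal illustration only: the facts, thresholds, and recommendation text are hypothetical, not taken from any product described in this paper.

```python
# A minimal forward-chaining rule sketch: each rule tests the current data
# and, when it fires, either asserts a derived fact (which may trigger
# other rules) or appends a recommendation. All names and thresholds here
# are illustrative assumptions.

def run_rules(data):
    facts = dict(data)
    recommendations = []

    def rule_flu_signal(f):
        # A data-level rule: derives a new fact from current observations.
        if f.get("flu_med_orders", 0) > 50 and f.get("season") == "winter":
            f["possible_flu_outbreak"] = True

    def rule_staffing(f):
        # A rule triggered by the derived fact; idempotent so chaining halts.
        rec = "Increase nursing staff for next week"
        if f.get("possible_flu_outbreak") and rec not in recommendations:
            recommendations.append(rec)

    rules = [rule_flu_signal, rule_staffing]

    # Keep cycling until no rule changes the fact base or recommendations
    # (simple forward chaining).
    changed = True
    while changed:
        before = (dict(facts), list(recommendations))
        for rule in rules:
            rule(facts)
        changed = (dict(facts), list(recommendations)) != before

    return recommendations
```

A production inference engine would, of course, hold the rules in a repository and match them far more efficiently; this sketch only shows the fire-and-chain idea.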
Obviously there is no one best analytic technique for all situations. As a generalization, rule-based techniques are best for narrowly focused, specific analysis. BI techniques are best when there is a lot of historical data and you are reasonably sure that future situations will resemble those that occurred in the past. Simulation techniques are best for more complex and changing situations.

Early Examples of BAM Offerings
Consider some early examples of vendors who are creating BAM solutions. The ERP vendors offer application modules designed to store information in a common database. Most integrate the flow of information between modules with a workflow system and rules. Depending on their existing technology, most have begun to create interfaces for business managers that are driven by information from the database. They have difficulty including data about applications other than their own. A good example of this approach is SAP's BAM offering. Similarly, workflow vendors are well positioned to enter the BAM market. Workflow tools depend on an initial analysis of the flow of material and information between activities. They usually supplement their models of a process with rules to make it easy to control and alter the flow. Although most workflow systems are small in scope, some cover entire business processes, and some are capable of interacting with non-automated processes by providing workers with tasks and recording when they indicate they have completed them. For a good example of a BAM solution offered by a workflow vendor, see SeeBeyond's BAM offering. EAI vendors offer systems that integrate a variety of different applications. They also depend on workflow-like systems, supported by rules, to describe how various applications are related and how to manage the flow of data between applications.
EAI vendors have also begun to create BAM interfaces that allow managers to see how applications are functioning. These systems usually have trouble including information about human activities that may play a role in a large-scale process. A good example of this approach is TIBCO's BAM offering. IBM acquired Holosofx, a business process modeling vendor, and has begun to integrate this business modeling tool into its WebSphere middleware environment. Holosofx, at the moment, relies on its strength in monitoring IBM MQSeries workflow data, but it can also monitor other middleware data flows to provide a manager with a management dashboard. Expect to see a variety of BAM offerings from business process modeling tool vendors and from middleware vendors. The problem with most of these approaches is that they have to be hand-tailored for each application. In addition, a number of Data Warehouse/Business Intelligence vendors have begun to offer BAM modules. In this case the vendors are already storing data and already have powerful BI tools to examine the data. What they usually don't have is a process model to provide context for the data, nor are they adept at providing analysis in near-real time. However, most are working on BAM extensions to their suites. A good example of this approach is the Business Objects BAM offering. Finally, a BAM solution is being developed by at least one simulation vendor, and that is the focus of this white paper. Like workflow, EAI, and business process modeling tools, simulation tools already rely on the creation of models of processes. Simulation tools are especially flexible in their modeling capabilities, since they are often used to model large-scale processes. Simulation tools normally rely on rules that incorporate statistical assumptions and on historical data to execute their models and generate data on possible future scenarios. Most of the current products rely on rules to offer limited decision support. The two exceptions are the Data Warehouse/BI tools that rely on their BI algorithms to identify historic patterns and the simulation tools that can use current and historical data to run scenarios and project future states based on current trends. Obviously, many combinations are also possible. Many business modeling tools also support limited simulation, and, increasingly, most of these tools can be integrated with offerings from more powerful rule-based tools like those from Pegasystems and Fair Isaac. There are no mature BAM tools. All of the offerings, to date, are early products that have been assembled from the features of each vendor's current product. As the market grows and matures, more comprehensive and specialized BAM offerings will appear.

4. Simulation
At this point, since our primary focus is on the use of simulation to support BAM, let's consider simulation in more detail. Simulation is the use of computing to mimic the behavior of a real-world system or process. Simulations are represented and executed from models that are abstractions of those real-world systems or processes. Simulation modeling is a broad topic that in its entirety is beyond the scope of this paper. We will focus on discrete-event simulation models, since they are more likely to add value to business process analysis, business process management, business monitoring, and decision support. Discrete-event simulation models are based on events that occur within, and are acted upon in, a business process. By using random occurrences of those events, the simulation can mimic the dynamic behavior of the business.
There are commonly two types of implementations of discrete-event simulation models:
• Probabilistic—the use of probability distribution functions to represent a stochastic process (this type of implementation is also commonly known as Monte Carlo)
• Deterministic—the same events input into the simulation will produce the same set of results every time
This section will focus primarily on probabilistic discrete-event simulation models. These models are used to define the business work steps and, specifically, the entities that flow through the business, as well as the resources required to perform each work step. Once the process modeler has created the basic process diagram, a simulation expert must enter information about the flow of events. The timing and occurrence of the events are based on probability distribution functions, which reproduce the behavioral dynamics of a real-world business process. The developer of the simulation must choose probability functions that reflect the behavior of a given process. The process model and the information about the events are entered into a software program, which can then "execute" the simulation. By entering initial data and executing the system, the modeler can determine future states of the process. As the simulation executes, the events are generated, the entities flow through the process, the delays are sampled, and the resources are
used—all using probability distributions to produce the real-world randomness of the business process. Consider a simulation of our hospital patient lifecycle process. We have a probabilistic function that determines how many patients who enter Admissions are routed to the maternity ward. This function is based on historical data. If we indicate that 100 patients enter on Monday, our system will automatically assign a portion of them to the maternity ward. If we indicate that 1,000 patients enter on Tuesday, the same formula will assign a proportionally larger number to the maternity ward. As the number of patients entering the maternity ward increases, resources, ranging from beds and rooms to doctors and nurses, must be increased. By running different simulations we can determine just which resource would run out first and develop a plan to deal with the constraint if we expect that we might one day have that number of maternity cases.

A Detailed Simulation Study
Now consider a more detailed simulation. In this case, consider what happens to patients entering the emergency room who need operations. This will illustrate how a discrete-event simulation model can successfully support business process analysis. The purpose of this model was to consider the potential business impacts of opening a new emergency room facility that increased the number of treatment rooms by fifty percent. As with any good simulation model, the primary goals of the simulation were clearly stated. They were:
• Examine the resource impacts (resource utilization, cost, etc.) of opening the new facility. Resources include physicians, nurses, support staff, supplies, and facilities.
• Examine the impacts on patient treatment cycle time. This particular hospital maintains an average treatment cycle time of 2.1 hours per patient. The new facility should seek to improve the cycle time at best and maintain it at worst.
• Examine the cost of specific activities in the operation (activity-based cost).
You will notice that the goals of the simulation are kept to a few key business metrics. This is one of the most important things to remember when applying simulation technology—don't try to model the whole world all at once. Models should be built incrementally, or evolved to solve increasingly complex problems over time. The measurements need to have a business focus, not a focus on technical problems that management is not immediately concerned with. The model is made up of three basic ingredients: entities, activities or work steps, and resources. The primary entities in the model are the types of patients that may arrive randomly at the door of the emergency room. The model makes assumptions about the number of individuals who will show up within any given period, and what types of problems they will have. These probabilistic assumptions are based on historical data. Some types of patients arrive more often, on average, than others. Some will have higher priority than others, based on severity of the illness or injury. The work steps include the primary activities performed, ranging from triage to initial evaluation, to treatment, to out-processing. The amount of time to perform each work step depends on the type of problem, and this, again, is based on historical data and is represented as a probabilistic function that introduces a realistic variation into each patient's treatment. For example, the amount
of time to perform the triage may be a random sample from an Exponential, Poisson, or Normal distribution; a Normal distribution, for example, would have a mean value and a standard deviation to represent, statistically, what occurs in the real world. The resources required to perform each work step are assigned and include such attributes as the number available, cost, and planned downtime. The number of each resource type may also be selected from a probability distribution in cases where a business has variations in the number of resources available. In other words, the model will consider things like employees taking leave or breakdowns in machinery. Figure 5 illustrates a screen shot of the emergency room process model.
Figure 5—SIMPROCESS® Screen Showing Top Level Process Flow Model
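As a rough illustration of the discrete-event machinery behind a model like the one in Figure 5, the toy loop below pushes patients through a single treatment activity: arrivals and service durations are both sampled from Exponential distributions, and a patient queues whenever the room is busy. All parameter values and the single-room layout are our own simplifying assumptions, not figures from the actual hospital model.

```python
import random

def simulate_er(n_patients, mean_interarrival_hrs, mean_treat_hrs, seed=1):
    """Toy single-room discrete-event simulation sketch.

    Patients arrive with Exponential inter-arrival times; each receives a
    treatment whose duration is also Exponentially distributed. Returns
    each patient's cycle time (arrival to departure), i.e. wait + treatment.
    """
    rng = random.Random(seed)

    # Generate the arrival times up front (cumulative inter-arrival draws).
    arrivals, t = [], 0.0
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival_hrs)
        arrivals.append(t)

    # Single treatment room, first-come-first-served: the next patient
    # starts when they arrive or when the room frees up, whichever is later.
    cycle_times, room_free_at = [], 0.0
    for arrive in arrivals:
        start = max(arrive, room_free_at)          # queue if room is busy
        service = rng.expovariate(1.0 / mean_treat_hrs)
        room_free_at = start + service
        cycle_times.append(room_free_at - arrive)  # wait + treatment time
    return cycle_times
```

Running many replications with different seeds and averaging the results is what turns a single random walk like this into the statistically meaningful output a tool such as SIMPROCESS® reports.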
Notice that the model provides a concise, high-level view of the process. Each of the major sub-processes shown in Figure 5 has been defined in more detailed process models. This is another important aspect of simulation models. Where possible, the overall model should be kept as general as possible to ensure it can be easily understood and that it will generate answers to the important business questions. Models with too much detail actually make it harder to identify and answer important questions. At any point in the analysis, if more granularity is needed, you can always drill down to the lower processes and model their sub-processes and activities to get the results needed. This technique allows you to have some processes modeled in detail while others are simply pass-through boxes. It lets you see the whole process at the higher levels and drill down in later spirals of analysis, if needed, without having to change the high-level model. The patients (the key entities) in the model are categorized into three types:
• Level 1—the most critical or severe, needing immediate attention
• Level 2—critical patients needing attention but probably not in a life-threatening condition
• Level 3—patients who do not need immediate attention
The three types of patients arrive at the door at varying rates. Figures 6 through 8 below show pop-up windows that SIMPROCESS® presents to analysts who are entering the information needed to set up a simulation. The
three windows require the entry of information about the activity that occurs at the emergency room entry door. In this case, the analyst has decided to enter three different rates to reflect the patient flow historically experienced by the hospital. Figures 6 and 7 focus on the arrival rates for Level 1 patients.
Figure 6—SIMPROCESS® Screen Showing Entity Definition
In Figure 6 we define the three types of patient entities that will flow through our business model. Since the patient types (Levels 1, 2, and 3) arrive at the hospital at varying rates and in varying quantities over time, we must define separate inter-arrival schedules for them in SIMPROCESS®. In Figure 6 we have set up three schedules and named them Level 1, 2, and 3 to correspond to the patient entity types. By choosing the Level 1 schedule in the Figure 6 dialog and choosing the Edit function, we can describe the inter-arrival schedule for the Level 1 patients, as depicted in Figure 7.
Figure 7—SIMPROCESS® Screen Showing Entity Schedules
The dialog presented in Figure 7 allows us to create schedules within schedules. That is, the inter-arrival rate of Level 1 patients varies depending on the time of day. In this model, we have defined three inter-arrival schedules for
the patients: Day Shift, Evening Shift, and Morning Shift. By selecting a shift in Figure 7 and selecting the Edit function, we can define how the Level 1 patients will arrive during each daily period, or shift. This concept can be extended, as necessary, to vary the schedules for weekends, time of month, or season. This is an important and powerful capability when using simulation models, since it mimics how the business really encounters patients statistically during different periods of time. Once a shift is selected from the choices in Figure 7, the dialog box shown in Figure 8 is presented. This dialog box allows the user to define the types of patients (entities) that could appear during that shift. Patient quantity and rate of arrival are defined as well. Since the hospital manages by shifts and keeps its metrics by shift, it only makes sense to have the simulation model be consistent with the shift schedules.
Figure 8—SIMPROCESS® Screen Showing Patient Arrival Rates
The inter-arrival rate for Level 1 patients averages 0.109375 patients per hour. This is based on past history at the hospital: an average of 111 total patients per day, with Level 1 patients making up 2.775 of those on average. For the day shift, that amounts to 0.875 patients over an 8-hour period, resulting in the 0.109375-per-hour average. You will notice the use of a probability distribution function (Poisson) to provide a representative statistical curve of how the patients arrive. If we just used the average, our model would not be probabilistic and would not provide a true representation of how things really occur. If events occurred in the real world at steady states, simulation analysis would not be needed. However, that is not the case. Not only do patients arrive at random; resources fail (an x-ray machine breaks), go down (a person takes leave or is sick), are occupied or busy (a physician takes varying times to diagnose and treat a patient), or are depleted (for consumable resources, e.g., oxygen canisters) at random rates. Likewise, the time it takes to perform work steps varies and is not accurately represented by a steady-state average. Once the entity (patient) arrival rates have been defined and the process flow diagram (see Figure 5) has been developed, the timing and resources on the work steps must be set to account for the time delay that is required and exactly what resources are assigned or consumed to perform the work steps.
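The shift-based Poisson arrival logic described above can be approximated in a few lines. The day-shift expectation of 0.875 Level 1 patients per 8-hour shift comes from the text; the evening and morning values are made-up placeholders, and the Poisson sampler is Knuth's classic method since the Python standard library does not provide one.

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson-distributed count with mean lam (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# Expected Level 1 arrivals per 8-hour shift. The Day value (0.875) is from
# the text; Evening and Morning are hypothetical illustrations.
SHIFT_RATES = {"Day": 0.875, "Evening": 0.9, "Morning": 0.6}

def sample_daily_level1_arrivals(rng):
    """Draw a random Poisson count of Level 1 arrivals for each shift."""
    return {shift: poisson(rng, lam) for shift, lam in SHIFT_RATES.items()}
```

Averaged over many simulated days, the day-shift counts converge to the 0.875 expectation while any single day may see zero, one, or several Level 1 arrivals—exactly the randomness the text argues a steady-state average cannot capture.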
Figure 9 shows a pop-up window that an analyst could use to define the delay time and the resources required to perform one of the treatment steps (Perform Treatment) in this model.
Figure 9—SIMPROCESS® Screen Showing Activity Delay Time
In this model, we have defined the time to perform the treatment work step as a probability distribution function. We have used the Normal distribution function with an average of one hour and a standard deviation of 0.25 hours. When the simulation is run, the entities (patients) will flow through the system, and when they reach this work step, the time allotted to perform the task will be randomly sampled from the Normal probability distribution function. This creates the randomness in the time it takes to do the task, just as the probability distribution function in Figure 8 defined the inter-arrival rates of patients.
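Sampling the treatment delay from the Normal(mean = 1 hour, s.d. = 0.25 hours) distribution described above can be sketched as follows. The clamp to a small positive floor is our own guard, not part of the model in the text: a Normal sample can in principle go negative, which would be an impossible delay.

```python
import random

def sample_treatment_hours(rng, mean=1.0, sd=0.25, floor=0.05):
    """Draw a treatment duration in hours from a Normal distribution,
    clamped to a small positive floor so a rare negative sample cannot
    produce an impossible (negative) delay."""
    return max(floor, rng.gauss(mean, sd))
```

With a standard deviation of 0.25 hours, roughly 95 percent of sampled treatments fall between half an hour and an hour and a half—the realistic per-patient variation the text describes.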
Figure 10—SIMPROCESS® Screen Showing Required Resources
Figure 10 shows the resources required to perform the actual treatment. Resources are globally defined in SIMPROCESS® and can be used or assigned to any task in the process model where they may be needed. The number of resources available, as well as the number required to perform a
certain work step, can also be randomly sampled, as described earlier, using probability distribution functions. You will notice that in this model the number of nurses required is sampled using a probability distribution function to simulate how the real-world process works, since the number of resources required is not always the same, depending on the type of treatment. Notice that the physician resource is a shared resource, since physicians bounce between five or six patients on average. This model samples a Poisson distribution with a mean of 20 percent of a physician's time being consumed by a single patient. Additionally, resources can be required based on different scenarios. In Figure 10, we have specified that all the resource requirements (physicians and nurses in this case) are required at the same time to perform the work step. This means the entity (patient) will wait (this is where queuing and bottlenecks are discovered) at this work step until both resource types are available simultaneously before the task can start. In some cases, we may list substitutable resources and set the requirements to "Any One Member" to specify that the work step can start as soon as any one of the resource types is available to do the job. The use of the "Reserve as Available" option in Figure 10 allows us to lock one of the resources as soon as it becomes available and wait until the others are available, locking each until all resource requirements are met. As can be seen from the use of probability distribution functions for all the key components of a SIMPROCESS® model (entities, activities, and resources), we can simulate even the most complex dynamic behavior of any business process. Recall the goals of the simulation model. We used the model to run "what if" scenarios that would be influenced by the new emergency room facility and its 50 percent increase in treatment rooms.
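The distinction between requiring all resource types simultaneously and the "Any One Member" substitution rule amounts to a simple availability test. The sketch below is a generic illustration of that logic; the resource names and counts are hypothetical, and real tools such as SIMPROCESS® layer reservation and queuing behavior on top of checks like this.

```python
def can_start(available, required, mode="all"):
    """Decide whether a work step can begin.

    available -- current resource pool, e.g. {"physician": 1, "nurse": 0}
    required  -- units needed per resource type for this work step
    mode      -- "all": every required type must be free simultaneously
                 (the patient waits until both physicians and nurses are
                 available at once);
                 "any": any one required type suffices (substitutable
                 resources, i.e. "Any One Member").
    """
    checks = [available.get(res, 0) >= n for res, n in required.items()]
    return all(checks) if mode == "all" else any(checks)
```

Under `mode="all"`, a patient queues at the step whenever either resource type is exhausted—this is exactly where the simulation surfaces bottlenecks.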
This model looks rather simple on the surface, but it is a very powerful tool for doing "what if" analysis and determining whether the business metrics and goals can be met. Figure 11 is a plot of the treatment cycle time (one of the hospital's most important business performance metrics), with the old facility plotted on the left and the new one (50 percent additional rooms) plotted on the right. The plots in Figure 11 are clipped to the first 100 hours of simulation time.
Figure 11—Two SIMPROCESS® Screens Showing Patient Treatment Cycle Time: Before and After
Multiple simulations of "what if" scenarios were used to ensure the hospital could achieve its treatment cycle time business goals. For example, if the number of physicians, nurses, and lab technicians remained the same, the additional rooms would cause the treatment cycle time to go above three hours. That is due to patients getting into rooms without enough resources to treat them, and hence waiting in the room for additional time (treatment time is counted from the time the patient gets to a room until the time they leave the treatment room). The simulations quickly uncovered these types of problems and allowed the hospital to play additional "what if" games to find the right balance of resources and costs to make optimal use of the new treatment facilities and resources. The objective of all this is to avoid embarrassing impacts on the treatment of patients (for this particular hospital, cycle time is one of its marketing nuggets) or, worse, poor utilization of a new facility from an operating-cost standpoint. As can be seen from the "after" plot in Figure 11, the treatment cycle time actually improves when adjustments are made to the physician and nurse resources. In the old facility, adding resources would not have improved treatment time, since the bottleneck was the number of room resources.

5. Using Simulation for BAM
Section 4 focused on how simulation models are used in traditional business process analysis (BPA). Models are developed and validated for an existing business process (an As-Is model). Then various changes (What-If models) are imagined and tested, via simulation, to see if they would improve the efficiency of the process. In the simulation process, bottlenecks and specific inefficiencies are identified and eliminated. Although this use of simulation is very common and valuable, most organizations use the simulation models developed in this manner during a limited improvement project and then set them aside as the new process is implemented. It is possible, however, to use simulation to support BAM systems.
In essence, the new process model is maintained in the simulation environment, and new simulations, using the latest data, are run periodically. Triggers or rules are used to identify problems. The current level of admissions at our hospital, for example, may be slightly higher than it was a week ago, but not high enough to trigger an alert. On the other hand, if a simulation is run using the past month's data, it may be determined that an underlying trend is present that will result in unacceptable rates of admissions in three weeks. Figure 12 suggests how a simulation environment might be linked to software applications and databases to allow the simulation system to be run with real-time data. In effect, data generated by the actual process would be used to run simulations. The simulation system would have alerts and rules to identify problems and suggest alternatives. In some cases, the simulation system might include alternative activities. If the system determined that the normal process would generate problems, it might try pre-packaged alternative approaches to a specific activity to see if a particular set of changes would result in an acceptable projection. This is one way in which a simulation might be able to combine alerting a manager about problems with suggested remedies.
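The trigger logic just described—run a projection with the latest data and alert only when the projected future state crosses a threshold, even though today's figure looks fine—can be sketched as follows. The linear-trend projection is a deliberately simplistic stand-in for a full simulation run, and the threshold and day counts are hypothetical.

```python
def project_admissions(history, days_ahead):
    """Project future daily admissions from recent history using a naive
    linear trend (a stand-in for a full simulation run)."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * days_ahead

def check_alerts(history, threshold, days_ahead=21):
    """Return an alert message if the PROJECTED admissions breach the
    threshold, even when today's figure is still acceptable."""
    projected = project_admissions(history, days_ahead)
    if projected > threshold:
        return ("ALERT: admissions projected to reach %.0f in %d days "
                "(threshold %d)" % (projected, days_ahead, threshold))
    return None
```

This is the essential difference from a purely rule-based BAM check: the rule fires on a simulated future value rather than on the current reading.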
Figure 12. A BAM System Based on a Simulation System. (The diagram shows a hospital administrator using a business dashboard: a user-friendly interface that presents information on the process in close to real time, provides alerts, and recommends actions when necessary. The dashboard is driven by a simulation program that is part of the modeling program and makes decisions by generating projections to see if any triggers or rules fire; a process model of a hospital admission provides context, and alternative approaches can be tried. Digitalized information flows from the software components and databases that automate portions of the hospital admission system, which in turn reflect the actual people and process activities being executed in real time.)
The approach described in Figure 12 goes beyond what is available in today's simulation products. In effect, it changes simulation from something done to test alternatives into a way of dynamically determining what will happen in the future if the current state of the process is allowed to continue without change. Obviously, this diagram greatly simplifies what is involved in using simulation for BAM. The developers need, for example, to identify the events or data items that will be monitored. Similarly, they need to insert triggers or create rules to determine when managers should be alerted. And they need to determine how frequently the simulation should be run and how far ahead it should project. These are all decisions that will need to be made in the context of a specific company process. These decisions are not unique to the use of simulation. The same events, triggers, and information monitoring will need to be defined for any BAM solution. Simulation merely provides an additional dimension to already useful BAM solutions. Rule-based systems, by themselves, only look at the present for problems. BI systems use historical data to look for current patterns that might suggest problems, but can only identify problems that have occurred before. Simulation systems can combine the best of both and add the ability to look for future states that suggest problems, and then dynamically try alternative assumptions to identify changes the manager could make today to avoid the undesired future state. Companies that have already used simulation and have been happy with the results find the possibility of reusing their simulation investment to create powerful BAM systems exciting.
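The design decisions just listed—which events to monitor, when to alert, how often to run the simulation, and how far ahead to project—lend themselves naturally to configuration. A minimal sketch follows; every field name and value here is an illustrative assumption, not the schema of any real product.

```python
from dataclasses import dataclass

@dataclass
class BamSimulationConfig:
    """Configuration for a simulation-driven BAM monitor (illustrative)."""
    monitored_events: list   # events/data items fed into the model
    run_every_hours: int     # how frequently to re-run the simulation
    horizon_days: int        # how far ahead each run projects
    alert_thresholds: dict   # metric name -> value that triggers an alert

# A hypothetical configuration for the hospital example.
config = BamSimulationConfig(
    monitored_events=["patient_sign_in", "resource_clock_in",
                      "treatment_complete"],
    run_every_hours=4,
    horizon_days=14,
    alert_thresholds={"avg_cycle_time_hrs": 2.1, "room_utilization": 0.9},
)
```

Separating these choices into configuration mirrors the paper's point that they must be revisited per company process rather than hard-coded into the tool.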
CACI, for example, has used simulation in the loop for real-time decision making for customers such as the Department of Defense. DOD training systems are a good example of using simulations along with feeds from real-time operational systems to create scenarios for training purposes. The result is a hybrid tool that operates partly on validated probabilistic models and partly on real-world events in real time. This concept, when extended to more mainstream solutions such as BAM, opens up some interesting ways of significantly improving the benefits of BAM (especially if process modeling investments have already been made with BPA models).

6. A Case Study
If we consider the hospital example described earlier in this paper, we can get an idea of how this capability could be put to practical use. The process simulation models were already built as part of a business process improvement project and were extended to serve as a key ingredient in a simulation-based BAM solution. Let's imagine a scenario in which this particular hospital—which has operational information systems that capture the key events in the business activities, such as patient sign-in, initial triage data capture, resource clock-in/out, and post-treatment data capture—can actually provide data to the simulation models in real time. The data feeds can be done at periodic intervals to get a "look ahead" at the impacts on patient treatment cycle time based on the current resources, patients received, expected future patient arrivals (see Section 4 for the simulation of patient arrivals based on empirical data), and the validated standard processes documented in the simulation modeling tool. Remember, the patient inter-arrival rates were based on probability distributions, as were the activity delays and resource assignments.
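The hybrid "pre-load" idea this case study builds toward—seed the model with the actual current state from the operational systems, then simulate expected events forward—can be sketched as follows. The state fields and the crude queue arithmetic are illustrative assumptions, not the hospital's real model.

```python
import random

def project_avg_wait(current_waiting, rooms, mean_treat_hrs,
                     arrivals_per_hr, horizon_hrs, seed=3):
    """Hybrid projection: start from the REAL current backlog, then roll
    the queue forward hour by hour with randomized expected arrivals.

    Returns the average queue length over the horizon, a rough proxy for
    the waiting component of treatment cycle time. Deliberately crude:
    each hour, randomized arrivals join the queue and each room clears
    1/mean_treat_hrs patients on average."""
    rng = random.Random(seed)
    queue = float(current_waiting)        # pre-load: actual patients waiting
    capacity = rooms / mean_treat_hrs     # patients treatable per hour
    lengths = []
    for _ in range(horizon_hrs):
        # Randomized arrivals around the historical hourly rate.
        arrivals = max(0.0, rng.gauss(arrivals_per_hr, arrivals_per_hr * 0.3))
        queue = max(0.0, queue + arrivals - capacity)
        lengths.append(queue)
    return sum(lengths) / len(lengths)
```

Because the projection starts from the real backlog rather than a long-run average, an unexpected spike that has already happened shows up immediately in the look-ahead, which is precisely the advantage argued for below.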
These probability distributions provide a powerful tool for mimicking real-world dynamics; however, if an unexpected peak occurs in any of the variables, the probability distribution function chosen may not have predicted a worst-case situation. These spikes may be statistically insignificant over a long period of time, but they could skew the performance of the business for several days or weeks, impacting the business goals as well as the P&L. When using the actual real-world data feed along with the probability distributions for future expected events, we get a hybrid model that is based on both real and simulated data. This is technically a pre-load of the model with current data, while the simulation uses expected events as it simulates into the future—for example, two weeks, one month, or six months. The result is the capability to alert management with data to make decisions (such as calling in temporary resources or changing to a crisis process) based on the simulated future. One might ask what the benefits of this would be over traditional BAM-type dashboards. The difference is that the current data and rules associated with alerting management in the traditional BAM solution may see only gradual changes in the variables of business performance and may not have enough insight into the long-range impact to know to alert management. With simulation-based BAM, we add another dimension to the BAM solution. We can look several weeks or months into the future and predict what impacts may be experienced in overall key business performance measurements while there is still time to affect those impacts. The hospital example we used in this paper was based on the customer's very important business metric of average patient treatment cycle time. If certain run-time
events drive the average up, due to unforeseen peaks, the hospital loses its ability to use its performance metric as a marketing tool. There are many other metrics that could be looked at, such as efficient resource utilization. In many cases, staffs are overworked and exhausted before management is aware of it, and exhausted staffs can create even more business performance problems, such as reduced quality of service. Another situation that could arise in the hospital model is a flu outbreak starting an upward spike of Level 1 and Level 2 patient types that are normally not expected. Since Level 1 treatments can disrupt the primary process flow due to the priority of patient care, and Level 2 treatment takes longer on average to administer, the BPA model would not have uncovered those scenarios unless an explicit "what if" experiment had been run. With the simulation-based BAM solution, the actual data affects the simulation and, in theory, "reprograms" it, giving a better future picture of organizational performance. The proposed simulation-based BAM does not require user intervention as BPA simulation analysis does. Once the model is built, it runs completely in the background and presents management dashboard data in business-metrics form. However, another consideration is to let the simulation run through multiple "fall back" or "crisis" process alternatives based on the real-time data, and present the user with choices that would then, in turn, be fed back into BPM solutions for temporary process adjustments. Figure 13 is an example of the type of performance metrics that could be placed on a management dashboard. The dashboard in this example shows, in the top half of Figure 13, the real-time data that a traditional BAM would provide. The bottom half is a simulation of five days into the future using the validated business process model.
Figure 13. Management Dashboard
The dashboard information is described as follows:
• The actual real-time reporting (traditional BAM scenario) of business information is depicted in the top half of Figure 13 and includes the following graphical gadgets:
• A meter to the left of the dashboard that depicts the running average of patients per day for the past month. The colors on the rim of the meter
are used to help the manager quickly see the numbers that represent critical values (i.e., red indicates a critical situation).
• A bar graph that depicts the total patients for the last month for each type of patient.
• A thermometer gadget that depicts the utilization of treatment rooms. The fill in the thermometer changes colors to indicate critical values.
• A trace plot on the far right that depicts the average treatment cycle time of all patients over the last month.
• Two text values that depict the current levels of the most critical resources: nurses and physicians.
• A text field to indicate the date of the dashboard information.
• A text field that indicates the average treatment cycle time up to the last hour of reporting. This is important for seeing the difference between the 30-day average and the last hour or so of activity. This field is used in conjunction with the next field.
• A text field that, used with the previous field, indicates the average treatment cycle time for the previous 24 hours in this example. Obviously, this time span can be set to a wider range based on the business being monitored.
• The projected (simulated) business information is depicted in the bottom half of Figure 13 for five days into the future (Feb 5th in this example) and includes the following graphical gadgets:
• A meter to the left of the dashboard that depicts the simulated average of patients per day up through February 5th.
• A bar graph that depicts the simulated total patients expected based on the past month and the probability distribution function in the model.
• A thermometer gadget that depicts the predicted utilization of treatment rooms.
• A trace plot on the far right that depicts the predicted average treatment cycle time of all patients up through February 5th.
• Two text values that depict the predicted levels of the most critical resources: nurses and physicians.
• A text field to indicate the date of the simulation for the dashboard information (February 5th in this example).
• A text field that indicates the expected average treatment cycle time, combining the last 24 hours of actual data with simulated data up through February 5th.
The data reported in Figure 13 was based on certain events occurring in real time, and as can be seen from comparing the data, the impact is minimal and probably not alarming to management based on the traditional BAM data alone. However, when the simulated data up through February 5th is provided, you can see the urgency of the problem as the key business metrics are affected. For example, the average treatment cycle time goes from a 2.14-hour average over the last month to 2.83 hours when the last 24 hours of real-time data are extended with simulation up through February 5th. Notably, if you focus only on the last 24 hours of actual data (a 2.09-hour average, which is less than the monthly average), you can see how omitting simulated data can delay alerting management to a building problem. The events that caused the situation to become critical are as follows:
• Starting on January 15th, the average arrival rate of patients increased by 6 per hour. This was due to a flu and cold outbreak.
• On January 30th, one of the nurses became ill and had to take emergency leave for up to 10 days.
• On January 31st at 4:00pm, one of the physicians and one additional nurse had to take emergency leave due to exhaustion and illness. Notice that the numbers of nurse and physician resources on the dashboard have decreased from 7 and 3 to 5 and 2, respectively.
• An additional 10 patients arrived on January 30th due to two separate accidents.
• An additional 20 patients arrived on January 31st at 11:00am due to multiple accidents.
Since traditional BAM reporting does not simulate ahead to predict the queuing-theory problems of the growing arrival rates and the drop in resources, as of 5:00pm on January 31st the actual data is not showing any major impact on the business metrics. The simulated BAM data, however, predicts significant problems growing rapidly over the next several days, risking driving the average patient treatment time significantly above the goals set by management. If the problem is allowed to spike the treatment time, it could take a couple of months of improved performance to pull the average back into line with management goals.
Simulation-based BAM solutions are achievable today. CACI's model-view-controller (MVC) architecture separates the simulation from the front-end analysis tool, providing a server-based, GUI-less, and scalable capability. It provides connectors to outside applications through Java-based remote calls and/or XML, which are needed to feed the real-time data to the simulation tool. The example above is easily implemented with any vendor operational application that has simple messaging capability.

7. Conclusion

Gartner suggests that BAM will become a major corporate concern in the next few years.
Most large organizations will at least explore the possibility of improving business process management by creating systems that provide a broad overview of a process and can deliver near-real-time information and advice to process managers. A variety of techniques will be used. Some "BAM" systems will, in fact, monitor subprocesses. Some will use rules to alert managers about specific real-time problems. Some will be based on simulation engines and use models that allow the system to project future events from the current state of the process and then dynamically generate alternative options to identify what changes, taken today, would maintain the process in the most efficient manner over the long term. We believe that simulation-based BAM will prove to be the most powerful and flexible approach to BAM and will increasingly be relied on by those with the more complex processes.

Notes
[1] Gartner Group. Business Activity Monitoring: The Data Perspective. February 20, 2002.
SIMPROCESS® is a registered trademark of CACI. For more information about SIMPROCESS®, see www.simprocess.com. Although not emphasized in this article, SIMPROCESS produces XPDL, the WfMC's XML process language.
Business Process Improvement through Optimization of its Structural Properties

Vladimír Modrák, Technical University of Košice, Slovakia

PROCESS IMPROVEMENT THROUGH PROCESS CHANGES
With the growing requirements for the improvement of business activities within organizations, aspects of change and new concepts of process structures are becoming a topical problem. These aspects are equally important from the standpoint of the objectives of the first of the two phases of workflow management (WfM). Processes of change have been addressed mostly at the level of administrative business processes (BP). This situation partially accords with the logical succession of reengineering framework steps and corresponds with advances in information technology infrastructures. On the other hand, less attention has been devoted to changes at the shop-floor level, even though the redesign of shop-floor infrastructures is equally legitimate. Approaches to business process improvement (BPI) can generally be divided into two categories: improvement of the operational properties of BP and improvement of the structural properties of BP. Both are useful for administrative and manufacturing BP.1 While the first approach is oriented toward the dynamic parameters of BP, the approach based on structural analysis deals with the static properties of BP. This chapter is concerned with the second category of BP properties from the point of view of measurement and benchmarking. Its main aim is to present a practicable approach to the structural complexity measurement of business processes.
MODELING OF BUSINESS PROCESS STRUCTURES
Classification concept of business process structures
One of the important roles of Business Process Reengineering (BPR) should be building a logistical concept of the organization, which should involve coordination and management of all material and information flows. Higher-level modeling of business process structures in particular substantially supports the company's successful running. Such process structure models normally start by establishing a framework for the systematic classification of company processes. A classification framework for the systematic rebuilding of processes can be built from three hierarchical levels, which are (bottom to top) [18]:
• Elementary process (EP), represented by a set of complex tasks consisting of the smallest elements, the activities;
1 Existing distinctions between administrative BP and production BP resulted in the creation of different concepts of administrative and production workflow systems (see, for example, [14], [17]).
• Integrated process (IP), which represents a set of two or more elementary processes with the purpose of creating an autonomic organizational unit at the second hierarchical level;
• Unified enterprise process (UEP), which consists of one or several integrated processes to the extent conditioned by its capability to flexibly and effectively meet customers' requirements.
The application of the classification approach to the modeling of business process structures is further shown by the process mapping technique, which was inspired by other methods for analyzing and designing information systems [1], [2], [8], [15].
Business process mapping
Our process mapping technique is based on process decomposition, which results in a set of business structure models represented by diagrams in the following order: system diagram, context diagram, commodity flow diagrams, and state transition diagrams. An example of the first three simplified diagrams is illustrated in Figure 1.
Figure 1. A fragment of the process map described by the hierarchical diagrams
According to the procedure outlined for redesigning enterprise processes, the first step of this method is the creation of a System Diagram. Its purpose is to separate so-called Unified Enterprise Processes (UEP) from the original arrangement of processes. Subsequently, relations between them and the
environment of the enterprise are specified. The environment is represented in the diagram by External Entities (EE), with which the system communicates; their content is not a subject of analysis in the following steps. They usually represent the initial source of a commodity, or its end consumer. In fact, the System Diagram represents the starting base of process modeling, from which the other diagrams are derived using the principle of process decomposition. Context Diagrams are created for each UEP on the basis of the System Diagram. Individual Context Diagrams express the relations of the given Unified Enterprise Process with its environment. These surrounding elements, irrespective of whether they represent objects outside the enterprise or internal processes, are treated as External Entities. This means that External Entities are considered in the same way as internal processes of the System Diagram. System decomposition at this level emphasizes an equal-customer approach, in which there should be no differences between internal and external customers. The essence of the Commodity Flow Diagrams is the gradual decomposition of UEP, down to the level of so-called elementary or primitive processes. The diagrams at the first stage of decomposition start from Context Diagrams. In the sense of the proposed classification, they describe the mutual links of Integrated Processes (IP). The Commodity Flow Diagrams do not provide any details about the modeled processes; their purpose is to provide a general overview of the sequence of processes, which allows their owners at different levels to see the boundaries of their own as well as subsequent processes. Commodity Flow Diagrams of the second stage are constructed in an analogous way to Commodity Flow Diagrams of the first stage.
It is the last stage of commodity flow diagrams because the elementary processes, which present the objects of modeling, are considered to be the primitive processes. The objective of the State Transition Diagram is to describe the dynamics of elementary processes by modeling the states in which objects can exist and the transition periods between these states. These diagrams also describe the events that initiate transitions between states and the conditions for the realization of these transitions. A comprehensive view of the business process structures in the sense of the approach outlined helps to make the process models transparent and usable, ultimately resulting in workflow models. Without a classification framework for defining the basic types of processes, the overall view of the business process structures could become disorganized.
Structuring of business process models
Models of business process structures are often used in very different ways. Technically, horizontal structuring and vertical structuring are mostly recognized [10]. Horizontal structuring helps to handle each customer engagement with guaranteed service. Vertical structuring serves to distinguish different levels of detail, or abstraction levels. In the first stage of our attempt to formulate and apply metrics for BPI in more detail, the issues of horizontal structuring will be addressed. Subsequently, evaluation of the structural properties of the vertical levels, based on the degree of structure centralization, will be outlined. According to Franken et al. [7], models of business processes can be divided into extensive and intensive model structures. In extensive model structures, the business processes are described as an integrated whole. With such a model, investigation is not concentrated on the internal process structure and its behavior. An extensive
model describes the process from the viewpoint of its environment, which usually consists of external or internal suppliers and customers. From the business structure models outlined by the process mapping technique, two classes of diagrams, the system diagram and the context diagram, belong to this category of business process models. The above-mentioned diagrams are pertinent to the evaluation of structural properties based on vertically structured models. On the other hand, the intensive structure models of business processes are used to describe the interactions of entities inside the part of the system being investigated. Such models describe business processes from the viewpoint of their internal objects, sources and other entities such as staff, technological components, protocols, etc. The remaining business structure models from the process mapping technique, the commodity flow diagrams and state transition diagrams, can be incorporated into this category of business process models. Intensive models of process structures can be additionally divided into:
• Workflow-oriented model structures, which represent the behavior of a business process from the perspective of a single item that passes through the process.
• Functional-oriented model structures, which reveal the behavior of functional units (departments) from the viewpoint of subsequent business functions. These models represent the obsolete functional approach to organization structuring.
The commodity flow diagram, representing the first sub-category of models, will be subjected to the evaluation of structural properties using a set of proposed indicators in subsequent paragraphs.
ATTRIBUTES OF PROCESS STRUCTURES AND THEIR MEASUREMENT
Process structure modeling by commodity flows
A business process can be understood in general as a set of logically related, structured activities that produce a specific service or product for a particular customer or customers. Depending on the medium that is the subject of the operation, it is possible to differentiate between material, information or energy processes. On this basis, it is possible to analyze these processes individually or in a complex way. In the application of reengineering, attention is focused especially on the effectiveness of the key business processes, and only later are processes of lower significance taken into account. Inputs of such processes are usually material and/or information items, simply termed commodities. They are transformed in the framework of processes into new commodities, through which the system gets closer to its goal. With that aspect in mind, the process or its parts can be understood as a sequence of states, from the initial state to the final one, in which the process is completed. Commodity flows are usually represented by material flows, from their sources to the places where they are consumed. Commodity flows are often split up in complicated ways into the places of their transformation, and they are concentrated into the places of their finalization and consumption. The main task of modeling processes as commodity flows is to relate the basic objectives, resources, and limitations of the system. In line with that,
such models should allow for analysis of completeness, absence of conflicts, viability and effectiveness of individual functions. Modeling on this basis can be considered decisive in terms of the aims of BPR and can be useful for various purposes. Some of them are:
• Examination of such models from the point of view of effectiveness and efficiency. Analysis of process models oriented toward model improvement is usually based on process simulation [13].
• Process improvement with the aim of utilizing the results of process model analysis in order to increase process flexibility and/or productivity [19].
• Process enactment. Business process models can be used for driving real processes in a workflow-oriented style [4].
• Process improvement based on optimization of its structural properties.
Overview of the approaches to the measurement of BP structural properties
BP structural analysis and assessment methods have been emphasized from various perspectives by researchers during the last few years. It is an opportune moment to recognize both traditional structural metrics oriented to process efficiency and new kinds of structural metrics, which deal with process effectiveness. Metrics for effectiveness should be primarily focused on structural properties with the ability to influence the outcomes of BP. Such structural properties of general interest can be relatively easily extracted from reengineering principles. They include properties such as simplicity, integrity, flexibility, (de)centralization, viability and others. Aggregated structural metrics of BP consisting of three of these properties, Simplicity, Integration and Flexibility, have been developed by Tjaden [24]. Measuring process simplicity versus process complexity is based on enumeration of activities, material flows and the number of persons performing activities. To quantify the relative degree of flexibility of a process, a set of so-called Flexibility Properties is used. A similar thesis concerning process flexibility was developed earlier by Hayes et al. [9]. Process integrity in this approach has to be measured separately for each material flow first, and subsequently these individual values are combined into an overall value for a set of activities. This aggregated approach in the given segment of attributes contributes to the metrics of BP structural properties. Vanhoucke et al. [26] performed a study on morphological and topological indicators of a network that can be used for testing software packages that generate networks, which could also be applied to business process metrics. The aspects of process complexity and the position of static models of BP structures were analyzed by Fathee et al. [6].
The influence of process complexity on the use of static process models was stressed in their work. Research on implementing and comparing potential complexity measures was presented by Latva-Koivisto [13]. In his work, wide-ranging alternative complexity measures such as the Coefficient of Network Complexity, Cyclomatic Number, Complexity Index, Restrictiveness estimator (RT) and Number of trees in a graph (T) are employed. Two of them, namely "RT" and "T", will be used in the next section for the testing of distinct complexity indicators on different process graphs.
EVALUATION OF STRUCTURAL PROPERTIES FOR BUSINESS PROCESSES ON A HORIZONTAL LEVEL
Theoretical basis and methodology
Business processes are usually modeled as process charts, in which the activities that form the process and the dependencies between them are composed. In the case of process models using commodity flows, the first activity creates output that is used as an input for the second activity. Within the outlined BP modeling method by the process mapping technique (see Figure 1), individual diagrams were designed based on the principles of graph theory. The fundamental concept of graph theory is the graph G = (V, E), which consists of a set of vertices V(G) together with a set of edges E(G). The number of vertices in a graph will be denoted n, while the number of edges will be denoted m. Each link connects two points, which are its endpoints. The points connected by a line are said to be adjacent. In contrast, a link that has a given point as an endpoint is said to be incident upon that point. Two lines that share an endpoint are also said to be incident [3]. In the proposed approach, the structural properties of BP will be investigated by means of a topological analysis to which the basic elements of the process structure, vertices (or nodes) and edges (or links), are subjected. The starting point of this analysis is the adjacency matrix of a graph, which is a binary n x n matrix A in which aij = aji = 1 if vertex vi is adjacent to vertex vj, and aij = aji = 0 otherwise. At the beginning we convert a process map represented by the set of above-mentioned diagrams to directed graphs, in which the flow of movement follows the direction of the edges. The digraphs will be composed as activity-on-node (AoN) graphs, in which the activities of the processes are modeled by nodes and dependencies as links. Since all the initial parameters of the investigated graphs are known with certainty, the problem is deterministic.
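The adjacency matrix that anchors this topological analysis can be sketched as follows. The 0-based node labels and the example diamond graph are illustrative assumptions (the chapter numbers nodes from 1 and uses the UEP3 structures of Figure 2).

```python
def adjacency_matrix(n, edges, directed=True):
    """Binary n x n adjacency matrix A.  For a digraph, a link (i, j)
    sets only A[i][j] = 1; for a non-directed graph each edge sets
    both A[i][j] and A[j][i], as the chapter notes."""
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i][j] = 1
        if not directed:
            a[j][i] = 1
    return a

# Hypothetical 4-activity AoN diamond: 0 -> {1, 2} -> 3.
A = adjacency_matrix(4, [(0, 1), (0, 2), (1, 3), (2, 3)])
```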
As regards the basic research methodology, the motivation of the research direction has been based on the application of general axioms of graph theory to the specified problem domains, while the extraction of findings has some features of a partially inductive approach.
Determination of the indicators for process complexity
The most frequent structural attribute of business processes in the scope of BPR is undoubtedly complexity. Scofield [21] describes business models and enterprises that tend to be more complex than anyone wants. He adds that an organization must understand the causes of enterprise complexity in order to build better models and longer-lasting architectures. This section outlines a possible way toward a better understanding of the causes of process complexity by treating a potential aggregate indicator of process complexity that consists of three separate sub-indicators: Binding of structure, Diameter of network and Structure diversity. The determined indicators are selected for investigating processes on the horizontal level.
Binding of structure
As one of many possible indicators of the complexity of business process structures, the 'redundancy measure index' of the structure linkage will be considered. This is based on the concept of graph binding, meaning the least possible number of graph linkages whose reduction would lead to an incomplete graph containing isolated nodes. An incomplete graph is the opposite of a graph in which all vertices are adjacent to all others. The minimum number of edges for graph binding is n − 1. That is valid for both digraphs and non-directed graphs. Within digraphs, each link (i, j) has one element in the adjacency matrix, aij = 1. Within the non-directed graph, each edge has two elements, aij = aji = 1. To determine the measure of the structure binding, the following indicator, expressing a relative measure of the number of "m" edges that occur within a given structure, can be applied:
B = (m − (n − 1)) / (n − 1)        (1)
With the minimum number of links, the value of this relation equals zero. In connection with the processes modeling technique described above, the indicator of structure binding can be used in the analysis of processes of the UEP type to interpret the internal structure of integrated processes (IP). For this purpose we convert the commodity flow diagrams of the first decomposition stage from Figure 1 to the process structure described in Figure 2a. For process analysis by this indicator, only the internal structure of the investigated process shown in Figure 2b is relevant, excluding the relations of the process to its immediate environment. The index value of the structure binding "B" of the given process, with m = 15 and mmin = n − 1 = 7, equals 1.14 by formula (1). The reduction of the structure linkages in conformity with the principles of reengineering can be obtained by the purposeful integration of compatible processes, either sequentially or in parallel. In this case, the integration of the processes IP 31 through IP 34 will be represented by the process IP 31-4, and similarly IP 37 and IP 38 will be integrated into the process IP 37-8. The new process structure is shown in Figure 2c. The index value obtained by this transformation is reduced to B = 0.67.
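Equation (1) reduces to a one-line computation. The before figures (m = 15, n − 1 = 7) are the chapter's; the node and edge counts for the reduced structure are inferred from the reported B = 0.67 and are therefore an assumption, not figures stated in the chapter.

```python
def binding(n, m):
    """Binding-of-structure indicator B from equation (1): the relative
    excess of the m links over the minimum n - 1 needed for binding."""
    return (m - (n - 1)) / (n - 1)

# The chapter's UEP3 example: m = 15 links against a minimum of 7.
before = binding(8, 15)   # 8/7, approximately 1.14
# After integrating IP 31..IP 34 and IP 37, IP 38 (counts inferred).
after = binding(4, 5)     # 2/3, approximately 0.67
```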
Figure 2. An example of the internal process structure reduction
Diameter of network
The diameter of a network, or a graph's diameter, also appears to be a pertinent indicator for comparing the complexity of business process structures. This indicator is commonly defined in graph theory as the longest shortest path in the network. That is, if the length (in point-to-point hops) of the shortest path between i and j is Li,j, then the diameter of the network, for a directed or undirected graph, is L = maxi,j(Li,j). When applying this indicator to the processes at the same level as in the previous case, it is reasonable to test structures including those elements of
the environment which interact directly with the elements of the internal structure of the process. The process analysis using this indicator will consider the initial state of the original structure of the UEP3 process, shown in Figure 3a.
Figure 3. The network structures before and after integration of selected processes
If we assume that "I" represents the set of output nodes (from which the path is initiated) and "J" represents the set of input nodes (in which the performance of the network finishes), then for I = {2', 3', 5', 6', 8'} and J = {2'', 3'', 5'', 6'', 8''} the following distance matrix can be created:
The matrix for case a) results in max Lij = L2'6'' = L3'2'' = L3'3'' = L3'5'' = L5'6'' = L6'6'' = 5, thus L = 5. A more favorable value of this indicator for the given process can be obtained by purposeful integration, again represented by joining the sequentially arranged processes IP 37 and IP 38 into the process IP 37-8 and joining the processes IP 31 through IP 34 into the process IP 31-4. Through such a modification of the process structure, the network structure presented in Figure 3b can be obtained. The Diameter of network for the changed structure is then calculated in the same way, by a distance matrix in which the numbers of the graph nodes "I" and "J" remain in their original state. Based on this, the new measured value L equals 4; this means that, according to this indicator, the business process complexity is reduced.
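The distance-matrix computation above amounts to breadth-first search from each source node. A minimal sketch, using a hypothetical little network rather than the UEP3 graph of Figure 3:

```python
from collections import deque

def shortest_hops(adj, src):
    """BFS distances (in point-to-point hops) from src over a digraph
    given as an adjacency list {node: [successors]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj, sources, sinks):
    """L = max over source/sink pairs of the shortest path length,
    i.e. the chapter's definition restricted to the sets I and J."""
    best = 0
    for s in sources:
        d = shortest_hops(adj, s)
        for t in sinks:
            if t in d:
                best = max(best, d[t])
    return best

# Hypothetical network: i1 -> a -> {b, c} -> d -> o1.
adj = {"i1": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["o1"]}
diam = diameter(adj, ["i1"], ["o1"])   # i1 -> a -> b -> d -> o1: 4 hops
```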
STRUCTURE DIVERSITY
The formalization of the process structure diversity is based on the supposition that the investigated network structure can be represented as a transformation process of input effects into output ones, encompassing distribution activities. When determining one of the possible indicators of the process structure complexity, it is supposed that more heterogeneous transition paths from input nodes to output nodes imply a more complex process structure. Based on these suppositions, a measure of the degree of structure diversity can be assessed by the following indicator:
D = (1 / (n1 · n2)) ∑i=1..n1 ∑j=1..n2 cij − 1        (2)
in which "n1, n2" are the numbers of initial and final nodes of the process structure and "cij" represents the number of heterogeneous paths from the i-th input node to the j-th output node of the process (without any possibility of passing twice through the same node within one route). If the process structure does not contain alternative transition routes from input nodes to output nodes, then the structure diversity indicator D = 0. In order to apply the structure diversity indicator to the original structure of the UEP3 process shown in Figure 3a and to that obtained after the integration of the processes (Figure 3b), the values cij, given in the following matrices, are determined first. Applying the values cij in equation (2), the following values of the indicator D are obtained:
• D = (0.04 × 86) − 1 = 2.44
• D = (0.04 × 29) − 1 = 0.16
Aggregate indicator of process complexity
In order to compare the structural properties of the same process before and after (re)design, it is useful to combine the above sub-indicators "B", "L", "D" of process structure complexity into an aggregate indicator. Figure 4 shows the required changes of the structural properties of the business processes to achieve the optimization.
Figure 4. Comparison of structural properties before and after the BP (re)design
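The heterogeneous-path counts cij and the indicator D of equation (2) can be sketched by depth-first enumeration of simple paths. The diamond graph below is a hypothetical toy example, not the UEP3 structure:

```python
def count_simple_paths(adj, src, dst, visited=None):
    """Number of distinct paths from src to dst that never pass
    through the same node twice -- the c_ij of equation (2)."""
    if src == dst:
        return 1
    visited = (visited or set()) | {src}
    return sum(count_simple_paths(adj, v, dst, visited)
               for v in adj.get(src, []) if v not in visited)

def diversity(adj, inputs, outputs):
    """Structure diversity D = (1/(n1*n2)) * sum of c_ij - 1."""
    total = sum(count_simple_paths(adj, i, j)
                for i in inputs for j in outputs)
    return total / (len(inputs) * len(outputs)) - 1

# Two alternative routes from i to o give D = (1/1)*2 - 1 = 1;
# a single route would give D = 0.
adj = {"i": ["a", "b"], "a": ["o"], "b": ["o"]}
D = diversity(adj, ["i"], ["o"])
```

Exhaustive path enumeration is exponential in the worst case; for the small diagrams produced by the decomposition this is not a practical concern.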
The individual complexity values of the sub-indicators "B", "D", "L" are expressed as proportional reductions in Table 1. Based on this, the average proportional reduction of complexity equals 51.56 %.
Table 1. Sub-indicators of process structure complexity

Sub-indicator                 Index value before (re)design   Index value after (re)design   Proportional reduction of complexity
Binding of structure "B"      1.14                            0.67                           41.23 %
Structure diversity "D"       2.44                            0.16                           93.44 %
Diameter of network "L"       5                               4                              20.00 %
Establishing an Aggregate complexity indicator "AC" by using the above sub-indicators means finding a formula that expresses the composite proportional reduction of complexity of the business process structural properties. The following expression for an Aggregate complexity indicator can be formulated:
AC = log((B + L + D) / 3)        (3)
Using equation (3), the proportional reduction of complexity between the values before and after (re)design in the above case is calculated as 54.61 %.
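The aggregate calculation can be reproduced directly from Table 1. Base 10 is assumed below since the chapter does not state the logarithm's base; note, however, that the ratio of two AC values, and hence the proportional reduction, is the same in any base.

```python
from math import log10

def aggregate_complexity(b, l, d):
    """Aggregate complexity indicator AC = log((B + L + D) / 3),
    equation (3).  Base 10 is an assumption; the reduction ratio
    computed below is base-independent."""
    return log10((b + l + d) / 3)

ac_before = aggregate_complexity(1.14, 5, 2.44)
ac_after = aggregate_complexity(0.67, 4, 0.16)
reduction = 1 - ac_after / ac_before
# reduction is about 0.547; the chapter reports 54.61 %, the small
# difference presumably coming from rounding of the sub-indicators.
```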
COMPARISON OF PROCESS COMPLEXITY INDICATORS
This section considers the comparison of complexity measures for describing the structural attributes of business process models. Alternative measures to the above indicators are the Restrictiveness estimator and the Number of trees in a graph.
ALTERNATIVE INDICATORS OF PROCESS COMPLEXITY
Restrictiveness estimator
This indicator was originally defined and presented by Thesen [23] and applied to project network measurements. According to De Reyck [5], the Restrictiveness estimator (RT) is practically the same measure as Order strength, which was defined by Mastor [16]. It is defined as the number of existing direct and indirect precedence relations divided by the theoretical maximum number of direct and indirect precedence relations, and therefore ranges from 0 to 1. Later, Schwindt [20] applied this indicator to measure network complexity. Formally, RT is expressed by the equation:
RT = (2 Σ rij − 6(n − 1)) / ((n − 2)(n − 3))    (4)
where rij is an element of the reachability matrix R = [rij], such that rij = 1 if there is a path from vertex vi to vertex vj and rij = 0 otherwise, and n is the number of vertices in the graph. According to Latva-Koivisto [12], the RT indicator could also be used for cyclic graphs, even though the measure was not originally specified for them. Because the previously analysed graphs in Figure 3 are cyclical graphs, this assumption will be verified for these two cases of network infrastructure.

Number of trees in a graph
The second indicator that appears to be potent as an alternative indicator of process complexity is the Number of trees in a graph (T). This indicator, based on the supposition that the number of distinct trees of a graph reflects
the complexity of a graph, was developed by Temperley [23]. According to Latva-Koivisto, this indicator, along with the Restrictiveness estimator, is a potential complexity measure for business process models; hence the decision to compare them with the proposed indicators. The Number of trees in a graph is calculated using a so-called tree-generating determinant, which is defined for any graph containing no slings and not more than one undirected or two directed lines joining a pair of points. The formal expression of this index is rather complex; Temperley and Latva-Koivisto deal with the relevant theoretical comments on this indicator and the calculation details.

Comparison of different indicators
In order to assess the relevance of the proposed sub-indicators and the AC indicator, they have been evaluated for a set of selected graphs. The group of graphs has been designed to reflect the typical variety of business process structures. For that reason, the previously analysed graphs of a more general form, together with the graphs used by Kaimann [11] and Latva-Koivisto, have been applied. The first two graphs, shown in Figure 5, represent real-life process graphs, while the other graphs are artificial process graphs. In total, 8 graphs have been used, in an order that does not strictly follow their expected complexity. The graphs are shown in Figures 5 to 10.
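For undirected graphs, the tree-counting idea behind Temperley's determinant can be illustrated with Kirchhoff's matrix-tree theorem: the number of distinct spanning trees equals the determinant of the graph Laplacian with one row and column deleted. The sketch below is only this undirected special case, not the full tree-generating determinant for directed graphs.

```python
from fractions import Fraction

def spanning_tree_count(adj):
    """Count the distinct spanning trees of an undirected graph via
    Kirchhoff's matrix-tree theorem: delete one row and column of the
    graph Laplacian and take the determinant. adj is a symmetric
    adjacency matrix with a zero diagonal (no slings)."""
    n = len(adj)
    lap = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
            for j in range(n)] for i in range(n)]
    m = [row[:-1] for row in lap[:-1]]         # delete last row and column
    size, det = n - 1, Fraction(1)
    for col in range(size):                    # Gaussian elimination, exact
        piv = next((r for r in range(col, size) if m[r][col] != 0), None)
        if piv is None:
            return 0                           # graph is disconnected
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, size):
            f = m[r][col] / m[col][col]
            for c in range(col, size):
                m[r][c] -= f * m[col][c]
    return int(det)

# Complete graph K4: Cayley's formula gives 4**(4-2) = 16 spanning trees.
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(spanning_tree_count(k4))   # 16
```

Exact rational arithmetic (`Fraction`) keeps the determinant free of floating-point error, so the count is always an exact integer.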
Figure 5. Real-life process graphs: Graph 1 (b), Graph 2 (a)
Figure 6. Latva-Koivisto’s process graphs: Graph 3 (a), Graph 4 (b)
Figure 7. Kaimann's process graph; Graph 5
Figure 8. Kaimann's process graph; Graph 6
Figure 9. Kaimann's process graph; Graph 7
Figure 10. Kaimann's process graph; Graph 8

Summary of alternative measures
Table 2 shows the results of applying the complexity indicators. The boxes for the RT measure are left empty for Graphs 1 and 2 because the calculated values did not fall within the interval [0, 1]: specifically, 1.095 for Graph 1 and 1.067 for Graph 2.

Table 2. Complexity indicator values for Graphs 1 to 8

Graph No.  Edges  Nodes    B     L    D      AC      RT      T
1           10      9    0.67    4   0.16   0.207    -       12
2           28     18    1.14    5   2.44   0.456    -     1280
3           15     10    0.67    3   6      0.508   0.250     7
4           15     10    0.67    3   7      0.551   0.536    48
5           30     22    0.43    8   21     0.992   0.611   384
6           31     22    0.48    9   50     1.297   0.789   768
7           29     22    0.38   10   44     1.258   0.884   256
8           22     22    0.05   19   1      0.825   0.989     2
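The RT values of equation (4) can be reproduced directly from an edge list by computing the transitive closure of the graph. The sketch below assumes, as is conventional for this measure, that the reflexive diagonal (rij = 1 for i = j) is included in the sum; with that convention a fully serial network scores 1 and a fully parallel network scores 0.

```python
def restrictiveness(edges, n):
    """Restrictiveness estimator RT (equation 4) for a directed graph
    with vertices 0..n-1, given as a list of (i, j) edges. The
    reachability matrix is built by Warshall's transitive closure,
    with the diagonal r_ii = 1 included by convention."""
    r = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, j in edges:
        r[i][j] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i][k] and r[k][j]:
                    r[i][j] = 1
    total = sum(map(sum, r))
    return (2 * total - 6 * (n - 1)) / ((n - 2) * (n - 3))

# A fully serial network (a chain of 5 activities) is maximally restricted.
print(restrictiveness([(0, 1), (1, 2), (2, 3), (3, 4)], 5))            # 1.0

# A fully parallel network between one source and one sink is unrestricted.
print(restrictiveness([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)], 5))  # 0.0
```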
On the basis of the evaluation of Table 2, it is possible to state the following:
• The values of the indicators AC and RT for Graphs 3 through 6 show comparable tendencies. For Graphs 7 and 8 the trend is different.
• The T indicator does not provide realistic differentiation in the case of Graphs 1 and 2; the difference between the values (12 versus 1,280) is disproportionately large.
• The values of the indicators AC and T for Graphs 3 to 8 have approximately the same tendency, even in the case of Graph 7, whose complexity is apparently lower than that of Graph 6. That could signal some advantage for the indicator AC compared to RT.
• The indicator T is apparently not applicable to cyclical graphs with multiple input and output nodes.
The above analysis implies that the parallel use of alternative indicators may lead to an objective evaluation of the structural properties of BP in terms of their optimization. On the other hand, the results confirm the more or less known fact that a universal measure for assessing the complexity of process structures is probably not attainable. The complexity of the structures should therefore be assessed on the basis of predefined criteria, which differ for real processes according to their purpose. From this perspective, the proposed indicator AC can be considered a specific indicator for assessing the complexity of BP based on the principles of reengineering. For that reason it may be used effectively in the optimization of the structural properties of business processes.
EVALUATION OF STRUCTURAL PROPERTIES FOR BUSINESS PROCESSES ON A VERTICAL LEVEL
The previous section dealt with the analysis of the structural properties of business processes on a horizontal level. This enables the comparison of various processes of the same class, or of improvements to processes after they have been redesigned; the example presented above illustrates the latter possibility. The following section describes the procedure for analyzing structural properties based on the principles of reengineering, where the criteria of suitability differ at the individual levels of the process models. With an analysis of this type, it is possible to quantify the irregular loading of the structure elements by means of the following indicator.

Degree of the Structure Centralization
In order to define the degree of centralization of the process structure, it is possible to use the so-called index of centralization "α", which expresses the measure of (de)centralization of the structure elements. The index value can be obtained by means of the relation:
α = ( Σ_{i=1, i≠k}^{n} (V(k) − V(i)) ) / ( (n − 1)(V(k) − 1) )    (5)
where V(i) = vi+ + vi− expresses the total number of input edges and output edges of the i-th node, and V(k) = max V(i). The index "α" can take two limiting values:
• α = 1 in the case that the structure is centralized to the maximum degree
• α = 0 in the case that the structure is decentralized to the maximum degree
This index will be used in the following example to evaluate the degree of centralization of the modeled process structure, based on the first three stages of the structured process decomposition shown in Figure 1. The following extracted process structures are analyzed:
• the System diagram,
• the Context diagram for the UEP3 process, and
• the Commodity flow diagram of the 1st stage for the same process.
Directed graphs will be used to determine the internal structure of the given process (the vertices without gray shading). The block diagrams of these processes are summarized in Table 3, together with the relevant characteristics.

Table 3
When this indicator is used for the evaluation of the structural attributes of business processes according to the above-mentioned process mapping technique, it is necessary to use different criteria:
• the system diagram structure should be decentralized to the maximum degree,
• the context diagram structure should be strongly centralized,
• and commodity flow diagrams should have a significantly decentralized structure.
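The centralization index of equation (5) can be evaluated directly from an edge list. A minimal sketch, assuming that V(i) counts both input and output edges of a directed graph and that the most-loaded node k satisfies V(k) > 1:

```python
def centralization_index(edges, n):
    """Index of centralization alpha (equation 5) for a directed graph
    with nodes 0..n-1. V(i) is the total number of input and output
    edges of node i; k is the node with the largest V(i)."""
    v = [0] * n
    for i, j in edges:
        v[i] += 1   # output edge of node i
        v[j] += 1   # input edge of node j
    vk = max(v)     # V(k); assumed > 1, otherwise the index is undefined
    total = sum(vk - vi for vi in v)   # the i == k term contributes zero
    return total / ((n - 1) * (vk - 1))

# A star structure is centralized to the maximum degree: alpha = 1.
print(centralization_index([(0, 1), (0, 2), (0, 3), (0, 4)], 5))        # 1.0

# A cycle loads every node equally, i.e. fully decentralized: alpha = 0.
print(centralization_index([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 5))  # 0.0
```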
CONCLUSION
Apart from the main intention to present a practicable approach to the measurement of the structural complexity of business processes, the chapter also outlined some conceptual aspects of creating practical tools for business process redesign, consisting of the modeling and subsequent analysis of process structural attributes. The modeling procedure presented is based on structured process decomposition and has been verified through the solution of many practical problems of process redesign. Based on this experience, this stage is key to successful enterprise reengineering and the subsequent workflow management. Undoubtedly, business process metrics are of major importance for both the stationary and the dynamic properties of process structures.
REFERENCES
[1] Ashworth, C. and Goodland, M. SSADM: A Practical Approach, McGraw-Hill, 1990.
[2] Avison, D.E. and Fitzgerald, G. Information Systems Development, Blackwell, 1991.
[3] Borgatti, S.P. Graph Theory, http://www.analytictech.com/networks/graphtheorychap.doc Accessed December 2004.
[4] Deiters, W. and Gruhn, V. Process management in practice: Applying the FUNSOFT net approach to large-scale processes. Automated Software Engineering 5, 1998, 7-25.
[5] De Reyck, B. On the Use of Restrictiveness as a Measure of Complexity for Resource-Constrained Project Scheduling. Research Report 9535, Department of Applied Economics, Katholieke Universiteit Leuven, Belgium, 1995.
[6] Fathee, M.M. The effects of complexity on business process reengineering: values and limitations of modeling and simulation technologies. In: Proceedings of the 30th Conference on Winter Simulation, 1998, 1339-1345.
[7] Franken, H.M., de Weger, M.K. and Jonkers, H. Structural and quantitative perspectives on business process modelling and analysis. In: Proceedings of the 11th European Simulation Multiconference, Istanbul, June 1-4, 1997. Society for Computer Simulation International, Ghent, Belgium, 1997, 595-599.
[8] Gane, C. and Sarson, E. Structured Systems Analysis: Tools and Techniques, Prentice Hall International, 1978.
[9] Hayes, R., Wheelwright, S. and Clark, K. Dynamic Manufacturing. The Free Press, New York, NY, 1988.
[10] Jonkers, H. The application of hybrid modelling techniques for business process performance analysis. In: A.R. Kaylan and A. Lehmann (eds.), Proceedings of the 11th European Simulation Multiconference, Istanbul, Turkey, June 1997, 779-786.
[11] Kaimann, R.A. Coefficient of Network Complexity, and Coefficient of Network Complexity: Erratum. Management Science, Vol. 21, No. 2, 172-177, and No. 10, 1211-1212, October 1974.
[12] Latva-Koivisto, A. Finding a Complexity Measure for Business Process Models. Research Report, Helsinki University of Technology, February 2001, 1-25. http://www.hut.fi/~alatvako/Kompleksisuus-erikoistyo_2001-0213.PD Accessed December 2004.
[13] Law, A.M. and Kelton, W.D. Simulation Modeling & Analysis (Second Edition), McGraw-Hill, 1991.
[14] Liu, L., Pu, C. and Ruiz, A. A systematic approach to flexible specification, composition, and restructuring of workflow activities. Journal of Database Management, 15(1), 2004, 1-40.
[15] Marca, D.A. SADT: Structured Analysis and Design Technique, McGraw-Hill, 1988.
[16] Mastor, A.A. An experimental and comparative evaluation of production line balancing techniques. Management Science 16, 1970, 728-746.
[17] McReady, S. There is more than one kind of workflow software. Computerworld, November 2, 1992, 86-90.
[18] Modrák, V. Evaluation of structural properties for business processes. In: Proceedings of the 6th International Conference on Enterprise Information Systems (ICEIS), Porto, 2004, 619-622.
[19] Paper, D. and Dickinson, S. A comprehensive process improvement methodology: Experiences at Caterpillar's Mossville Engine Center (MEC). In: Cases on Information Technology Management, Idea Group Publishing, 1997.
[20] Paulk, M.C., Curtis, B., Chrissis, M.B. and Weber, C.V. Capability Maturity Model for Software, Version 1.1. Technical Report CMU/SEI-93-TR-024, ESC-TR-93-177, February 1993.
[21] Schwindt, C. A new problem generator for different resource-constrained project scheduling problems with minimal and maximal time lags. WIOR-Report-449, University of Karlsruhe, 1995.
[22] Scofield, M. Enterprise models: Anticipating complexity. Enterprise Reengineering, Reengineering Resource Center, Jan./Feb. 1996.
[23] Temperley, H.M.V. Graph Theory and Applications. Ellis Horwood Ltd., England, 1981.
[24] Thesen, A. Heuristic scheduling of activities under resource and precedence restrictions. Management Science 23, 1976, 412-422.
[25] Tjaden, G.S. Business process structural analysis. Working paper, Georgia Tech Research Corp., October 1999, 1-25. http://www.ces.btc.gatech.edu/report3.html Accessed December 2002.
[26] Vanhoucke, M., Coelho, J., Tavares, L. and Debels, D. On the morphological structure of a network. Working paper, October 2004. http://econpapers.hhs.se/paper/rugrugwps/04_2F272.htm Accessed November 2004.
Enhancing and Extending ERP Performance with an Automated Workflow System
Robert J. Kearney, Image Integration Systems, Inc., USA

A PRACTICAL VIEW OF THE BENEFITS OF WORKFLOW SYSTEMS IN ERP ENVIRONMENTS
The effective integration of comprehensive, independent workflow systems with Enterprise Resource Planning systems can produce significant improvements in those business processes implemented with the ERP. This synergism fully delivers on the economies of scale promised for centralized ERP processing, while ensuring and simplifying the requisite participation of the "expert," often decentralized, knowledge workers. Practical limitations of most ERP systems are considered, as is the manner in which automated workflow overcomes those limitations to effect greater business process improvement. The results at two businesses are considered, each of which benefited from faster, less costly and more controllable business processes.
ENTERPRISE RESOURCE PLANNING Over the last few decades, enterprise resource planning (ERP) changed and now defines the business processing landscape. Virtually every transactional business process is now “automated” by an ERP system, or the related systems for supply chain management (SCM) and customer relationship management (CRM). And the more recent advent of business-to-business (B2B) capabilities with the internet has, in some well-defined situations, extended ERP coverage across and beyond the borders of the enterprise. The efficiencies created continue to boost productivity. Without ERP the business models for many of today’s corporations would change dramatically, if in fact they could exist at all. ERP is a necessity for success in today’s business environment. The transactional volumes in sales, accounting, production, inventory, distribution and the like, that today’s businesses routinely process might be impossible otherwise. So in what way is the “promise” of ERP unfulfilled?
ERP OR ENTERPRISE TRANSACTIONAL PROCESSING
Simply put, while ERP is necessary, it is not sufficient. The implicit "promise" of ERP included:
1. standardization of the entire business process
2. a central repository of all relevant business process information
3. active, direct support of, and participation by, the business "experts"
The shortfall against these expectations is principally due to the fact that for most organizations ERP is more accurately "ETP: enterprise transaction processing." As such, the ERP system is not part of the business process or the associated information until a transaction is processed. Many, if not most, business processes are initiated by activities at the physical edge of the enterprise, but ERP does not usually extend that far. This part of the
process, from the outside world to the ERP transaction, usually is standardized, but not by ERP. Most ERP transactional interfaces are necessarily designed for the data entry processor, not the business expert. Industries are replete with examples of organizational layers, changes and additions whose primary objective is to buffer the business expert from ERP, and sometimes the ERP from them; the very people who need to see, create, approve, initiate and authorize the life-blood information of the enterprise. These business experts are often managers and supervisors of departments, functions or projects, but may also be the business knowledge workers in purchasing, production, sales administration, distribution, etc., on whom the enterprise depends to operate effectively and profitably. The ERP system is certainly a great place to find discrete data, but only in the context of the transactional process. It wasn't long ago that what is now Information Technology was justifiably called "data" processing. Most interaction with ERP still remains within the constraints of the transactional data processing interfaces. So for all the tremendous benefits that most ERP systems provide, they cannot "internalize" much of the vital business process information; nor encourage and support the interaction of the business experts; nor automate and standardize entire business processes.
INDEPENDENT WORKFLOW SYSTEMS
The development of workflow systems to automate and improve business processes has been driven, in part, by recognition of what ERP systems cannot provide well, or at all. In fact, it is not unusual for businesses to implement an independent workflow system with, or immediately following, the implementation of a new ERP system. At the very least they recognize the potential for increased returns on their ERP investment by adding workflow to extend and enhance their new ERP after its use has stabilized. And for an increasing number of companies, new business processes are designed with the workflow system as a requisite partner with the new ERP system, and they are implemented together. Independent (i.e. not built into the ERP system) workflow products historically have been far easier to configure and use than workflow embedded in the ERP. Their flexibility and orientation to the entire process, not just transaction processing, enable independent workflow products to be "business expert friendly." The workflow engine is about the process; the ERP engine is about the transaction. Most ERP systems include at least some form of workflow, but it is either very tightly coupled with the ERP transactional processing or has only a subset of standard workflow functionality. In the first instance, development and maintenance of the workflow schema usually requires considerable effort from specially trained, and thus scarce and/or expensive, IT development resources. In the second, the workflow is of very limited value and rarely capable of effecting entire processes. In neither case is it possible to respond quickly and effectively to the changing process requirements that characterize today's business. Activity-driven workflow systems have also been available for some time and, like ERP systems, are now mature and commonly accepted in a variety of business arenas. In ERP business environments they are now frequently being used to support and supplement ERP capabilities. The workflow system complements the ERP in that it can begin the business process by recognizing an event at the edge of the enterprise, and subsequently manage the capture, creation and validation of transactional information through the process activities. The active participation of the business expert is supported with more usable, business-intuitive interfaces that need not be constrained by the requirements for ERP transactional data entry.
WORKFLOW: THE ERP DATA DELIVERY SYSTEM In a workflow-centric environment, ERP processing is just another automated activity in the workflow schema. Conversely, in an ERP-centric environment, workflow is the delivery mechanism by which the data required for transactional processing are provided. While ERP processing requires only data, the expert routinely requires knowledge of the business context within which these data exist in order to exercise his expertise. The initiating document often provides that, at the very least acting as the data “container.” With an improved perspective, informed operational decisions can be made: approve, order, pay, release, produce, etc., with each such decision ultimately leading to an ERP transaction. It is still common for the document from the world outside the ERP system to be paper, which is why “imaging” is often part of the business process improvement effort. From a processing perspective, it would be far more efficient if all business were transacted electronically, as B2B. But the “paperless” office remains an elusive target. That is not likely to change any time soon, and in fact the volume of paper generated by business continues to increase. By capturing the external document as an electronic object or image (whether from scanner, fax, email or web) the workflow system can better inform the expert, effectively internalizing that added information for both immediate and continuing use.
INTEGRATION OF WORKFLOW AND ERP SYSTEMS
As opposed to stand-alone workflow systems, workflow in the ERP environment is specifically designed and structured to integrate and co-exist with ERP systems. Stated narrowly, the purpose of this integration is to enable workflow to deliver transactional data to the ERP system. Note also that this integration does not, and should not, include transactional processing within the workflow system as opposed to the ERP system(s); any duplication creates the potential for conflicting and invalid results, subverting business process improvement at the least. The ERP system is a powerful and finely-tuned transaction processing machine that needs to be supported, not replaced. Sometimes this support is required for more than a single instance of one ERP system, and often for multiple, different ERP systems. A common business process can be established across these platforms by deploying a common workflow scheme that incorporates the ERP platforms as different processing activities. Effective integration of workflow with the ERP system will vary depending on the application, but often provides:
1. validation of new data against existing ERP records
2. extraction of related (process-relevant) data from the ERP
3. automated presentation of the transactional data to the ERP data entry staff
4. automated naming (indexing) of all originating and supporting documents as part of the ERP transactional process
5. automated submission of transactional data directly to the ERP, through an ERP utility or edited interface
6. detection of, and response to, a changed status of a business condition in the ERP
Items 1, 2 and 6 are implemented with customizable workflow agents (and/or exit programs) and require that the workflow system manufacturer have reasonable expertise with the particular ERP system. The effective integration for items 3, 4 and 5 commonly necessitates configurable integration modules, requiring considerable ERP expertise and often stringent testing for certification by the ERP manufacturer. This ensures that the workflow system does not in any way corrupt ERP transactional data or processing. The proper design and implementation of the workflow system in an ERP environment requires this level of integration to fulfill the promise of ERP. As illustrated below, this results in processes that are faster (origination to completion), cheaper (reduced manual content) and better (visible and measurable).
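Integration item 1, validation of new data against existing ERP records, can be pictured with a small hypothetical sketch. Every name here (InvoiceData, ERP_VENDORS, validate_against_erp) is illustrative only and does not reflect any particular product's API; a real workflow agent would query the ERP vendor master instead of an in-memory table.

```python
from dataclasses import dataclass

@dataclass
class InvoiceData:
    """Data captured by the workflow system from an incoming invoice."""
    vendor_id: str
    amount: float
    currency: str

# Stand-in for a lookup against the ERP vendor master file (hypothetical).
ERP_VENDORS = {"V-1001": {"active": True, "currency": "USD"}}

def validate_against_erp(inv):
    """Return a list of validation errors; an empty list means the
    invoice data are 'ready to process' by the ERP."""
    errors = []
    vendor = ERP_VENDORS.get(inv.vendor_id)
    if vendor is None:
        errors.append("unknown vendor id")
    elif not vendor["active"]:
        errors.append("vendor is inactive")
    elif vendor["currency"] != inv.currency:
        errors.append("currency does not match vendor master record")
    if inv.amount <= 0:
        errors.append("non-positive amount")
    return errors

print(validate_against_erp(InvoiceData("V-1001", 250.0, "USD")))  # []
print(validate_against_erp(InvoiceData("V-9999", 250.0, "USD")))  # ['unknown vendor id']
```

In a workflow schema, an invoice failing validation would be routed to an exception-handling activity rather than submitted to the ERP.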
THE SHARED SERVICES CENTER MODEL
The Shared Services Center (SSC) is a good business process environment in which to observe the effects of ERP processing with integrated workflow processes. The SSC is commonly established to achieve the economies of scale that come from centralizing a corporation's "admin" and processing functions in one physical location. The accounting, finance and human resources staffs, for example, support geographically dispersed business units from the SSC. ERP processing is done by this SSC staff, with the remote users networked for various levels of ERP involvement, depending on their function. Centralization of this type is routinely practiced in industries such as construction and property management, where there is a portfolio of far-flung, often changing business units (e.g. projects, properties). With an SSC, processing efficiencies stem from a centralized staff and their use of the ERP system, plus productivity increases from standardization of the entire business process with workflow. Process effectiveness is due to the workflow system as a delivery mechanism: delivering the right data to the right people and place at the right time. Through the workflow, the transactional data and documents are readily accessible by internet or private network to the business experts regardless of location. The full benefits of centralization can now be realized since, on the one hand, economies of scale are available, and on the other, neither geography nor the ERP interface is a constraint to the participation of decentralized business experts. Experienced business systems professionals may recognize these heretofore irreconcilable factors as the root of the long-standing centralize-versus-decentralize systems argument. The effective use of workflow can render those arguments moot.
CASE STUDIES
Two cases are presented. They are illustrated by "swim lane" diagrams with activities segregated by type: manual, workflow or ERP. For some measures, productivity improvements exceeded 50 percent. Cycle times for transactional processing and costs per transaction were reduced, while the visibility and management control of the process increased dramatically. For a multinational manufacturing company, the major benefits derive from the workflow automation that streamlined a comprehensive, multi-level management approval process for vendor invoice processing and payment through the ERP, and from the increased capabilities for fully automated, nearly hands-free processing. At a large construction company, the workflow system enables the ERP-based, mission-critical pass-through-billing process. This process starts with receipt of diverse expense items in Accounts Payable and is completed when the documents are "passed through" to support and justify a customer's billing in Accounts Receivable. The workflow system implemented by both companies is DocuSphere®, a business process improvement software product from Image Integration Systems.
THE MANUFACTURING COMPANY
This large, multinational company has production facilities and administrative offices across North America and Europe. The entire enterprise is served by a single ERP system. The challenge was to control and reduce AP transactional costs and to standardize the entire AP process for all locations. After extensive analysis it was concluded that this was best achieved by establishing an SSC, thereby eliminating the need for processing at each office. The resulting economies of scale would significantly reduce "manual content" per transaction, with commensurate reductions in staff and costs. This required that all vendor documents be sent directly to the SSC, and the development of a comprehensive and controllable delivery process by which the transactional information and documents (vendor invoices, delivery tickets, etc.) could be presented to the ERP system ready to process (voucher). "Ready to process" includes review and approval by the business experts who generate and are ultimately responsible for the transaction, most of whom are in other offices, and in fact in other countries. And virtually none of these experts were users of the ERP system. Figure 1 represents the process for vouchering (i.e. making ready to pay) vendor invoices that were not generated by purchase orders. For many companies this type of transaction is the most problematic, in that the ERP system has no prior knowledge of the transaction, as it would if the invoice were related to an ERP-generated purchase order. Note that for purposes of illustration, exception-handling activities are not shown. They are a necessary part of the workflow process, and often require involvement by both business experts and AP processing experts. Many of the process activities implemented with workflow replace what would otherwise be manual activities. Only five of these 17 "delivery" activities (numbers 4 through 20) remain manual.
These manual activities are all performed by the business experts, who possess the requisite business information. Workflow further supports the process by interrogating the ERP system with automated agents to validate and acquire process-relevant data (activities 4, 7, 11 and 14). The delivery is completed in activities
18 to 20, wherein the workflow system creates and submits voucher records (reflecting ERP specifications for transactional data) using an ERP utility for batched input. Documents related to each resulting transaction are automatically named (indexed) with the transactional data by the workflow system. As a result of the integration module functions, they are then accessible (viewable) from within both the ERP and the workflow system. The transaction processing and reporting is then done by the ERP system, with AP staff review, to complete the process. Simply comparing the number of activities by type (manual, workflow, ERP) doesn't fully measure the importance of each type. But it is instructive to note that, of the 25 activities in this process, four are manual activities within AP, five are performed by business experts and three are ERP data processing activities. The remaining 13 process activities are provided by the workflow system.
A note on activity number 3, "auto data capture": to further reduce manual activities, this activity incorporates a relatively new capability, automated or advanced data capture (ADC). ADC is based on optical character recognition technology, and provides rules-based methods to extract data from scanned documents. A significant proportion of ERP data entry fields can be captured automatically, further replacing manual data entry. With auto voucher (activities 18 to 20) the resulting transactions can appear to be virtually hands-free. ADC is shown in workflow for convenience, and arguably warrants a swim lane of its own.
The results and benefits directly from, or made possible by, the workflow system are:
1. the process is fully defined and standardized by workflow, improving aspects of governance and regulatory compliance
2. performance metrics, for each activity and for the process as a whole, are visible to and measurable by the process owners
3. reduced manual content per transaction, resulting in overall productivity increases of over 50 percent
4. directly involved business experts, interacting appropriately and efficiently, reducing elapsed process time and error frequency
5. reduced total process time; transactions that required weeks are now complete in days, while those that took days are often complete in hours
The existing ERP system continues to provide effective, robust, high-volume transaction processing, but contributes to the incremental benefits above only as a necessary sub-set of activities within the workflow process.
Figure 1. SSC AP Voucher Process Activities
THE CONSTRUCTION COMPANY
Confronted by significant growth challenges, this company replaced its business systems with a single integrated ERP system. To fully satisfy its business objectives, the new business processes necessarily included implementation of independent workflow with the new ERP system. This was particularly important for a common type of customer billing process often referred to as pass-through billing. This is illustrated with the simple example of "cost plus" contracts, wherein the customer agrees to pay the building contractor for all the legitimate construction project expenses, passed through to the customer, plus say 10 percent more, the contractor's project profit. Processing problems, particularly with projects such as the design and construction of large manufacturing facilities, stemmed from the sheer volume of project expense items from AP, Payroll and similar ERP transactional processes. Understandably, the customer demands original documentation to justify the billed expenses. As a consequence, it could be several weeks after an expense item was paid before the customer was billed, the delay caused by the effort to find and organize all the originating documents (e.g. vendor invoices). This delayed creation of the bill, added time and unrecoverable expense (people), and increased the opportunity for error, including under-recovery as legitimate expense items were easily missed. Figure 2 represents the entire inter-departmental process, starting in AP and completing in Billing (Accounts Receivable). As in the first case, the combination of workflow and the new ERP system allowed centralization and the elimination of AP processors at job sites and regional offices. For simplicity, Figure 2 is restricted to non-PO invoices, although in practice there are many PO-driven vendor invoices for construction projects, which the workflow handles with analogous process activities.
The AP process issues and benefits in the first (manufacturer) case are equally evident here: standardized processes, performance metrics, reduced costs and total process time, and direct involvement of the business experts, such as project managers at the job sites. In fact, total AP process time is even more important in this case as it contributes directly to the dead time between the business action that creates the vendor expense and the revenue from billing the customer for that expense. In the pre-workflow process the elapsed process time to voucher entry (AP activity 13) averaged over three weeks. It has been reduced to under two weeks. Management is also currently evaluating ADC and automatic batch vouchering, as described earlier, with the potential to further reduce time and cost.
Figure 2. AP and Pass-Through Billing Activities
Vendor invoices are manually vouchered by AP processors using one of the standard ERP data entry screens. As with automatic batch vouchering, the transactional documents are automatically indexed with relevant transactional data. In this instance the processor does data entry by “key from image,” working from the electronic image of the vendor invoice and related documents, as there is no longer a paper document needed or available. Further, the workflow activities, rules and roles can be configured in a variety of ways to support virtually any business requirement for distributing documents to be worked to the AP processors. This is most often done by “pushing” the next invoice document from a common work list to a processor who has become idle. As a consequence, the workload is spread evenly over the entire group of AP processors. The major benefits derive from billing the customer faster and more accurately, and thus recognizing the revenue sooner. This is something of a two-for-one result in that the new process implemented with workflow reduces costs in the AP department while independently increasing revenues in the Billing department. The reduced time to bill is a function of both the ERP system, which has a very effective pass-through billing module, and the workflow system. The originating document was earlier discussed as the data “container” providing context to the expert. In this case that business expert is in the customer’s organization, and has a right to the documents for review before payment. The workflow activities (numbers 4 and 5 in AR) organize all the supporting transactional documents required for the bill by using agents to interrogate the bill detail generated by the ERP system. Since the billing detail includes some of the same data that indexes these documents (e.g. voucher number), the documents can be organized and presented to match the structure previously established within the workflow (AR activity 1).
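The document-matching step performed by the workflow agents can be sketched in outline. This is a hypothetical illustration only: the field names (`voucher_no`, `amount`) and the data shapes are invented for the example, and a real workflow agent would interrogate the ERP billing detail and the image store through its own interfaces.

```python
# Sketch: pair each ERP bill-detail line with its supporting document
# images using the shared index key (voucher number). Data shapes are
# hypothetical; a real agent queries the ERP and image repositories.

def organize_supporting_documents(bill_detail, document_index):
    """Return bill lines grouped with their supporting document images."""
    organized = []
    for line in bill_detail:
        # Documents were indexed with transactional data at voucher time,
        # so the voucher number links each bill line to its originals.
        docs = document_index.get(line["voucher_no"], [])
        organized.append({"line": line, "documents": docs})
    return organized

bill_detail = [
    {"voucher_no": "V-1001", "amount": 5200.00},
    {"voucher_no": "V-1002", "amount": 870.50},
]
document_index = {
    "V-1001": ["invoice_img_017", "receiving_img_018"],
    "V-1002": ["invoice_img_019"],
}

result = organize_supporting_documents(bill_detail, document_index)
```

Because the same index keys appear in both the billing detail and the document store, the grouping is a simple join; any bill line with an empty document list flags a potentially missing original before the customer is billed.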
Often iterations (drafts) are required before the bill includes all expense items, and only those items, that satisfy the terms and conditions of the customer’s contract. As a last activity in the workflow, the processors select the distribution medium for the customer invoice: print to paper or write to CD. The availability of a CD with the customer’s bill and all supporting details and documents provides obvious benefits as compared to paper. In summary, the AP benefits in this case are much the same as in the first case. In addition, however, the customer billing results and benefits due directly to the workflow system are:
1. more than a 60 percent reduction in total time to bill, from over six weeks to under three weeks on average;
2. qualitatively, an increase in completeness and accuracy in the customer bills, reducing disputed items and speeding receipts.
Because time is money, these translate to significant revenue improvement.
THE FUTURE
The foregoing is based on characteristics of the vast majority of ERP systems as installed and utilized today. New ERP vendor initiatives with their own workflow, business process management and similar products and architectures will undoubtedly change today’s landscape, presumably for the better. It is reasonable to assume that the flexibility and capability of independent workflow systems will increasingly appear both within ERP systems, and as
complementary products from the ERP vendor. Over time, therefore, the sources of effective workflow systems for use in ERP environments may change somewhat, but the business requirements will not. However, at least for the time being, this trend will have little or no effect on the tens of thousands of businesses that use, and for some time will continue to use, today’s ERP systems.
SUMMARY AND CONCLUSION
ERP systems are most commonly and correctly perceived and utilized as transaction-processing machines. In that role they excel. Workflow systems, integrated with the ERP system, can function as the data delivery mechanism for ERP transactional processing. Conversely, ERP transactional processing is but one of the many activities in the workflow. The integrated result provides capabilities that have been missing with ERP alone: standardization and automation of entire business processes, effective involvement and interaction with the business experts, and the creation and capture of all relevant business process information. The improved business processes enable the promised economies of scale from centralized ERP processing.
Narrowing the Semantic Gap between Business Process Analysis and Business Process Execution
Dr. Setrag Khoshafian, Pegasystems Inc., USA
ABSTRACT
The business process management (BPM) industry is growing rapidly, surpassing the expectations of even its most ardent supporters. Like most new technologies, BPM is enduring its own growing pains amid convergence, consolidation, and accelerated adoption. One of the critical areas of convergence that has not received sufficient attention is the semantic gap, and the interoperability challenges, between business process analysis (BPA) tools and intelligent BPM engines. This interoperability challenge is further aggravated by the lack of robust business rules modeling tools. Business rules are now regarded as essential components of next-generation (intelligent, or smart) BPM. Even though there are various BPM standardization efforts, the semantic gaps between BPA and run-time intelligent BPM engines are considerable. This paper addresses these semantic gaps and identifies solutions for continuous and iterative development of complex intelligent BPM applications.
INTRODUCTION
The business process management market has grown steadily over the past five years. Most organizations have successfully built and deployed BPM applications with tangible returns on investment. However, as BPM starts to go mainstream, it faces challenges in the proliferation of BPM tools and solutions. Among the different components of BPM suites,1 you have BPM engines, BPA tools, enterprise integration, business rules engines, and more. A comprehensive BPM solution will have a lifespan of several years and will, like all other software applications, go through several versions and iterations. If several tools are used to implement the solution, they need to work together seamlessly from modeling to design to deployment to execution to monitoring and back to modeling. But that is difficult to achieve. Let us take two of the components involved in the modeling, design and execution of business process solutions: business process analysis tools and business process management systems. There are four main approaches to interoperability between BPA and BPMS:
• Paper-Based and Modeling Artifacts as a Starting Point for Requirement/Analysis Documents: This common approach has no pretenses of interoperability. The various implementers of BPM applications take whatever is produced through BPA tools—paper documents and electronic documents. The realm of the modeling and analysis tool belongs to the business analysts. The execution tools, on the other hand, are more IT-driven and detailed. This chasm between business owners, who want to model and analyze, and IT, who designs, implements and deploys, is critical. The chasm is accentuated when modeling/analysis and design/implementation/deployment are done through different tools, each with its own platform, semantic model, versioning conventions, meta-model, and so on. At best, you get requirement documents produced by business owners and thrown over the wall for IT to implement.
• Export/Import Models Using Standards: The second approach uses standards for interoperability. However, in BPM—as in many other technologies—the “nice” thing about standards is that there are so many to choose from. The standards bodies attempt to be complementary and co-exist to solve real problems. At least that’s the theory. Workflow Handbooks over the past several years feature articles on standardization progress. The three most pervasive standards are the UML activity diagram notation, the BPMN notation, and the BPEL Web services execution language. There are many—too many—other standards (XPDL, JSR 207, BPSS, WS-CDL, and more).
• Interoperability and Partnerships between Modeling and Execution Platforms: The third approach is to have “partnerships” between modeling and execution platform vendors. This approach pairs the higher-level modeling offered by modeling and analysis tools with the lower-level IT design primitives of the platform. This is a patchwork integration at best, involving two and sometimes three products, with predictable challenges for synchronization, maintenance, and flexibility. Advanced BPM applications evolve continuously.
• Comprehensive and Unified Intelligent Business Process Management Platforms: A fourth approach is to use an intelligent and unified BPM platform that encompasses modeling, simulation, piloting, deployment and execution capabilities.

1. The credit for the BPM suite approach goes to Jim Sinur from Gartner. The fundamental components of the BPM suite include Business Rules, Enterprise Application Integration, Analysis and Simulation, and Business Activity Monitoring.
There is a fundamental and real-world assumption here: a comprehensive BPM application is not a one-shot deal. You need to maintain, fix, improve, enhance, and extend it. This is even more critical for BPM, since you can incrementally build bigger, richer, and more advanced applications from your building blocks: decision rules, processes, or integration components. A unified platform avoids many of the pitfalls of BPM-BPA interoperability—especially when there are continuous planned and unplanned changes in the BPM applications.
EMERGENCE OF AGILE AND ITERATIVE APPROACHES
Fifty years of computer science has shown the value of close affinity between modeling and execution. The more you can avoid mappings and transformations, the better. There are also emerging requirements. We need to be able to handle uncertainty. We need to implement and deploy incrementally. And we need to maintain increasingly high levels of quality and performance. The delay between modeling and piloting should be reduced. The traditional “waterfall” methodology, with sequential and extended periods of analysis, then design, then implementation, then testing and deployment, is ineffective. As a result, we now see growing adoption of methodologies (such as the Unified Software Development Process, Extreme Programming, or Scrum) that attempt to address the continuous and iterative improvement lifecycle and the waterfall approach’s limitations. These methodologies encourage agility, quick wins, and unplanned and constant iterative change. While they may differ in their
approaches to team formation and project planning, they all seek to avoid long phases and gaps from modeling and analysis to design and implementation, and to support incremental, iterative, and fast turnaround for modifications to existing solutions. The underlying platforms and tools used within this kind of methodology obviously should not raise technical impediments to the iterative and continuous improvement approach. This implies that the semantics of the modeling artifacts in the analysis phases should be very close, if not identical, to the semantics of the design and implementation constructs. The semantic gaps need to be eliminated.
BUSINESS PROCESS ANALYSIS TOOLS
BPA tools are used to model and analyze processes. After discovering the processes and rules of the application, the processes are exported and deployed in a business process management system. The BPMS can provide more detailed design of the processes imported from the BPA tool. In business process analysis and modeling, there are a number of tools and notations for modeling the processes. These tools provide various modeling constructs to design business process applications at a higher level and then deploy them to a process engine. Using a BPA tool you can model organizations, networks, data and, perhaps most importantly for this chapter, processes. The tools and notations can be organized in various perspectives (e.g. business, technology, etc.). Here are some of the modeling constructs that BPA tools support:
• Strategy Diagrams: BPA tools often have high-level graphical representations for corporate goals and strategies—for cause-and-effect analysis, strategic goal setting, or as a general framework. Examples here include Balanced Scorecard and fishbone illustrations.
• Process Diagrams: This is perhaps the most important construct for BPM. Process diagrams typically include swim-lanes, tasks or activities, and the overall flow of the process. Processes can have subprocesses, and you can also specify simulation parameters and simulate the process.
• Business Policies and Decision Rules: The processes provide the procedural flow models of work, involving human participants, systems, and trading partners. You also need to model the business policies and decision rules. Rules such as risk level determinations, service level agreements and approval levels are examples of business policies and decision rules. These can be associated with processes or can pertain to the application or even the enterprise as a whole.
• Class or Data Modeling: This allows the modeler to provide the analysis of the business classes. Typically UML notation is used to represent the class hierarchies. Depending upon the specific capabilities of the modeling tool, the classes can be used in processes.
• Interaction Models: A number of interaction models are used to represent use case diagrams, sequence diagrams or object interaction diagrams. These depict external actors using the system or internal objects interacting with one another.
• Organizational Model: The organizational model is used to represent the various organizational units and their relationships. Usually this is illustrated as a hierarchy.
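A process diagram of the kind described above can be sketched as a minimal in-memory structure. This is an invented illustration, not any BPA tool's actual format: the keys (`lanes`, `tasks`, `flow`, `simulation`) and the example process are hypothetical.

```python
# Sketch: a minimal representation of a BPA process diagram — swim-lanes
# (roles drawn from the organizational model), tasks, flow transitions,
# and a simulation parameter. Names and structure are illustrative only.

process = {
    "name": "Vendor Invoice Approval",
    "lanes": ["AP Processor", "Project Manager"],      # from organizational model
    "tasks": [
        {"id": "t1", "lane": "AP Processor",    "name": "Enter voucher"},
        {"id": "t2", "lane": "Project Manager", "name": "Approve expense"},
    ],
    "flow": [("t1", "t2")],                            # overall process flow
    "simulation": {"arrival_rate_per_hour": 12},       # simulation parameter
}

def tasks_for_lane(proc, lane):
    """List the task names assigned to a given swim-lane."""
    return [t["name"] for t in proc["tasks"] if t["lane"] == lane]
```

The point of the sketch is the dependency it makes explicit: the process model references the organizational model through its lanes, which is exactly the association between models discussed next.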
There are dependencies and associations between these models. For instance, a process model will use the organization model (the participants in the flows—typically in the swim-lanes), the class or information model, and also the business policies. The strategy model will use both the processes and the policies.
INTELLIGENT BUSINESS PROCESS MANAGEMENT SYSTEMS 2
An intelligent business process management system has several components. Why do we call it intelligent BPM? Primarily because it is a unified system that can handle different types of rules. In [Khoshafian 2004] 3 we defined intelligent BPM as:
Intelligent Business Process Management = Flows (workflows) + Process Rules + Practice Rules + System Rules
In addition, intelligent BPM allows you to program both procedurally and declaratively. Examples of the former include all conventional and object-oriented programming languages as well as workflow constructs. Examples of the latter include declarative rule and constraint systems. Intelligent BPM reminds us that processes and decision rules are two sides of the same coin. Today we take database management systems and content/document management systems for granted. We have also done reasonably well with digitizing our procedural logic and procedural programming. However, until
2. A word of caution: in some cases “Business Process Management” and “Business Process Management Systems” have been given different semantics or connotations. We think this is confusing, and within this paper BPM and BPMS are synonyms (similar to “Database Management” and “Database Management Systems”; in most contexts they are synonyms).
3. [Khoshafian, 2004] “Web Services Orchestration and Management through Intelligent BPM,” S. Khoshafian, in Workflow Handbook 2004, Layna Fischer (ed.), Future Strategies Inc., Lighthouse Point, Florida.
recently, we had not done so well in digitizing the declarative rules in the context of BPM applications. Even as BPM systems emerged, we soon realized that the rules that drive the business are often in people’s heads, in volumes of paper documents, or in application programs. When we say “intelligent” BPM we mean exactly that—digitizing the scattered rules found in documents, in spaghetti code, or in people’s heads. You still have knowledge workers participating in the processes and authoring the rules, but now the system either replaces rote manual labor or guides, assists and complements the worker to get his or her job done more effectively. This self-guided assistance also applies to the back-end systems and trading partner activities. The components of an intelligent business process management system include:
• Integration: Business process management includes enterprise as well as business-to-business integration components. You can have back-end applications (such as CRM, HR and ERP applications), in-house applications, and newly developed components (e.g. EJB components) become participants in your processes and decision rules. BPM systems use application as well as technology adapters to realize integration. Examples of technology adapters include EJB, MQ, JMS, and .NET; perhaps the most popular is Web services through SOAP interactions.
• Design: Most of the aforementioned modeling features found in BPA tools also exist in intelligent BPM suites, though usually with a different emphasis: details for deployment and execution. The design is more detailed since now you are focusing on execution. You can model the flows. You can also design your classes and author your decision rules. Other constructs include your integration services or connectors. BPM design can also include strategy models. You can simulate either your existing (“as is”) or designed (“to be”) processes. Performance analysis allows you to make modifications in terms of participants, flow logic, or digitization of your decision rules.
• Presentation Rules: Design also includes components that you typically do not find in BPA tools. For instance, your forms and overall portal GUI are designed through the intelligent BPM tool. The design of object types, processes, integration components, and business rules can be performed in thick-client platforms with an explicit deployment phase, or in a thin-client (browser-based) environment with immediate run-time piloting and execution. Increasingly the trend is to have thin-client design environments in conjunction with popular tools that are readily available on every client platform—such as Visio and Excel.
• Execution: The rules and flows are compiled and executed by the underlying execution engine. The best option is to have one core execution engine (e.g. a rule engine) that treats flows and processes uniformly and supports a consistent mathematical model of execution and resolution. The engine keeps track of the status of the various processes, participants, and other objects that are involved in intelligent business process automation. The engine communicates with, and stores the process states in, underlying relational database management systems. These process states are then used to analyze the performance of entire applications, specific processes, and participants in processes. The engine supports security and access control, and it has transactional semantics. The execution engines run within a multi-tier application server architecture (e.g. J2EE), with both client and server tiers.
• Portals: Intelligent BPM products also offer out-of-the-box portals for different communities, including end users, business owners and business analysts. The portals provide the run-time front end to the processes and business rules. Typically, end users have access to a work list that displays the items assigned to them. Management portals include business activity and business performance monitoring capabilities. The BAM reports graphically display the performance of various processes and applications as well as process participants.
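The “flows plus rules” combination at the heart of intelligent BPM can be sketched in a few lines. This is a hypothetical illustration: the risk thresholds, work-list names, and rule set are invented examples of the risk-level determinations mentioned earlier, not any vendor's actual rule language.

```python
# Sketch: a procedural flow step delegating a decision to a declarative
# rule set. The rules and thresholds below are hypothetical examples.

RISK_RULES = [
    # (condition, risk level) — evaluated in order, first match wins.
    (lambda claim: claim["amount"] > 100_000, "high"),
    (lambda claim: claim["amount"] > 10_000,  "medium"),
    (lambda claim: True,                      "low"),
]

def determine_risk(claim):
    """Declarative part: the decision lives in data (RISK_RULES), not in flow logic."""
    for condition, level in RISK_RULES:
        if condition(claim):
            return level

def route_claim(claim):
    """Procedural part: the workflow step consults the rule and routes the work item."""
    level = determine_risk(claim)
    return "senior-underwriter-worklist" if level == "high" else "standard-worklist"
```

The design point this illustrates is the one made in the text: changing a business policy means editing the rule table, while the procedural flow that routes work items stays untouched.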
STANDARDIZATION EFFORTS
The previous sections provided an overview of the features offered by business process analysis tools and business process management platforms, respectively. The primary target audience of BPA tools is business analysts who model the information, processes, organization and strategies of the enterprise. BPA tools do not include process execution. They use higher-level artifacts and models that are supposedly closer to the business owner’s or analyst’s perspective. However, as business process management systems evolve, they too are providing modeling, design, deployment and execution primitives. The overall iterative and continuous improvement trend of the industry is to reduce and eliminate the gap from modeling to design to execution and then back to modeling. Furthermore, the fewer mapping limitations or anomalies between modeling, design and implementation, the better.
Given these different tools, on the surface a “logical” answer or suggestion would be: why don’t we use standards? How about a standard for process definitions that is exported from one tool and imported into another? In fact, there is an abundance of BPM standards. These standards can be categorized into three basic categories: notation, process definition, and process execution. This is not a comprehensive list, but it shows how the three areas are addressed by various standardization bodies and initiatives. We have provided here a higher-level and integrated view of the standard categories. In other
illustrations, the process definition and execution standards are partitioned into “internal” and “external” sub-categories. Internal refers to process standards that are potentially modeled, deployed and executed within one platform. The external sub-category deals with business-to-business (B2B) exchanges for interoperability, especially between different process engines or platforms.
Mapping Models
The alternative notation, process definition, and process execution standards can be mapped onto the three layers of the Object Management Group’s (OMG) model-driven architecture (MDA). This architecture provides three layers of abstraction for models: a computation independent model (CIM), which is oriented to modeling businesses; a platform independent model (PIM); and a platform specific model (PSM).
The overall premise is that you have business models that can be represented independently of underlying software models or implementations. Notations or modeling languages such as BPMN or UML can be used to represent certain aspects of CIM business models. The CIM concentrates on the business use cases, the required business results, and the business processes to achieve them, independent of software or underlying implementation systems. In an iterative development methodology the CIM corresponds to the business modeling and requirements phases, but you will need to carry out business modeling and requirements processing for each iteration. In the PIM, the goal is to provide more details for the software models while staying independent of specific underlying platforms. For instance, execution languages such as BPEL can be used to capture the execution of Web services choreographies, independent of the underlying business process management system. The last layer, the PSM, provides detailed specifications on implementation platforms, and it is at this layer that you have specific extensions and capabilities from various vendors. So what is a “platform”? Typically in a large and complex enterprise architecture you have several layers, so a PIM from one perspective could be a PSM from another. A business process management system could be supported on a variety of platforms (e.g. WebSphere, WebLogic, etc.). Hence the PSM for a BPM application is the application server infrastructure where you execute your BPM applications. However, for other domains, such as component computing, the PIM could be the detailed design models of the components while the PSM could be either a J2EE or .NET platform—
corresponding to the code. In the MDA architecture you will then provide mappings. Here again, every time you use mappings there is the danger of losing semantics. Once again this can be problematic if you are continuously improving and iteratively building your application. Mapping between CIM and PIM can be problematic if multiple tools are used.
Metamodels
Common metamodels and standardized exchange formats between models facilitate language sharing between tools, communities, and products, as well as mapping between various platforms and layers. OMG has introduced a four-layered metamodel framework and an exchange standard (XMI) for interoperability between tools that support common metamodels. The model-driven architecture (MDA) framework is also attempting to standardize a meta-model for processes and organizational models. The MDA framework supports a four-layered metadata architecture:
1. M3: the meta-meta model, the most abstract layer, specifying the Meta Object Facility (MOF).
2. M2: metamodels for specific domains or disciplines, for instance data warehousing.
3. M1: models, which are instances of the metamodels in M2. For example, a process and practice rules complying with M2.
4. M0: instances of models. For instance, a specific row in a table, an object instance, or a process instance.
Metamodel standardization provides value in “speaking the same language” between various tools. For instance, OMG’s Business Integration Task Force is working on a metamodel (M2) for business process definitions. However, especially due to significant differentiation and extensions by various underlying platforms, it is still difficult to sustain interoperability simply by relying on metamodel standards.
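The "instance-of" layering behind the four MOF levels has a loose analogy in Python's own object model. This is offered purely as an illustration of the layering idea, not as a formal MOF implementation; the `Process` class and its instance are invented examples.

```python
# Sketch: the MOF layers illustrated via Python's object model — an
# informal analogy only, not a MOF implementation.

class Process:                               # M1: a model element (a process definition)
    def __init__(self, name):
        self.name = name

order_handling = Process("Order Handling")   # M0: a concrete process instance

# M2 (the metamodel) is what M1 elements are instances of; in Python,
# classes are themselves instances of `type`. M3 (MOF) defines the
# language of metamodels itself — and, like `type`, it is described in
# its own terms, which is why the layering stops at M3.
assert isinstance(order_handling, Process)   # M0 is an instance of M1
assert isinstance(Process, type)             # M1 is an instance of M2
assert isinstance(type, type)                # the top layer closes on itself
```

Each layer is described by the layer above it, which is exactly the "speaking the same language" property that common metamodels give to tools exchanging models.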
THE SEMANTIC GAP
So how about the interoperability between BPA and intelligent BPM platforms? Business process applications are not “one shot” deals. There is a continuous round-trip to be sustained throughout a BPM application’s life cycle: from modeling and analysis, through design, implementation, deployment and execution, and back to modeling.
Mapping this lifecycle onto tools, modeling and analysis fall to the BPA tool, while design, implementation and execution fall to the BPM platform.
Typically the BPMN or UML notation is used within the BPA modeling tool. BPA tools and vendors produce a number of artifacts—including flows or processes—and throw them over the wall. Usually they throw BPEL; there are other options or standards such as XPDL or XMI. BPM execution platforms incorporate additional details into the imported artifacts to enable them for execution. This is of course one approach, and it is marred by semantic gap challenges. Supporting standards, common metamodels or an MDA approach does not mean the respective tools automatically comply with the same underlying standards or meta-models. In addition, BPM vendors typically provide their own extensions and features as differentiators.
Unified Intelligent BPM
A unified, intelligent BPM system supports all the phases of iterative and continuous development: modeling, analysis, design, implementation, deployment and testing. It also supports a unified platform to handle flows, policies, and integration components. A single object model and platform avoids many of the pitfalls of the semantic gap between various components or tools. In the continuous round-trip of iterative development, you need a platform that allows quick building and demonstration of wins, then sustains your applications’ iterative improvements and extensions. There are two fundamental features of unified intelligent BPM platforms that support and satisfy the requirements for agility, handling of uncertainty, and continuous improvement of advanced intelligent BPM applications:
• Continuous Round-Trips: You must be able to go from modeling to design to implementation to deployment, and then back to iterative improvements, continuously and iteratively. With multiple products and asynchronous import/export, differences in the underlying semantic models are inevitable.
• Unified and Integrated Platform: An intelligent BPMS has several components that must operate in the context of the same unified platform. Within a single unified platform you can handle simple and complex rules in the context of flows and processes, using unified information, integration, and organization models.
CONCLUSION
Even though there are significant and important efforts to provide BPM standards and to support interoperability between modeling and execution platforms, semantic gaps are inevitable when one uses multiple products, and this proves to be an impediment to continuous improvement and iterative development. Standards can be useful in bootstrapping and perhaps in the initial phases of a project. Metamodels can provide a common language between systems and perhaps facilitate integration. However, a process model or definition in a detailed design and execution environment is not a “one shot” deal. Instead, BPM applications need to be implemented for continuous change; they need to anticipate uncertainty and unplanned change; and they need to deliver quick wins. We need to reduce or eliminate transformations, mappings and import/export round-trips between disparate systems. For productivity gains we need to use the full power and capabilities of the underlying platforms. This is an “on demand” age. Narrowing the gap between the world of modeling, which usually belongs to the business owners, and the design and execution environment, which pertains to IT, is a new (or at least renewed) and compelling imperative. This imperative is best addressed through unified intelligent BPM platforms that provide a single environment for business owners, process architects, managers and users alike, without the demands and pitfalls of transformations, while supporting immediate deployment and execution of planned and unplanned changes in processes or decision rules.
Using SOA and Web Services to Improve Business Process Flow
Case Study: District of Columbia’s Oracle Database, Presentation Layer, SOAP/XML
Zachay B. Wheeler, Roberta Bortolotti, SDDM Technology, United States
ABSTRACT
SDDM Technology, a small business enterprise, was tasked with analyzing, improving and automating the current business process of license issuance for the Department of Consumer and Regulatory Affairs (DCRA) of the District of Columbia. DCRA is the business regulatory agency for the District of Columbia and provides license and licensee information to various local and federal agencies. After careful analysis, SDDM Technology responded to this challenge by recommending, developing and implementing a Service-Oriented Architecture (SOA) and Web Services (WS) approach. The improved business process of issuing business licenses by DCRA entailed the development of a web-based (intranet) application using innovative technologies. The development of an n-tier application was essential, and the SDDM Technology team was responsible for developing the Business Logic and Data Access tiers. The main challenges faced were the integration of distinct platforms across the application architecture tiers, including Java, XML, .NET and Oracle technologies; the handling of requests and data manipulation in the Web services, with a focus on increasing performance; and the availability of the business rules, ensuring flexibility for future enhancements in a useful enterprise IT environment. The SDDM Technology team applied Microsoft’s .NET technology to develop the business rules. Specifically, an object-oriented approach was applied to the development of the business layer using VB.NET. ADO.NET was used in conjunction with Oracle packages to access and manipulate data from the Data Tier (Oracle database). Data was requested from and passed to the Presentation Layer (Java technology) using SOAP/XML.
In short, the District of Columbia business license problem was solved using a Service-Oriented Architecture and Web Services, taking advantage of the technologies mentioned in this abstract.
Keywords: VB.NET, Oracle, SOAP, XML, SOA, Web Services
BACKGROUND
The Department of Consumer and Regulatory Affairs (DCRA) is the business regulatory and enforcement branch of the District of Columbia and is responsible for the issuance and enforcement of all licenses in the District. Over the course of time, however, the responsibility for issuing licenses was allocated to several other agencies in the District. For example, the Department of Health (DOH) became responsible for issuing dog licenses, pharmacy licenses and several other health licenses; the Department of Mental Health (DMH) became responsible for issuing mental health and mental health community licenses; and other agencies became responsible for issuing agency-specific licenses.
USING SOA AND WEB SERVICES
In regard to licenses issued by the DCRA prior to 2004, the agency faced a problem that forced a business owner to apply for and maintain multiple business licenses at a particular address. For instance, a hotel owner that maintained a restaurant, cigarette vending machine, massage parlor, and food vending machine would have to obtain and maintain four separate business licenses. Each license would have its own fee and renewal period; hence the business owner would have four separate times of the year at which he would have to interact with the Department of Consumer and Regulatory Affairs to renew a license. The issuance, maintenance and tracking of business licenses became a daunting task, the process became a burden to the business community, and the efficiency of the DCRA work force decreased significantly.
In the summer of 2003, the Basic Business License (BBL) legislation was passed by the City Council and readily adopted by the DCRA and several related license-issuing agencies. The BBL legislation is based on two key provisions: return all license issuance, regardless of license type, to the DCRA; and allow the business community to apply for and maintain one business license regardless of the number of business license types held by the business owner at that particular premise address.
The business requirements for the development of this application were determined to be the following:
• The initial focus should be an intranet application
• The application should have a minimum of three tiers
• The ability to share data between agencies over the Wide Area Network (WAN)
• The ability for each agency to maintain a local copy of its data
• The ability to extend the proposed intranet application to the public for online license issuance and tracking
• The data repository should be Oracle (a standard eventually adopted by the city)
It became apparent that neither the DCRA, the outside agencies nor the District of Columbia had adopted a technology standard for application development. Systems were written in a variety of development languages, such as PowerBuilder, Java, .NET, VB, C++, and FoxPro, residing on disparate platforms such as UNIX and Microsoft. A variety of data repositories were also in use: Oracle, SQL Server, Access, Excel, FoxPro and Sybase, just to name a few. The task of developing the application was split between three companies, and the solution architecture was based on the use of web services in order to provide flexibility in integrating the different programming languages and platforms used throughout the DC Government network structure. The use of protocols based on XML to issue business licenses or to exchange data with another Web Service constituted a main requirement in order to extend this application to inter-organization communication for the overall system. FileNet, Inc., the company responsible for the presentation layer, developed that tier using Java technology. SDDM Technology used Microsoft .NET technology to build the Business Layer and developed the Data Access Layer using ADO.NET and Oracle stored procedures, with the stored procedures wrapped in Oracle packages. Finally, Peake Technology developed the conceptual, logical and physical data model.

Analysis of Challenges and Solution for the Business Process
The design and analysis of a solution for this case led the SDDM Technology team to focus on a result in which the work was done better, faster, more reliably and for less money. In the business process of license issuance in the District of Columbia, the solution was to draw a strong relationship between the workflow and web services. In order to face the challenges presented by this relationship, the implementation of the workflow was designed to answer the questions who? what? and when? in the business process. The Unified Modeling Language (UML) played an important role in analyzing the business process: depicting who the actors, systems, subsystems, services (use cases) and workflows in the system were, and their roles; what actions and transactions users perform, and whether these transactions are manual or automatic; and, finally, when users start or end a piece of work, the order in which transactions are made, and whether they are done sequentially or in parallel.
There were several challenges that the SDDM Technology project team had to overcome. Of those challenges, the concept of interoperability was the biggest hurdle (see Interoperability Challenges). Given the considerable challenge of integrating the workflow of this system with others in the District of Columbia in order to create a collaborative application built of Web Services, the business process became a set of tasks performed by Web Services in which workflow control and interaction are an inevitable demand. From a 100,000-ft, or enterprise-level, perspective, the web service-SOA approach is utilized for simple data exchange between the agencies, the public and the DCRA staff.
Figure 1: Simplified View of Interaction
Based on Figure 1, many different clients will interface with and request data from the DCRA system. With regard to the agencies that issue their specific licenses, each would have its own individual business logic; however, some business rules generated by the DCRA could be reused, thus reducing future development time. Code reuse was accomplished by restricting the processing/business logic to the WS. The business logic was decoupled and modularized for autonomy and functional independence. This naturally led to an n-tier application architecture with the WS acting as a service layer. Using the modular decomposition approach, the major license processes and functions were decomposed into 20 web methods or services. Because of the granular decomposition of the workflow (business rules) into services, the services can be used individually or within and between other services and applications.

Interoperability Challenges
The implementation of the WS-SOA approach presented many challenges to the development team. A short list of the major challenges is presented below.
The first was the integration of distinct platforms across the application architecture tiers, including Java, XML, .NET and Oracle technologies. There was a need for a technology that was platform- and language-independent, that led to task-oriented development and workflows, that was loosely coupled, and that could adapt existing applications to changing business conditions.
The second was the handling of requests and data manipulation in the Web services, with a focus on increasing performance.
SDDM Technology confronted three main data exchange challenges: primitive data type mappings, to guarantee that each data type of one platform is mapped to the corresponding data type on the other; non-existent data types, i.e. handling data types that exist on one platform but not on the other; and complex data types, i.e. exposing a complex data type so that the other platform can use it.
The third challenge was the availability of the business rules, ensuring flexibility for future enhancements in a useful enterprise IT environment. The use of an integration or migration strategy allows a service-oriented solution.

Interoperability Resolution
The interoperability challenges led to a solution that used web services. The use of web services not only affected how the application was designed but also imposed a variety of strategic and technical factors that met the requirements and purpose of this project. Seeking inter-application and inter-organization communication by abstracting proprietary technology and establishing a universal integration framework for the District of Columbia, the web services consisted solely of XML technologies. XML Schemas were used to ensure type and class compatibility, specifying the format for XML documents during the integration of the n-tier application.

Interoperability between Presentation and Business Layers
For requests and data manipulation, XML serialization was used to take the complex data types used in the application and encode, save, transfer and decode them. With serialization, both the presentation layer and the business layer understand a particular data type before they attempt to exchange it and establish a connection between them. In this case, the serialization occurred on the .NET platform.
Figure 2 depicts the structure of this n-tier application. It is worth mentioning that the Java and .NET technologies come from different backgrounds and differ on some data types. The use of XSDs helped the two platforms agree on the format of an XML document, allowing a class to be mapped to a defined XML Schema. This ensured that the platforms could exchange XML data with each other. A common data format was defined before development, and XML serialization in conjunction with XML Schemas was used to exchange data between .NET and Java. In this scenario, where each application was built from scratch, the following solution was adopted to integrate the presentation and business tiers:
• Use of XSDs to define common or shared types, and generation of platform-specific code from those shared types.
• Creation of a central XSD repository for the teams developing the presentation and business layers, to provide consistency in generating types across applications.
• Use of elements that XSD recognizes and publishes.
The service-oriented architecture (SOA) describes the District of Columbia's entire system as services dynamically looking around for each other, getting together to perform some application, and recombining in many ways. This model encourages the reuse of components and evolves the way applications are designed, developed and put to use. The SOA model represents the distributed application across the network and allows distributed communication of services, for example through an Enterprise Service Bus (ESB), a common distribution network for service communication.
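The shared-schema round trip described above can be illustrated with a small sketch. Python stands in here for what the .NET and Java sides each implemented with their own serializers, and the element and field names are invented for illustration; the point is only that both sides agree on one XML format up front and map their native objects to and from it.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

# A complex type shared between the two platforms. Field names are
# hypothetical; in the real system the format was fixed by a central XSD.
@dataclass
class Application:
    applicant: str
    license_type: str
    fee: int  # an integer amount avoids float-mapping issues across platforms

def to_xml(app: Application) -> str:
    """Serialize to the agreed XML format (one platform's side of the exchange)."""
    root = ET.Element("Application")
    ET.SubElement(root, "Applicant").text = app.applicant
    ET.SubElement(root, "LicenseType").text = app.license_type
    ET.SubElement(root, "Fee").text = str(app.fee)
    return ET.tostring(root, encoding="unicode")

def from_xml(doc: str) -> Application:
    """Parse the agreed XML format back (the other platform's side)."""
    root = ET.fromstring(doc)
    return Application(
        applicant=root.findtext("Applicant"),
        license_type=root.findtext("LicenseType"),
        fee=int(root.findtext("Fee")),
    )

# Round trip: both platforms recover the same values from the shared format.
original = Application("Hotel Example LLC", "Restaurant", 25000)
assert from_xml(to_xml(original)) == original
```

Because each side generates its serialization code from the same schema, neither needs to know anything about the other's runtime types, which is precisely what makes the Java presentation tier and the .NET business tier interchangeable behind the XML contract.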
THE DCRA WORKFLOW
AS-IS Business Process
In the initial stages of the project, SDDM Technology went beyond observing the license issuance process: SDDM staff members participated in the AS-IS license issuance process. An overview of some of the various business processes for issuing, renewing and exchanging license data is given below.
Issue New License
• The client (business owner) enters the One-Stop Center (License Issuance Center).
• The client interacts with a DCRA CSR (Customer Service Representative) and inquires about the particular license(s) he is interested in obtaining.
• The client is handed an information (application) packet, and the documents needed to obtain a license are explained to him in detail.
• The client leaves, obtains the necessary documents and fills out the application.
• The client returns with all of the necessary documents completed. These documents include license type approvals from different agencies, for example the Department of Health, the Metropolitan Police Department, and the Department of Public Works.
• The client interacts with a DCRA CSR, who reviews the application package for completeness.
• The CSR logs the client in the IO system and hands the client a ticket number.
• The client is called to a different CSR for application processing.
• The CSR records the information in an Access database with a PowerBuilder front-end (the database was not designed with data relationships, i.e. one-to-one, one-to-many, or many-to-many; hence all data is saved into a single table).
• The CSR looks at a paper-based fee chart to determine and calculate the fee.
• The CSR adds the fee to the application.
• The CSR prints the invoice.
• The client takes the invoice to the cashier (associated with a different agency) and pays the invoice amount (partial payment is not allowed, therefore he pays the bill in full).
• The client returns to the One-Stop Center, waits for the CSR to finish with the current client, and hands the stamped paid invoice to the CSR that processed the application.
• The CSR reviews the stamped receipt.
• The CSR prints the license and gives it to the client.
Renewal License Process
The renewal process for the BBL can be initiated by several different routes that converge into the same process. The difference between the routes is how the client responds (walk-in, mail-in or lockbox) to the renewal letter.
• The Task Leader contacts the Information Technology (IT) department and requests that renewal bills be generated for a particular time period. Typically, they are sent out sixty (60) days prior to the renewal date.
• The client responds by sending the payment, either by walking in, sending the payment through the mail or paying through the bank (lockbox). If updated documents are needed, the client sends the documents through the mail or walks them in.
• The CSR reviews (if necessary) the updated documents for completeness. If they are not complete, the CSR types or fills out a deficiency letter by hand and sends the documents back to the client; however, the CSR keeps the payment unless it is insufficient.
• The CSR records how the payment was received, using the modification module of the PowerBuilder application.
• The CSR calculates the fee, using a paper chart to identify the correct amount.
• The CSR records the amount of the payment and, using the update module, adds the date the payment was received and how it came in (walk-in, mail-in or lockbox).
• The CSR changes the status to paid in the system.
• The CSR prints the license.
• The CSR mails the license to the client at the end of the day or gives it to the client (walk-in).
External Agency Process
The AS-IS process for external agencies is bi-directional. In case one, the external agency wants information from the DCRA. In case two, the DCRA wants license, review, or inspection information from that agency.
Case One
• The external agency contacts the IT department of the DCRA and submits a request for information about a particular business or business category(s).
• The request is logged into the report request system by the IT CSR.
• The IT CSR gives a paper copy of the request to the IT Report CSR (a single staff member dedicated to generating reports).
• The IT Report CSR, using Visual Basic for Applications interfaces, connects to the Access database described in the previous processes and generates the report.
• The requesting agency is then sent a copy of the data in .mdb or .xls format on a floppy or zip disk, or a paper report is generated and mailed to the agency.
Case Two
• The DCRA management contacts the external agency.
• The external agency sends an .xls file or paper-based report to the requesting manager through email.
TO-BE BUSINESS PROCESS
After reviewing the processes described above, it is obvious that many of them could be automated and, after further review, that many of them could and would be decoupled into individual services, many of which can be used throughout the District for fee calculation, data exchange, license issuance, departmental reviews and departmental inspections. In short, WS-SOA was ideal for this particular enterprise. An overview of the services is provided using the Unified Modeling Language (UML) to define the various systems, subsystems, services and workflows. SDDM Technology relied heavily on UML Use Cases, Use Case Diagrams, State Charts and Activity Diagrams for workflow modeling. The following Activity Diagram provides a general overview of the generic application process with interaction between the CSR, Presentation Layer and Web Service.
The following use case diagram provides a general overview of the basic services developed.
IMPLEMENTATION In short, granularized independent business services will allow business logic reuse for other DC agencies and future applications at the DCRA.
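The kind of granular, independently reusable service described above can be sketched as follows. This is not the DCRA code: Python stands in for the VB.NET web methods, and the service names, fee amounts and invoice fields are invented for illustration. Each function is separately callable, and the issuance workflow simply composes them.

```python
# Hypothetical fee schedule replacing the AS-IS paper-based fee chart.
FEE_SCHEDULE = {"Restaurant": 500, "Vending Machine": 150, "Massage Parlor": 300}

def calculate_fee(license_types):
    """Granular service: one BBL fee covering all license types at a premise."""
    return sum(FEE_SCHEDULE[t] for t in license_types)

def add_application(applicant, license_types):
    """Granular service: record a new application and return its invoice data."""
    return {"applicant": applicant,
            "license_types": list(license_types),
            "amount_due": calculate_fee(license_types),
            "status": "pending"}

def record_payment(application, amount):
    """Granular service: enforces the rule that partial payment is not allowed."""
    if amount < application["amount_due"]:
        raise ValueError("partial payment is not allowed")
    application["status"] = "paid"
    return application

# The issuance workflow composes the services; another agency could reuse
# calculate_fee or record_payment on its own.
app = add_application("Hotel Example LLC", ["Restaurant", "Vending Machine"])
app = record_payment(app, 650)
assert app["status"] == "paid"
```

The design point is that the workflow owns the ordering of steps while each business rule lives in exactly one small service, which is what makes the logic reusable by other DC agencies.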
In fact, there are 20 web methods (services) defined for the DCRA WS. A snapshot of some of the services is provided below.
An example of the AddApplication service signature is provided below and shows the calling interface with the complex XML data structure. Once the AddApplication process has completed, the AddApplication response is returned to the calling interface.

AddApplication Web Method (Service) Signature
Adds new application information to the system.

Test
The test form is only available for methods with primitive types or arrays of primitive types as parameters.

SOAP
The following is a sample SOAP request and response. The placeholders shown need to be replaced with actual values.

POST /BBLFINAL/BBL.asmx HTTP/1.1
Host: localhost
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://tempuri.org/BBLFINAL/BBl/AddApplication"

[SOAP request envelope: the AddApplication element, whose application fields appear as a long run of string, int and dateTime placeholders]

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

[SOAP response envelope: the AddApplicationResponse element with a single string result placeholder]
Once the AddApplication method is invoked and the placeholders are filled with actual data, it acts as an agent of the application and is used to communicate and transfer data to the appropriate class. In this case, the AddApplication service of the WS is called and invokes the AddBaseData class, which contains the method addNewApplication.

Public Function AddApplication(ByVal oAddApp As AddBaseData) As String
    Try
        Return oAddApp.addNewApplication(oAddApp)
    Catch ex As Exception
        HandleException(ex)
    Finally
    End Try
End Function

If data access is needed, the data access layer is invoked and communication between the data access layer and the data store is initiated. This communication remains open until the data access layer request is completed. The data access layer then responds to the calling application with the required data.
DATA ACCESS LAYER
SDDM Technology created the data access layer using the Oracle Data Provider for .NET (ODP.NET) and VB.NET. The ODP.NET provider was capable of handling the various data types inherent to .NET; SDDM Technology was particularly impressed with ODP.NET's ability to handle date/timestamp data types. In addition, ODP.NET provided better performance (speed) than the Oracle managed provider for the .NET 1.1 Framework. Each table was wrapped in a class, each field was defined as a property of the given class, and the respective methods, i.e. add, update, delete, and retrieve, were associated with each class. The individual class properties were used to validate basic business rules such as data type checking, null value validation, and field size checking.
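The table-per-class pattern with validating properties can be sketched minimally as follows. Python properties stand in for the VB.NET ones, and the class name, field name and column width are illustrative assumptions rather than the actual DCRA schema.

```python
class BBLLicensePeriod:
    """Wraps one row of a license-period table; the property setter enforces
    the basic rules described above: null, data type and field-size checks."""

    MAX_USER_ID_LEN = 30  # assumed column width for illustration

    def __init__(self):
        self._issue_user_id = None

    @property
    def issue_user_id(self):
        return self._issue_user_id

    @issue_user_id.setter
    def issue_user_id(self, value):
        if value is None:
            raise ValueError("ISSUE_USER_ID may not be null")      # null check
        if not isinstance(value, str):
            raise TypeError("ISSUE_USER_ID must be a string")      # type check
        if len(value) > self.MAX_USER_ID_LEN:
            raise ValueError("ISSUE_USER_ID exceeds field size")   # size check
        self._issue_user_id = value

row = BBLLicensePeriod()
row.issue_user_id = "jdoe"
assert row.issue_user_id == "jdoe"
```

Validating at the property level means a bad value is rejected before any round trip to the database, which keeps the stored procedures free of defensive checks.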
Oracle stored procedures were created to increase data access performance. Since the business logic was handled by the WS, the stored procedures were created for the basic tasks of adding, updating, deleting, and retrieving data. These individual procedures were grouped into four separate Oracle packages: pac_AddData, pac_UpdateData, pac_DeleteData, and pac_GetData. In addition, a package was created for generating and retrieving sequence values from the database.
Example code snippet of the basic package to add data for the license periods of a license:

drop package pac_AddData;

CREATE OR REPLACE PACKAGE pac_AddData AS
    PROCEDURE addBBLLicPeriodsData(
        v_BBLAppID IN NUMBER,
        v_UpdateSeq IN NUMBER,
        v_IsCurrent IN VARCHAR2,
        v_IssueTMS IN DATE,
        v_IssueUserID IN VARCHAR2,
        v_EffectiveDate IN DATE,
        v_ExpiryDate IN DATE);
END pac_AddData;

CREATE OR REPLACE PACKAGE BODY pac_AddData IS
    --Begin adding data to BBL_License_Periods table
    PROCEDURE addBBLLicPeriodsData(
        v_BBLAppID IN NUMBER,
        v_UpdateSeq IN NUMBER,
        v_IsCurrent IN VARCHAR2,
        v_IssueTMS IN DATE,
        v_IssueUserID IN VARCHAR2,
        v_EffectiveDate IN DATE,
        v_ExpiryDate IN DATE) IS
    BEGIN
        insert into BBL_License_Periods
            (BBL_APPLICATION_ID, UPDATE_SEQUENCE, IS_CURRENT, ISSUE_TMS,
             ISSUE_USER_ID, EFFECTIVE_DATE, EXPIRY_DATE)
        values
            (v_BBLAppID, v_UpdateSeq, v_IsCurrent, v_IssueTMS,
             v_IssueUserID, v_EffectiveDate, v_ExpiryDate);
    END addBBLLicPeriodsData;
END pac_AddData;
CONCLUSION
In summary, adopting and implementing the Web Service-Oriented Architecture approach accomplished the goals set out by the DCRA. In the future, extensibility and scalability will be easier, since many of the services are reusable in the areas of license and permit issuance. The ability to exchange data among the various District agencies and the public was greatly improved, reducing the total resource hours required for information exchange. The ability to strip out the processing (workflow) logic and implement it as independent, granular web methods or services that are easily accessible to the DCRA and other agencies, both local and federal, provided a tremendous improvement in workflow and business processing for the District of Columbia. WS-SOA allows business people in the DC government to consider using an existing application in a new way or offering it to a partner in a new way, potentially increasing transactions between agencies.
Workflow and Business Rules—a Common Approach
Heinz Lienhard and Urs-Martin Künzi, ivyTeam-SORECO Group, Switzerland

A BPM approach is proposed for addressing processes, Web Services and the use of business rules by processes, starting from graphical models. Transparent, easy-to-manage and mathematically sound solutions are obtained in a coherent way.
In the Workflow Handbook 2003 chapter "Business Processes and Business Rules (BR): Business Agility Becomes Real," Jean Faget, Mike Marin et al. stress the point that "processes are not policies (business rules) and policies are not processes" and hence should be treated in different ways to obtain the necessary "separation of concerns." On the other hand, access to the BRs by the processes is vital; hence they recommend building bridges between BPM and BRM (Business Rule Management), i.e. an integration of the two systems is proposed.
The new hype about the declarative way to handle rules reminds the authors of the logic programming (or declarative programming) craze some 20 years ago. A lot was promised then and very little achieved. Even the battle cry was the same as today: "WHAT, not HOW..."
True, processes do have to satisfy business rules. But business rules themselves can be optimally managed via a management process or workflow. Using the appropriate workflow or BPM approach, one can naturally take into account who is authorized to set up or change which rules (or policies), when and how. This requires certain features in the workflow system: for depositing and accessing business rules in a database (repository), and for using these rules to compute decision attribute values. And from within the actual business process one must be able to call on these rules or policies when necessary.
And what about the above-mentioned "separation of concerns?" As described in an earlier Workflow Handbook1, the integration of workflow and Web services in a common platform offers, besides many other benefits, an elegant solution for separating the business rule part from the actual business process. It can be shown that, starting with the business processes and using a modern BPM/workflow approach, one can obtain solutions that satisfy the legitimate demands of the BR people without the need for yet another tool. Having to learn a different tool to manage business rules, and to properly integrate it with the BPM engine, does not make life easier. As an alternative we propose an elegant way to address processes, separation of concerns, and the use of business rules starting from graphical process models. Transparent, easy to manage and mathematically sound solutions are obtained; in
1 Workflow Handbook 2003, "Web Services and Workflow—a Unified Approach"
addition, rules about rules (metarules) can be used together with a rule management workflow system, just as business rules are used for business workflow.

Business Processes or Business Rules?
In the last couple of years, business seems to have been under BPM's spell. But we had hardly finished reading about the next 50 years in "The Rise and Rise of Business Process Management"2 when—as seems inevitable in the IT world—a new hype gains momentum long before the 50 years are over. Having just started to ride "the third wave," we are now being told: "…processes are not that simple. In fact they are quite complex and therefore quite difficult to change." (Ronald G. Ross3). And—as a conclusion—we are asked to concentrate on the business rule approach, the new gospel now being preached. Interestingly, very similar claims are being made on both sides. The Business Rule Approach promises4:
• "Increased speed of implementation. It can take a long time to change some computerized applications. A business rules engine can permit new business rules to be implemented immediately. This increases organizational agility."
• "Management of diversity. Many enterprises have operations that are increasingly diverse, or even customized. They find that no one set of business rules meets any particular situation…Organizations are looking for ways where new sets of rules can quickly be implemented for specific, perhaps even transient, situations. Business rules engines can meet these requirements. They permit organizations to expand their portfolio of operations and quickly take advantage of new opportunities."
• etc.
And the BPM people tell us2:
• "BPM provides enhanced business agility…"
• "BPM provides a direct path from process design to a system for implementing the process. It's not so much 'rapid application development'; instead, it's removing application development from the business cycle."
• etc.
Of course, you should buy a Business Rule Engine before you have even come to grips with Business Process Management and the necessary Workflow Engine. Worse, in order to make real use of both systems you will have to integrate the two engines. Good luck!
2 Business Process Management—The Third Wave by H. Smith and P. Fingar, 2003
3 Principles of the Business Rule Approach by Ronald G. Ross, 2003
4 How to Build a Business Rule Engine by Malcolm Chisholm, 2004
After many business processes have been successfully implemented as workflow using modern BPM, we are very confident that this approach is here to stay. By starting from graphical process models, set up together with the business people concerned, successful solutions have been obtained in a very efficient way. BPM has grown out of workflow management systems that have proven their usefulness over many years.
Rule systems also have quite a history behind them. A couple of decades ago, "rule-based systems," also known as "expert systems," were the solution to everything from computerized medical diagnostics to running your business successfully. And normal programming was predicted to soon be displaced by logic programming, allowing problems to be solved in a strictly declarative way: "WHAT, not HOW." Unfulfilled claims went so far that at one point expert systems—and artificial intelligence in general—fell into utter disgrace. Apparently, enough years have passed to get the bandwagon rolling again. In a white paper addressed to CEOs, CIOs etc., we read: "The biggest challenges facing business and IT in the 21st century are Change and Complexity. Artificial Intelligence and Business Rules are the answer" (from BizRules.com). No comment.
It seems the answer to the question in the title above would be processes (or the derived workflow). In many cases this really is the answer: the rules necessary to decide how to proceed in a workflow are naturally integrated into the process model, i.e. they are part of it. Modern BPM tools allow processes to be adapted in a simple and efficient way. Hence, should some rules change over time, this does not pose a difficult problem.
SEPARATION OF CONCERNS
But there are other cases where larger sets of rules are important, rules that may change often and have to be managed by people other than those concerned with the actual business processes. This seems to be especially true for insurance companies. In such cases, many of the rules may directly influence the flow in the business processes, but it may be a decisive advantage to separate the concerns of the processes from those regarding the business rules. Therefore, on the one hand these rules must be directly accessible by the business processes (they should in a way be part of them); on the other hand they are preferably managed by specially authorized people (not those directly involved in the actual business processes).
It turns out that the natural way to manage these rules, i.e. to modify them or to add new ones, is to set up appropriate processes implemented as workflow or process-based Web services (see ref. 1 above). By assigning the corresponding roles to the people concerned, we can easily define who is to do what, when and where in the business rule management process (see examples below). And we do not need to buy and introduce a separate Business Rule Engine: we use what modern BPM has to offer. This has many significant advantages:
• no need to train people to master yet another tool set
• the look and feel of the rule system functionality is the same as the BPM look and feel
• the form design used for BPM can serve to input and edit business rules (see example Fig. 4)
• the rule management processes can be monitored and audited in the same way the business processes are
• the rule system (i.e. the rule management processes) can easily be adapted via graphical process models (BPM)
We are convinced we can get the most mileage out of a BPM approach by extracting the actual business rules into dedicated processes that evaluate and manage them; hence the above title should read: Processes with Business Rules. Modern BPM tools allow us to go from a process model directly to the workflow implementation (see ref.1); we therefore use process and workflow more or less interchangeably in the context of this paper.
CATEGORIES OF BUSINESS RULES

It seems that there are more classification schemes of business rules around than rules ever implemented. We find "presentation rules," "action assertions," "producer," "enabler rules," "process trigger" etc. (see Ross5 or businessrulesgroup.org). In the religious approach, where everything is Rules, this is hardly surprising. But business processes do play a fundamental role in businesses and, since they technically belong to the branch of system science known as "Discrete Event Dynamic Systems" (precisely those systems that describe what activities take place under what circumstances), we actually have powerful and well-proven means to control the behavior of a business system in a mathematically sound way. Hence, one does not need rules to express these things: simple and clear conditions within the workflow usually suffice.

One is asking for trouble trying to do it all with rules: invariably one winds up with immense heaps of rules that are either trivial or likely to cause problems in a—however grandiose—inference engine. Even when this is avoided, one often ends up asking questions and checking rules where it makes little sense, like "given birth to how many children; how many miscarriages suffered etc.," when in effect the applicant (e.g. for an insurance policy) turns out to be a boy. It is better if the process (i.e. workflow) specifies what has to happen at a certain point, and all the "rules" have to do is spell out under what (usually simple and precise) conditions one or the other action has to occur. All that is usually needed are rules that evaluate to true or false, or possibly rules yielding a numerical value upon which the next step in the process is decided.
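To make the last point concrete, here is a minimal sketch (not from the chapter; all names and thresholds are hypothetical) of rules reduced to simple, precise conditions whose boolean or numeric results drive the workflow's next step:

```python
# Hypothetical illustration: the process specifies WHAT happens next;
# the rules only supply simple true/false or numeric conditions.

def is_minor(applicant):
    """Boolean rule: evaluates to true or false."""
    return applicant["age"] < 18

def risk_score(applicant):
    """Numeric rule: the workflow decides on the returned value."""
    return (2 if applicant["age"] > 60 else 0) + (3 if applicant["smoker"] else 0)

def next_step(applicant):
    """Workflow decision element: routes on the rule results."""
    if is_minor(applicant):
        return "route_to_guardian_form"
    return "manual_review" if risk_score(applicant) >= 3 else "auto_accept"

print(next_step({"age": 35, "smoker": False}))  # auto_accept
```

The rules stay trivial on purpose; the control flow lives in the process model, not in the rule base.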
PROCESSES COME FIRST

Graphical process models (especially if they can be simulated and animated) help a great deal in making businesses transparent and much easier to understand. This is not hype but the experience of people involved in such projects. As mentioned above, new BPM approaches support this—and much more: if the people involved in a process are content with what they see in the model, the corresponding workflow is readily deployed. Piling up rules (as some advocate) makes the business neither more transparent nor more manageable nor easier to support by IT. Although many rules may actually be
5 "Principles of the Business Rule Approach" by Ronald G. Ross
involved in a business, at a certain point in a process only one or a few rules may be necessary.
BACK TO THE SEPARATION OF CONCERNS

Nevertheless, the BPM approach can be further improved by heeding the advice of the "rules people": separate from the actual business process those rules that are likely to change often and/or have to be managed independently of the business workflow. But, as mentioned earlier, workflow can be just the way to evaluate and manage a rule base without having to resort to another tool. An adequate BPM/Workflow tool may be all that is really needed. By "adequate" we mean the ability to set up a workflow combined with Web service calls and to model and deploy process-based Web services as described in the Workflow Handbook 20036. All we have to do is create another workflow application, separated from the actual business processes, that provides the necessary Web service interfaces (see the mentioned chapter in the Workflow Handbook 2003). The resulting BR evaluation and management application may reside on a different server anywhere in the world. Obviously, this approach might just as well be used to extract certain parts of a business workflow that we want to run and manage separately from the main process. Web services in conjunction with workflow allow a very elegant solution to marry business processes with business rules.
A SIMPLE EXAMPLE

In Fig.1 we give two graphical process models, one exhibiting part of a business process, the other showing a possible process for the evaluation of the business rules contained in the rule catalog (DB). These process models are built with standard process elements (see ref.6) from an element palette (e.g. with Decision, e-mail, DB-Step elements etc.). By double-clicking on these elements they can be configured; i.e. masks are provided to insert parameters, or complete assistants configure elements like DB access, Web service calls etc. without any programming. With the configured elements the models actually represent the workflow, which is simply obtained by deployment (upload) of the models onto the corresponding application server. The process on the right side of Fig.1 uses special Start and End elements. From this model a Web service is automatically generated during deployment (e.g. generation of the Web Service Definition in WSDL).

On the left side of Fig.1 part of the actual business process is shown: a customer (customer number 1002) is ordering some items for a total price of $2000 (attribute PriceOrder). In a database, information about the specific customer is accessed, like "Age," "CustStatus" (gold in this case) etc. With this information a first rule (CustRule2) is invoked and evaluated via the mentioned Web service. Since the answer to the question whether a "young customer" is involved is returned as "no," the process proceeds to the next step: a rule call to obtain the rebate factor for a customer buying for more than $1000 and having the customer status "gold." The corresponding IF-rule (CustRule1) is accessed in the rule catalog and evaluated for the given terms ($2000, "gold"),
6 Workflow Handbook 2003, "Web Services and Workflow—a Unified Approach"
and the business process moves on to modify the price according to the rebate ($1600).
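As a hedged sketch, the two rules of Fig.1 might look as follows behind the rule-catalog Web service. The 0.2 rebate factor is inferred from the $2000 to $1600 figures in the text; the "young customer" age threshold and the customer's age are assumptions:

```python
# Hypothetical implementations of CustRule2 and CustRule1 from Fig.1.

def cust_rule_2(age):
    """CustRule2: is this a 'young customer'? (age threshold assumed)"""
    return age < 25

def cust_rule_1(price_order, cust_status):
    """CustRule1 (IF-rule): rebate factor for order value and status.
    A 0.2 factor reproduces the $2000 -> $1600 example; other branches
    are assumptions."""
    if price_order > 1000 and cust_status == "gold":
        return 0.2
    return 0.0

# The business process for customer 1002: not young, gold status, $2000.
price = 2000
if not cust_rule_2(age=47):                      # "no" in the example
    price = price * (1 - cust_rule_1(price, "gold"))
print(price)  # 1600.0
```

In the actual architecture each rule call would be a Web service invocation into the separate rule application rather than a local function call.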
Figure 1 Actual business process on the left; Web service process for rule evaluation on the right. Both are shown in animation mode: the data object flowing through the process (dot) exhibits some of the attribute values.

We have shown how a business workflow may use (evaluate) business rules which are separated from it in a special workflow application through an appropriate Web service interface. In the modeling phase (or when a business process has to be modified) one needs access to the rule catalog from the modeling environment to select and assign rules to particular process elements. In the above example this is done for the decision element "is young customer?" But such rule assignments will also be needed for other elements, like an event start or a trigger element. This rule access can be done via a Web service interface similar to the one seen in Fig.1. This way one might read out the catalog and present it to the process designer for rule selection, i.e. to identify the name or ID of the rule required at a certain point in the process or workflow respectively.
RULE MANAGEMENT WORKFLOW

So far we have shown how the business workflow can have a business rule evaluated that is separated from it. Now we turn to the rule management process (workflow). The same way we model the business process—and turn it into an operating workflow—we may implement the workflow to set up and manage business rules and their catalog. Fig.2 presents part of such a process.
Figure 2 Rule Management. Left: snapshot of the simulated process; right: excerpts of browser-based interaction pages; above, the display of rules; below, the approval page for the BR manager.

The process model is shown at the moment the data object (dot) is about to enter the Page Element "Approval." The browser-based interaction is on the right side of Fig.2. The user—here the BR manager, after having logged in—has activated his assigned task to check the proposed new rule. Now he has to decide whether to accept the rule (click on the OK button) or to refuse it and fill in a comment that will be mailed to the person soliciting the new rule. Only on clicking OK will the process proceed to enter the rule into the catalog (see process element "Enter new Rule into DB"). In case the rule is only modified, no approval from the BR manager is required. Here "BR manager" actually indicates a role (in the classical workflow sense) which some users may have.

Rule Management Processes with Metarules... Metarules? Well, we do not have to stop at what we did in the example above. For the rule management process given in Fig.2 one may ask whether the decision to have new rules approved by the BR manager (or somebody else) should itself be made with the help of another rule; i.e. a rule about rules: a metarule. The evaluation and management of such metarules can be done following exactly the same approach we proposed for the business rules: within a (meta) workflow handling the metarules. In reality the business rule catalog will consist of various classes of rules. A metarule may spell out for which classes of rule what kind of role has to be
selected in the business rule management workflow (e.g. the one shown in Fig.2). Fig.3 exhibits such a scenario: before the task of checking a new rule for approval is assigned, the metarule is accessed (called via the Web service interface) to determine which role or person is responsible to decide in the given case. In principle the trick could be used again, leading to metametarules and so on. But the iteration has to stop somewhere—in the end with the ultimate authority that can decide...

...or performance-dependent rules. Another interesting aspect of business and rule management interaction is the capability to let rules depend on specific information from business workflow monitoring. BPM usually offers such information for auditing or business workflow improvements. As an example, a pricing rule may depend on process information about consumer reactions and be adapted accordingly. Now we need a Web service call in the opposite direction: from rule management to the business workflow. This of course requires a Web service interface in the latter.
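A metarule of the kind just described can be sketched very simply. The rule classes and role names below are hypothetical, not taken from the chapter:

```python
# Sketch of a metarule: a rule about rules, deciding which role must
# approve a change to the business rule catalog. Classes and roles here
# are assumptions for illustration.

META_RULE = {
    "pricing":      "BR manager",
    "underwriting": "chief underwriter",
}

def approver_for(rule_class, is_new_rule):
    """Metarule: new rules need approval by a class-specific role;
    mere modifications (as in the example above) need none."""
    if not is_new_rule:
        return None
    return META_RULE.get(rule_class, "BR manager")  # fallback authority

print(approver_for("pricing", is_new_rule=True))   # BR manager
print(approver_for("pricing", is_new_rule=False))  # None
```

In the Fig.3 scenario this lookup would itself sit behind a Web service interface, called by the rule management workflow before the approval task is assigned.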
Figure 3 Rule Management with Metarule.
RULES IN NATURAL ENGLISH

Today one would expect general acceptance of the fact that machines do not speak or understand natural English, but some in the religious BR camp seem to ignore this. While it may be necessary for the business audience to have some additional explanations in English, that does not change
the fact that computer systems—in the end—understand only the formal, mathematically sound rule formulation. The "Business Rules Manifesto"7 clearly states: "5.3. Formal logics, such as predicate logic, are fundamental to well-formed expressions of rules in business terms, as well as to the technologies that implement business rules." Actually, it may be counter-productive to use some pseudo-English, because humans and machines are likely to understand different things. What can be and is being done successfully is supporting rule formulation with templates; i.e. one may use well-defined natural-language terms that are synthesized into viable expressions. Since BPM tools usually offer quite powerful form design capabilities, such rule support might easily and efficiently be implemented using BPM. Just to give a flavor of the possibilities, Fig.4 (left side) shows a little process to build up Boolean expressions with the help of a template (right side), which is nothing but a browser-based workflow form. By clicking on Exit in the form, the Boolean rule is stored into the rule catalog.
Figure 4 Build Boolean Expressions with Template.
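The template idea of Fig.4 can be sketched as follows. The vocabulary of terms, the operator set and the textual rule format are assumptions; the point is that the form only offers well-defined terms, which are then synthesized into a viable Boolean expression:

```python
# Sketch of template-based rule formulation: the form offers only
# well-defined terms and operators, synthesized into a Boolean rule.
# Vocabulary and output format are hypothetical.

TERMS = {"Age", "PriceOrder", "CustStatus"}
OPERATORS = {"<", ">", "=", "AND", "OR"}

def build_rule(clauses, connective="AND"):
    """Each clause is (term, operator, value), e.g. ('Age', '<', 25)."""
    assert connective in OPERATORS
    parts = []
    for term, op, value in clauses:
        assert term in TERMS and op in OPERATORS  # reject free-form input
        parts.append(f"({term} {op} {value!r})")
    return f" {connective} ".join(parts)

rule = build_rule([("PriceOrder", ">", 1000), ("CustStatus", "=", "gold")])
print(rule)  # (PriceOrder > 1000) AND (CustStatus = 'gold')
```

Because only catalog terms and operators are accepted, humans and machines read the same unambiguous expression, avoiding the pseudo-English trap described above.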
RULE INFERENCE

Up to now we have shown how to use explicit rules to arrive at decisions within the BPM environment. But sometimes we want answers that need inference from a whole rule base, i.e. to infer new facts from given facts and a number of rules. In logic programming this is done via the so-called resolution
7 http://www.businessrulesgroup.org/brmanifesto.htm
algorithm8 (e.g. implemented in Prolog). Such an algorithm can easily be embedded in the rule management workflow. Fig.5 gives an example of using resolution in the BPM context. On the right side the relevant rule base is shown; it answers questions about the authorization of people to carry out specific tasks or actions, which cannot always be decided by a single rule. In this case too, the rule base is created and modified by a process.
Figure 5 Process with an inference-based rule system.

The rule base contains three inference rules and a number of facts. A concrete example would be the question "does Dave have the authorization to sign contracts?" (formally: permission(dave, SignContract)). The answer will be yes, since he is one of Bob's substitutes and Bob has the explicit permission to sign contracts.
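The Fig.5 query can be reproduced with a minimal backward-chaining sketch. The fact encoding is an assumption; a real system would use a proper resolution engine such as Prolog's:

```python
# Minimal backward-chaining sketch of the Fig.5 authorization example.
# FACTS holds ground facts as tuples; the single inference rule says:
# permission(X, A) holds if X substitutes for some Y with permission(Y, A).

FACTS = {("permission", "bob", "sign_contract"),
         ("substitute", "dave", "bob")}

def permitted(person, action, depth=5):
    """Answer permission(person, action) by fact lookup or by following
    substitute links (depth-limited to avoid cycles)."""
    if depth == 0:
        return False
    if ("permission", person, action) in FACTS:
        return True
    return any(permitted(boss, action, depth - 1)
               for kind, sub, boss in FACTS
               if kind == "substitute" and sub == person)

print(permitted("dave", "sign_contract"))  # True: Dave substitutes for Bob
```

Embedded in the rule management workflow, such a query would be exposed as another Web service call, exactly like the single-rule evaluations shown earlier.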
WHAT ARE THE BENEFITS?

A Common Approach

There is no way around BPM—why not fully use what modern BPM has to offer? As has been shown before, the marriage of workflow and Web services brings decisive advantages (see e.g. ref.1): these services become easy to set up and to coordinate (or orchestrate) without the need for additional tools. What we propose is a ménage à trois, i.e. a further marriage with business rule management—also done with BPM. This way we have:
• business processes and a business rule management system that become transparent, and whose interaction is easier to understand
8 Logic Programming. J. W. Lloyd. Springer-Verlag, 1984.
• a common approach using a single tool: same look and feel, similar functionality
• straight-forward data exchange between the business process and the business rule world
• metarules in the context of the rule management process as a natural addition
• rules that can be adapted automatically as a consequence of the monitored business process data
• very high flexibility for processes and rules
• rule inference that is embedded within the common approach.
OUTLOOK
Practical experience with Business Rule Management within BPM will have a beneficial influence on the further development of BPM technology. What is already possible now will become very easy in the future, e.g. fully integrated calls to rule management from process elements (like "event starts," "process triggers," "decisions" (gateways in BPMN) etc.). Rule inference, too, may become a natural part of these systems. Starting from graphical models has proven very attractive for all parties involved—a picture is worth a thousand words...
State of BPM Adoption in Asia

Ken Loke, Bizmann System (S) Pte Ltd., and Dr. Pallab Saha, Institute of Systems Science, National University of Singapore

EXECUTIVE SUMMARY

Business outlook in Asia has been in a state of flux since the 1997 ASEAN financial crisis. Terrorism, outbreaks of epidemics and the Iraq war have further clouded economic projections. Businesses can no longer operate in the ways they used to. As the industry braces itself for greater competition, shorter product lifecycles, demanding consumers, lower margins and emerging markets, collaborative business practices are firmly establishing themselves as the way forward for successful and sustainable business operations.

The need to automate businesses has also changed drastically. There is a clear indication of a rapid change in the role IT has played in automating businesses. Companies have gotten smarter about implementing IT. An extract from an article titled "Radical Promise of BPM" by David Longworth, 2003: "Mark Evans is a CTO who takes a no-nonsense view of his role: 'We are the people in the trenches. We have to get things done. We wanted to use process management to extend business processes and fill in what I call the white space,' he explains. 'ERP systems cover 55-65 percent of what you need, but you have manual processes and integrations. We wanted to take the processes and co-ordinate the architecture from cradle to grave.'"

Building efficient business processes has always been a major area of focus for business architects. However, enterprises are starting to experience the limits of accruing business benefits from processes within their own boundaries. This is leading enterprises to extend processes beyond their own boundaries to involve business and trading partners. Obviously, this creates enormous pressure to embrace changes in the way businesses operate. Businesses need to be operated efficiently. Business opportunities need to be tracked and capitalized on.
IT automation goes beyond answering transactional needs. Transactional systems merely take data input and process it into comprehensive information output. However, businesses need far more than mere transactions. Business owners want to automate business operations. They want to optimize input and output, and reduce the cost of operations. Business owners constantly want to improve the way they run their businesses. From an IT perspective, this means the capability to provide high agility to accommodate changing business needs.

Companies are aware that their processes are fragmented and "siloed." The drive for IT solutions to have workflow functionality is growing very strong. Workflow is the only technology that allows high configurability of work processes and provides seamless integration to back-end systems. Business Process Management (BPM), which aims to transform an enterprise's process management capability, is the solution that organizations are adopting to address some of the aforementioned issues.
EFFECT OF TECHNOLOGY ON BUSINESS APPLICATIONS

Many organizations have diverse understandings of workflow and its applications. From the word "workflow" itself, everyone knows that all processes within a company, small or large, need to flow across all operations. On the other hand, it is common to find organizations running their operations in a silo fashion, so that workflows become unconnected, disintegrated processes. For instance, if a company finds a problem in a specific department, the common reaction is to deal with that specific departmental operation, secluding it from other indirectly and even directly related operational function groups. The next step to resolution is then to confine the problem and find a quick cure for it. In this way, although the problem may be resolved, another element of the silo effect has just been built.

Although companies might have invested in ERP systems, the approach to functional operations is confined to what the "modules" can do. ERP modules are confined to automating the respective functional groups they were designed for, even though these modules are connected with one another. Applications such as these are transactional software, and this transactional ability answers only part of the challenges a company faces. The challenge is indeed manifold. It is not only about addressing a functional group, as functional groups are not disparate: they connect with one another directly and indirectly. Achieving high efficiency in one functional group may not yield the same efficiency in the others. Hence, this phenomenon deepens the silo effect of "workflow" inherent within most organizations, even with computerization.
THE CHALLENGE

For most companies, having a back-end ERP or sub-modules of an ERP is common. These systems are required for transactional purposes, where data are keyed in, processed, analyzed and output in specific designs. It can be as simple as using an accounting system, where orders are entered into the Order Entry module, Delivery Orders are churned out and Invoices issued. Thereafter, reports are generated in the specific formats that best suit an organization. However, these processed data that become information are data that have been "transacted," and the information presented is mere "output" information. In productivity measurement this is called Effectiveness: that is to say, the result of what has happened, with the focus succinctly on the output.

Let us now review the impact of efficiency and effectiveness. These two affect Productivity. Efficiency is a metric of productivity and Effectiveness is a metric of quality. Efficiency measures how fast one can do something. Hence, when a person is preparing contracts, an efficiency metric can be "No. of contracts executed per hour or per man-day." This expresses how efficient (i.e. fast) the person is at preparing contracts. Taking the same example, effectiveness is a quality metric that measures how good a person is at performing a task. Hence, for preparing contracts, an effectiveness metric can be "No. of contracts produced by a
person without mistakes." The relevant difference is between the total number of contracts and the number of contracts created with mistakes that were only uncovered later, since the person who prepared the contracts was not able to detect them during preparation. If no mistakes are uncovered afterwards, the person is 100 percent effective at creating the contracts.

It is obvious then that both Efficiency and Effectiveness are of paramount importance to overall productivity excellence. However, most companies react rather than adopt a proactive approach. When a problem is detected, it always presents itself in the output results, by which time things have already happened. The resolution of the problem usually takes the form of identifying how to improve Effectiveness, which is right, but little attention is paid to Efficiency.
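The two metrics above can be written as simple formulas. The numbers below are purely illustrative:

```python
# Efficiency and Effectiveness as defined in the discussion above.

def efficiency(contracts_done, hours_spent):
    """Productivity metric: how fast, in contracts per hour."""
    return contracts_done / hours_spent

def effectiveness(contracts_done, contracts_with_mistakes):
    """Quality metric: share of contracts prepared without mistakes
    that were only uncovered later."""
    return (contracts_done - contracts_with_mistakes) / contracts_done

print(efficiency(40, 20))     # 2.0 contracts per hour
print(effectiveness(40, 0))   # 1.0 -> 100 percent effective
print(effectiveness(40, 4))   # 0.9
```

A worker can score high on one metric and low on the other, which is why the text insists that both must be managed together.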
ASEAN COMPANIES' PERSPECTIVES

Small and Medium Businesses (SMBs) form a substantial part of businesses within ASEAN. In Singapore alone, SMBs number more than 100,000. They form 90 percent of total business establishments and employ approximately 50 percent of the workforce. The intriguing characteristic of SMBs is that they are a diverse group, a mix of family-run businesses and professionally-managed companies. The highly successful and adaptable SMB companies are a growing portion of the ASEAN business community. This competitive community is increasingly focused on expanding globally.

One of the significant challenges SMBs face is their inability to leverage information technology easily. Firstly, they may not have the economies of scale to invest in enterprise systems. Secondly, they may not have the internal expertise to implement the right solutions. The belief that SMBs simply need standard "SMB solutions" to automate their business, and that is all, is untrue. The reality is that each company operates very differently and no two do the same thing in the same way. Many SMBs realize that for them to compete and expand globally, they need a technology platform that is easily customizable. In fact, many of these companies see a pressing need to be more productive and do things faster, especially if they are to compete with much bigger companies. Furthermore, information is crucial, and real-time performance reports on their operations are essential for them to be successful.

Mr. Lor, the General Manager of Oceancash Pacific Bhd, a Malaysian-based SMB that successfully listed on the local board, commented: "We have invested in a locally developed ERP software. The modules did what we wanted. However, we needed more. We wanted real-time performance measurement of all our operations. We want to optimize our processes and automate them with some level of Business Intelligence.
We want our reports integrated with data that relate one operational function group to another. We want processes routed through multiple hierarchy levels of approval, alerts set on critical tasks, and straight-through processes that run automatically. We think that BPM is the answer to all these."
ADDRESSING THE CHALLENGE: BPM

Unlike most IT technologies, BPM bridges the needs of business users and the understanding of IT professionals. It is probably the only technology that can bring two different persons (one from a business background, the other from IT) to the same table, collaborating on their thoughts and understanding each other's issues. BPM helps to define a corporate strategy and leverage IT in achieving it. BPM even helps people at different levels in an organization to define goals and institute solutions to achieve them. With BPM, IT becomes an important enabling force in helping them, instead of a bottleneck.

The focus of BPM is on managing Efficiency and Effectiveness. The key word is "managing." No one is able to establish a one-time answer for improving his or her work processes. Improving work processes is a continuous effort: processes are constantly improved, redefined and optimized. Managing this requires an infrastructural design that helps companies collect and collate data and present it constructively as information. Only with meaningful, collected information can companies achieve their goals of process optimization.
Above is an example of BPM and BI working together to provide real-time information on business functions. The key is that BPM captures process data and metrics, and BI allows the extraction and collation of that data for business purposes.1

"BPM and Business Intelligence are now widely recognized as tools that provide tangible and immediate benefits in operational efficiency," says Rob Whiter, General Manager of HandySoft Asia Pacific and former General Manager of Hummingbird. "Today's managers are tasked with delivering demonstrable
1 SOXA version 3, HandySoft Global Corporation, Linus Chow, 2005
returns from any investments in this demanding environment. BPM in particular has been proven suitable both for tactical point solutions and as a key instrument for underpinning Enterprise Business Performance Management initiatives. BPM vendors have responded with fast deployment capabilities and richer compliance management capabilities that deliver much stronger ROI and lower-risk projects."

BPM cuts across departments, applications and users, across different industries. It connects internal staff, suppliers, customers and partners. ROI can be seen immediately; accountability and visibility are also achieved. Companies can anticipate what is going on and take the right course of proactive, not reactive, action.
HOW BPM ANSWERS THE CHALLENGES

Business Process Management, or BPM, is the practice of improving the efficiency and effectiveness of an organization by automating its business processes. Many companies have business processes that are unique to their business model. Since these processes tend to change over time as the business responds and reacts to market conditions, BPM helps a company adapt to the new conditions and requirements and maintain a good fit. To implement BPM effectively, organizations must stop focusing exclusively on data and data management, and adopt a process-oriented approach that combines work done by humans and by computers. They have to look beyond the "transactional" part of computerization. In other words, organizations should not rely on application software alone for efficient and effective computerization. The idea of BPM is to bring processes, people and information together.
Above is an example of a collaborative design environment (BizFlow Process Studio and BizFlow Integration Studio) providing a drag-and-drop workflow design interface with drill-down access to robust integration tools. This combined collaborative environment is the trend for successful BPM platforms, addressing all the core phases of managing business processes.2

"Best practices vary not only from industry to industry, but also from country to country and culture to culture. A BPM system is the best solution to align companies' processes to their most optimal business practice," says Linus Chow, International Practice Manager. "Companies' best practices are what separate them from their competition."3

Identifying the business processes is relatively easy; it is easy and obvious to know the pain at specific processes. Breaking down the barriers between business areas, and finding owners for the processes, is difficult. BPM not only involves managing business processes within the enterprise but also real-time integration of a company's processes with those of its suppliers, business partners and customers. BPM involves looking at automation horizontally instead of vertically. Implementing BPM helps organizations to see ROI quickly and, at times, almost immediately.
HOW BPM ANSWERS: CASE STUDY – PT MEDCO E & P INDONESIA

PT Medco E & P Indonesia is a subsidiary of PT Medco Energi International Terbuka. The Group's principal activities are the exploration and production of, and support services for, oil and natural gas and other energy industries, including onshore and offshore drilling. Other activities include the production of methanol and its derivatives, and raising funds by issuing debt securities and marketable securities. Medco Energi is an integrated energy company. Their businesses range from the exploration and production of oil and gas on the upstream side to the petrochemical and power generation industries on the downstream side.

They are using a leading ERP system for their Finance, Plant Maintenance, Human Resource and Materials Management modules. Alongside, but not integrated, is a locally developed e-procurement application which handles vendor evaluation and the procurement process. Their challenge is to dynamically "connect" the different systems, track the processes, optimize them and ensure audits in the approval stages. Prior to implementing BPM, procurement was carried out separately in the two systems. Users had the following problems:
• Multiple entries opened up opportunities for error
• As users approached several systems individually, productivity was low
• They could not track process movements
• They were unable to audit process movement
• Process "flows" were "passive"
• Response to problems was reactive
2 BizFlow 10, HandySoft Global Corporation, Linus Chow, 2005
3 BizFlow 10, HandySoft Global Corporation, Linus Chow, 2005
The introduction of BPM has brought efficiency and process innovation as follows:
• Workload balancing was achieved through workload distribution
• Automatic work distribution follows pre-defined business rules
• Automatic submission to decision-makers along the line of approval
• Automatic forwarding to the persons in charge

Furthermore, users are now provided with an integrated view of all the data required for each work item through the work item handler:
• Electronic images of all documents
• Internal and/or external data, scoring and assessment evaluation of vendors
• Actual registration of vendor selection

This means that users no longer have to waste time and effort collecting data scattered across various systems.

Process Innovation: adding business value to management resulting from the new system (BPM). This case study focused on automating the "e-procurement" process. The following are the innovations in each business area:
[Figure: the high-level process, comprising Prepare SR, Bidding, Prepare Contract, Prepare PO, Prepare WAN and Prepare Invoice]
[Figure: the hierarchical approval levels — Initiator, Supervisor, Section Head, Manager, VP, Director, P.Director]
bizmann system (s) pte ltd
[Figure: Integrated Approval for Dynamic Processing, combining business process and business intelligence into a dynamically integrated approval]
1. Prepare Service Request
The user enters data only once, via the BPM e-form. The system integrates with the existing ERP system: the request is posted to the ERP system, which creates a Service Request Number. The BPM solution also integrates with the e-procurement system, which immediately and automatically sends out invitations to selected vendors. The e-procurement system performs the vendor evaluation, updates the database with the selected vendor and automatically starts the "Validate Contract" process.
2. Validate Contract
The contract is routed according to business rules for the respective approvals. After the necessary validation, the information is posted to the ERP system for updates.
3. Prepare Purchase Order
Upon contract validation, POs are issued through the respective approvals. Budget availability, allocation and checking are available in this sub-process. The BPM solution also takes care of integration with the ERP system for cost centre management.
4. Prepare Work Acceptance Notice
Once the jobs are completed, the Work Acceptance Notice is processed, approved accordingly and notices are sent to the respective people. With the correct process initiated, the ERP system is updated dynamically.
5. Validate Change Contract
6. Change Contract
7. Change Purchase Order
8. Work Acceptance Notice Cancellation
The change processes above (5–8) are available if changes are required. The respective processes are routed accordingly and the data is updated simultaneously. As standard functionality of the BPM solution, processes are archived and audit-trailed.
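The integration pattern described in the sub-processes above can be sketched as a vendor-neutral process definition. The XML vocabulary below is entirely hypothetical (it is not the schema of any particular BPM product, including the one used in this case study); it only illustrates how each sub-process pairs hierarchical approval routing with automatic postings to the ERP and e-procurement systems:

```xml
<!-- Hypothetical, vendor-neutral sketch of the e-procurement flow described
     above. All element and attribute names are illustrative only. -->
<process name="RequisitionToPayment">
  <step name="PrepareServiceRequest">
    <form ref="serviceRequestForm"/>           <!-- single data entry via e-form -->
    <post target="ERP" returns="serviceRequestNumber"/>
    <invoke target="e-procurement" action="inviteVendors"/>
  </step>
  <step name="ValidateContract">
    <route rule="approvalHierarchy"/>          <!-- Supervisor up to President Director -->
    <post target="ERP"/>
  </step>
  <step name="PreparePurchaseOrder">
    <check rule="budgetAvailability"/>         <!-- allocation and checking of budgets -->
    <route rule="approvalHierarchy"/>
    <post target="ERP" module="CostCentre"/>
  </step>
  <step name="WorkAcceptanceNotice">
    <route rule="approvalHierarchy"/>
    <notify role="requestor"/>
    <post target="ERP"/>
  </step>
</process>
```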
[Figure: the WAN (Work Acceptance Notice) process]
[Figure: seamless integration to ERP]
[Figure: the Accounts Payable process]
[Figure: automatic posting to the AP module]
The distinct benefits of the introduction of BPM are summarized as follows:
• Authorization with dynamic and multiple sets of approval limits
  • Allows dynamic inference over multiple sets of approval limits
  • Reduces processing time while retaining the integrity of approval control
  • Allows hierarchy control without impeding process flows
• Dynamic organization management
  • An excellent solution to organizational needs, where work tasks are linked to user roles instead of being user-defined
  • Greatly enhances resource planning and management
  • A graphical interface allows non-IT staff to administer user roles, with authorization control
• Reduction of processing time
  • Reduction of processing time through the elimination of manual paperwork and through process automation across departments
  • Elimination of job sorting and waiting time through automated routing based on job authorities
• Reduction of data delivery cost
  • Reduction of the cost of delivering paper documents
  • Savings in the labor cost of rework due to document loss or error
• Job standardization
  • Standardization of work processing, document forms and workflows
  • Pre-defined roles, authority and security rules
• Enhanced audit trails for finding and correcting problems in processes
• Easy process modification, allowing positive adjustment to changes in the environment
Pak Syamsurizal Munaf, Director of Business Shared Services at PT Medco E & P Indonesia, commented: "The implementation of BPM certainly brings about seamless integration between systems to systems and manual processes. It leverages the use of our existing ERP system and shortens the ROI. The flexibility of BPM allows us to conduct process improvements thereby creating an opportunity for our organization to improve her deliverables."
FACTORS INFLUENCING ADOPTION OF BPM
Although the benefits of BPM are tangible and measurable, in order to understand the generic factors that influence enterprises to adopt BPM it is imperative to look at the current state of BPM. According to a survey conducted by the BPMInstitute4 in April 2004, the current state of BPM adoption is characterized by the following findings:
• Nearly 65 percent of organizations are either in the preliminary planning stage or evaluating and learning about available solutions.
• Almost 67 percent of organizations have some form of business process organization in place, though in most cases their respective IT departments drive them.
• Just about 10 percent of organizations have indicated that a process-oriented approach is a normal business practice.
• Organizations are generally involved in different aspects of BPM, including business process strategies, business process analysis, business process management and business process management systems.
• Nearly 65 percent of organizations are targeting core processes for BPM initiatives.
In summary, it can be inferred from the above that while organizations recognize the need for, and criticality of, managing their business processes as normal business practice, the actual on-the-ground adoption of BPM still has
4 BPMInstitute’s State of Business Process Management, An Executive White Paper, 2004.
some way to go. So what factors are influencing the adoption of BPM? An in-depth survey of current BPM literature and secondary sources of information on current BPM initiatives reveals several issues that are playing a crucial role in BPM adoption. The most critical factors are:
Process Governance: In order for organizations to successfully manage business processes, it is crucial that they put proper governance mechanisms in place to institutionalize process orientation. Process governance entails specifying the decision rights and accountability framework that encourage 'desirable behaviors' in the management of business processes. Issues pertaining to governance include:
• What process-related decisions must be made, and who should make them?
• How will these decisions be made, communicated, enforced and monitored?
• What controls must be built into processes to meet regulatory and compliance requirements?
Process Frameworks: Business process frameworks play a crucial role in BPM adoption as they give organizations a head start. Current frameworks like the Supply Chain Operations Reference (SCOR)5 model, the Enhanced Telecom Operations Map (eTOM)6, Collaborative Planning, Forecasting and Replenishment (CPFR)7 and the Information Technology Infrastructure Library (ITIL)8 address some critical business process aspects in various business verticals. However, these frameworks are at different levels of development, which limits their use. Widespread awareness and availability of process frameworks that provide not only the high-level process architecture but also detailed process specifications and process performance measures would greatly aid BPM adoption.
Maturity and Capability of BPM Tools: Almost all currently available BPM tools are at best primitive. They do not address all the core phases of managing business processes; they are currently skewed towards process analysis, process automation, integration and monitoring. This is because most BPM tools have an ERP, workflow automation or enterprise application integration origin. The lack of true BPM capability also stems from the lack of industry-wide standards and frameworks. Hence, an interesting point to note is that while BPM tools can drive adoption, they can also be the effect of BPM adoption (especially from the tool vendors' perspective).
Service Oriented Approach: In 2003, Nicholas Carr published the heatedly debated article "IT Doesn't Matter"9 in the Harvard Business Review. Regardless of an organization's position on this issue, modern-day enterprises depend heavily on IT to conduct business.
In today's highly competitive environment, business processes undergo constant change. A 'stable' business process is a thing of
5 Available at www.supplychain.org.
6 Available at www.tmforum.com.
7 Available at www.vics.org.
8 Available at www.itil.co.uk.
9 "IT Doesn't Matter" by Nicholas G. Carr, Harvard Business Review, May / June 2003.
the past. However, enterprise software systems almost always suffer from a lack of agility. A service-oriented approach addresses the agility issue by providing technology-independent, high-level architectural blueprints that focus on slicing, dicing and composing the enterprise application layer so that components are created and exposed as services in a Service Oriented Architecture (SOA) that have a direct relationship to business processes. Process-enabled SOA represents fully leveraged architecture as it incorporates: (1) complex business processes with myriad rules, (2) state shared between multiple clients and (3) long-running processes. BPM incorporates the concept of 'process processing' and stresses that this is not limited to process automation alone, but encompasses the discovery, design, deployment, monitoring and improvement of business processes to ensure that they remain compliant with business objectives. SOA provides the backend functionality that a BPM system requires in order to implement its process functionality. While successfully moving to a service-based approach represents a potentially large shift for the enterprise, it certainly has a positive impact on the adoption of BPM.
Regulations: Recent scandals like Enron, WorldCom and Computer Associates have prompted governments and authorities to tighten regulations on compliance and disclosure10. This, for instance, has been instrumental in laws like the Sarbanes-Oxley Act (SOX). While it is understood that SOX-like acts target corporate governance procedures in organizations, a closer look at their requirements reveals that much compliance can be achieved by incorporating process controls to prevent fraud. Controls within business processes allow organizations to improve internal controls, as business processes provide visibility and transparency into an organization's operations.
Without well-defined processes and clear management of those processes, even identifying the points at which controls are to be inserted becomes challenging. Thus regulations like SOX provide an impetus for organizations to become process oriented and start managing processes.
Industry Standards, Policies and Guidelines: As is usual with any new idea, the BPM landscape is currently characterized by several standards, such as Business Process Management Notation (BPMN), Business Process Management Language (BPML) and Business Process Execution Language (BPEL). This soup of standards is further exacerbated by the existence of several standards related to web services. The existence of numerous standards (with none of them dominant) deters organizations from adopting BPM; they prefer to wait until the muddle is cleared. Organizations find it confusing and unnecessary to deal with so many uncertainties.
Need for Collaborative Partnerships: Successful collaboration and partnering play a critical role in realizing the full benefits of BPM, given that most end-to-end business processes cross enterprises. Successful partnerships are based on the degree of interdependence between partners, the exclusivity of the relationship and the strategic shared goals of the relationship. Besides, uncertainties in business environments push organizations to seek partners and collaborate to address business challenges. Collaborations between firms offer higher levels of interorganizational coordination, greater stability and flexibility. Trust and commitment are critical antecedents to successful collaboration and partnering. Trust allows partners to resolve exceptions and work out difficulties with favorable attitudes and behavior, and commitment is the continued desire on the part of all partners to maintain a valued relationship.
Emergence of Public Trading Exchanges: Many initial BPM implementations involved B2B trading exchanges like WWRE, GlobalNetExchange (GNX) and CPGMarket. According to WWRE, the greatest benefit that a retail B2B trading exchange provides is a total workflow system for the whole procurement process, which includes promotion, distribution, pricing and linking suppliers. As of 2001, GNX's eight equity members had met only five percent of their $260 billion purchase volume commitment. Initial BPM implementations with GNX's early adopters resulted in a 5–20 percent reduction in inventory and a 2–12 percent increase in on-shelf availability across participants. For organizations mulling BPM implementations, the use of a third-party provider allows a BPM program to get up and running quickly, delivering faster results and ensuring long-term scalability.
Globalization and Outsourcing: Enterprises, in order to stay competitive, have moved on from outsourcing just their IT functions to outsourcing entire business processes. This phenomenon, called Business Process Outsourcing (BPO), is seen as a tectonic shift for many organizations. Business managers tasked with cost reduction and higher efficiencies are forced to consider outsourcing as an option. However, outsourcing, which is now becoming a mainstream business strategy, is not without its challenges. A process-based approach allows organizations that outsource to have clear visibility and to define metrics. In fact, organizations now base their service level agreements (SLAs) on process metrics.
10 Manager's Guide to the Sarbanes-Oxley Act by Scott Green, John Wiley & Sons Inc., 2004.
Such strong linkages between the business activities of the outsourcee and the outsourcer are made possible only with a process-oriented approach. In effect, globalization and outsourcing are actually forcing organizations to adopt process orientation and the management of processes.
BPM Methodology: The lack of an accepted BPM methodology deters organizations and leaves them highly dependent on expensive external consultants. Engagements with consultants tend to make BPM initiatives more 'project oriented,' with a definite start and a certain end. Typically, the organization's enthusiasm and interest wane once the BPM 'project' is over, and the management of business processes does not get institutionalized. Currently, professional bodies like the Business Process Management Group (BPMG)11 are working toward creating a BPM methodology (called REDKITE). Once frameworks of this kind become prevalent, the adoption of BPM will definitely be aided, as they will allow organizational process managers to take ownership of their processes and start managing them.
BPM: A RESEARCH SYNTHESIS
Analyzing the current state of adoption, future research and development in BPM can be predicted to fall into two categories: a focus on the development of BPM as a discipline by academicians, analysts and consultants, and intense activity in improving current BPM systems, primarily by the tool vendors. As is
11 Available at www.bpmg.org and www.bpmredkite.com.
evident from the factors influencing BPM adoption, both will play an equally important role. These factors also provide adequate insight into plausible areas that are most likely to be researched and to lead to new developments. Some of the plausible research areas falling into the above two categories include:
• Methodology development, with phases, activities, roles, deliverables, tools, success criteria and the costs / investments required clearly specified.
• Frameworks development, including the enhancement of existing frameworks in greater depth and the development of frameworks for newer business verticals like healthcare, banking, finance and insurance, and travel and hospitality, among others.
• Consolidation of existing standards to a more manageable number, making life easier for both organizations and tool vendors.
• Enhancement of BPM systems with higher capabilities in upstream activities like process discovery and in senior management activities like the incorporation of governance mechanisms.
CONCLUSION
• There is a growing trend among businesses in Asia of adopting and institutionalizing BPM and BPM-related practices. BPM helps companies to automate and run their businesses better, and organizations realize the benefits of adopting it.
• Moving from functionally structured operating models to process-managed models represents a huge shift. The benefits of such a shift, however, outweigh the pains and the costs.
• Small and Medium Businesses constitute a considerable share of total businesses in the ASEAN region. Many SMBs are deterred by the perception that BPM may be 'too expensive' for them, but do not realize the biggest advantage they have over their larger counterparts: less 'cultural baggage,' which makes the shift easier.
• There will be exponential growth in the adoption of BPM technologies within ASEAN companies. The way businesses and the marketplace are evolving will fuel this adoption.
• Academicians, consultants and solution vendors are working together to bring viable deliverables in various forms to ASEAN companies adopting BPM.
• Companies have recognized the need to equip their workforce for process excellence. Intense competition and time-to-market factors put further pressure on companies to optimize their current processes. Observing corporate governance, in different degrees, is something that most enterprises are taking seriously.
• The factors discussed in this paper that influence BPM adoption need inputs and involvement from academia, consultants and BPM tool vendors in equal measure. Currently the partnerships between the three are largely isolated and tactical.
• All the aforesaid factors are certainly driving BPM adoption, and companies will need to evaluate how they can gear themselves to implement this technology. The good news is that robust BPM solutions are now available in the form of different process templates, built on a modular configuration that allows them to "snap on" when it comes to collaboration. BPM solutions are not meant to replace existing IT systems; they leverage those systems and even shorten their ROI. With BPM technology also evolving to be within easier reach of companies, the answer on the rate of BPM adoption is clear: it is going to grow exponentially.
Section 2
Process Standards
Business Process Metamodels and Services
Jean-Jacques Dubray, Attachmate, United States
INTRODUCTION
Since the very beginning of the industrial revolution, every commercial organization has tried to codify and optimize its processes to achieve predictability and to increase automation and quality levels while lowering costs. Except for a few business types, it would be unimaginable today to run a business by deciding on the fly which activities need to be performed, from sourcing to production to delivery, for every instance of a product sales cycle. Since the advent of the information age, most companies have leveraged the computer and its near-perfect ability to manage "state" at critical points of their business processes, enabling some users to report the state of their activities while others are notified of the activities they need to accomplish (as computed by the system) when they use a given system. Based on these principles and requirements, the software industry has developed scores of business applications that automate the management of state, covering all possible business process areas: sales, accounting, engineering, production, sourcing. Surprisingly, and despite the critical need to be able to constantly re-engineer processes, none of their architectures promotes the concept of "business process" at a first-class-citizen level, visible and modifiable by the customer. Business processes are instead hard-coded in ways that make them virtually impossible to change, cornering companies into adopting the "best practices" implemented by vendors. One of the reasons is that most often these applications were developed following traditional monolithic architectures based on the Model-View-Controller pattern1, in which the relationships between the view and the model force the controller to be designed as a series of mostly unrelated actions.
In addition, MVC and object-oriented technologies do not make any distinction between the content of a business entity and its status, which are lumped together as "state." As a result of the action granularity and the lack of state separation, these applications offer inflexible integration interfaces, making it hard for customers to re-engineer processes that cross several of these applications. The implementation of end-to-end processes requires that large-scale integration projects be carried out to enable (Figure 1) users of a given application to access state captured in other systems of record, or applications to notify others, which in turn create new activities in the target system as part of an end-to-end business process.
Figure 1 A business process spanning multiple applications creates the need 1) for users of a given application to access state captured in another application and 2) for applications to notify each other of changes in their respective state
The bottom line is that, today, the state held in any given application must be constantly replicated into other applications via proprietary interfaces and technologies to achieve the appearance of end-to-end processes, while the part of the processes that is under the control of any given application remains entirely static. With this type of architecture applied consistently at the enterprise level, pretty much every aspect of business process definitions (data, logic, user activities and interfaces) has been diluted behind billions of lines of code and millions of database tables. However, the situation is changing, slowly. The past ten years have seen the emergence of key technologies and concepts which offer the opportunity to rethink the way we build enterprise applications and possibly make the business process a first-class entity of this architecture: XML2 provides a cross-platform data format that is extensible, transformable and semantically accessible3; SOAP4 and Web Services5, combined with the World Wide Web, provide a cross-platform distributed computing platform whose foundation is the exchange of messages independent of the constraints of RPC or distributed objects. Finally, a Service Oriented Computing6 model emerged sometime in 2003, enabling the development of Service Oriented Architectures and systems. Together these technologies have dramatically lowered the cost of connecting applications across platform boundaries. The goal of this chapter is to show how the Service Oriented Computing model represents a new and robust foundation for Business Process Management. The first section focuses on the metamodel of a business process
and its relationship to services and service-oriented computing concepts, while the second section provides the architecture supporting this metamodel based on the SOC building blocks and principles.
BUSINESS PROCESS METAMODEL
In the last fifteen years, several groups7,8,9,10,11 have claimed to be in a position to offer a metamodel with which business processes may be described, either from a pure modeling perspective or from both modeling and execution perspectives. Several of these metamodels8,10,11 are based on the π-calculus theory12. However, because of the limited expressiveness of π-calculus semantics with respect to business process definitions, there is an intense effort to find alternative calculi13, while other branches of computer science have also explored the modeling of autonomous agent interactions. Interaction-Oriented Programming14 suggests a model based on three layers (coordination, commitment and collaboration) which can be associated with orchestration, contract and choreography in service-oriented computing. In many respects a business process, or more generally a work process, is simply the coordinated and collaborative work of a series of agents (humans or software) performing "activities." One of the agents is the initiator of a business process definition. This particular agent is often a human but could also be a system triggered by a particular event (time, data or information collection…) or participating in a different process and initiating processes based on the state of that other process, creating loosely coupled dependencies between these processes. Agents might all be statically known at the start time of a process instance or may be added dynamically following some rules. Agents may vary from process instance to process instance. Any two activities may require the exchange of several messages as part of the business process definition. These interactions provide a combined mechanism of state alignment and composition. With respect to state alignment, it cannot be expected that it will always be achieved by a direct interaction between two activities, as this would entail too much coupling between the two activities.
We will see in the Architecture section how we can achieve greater decoupling using specifications like WS-CAF. An activity instance most often starts and completes within the lifecycle of a business process instance. It is conceivable, however, that a given activity instance may participate in different business process instances if a correlation mechanism exists between the activity instance and the business process instances. Any business process definition model must offer a composition operation at two levels: 1) an activity may itself be a process; 2) processes shall be able to interact, i.e. exchange messages, with each other. In particular, the second level of composition must allow for specifying arbitrary logical, organizational or legal boundaries and be independent of any technical boundaries.
Figure 2 A business process as a series of agents performing activities cooperatively and exchanging messages to synchronize their state; dashed lines represent message flows, solid lines represent events (start and completion events), a diamond represents a decision rule
A business process definition is defined by a) the activity interfaces an agent must expose to be able to participate in this particular business process and b) the required choreography of messages between these interfaces (Figure 2). A choreography is said to be coordinated when it contains message routing rules outside the control of any given agent which are used at run time to decide which agent a given message should be sent to. Otherwise, the choreography is said to be self-coordinated. In the business process definition, agents are usually represented by abstract roles. As such, a business or work process is not about orchestrating15 but rather about coordinating these activities by establishing semantics and an architecture in which meaningful units of work performed by multiple agents may be achieved. Business processes, as well as all their activities, are long-running units of work. In addition to performing private work, agents interact with each other by exchanging messages as part of activity execution. These interactions define for the most part the activity interfaces. Specific message exchanges geared towards managing the lifecycle or inquiring about the status of an activity may also be part of an activity's interface but are in general ancillary to the business process itself. These interactions typically represent requests for action or information, as well as notifications.
Interactions are supported by specific communication protocols that provide the level of quality of service necessary for a viable interaction: reliable messaging, correlation, reliable processing, commitment-oriented protocols… Interface specifications must be abstract to enable different agents to perform a given activity in a particular process. In a process definition, an agent is rarely explicitly defined and is often represented by a role. All roles, and therefore all agents, are peers from the business process point of view. At run time a binding step will associate physical agents with roles, either statically (before or when a business process instance starts) or dynamically (as part of message information). Since an activity interface is defined by the messages it can exchange, this concept is well aligned with the concept of a service interface. This is the starting point for using Service Oriented Computing as a foundation for Business Process Management. This proposal is somewhat orthogonal to other approaches, which favor bounding an activity with operations (of services) over associating a service definition itself (as a series of operations) with an activity boundary. For instance, Cabrera defines an activity as "a set of
actions spanning multiple Web Services that work jointly toward a common goal"16. Our thesis here is, on the contrary, that a web service (not just a single operation, of course) is an activity. However, Web Services standards still suffer from some limitations in fully supporting this approach. WSDL5,17 (Web Service Definition Language) does not contain enough semantics to fully describe an activity interface. For instance, WSDL does not support constraints with respect to the sequencing of operation invocations of a service. Even though it is obvious that a Purchase Order business object can only be changed once it has been created, processed and accepted, WSDL (or π-calculus for that matter) does not offer any mechanism to specify that a changePurchaseOrder operation may only be invoked after a processPurchaseOrder. Some of the semantics of BPEL (Business Process Execution Language) would enable us to specify these rules as abstract processes18. Another limitation is that WSDL operation invocations are sessionless: there is no standard way of invoking a particular "instance" of a service, just as we expect to have activity instances as part of a business process instance. It is only with WS-Addressing19, which was submitted in the summer of 2004 as a W3C note (i.e. an input to a standardization process), that we have the ability to reference "end points" which may represent service invocation instances. The concepts of end points and service invocation instances are essential to both service orientation and BPM. Let's start exploring the example that we will use throughout this chapter. This example is modeled after an open source ERP system called Compiere20 that can be downloaded and installed for free. The process that we chose to model is called "Requisition-to-Invoice" and is used to manage sourcing activities within a company, from the issuance of an RFQ to the Purchase Order for the goods and the payment of the Invoice.
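As a concrete illustration of such an end point reference, the 2004 WS-Addressing submission lets a message carry an address together with reference properties that designate one service invocation instance. In the sketch below, only the `wsa:*` elements come from the WS-Addressing note; the `po:InstanceId` reference property and all URIs are hypothetical examples:

```xml
<!-- Sketch of a WS-Addressing (2004/08 submission) endpoint reference that
     designates one invocation instance of a buyer's Purchase Order service.
     The po:InstanceId reference property and all URIs are illustrative. -->
<wsa:EndpointReference
    xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
    xmlns:po="http://buyer.example.com/po">
  <wsa:Address>http://buyer.example.com/services/PurchaseOrder</wsa:Address>
  <wsa:ReferenceProperties>
    <!-- distinguishes this service invocation instance from all others -->
    <po:InstanceId>PO-2005-0042</po:InstanceId>
  </wsa:ReferenceProperties>
</wsa:EndpointReference>
```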
Figure 3 Requisition-to-Invoice business process; dashed lines represent interactions, solid lines represent simple transitions
The optimal granularity of services has probably been debated ever since the concept of a service was first articulated, and will be for quite some time. We
have chosen to model services and activities such that they manage the lifecycle of the business objects that participate in a business process (e.g. RFQ, Purchase Order and Invoice). There are ancillary business objects, such as Requisition or Material Receipt, which did not warrant a specific service associated with them as they represent simple notifications. This divide is consistent with recommendations from organizations such as the Open Applications Group21, which offers about 60 "integration scenarios" between these types of services. In the example, the three activities occur roughly in sequence (roughly, because the lifecycle of a purchase order only ends when it has been invoiced or even paid). We suggest creating a "Purchase Order" service from the buyer role's perspective. This service features operations that are invoked from both an internal and an external point of view. We can define a certain number of operations, as represented in Figure 4.
[Figure: the Purchase Order Service interface, showing receive/send/invoke operations for the Requisition, Material Receipt, Cancel Requisition, Cancel Purchase Order and Invoice messages, plus internal Sales Tax, Insert DB and Create PO invocations]
Figure 4: Purchase Order Service Interface

The corresponding WSDL file, according to the W3C WSDL 2.0 working draft of August 3rd, 200417, would look like this: ...
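The WSDL listing itself did not survive reproduction here. A minimal sketch of what the interface pair might look like, with operation names assumed from the figure and a namespace URI per the August 2004 draft (both are assumptions, not the original listing):

```xml
<!-- Illustrative reconstruction only: the original listing was lost.
     Namespace URIs and operation names are assumptions. -->
<description xmlns="http://www.w3.org/2004/08/wsdl"
             targetNamespace="urn:buyer:po"
             xmlns:publicns="urn:supplier:po:public"
             xmlns:ns1="urn:buyer:po:internal">

  <!-- public interface: operations visible to the supplier -->
  <interface name="PurchaseOrderPublic">
    <operation name="processPurchaseOrder"
               pattern="http://www.w3.org/2004/08/wsdl/in-out"/>
    <operation name="cancelPurchaseOrder"
               pattern="http://www.w3.org/2004/08/wsdl/in-out"/>
  </interface>

  <!-- internal interface extends the public one -->
  <interface name="PurchaseOrderService"
             extends="publicns:PurchaseOrderPublic">
    <operation name="createPO"
               pattern="http://www.w3.org/2004/08/wsdl/in-only"/>
  </interface>
</description>
```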
We introduced two separate namespaces (publicns and ns1) to clearly separate the interface to the supplier (which might be defined by a standards organization) from the interface internal to the buyer. Overall, the service interface is designed as an extension of the public interface. Neither WSDL 1.1 nor 2.0 offers any way to express the sequencing constraints of the operations. These semantics can be expressed via an abstract BPEL definition (v2.018). Here is the process purchase order interface behavior definition (as an abstract BPEL definition):

abstractProcess=”yes” suppressJoinFailure="yes">
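Only the fragment above survived reproduction of the listing. A minimal sketch of what such an abstract process might look like, with partner link and operation names as assumptions and the BPEL4WS 1.1 namespace used as a placeholder (the WS-BPEL 2.0 draft namespace was still in flux at the time):

```xml
<!-- Illustrative sketch only; names and namespace are assumptions. -->
<process name="PurchaseOrderBehavior"
         targetNamespace="urn:buyer:po:behavior"
         abstractProcess="yes" suppressJoinFailure="yes"
         xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
  <sequence>
    <receive partnerLink="supplier"
             operation="processPurchaseOrder"
             createInstance="yes"/>
    <!-- only after the order has been processed may it be changed -->
    <receive partnerLink="supplier"
             operation="changePurchaseOrder"/>
  </sequence>
</process>
```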
The lifecycles of activities, from start to completion, are somewhat independent from one another. In other words, activities don’t always necessarily happen in sequence, though it is a desirable feature to express that an activity may or must start only when another activity completes. On the other hand, the message exchange between activities is expressed with a precise flow; this flow is called a choreography.

In the example we need to express a constraint that “enables” the process invoice activity to start once, and only once, the process purchase order activity reaches the state where the purchase order has been successfully processed and accepted by the supplier. The process invoice activity itself will be instantiated when the buyer receives the supplier’s invoice, irrespective of the operation of the process purchase order activity. The invoice interaction is not relevant to the process purchase order activity at all. Hence, the constraint does not need to be modeled via an interaction between two activities: it is rather a constraint at the choreography level and not at the activity level, enforced if necessary by the choreography run-time: an invoice can only be received when the process purchase order activity is in the correct state, i.e. the purchase order has been accepted by the supplier. This constraint is not executed but rather “monitored” for a possible exception violating the agreement between the buyer and seller.

Two different choreographies can be expressed from the example. The first one is often called a “collaboration” and is composed of the interactions between business partners: purchase order, cancel purchase order and invoice. The private interactions between the buyer’s activities are not part of the collaboration. The ebXML Business Process Specification Schema22 is designed to be the metamodel of collaborations.
The second choreography represents the requisition-to-invoice business process executed by the buyer, expressing all activity interactions, including those from the collaboration definition. Let’s start by expressing the collaboration definition using the latest draft of WS-CDL23.
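The WS-CDL listing itself was lost in reproduction. As a heavily hedged sketch of the general shape of such a collaboration, with element names approximating the 2004 working drafts and a placeholder namespace (none of this is the original listing):

```xml
<!-- Illustrative sketch only; element names are approximate and the
     namespace URI is a placeholder, not the WS-CDL draft namespace. -->
<package name="RequisitionToInvoice"
         targetNamespace="urn:example:collaboration"
         xmlns="urn:example:cdl-placeholder">
  <roleType name="Buyer"/>
  <roleType name="Supplier"/>
  <relationshipType name="BuyerSupplier">
    <role type="Buyer"/>
    <role type="Supplier"/>
  </relationshipType>
  <choreography name="Collaboration" root="true">
    <!-- only the inter-partner interactions belong here -->
    <interaction name="purchaseOrder"
                 operation="processPurchaseOrder"
                 channelVariable="poChannel"/>
    <interaction name="cancelPurchaseOrder"
                 operation="cancelPurchaseOrder"
                 channelVariable="poChannel"/>
    <interaction name="invoice"
                 operation="processInvoice"
                 channelVariable="invoiceChannel"/>
  </choreography>
</package>
```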
Activity and Service Implementation Model

Before we continue completing the picture of the business process metamodel, I would like to expand on the implementation model of an activity, i.e. a service. Even though a service can potentially be implemented using any technology (and that’s a major goal of the web services technology stack), the long-running, asynchronous nature of an activity makes it difficult to implement with traditional technologies. Today, this type of behavior is often implemented with an ad hoc architecture and code. Even though this is not explicitly a goal of the specification, it can be noted that the WS-BPEL specification can potentially hold another role in the metamodel and the architecture by providing an elegant solution as a programming language for the implementation of a component which has an inherently long-running lifecycle. As a matter of fact, a new working group has been formed as part of the Java Community Process to combine Java and WS-BPEL constructs into BPEL-J24, and Microsoft Research is working on the Cω25 language. These new programming languages will probably become the preferred implementation languages of activities and services because of their ability to associate database interactions and simple data transformation (or validation) with long-running message exchanges.

I also would like to emphasize that WS-BPEL is often considered a Web Service composition language. While the examples provided along with the specification often illustrate composition in terms of operation composition, this is not the only composition model supported by WS-BPEL: it also supports composition between a web service interface (involving multiple operations of any type) and other web service interfaces (Figure 4). Such a web service may have one or more WS-BPEL definitions linking its operations with one another and with those of the composed service.
Any given service implemented in WS-BPEL does not have to expose a service interface which groups all the interactions specified in its definition. In some rare cases, the resulting service may actually not expose a single operation to the “outside.”
[Figure: WS-BPEL implementation of the Purchase Order Service, composing receive/send/invoke steps (receive requisition, createPO, insertDB, salesTax) behind the purchaseOrder interface]
Figure 4: Activity implementation and composition model using WS-BPEL
User Interaction Model

Going back to the business process metamodel level, we are going to explore how human interactions can be made part of the metamodel. The first approach is to do nothing and construct proprietary systems that can associate web service interfaces with any given user, transforming the user into a message sender and receiver. Such services may be composed into others or choreographed as part of the business process definition each time a user interaction is needed. In the case of the create requisition service, it is expected that a user will create the requisitions from which the orders are created. We have two possibilities: either the user interaction is internal to the create requisition service, or the user is exposed as a web service outside the create requisition service, as part of the business process definition.

The second approach is to define a user interaction model that can be added to any service that requires user interactions. One working group has addressed this problem from the consumer perspective, i.e. the user: WS-Remote Portlets26. At the metamodel level, WS-BPEL and WS-CDL do not provide any specific semantics to support user interactions by creating business rules based on an organizational structure of users and roles. For instance, it is impossible to specify that a given activity must be performed by a “manager.” This kind of decision must be made by the service implementation and passed dynamically to another service.

Control Flow

The control flow represents the sequencing rules of activities within a business process. The Business Process Modeling Notation27 (BPMN) offers a large variety of constructs to model the control flow between activities. However, because BPMN v1.0 does not support the notion of explicit states within an activity, the sequencing rules only apply to the start and completion of an activity and cannot be based on reaching intermediary states.
As we have seen in the Requisition-to-Invoice example (Figure 3), we cannot expect that all activities will necessarily be sequenced following their lifecycle. When exposed publicly, these intermediate states (not just the completion state) should either enable the start or require the creation of an activity instance as part of the current business process instance. These two types of transitions should be clearly differentiated in both a metamodel and a notation. An activity instance can be created by a message exchange between the two activities or via an intermediary monitoring the state of the activity (typically the business process engine or choreography engine). It would require too much coupling between the two activities to always require a direct message exchange. For instance, we could over time specify transitions from any given state to many different activities (including entire processes). A direct message exchange would require modifications of the initial activity upon adding new transitions. Intermediary states of the process purchase order activity may include:
• Order Created
• Order Sent
• Order Accepted
• Order Reconciled
• Order Closed
• Order Exception
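The intermediary-state idea can be sketched as follows: a choreography engine monitors the state of the process purchase order activity and enables process invoice once Order Accepted is reached, with no direct message exchange between the two activities. The code is a hypothetical illustration, not any metamodel standard.

```python
# Sketch: choreography-level enablement based on intermediary states.
PO_LIFECYCLE = ["Order Created", "Order Sent", "Order Accepted",
                "Order Reconciled", "Order Closed"]

class ProcessPurchaseOrder:
    def __init__(self):
        self.state = None

    def advance(self):
        nxt = 0 if self.state is None else PO_LIFECYCLE.index(self.state) + 1
        self.state = PO_LIFECYCLE[nxt]
        return self.state

def invoice_enabled(po):
    # a choreography-level constraint: monitored, not executed
    if po.state is None:
        return False
    return PO_LIFECYCLE.index(po.state) >= PO_LIFECYCLE.index("Order Accepted")

po = ProcessPurchaseOrder()
po.advance()                  # Order Created
po.advance()                  # Order Sent
before = invoice_enabled(po)  # not yet enabled
po.advance()                  # Order Accepted
after = invoice_enabled(po)   # now enabled
```

Adding a new transition out of any state only touches the monitoring rule, not the process purchase order activity itself, which is the decoupling argued for above.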
In the example, the process invoice activity will be enabled upon reaching the Order Accepted state. In order to be able to use state semantics, any system must support state alignment capabilities. The notion of state was introduced in the April 2004 draft of the WS-CDL specification but later withdrawn for lack of a viable state alignment protocol (see section “Logical Architecture”). BPMN also offers a notation mapping to WS-BPEL which can be used to graphically depict a service implementation. BPMN provides a very complete set of control flow semantics. BPMN suggests using the following flows between activities:
• Uncontrolled flow: flow that is not affected by any conditions and does not pass through a Gateway.
• Conditional flow: can have condition expressions that are evaluated at runtime to determine whether or not the flow will be used.
• Default flow: for Data-Based Exclusive Decisions or Inclusive Decisions; this flow will be used only if all the other outgoing conditional flows evaluate to false at runtime.
• Exception flow: occurs outside the Normal Flow of the Process and is based upon an Intermediate Event that occurs during the performance of the Process.
BPMN v1.0 also provides a very interesting concept (Gateways) to model the divergence (fork) and convergence (join) of sequence flows. Several types of gateways are available (Figure 5).
Figure 5: Graphical representation of BPMN Gateways

Gateways can be used to specify pure forks (AND-Fork) and joins (AND-Join). There is also an exclusive gateway (XOR-Fork) restricting the flow such that only one alternative may be chosen. BPMN makes a very important distinction between data-based and event-based XOR gateways. In a data-based scenario, the condition guards must be designed to be exclusive. In an event
based scenario, it is the reception of an event which decides when one activity starts; all the other events (and corresponding activities) controlled by the XOR gateway become disabled. There are many other semantics that need to be part of a business process definition metamodel: for instance, transactional behavior, business interactions between business parties, and business rules that can be used to select participants dynamically within a given process instance. However, they fall beyond the scope of this section, which is focused on the core of the metamodel, i.e. activities, processes, composition vs. coordination and the introduction of the notion of state.
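The data-based vs. event-based XOR distinction above can be sketched in a few lines (hypothetical names, not BPMN syntax):

```python
# Data-based XOR: the condition guards are evaluated on process data
# and must be designed to be mutually exclusive.
def data_based_xor(amount):
    return "auto-approve" if amount < 1000 else "manual-review"

# Event-based XOR: the first event received decides; every other branch
# controlled by the gateway becomes disabled.
def event_based_xor(first_event, branches):
    chosen = branches[first_event]
    disabled = [act for evt, act in branches.items() if evt != first_event]
    return chosen, disabled

branches = {"invoiceReceived": "process invoice",
            "orderCancelled": "close order"}
chosen, disabled = event_based_xor("invoiceReceived", branches)
```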
ARCHITECTURE OF A BUSINESS PROCESS ENGINE

The proposal for a metamodel combining WS-BPEL, WS-CDL and concepts such as state is relatively innovative, though similar approaches were suggested several years ago28,29 along the lines of complex adaptive systems (CAS). The OMG has developed a specification called EDOC (Enterprise Distributed Object Component) based on the notion of collaborative components30. The goal of this section is to specify possible architectures for a business process engine supporting this metamodel.

Service Oriented Computing Technology Stack

The architecture stack of service oriented computing is represented below (Figure 6). Service Oriented Computing relies on both a Message and a Service stack. Cabrera et al have written a thorough review of these two stacks31.
[Figure: the Service Oriented Computing stack. The Service stack comprises Description (WSDL 2.0), Composition (WS-BPEL 2.0, BPEL-J), Coordination (WS-CDL, WS-CAF), Discovery (UDDI, ebXML), Transaction (WS-CAF), Context (WS-CAF, WS-Addressing), User Interaction (WS-RP), Business agreement (ebXML), Policy (WS-Policy, …), Security (WS-Trust, WS-Federation, SAML, …), Management, the Semantic Web (RDF, OWL), and Accessibility, I18N, P3P. The Message stack comprises Reliable Messaging (WS-RM), Addressing (WS-Addressing), Protocol (SOAP 1.2), Packaging, Syntax (XML 1.0, Infoset, XSD, NS, URI…), and Transport (HTTP, SMTP, IIOP…). The layers span runtime and design time.]
Figure 6: Service Oriented Computing architecture stack (specifications shown in blue have been published as standards; those shown in yellow are still work in progress)
These technologies altogether enable us to create autonomous components capable of exchanging messages securely and reliably. Three specifications (WS-BPEL, WS-CDL and WS-CAF) provide the foundation of the architecture of a service oriented business process engine. WS-CAF is itself divided into several specifications: WS-CTX (Web Service Context), WS-CF (Web Services Coordination Framework) and WS-TXM (Web Services Transaction Management). WS-CTX defines the notion of a unit of work, its lifecycle and its context as a “shared scope of persistent data”32. In other words, this specification allows us to manage shared state within a distributed architecture. The context is defined as a web resource. The context may be managed by a dedicated service or passed by value amongst the participants of the unit of work. WS-CTX also specifies the concept of a “unit of work lifecycle service” where a unit of work instance might be registered when it starts and completes. The WS-CF specification provides the definition of a coordination service (i.e. coordinator) that “provides additional features for persisting context operations and guaranteeing the notification of outcome messages to the participants”33. Coordinators may be composed as sub-coordinators and act as participants in a coordination. Coordinators bring a good level of decoupling between activities, alleviating the need for specific message exchanges between them.

Logical Architecture

An ideal architecture should at least be able to fulfill two very important functions:
• Enable process engine composition, which will always be the case when we cross any business boundaries. In the example (Figure 3), the supplier itself is likely to have a process engine handling its own part of the process.
• Coordinate both web services which comply with a coordination standard such as WS-CAF and web services that don’t.
In Figure 4, the process purchase order service requires the invocation of a generic sales tax calculation service. Every engine requires both a repository of business process definitions and a data store for process instance dehydration and rehydration. The first possible architecture is to build the process engine as an observer of the choreography of messages. Practically, this can only be achieved by centralizing all message exchanges via the engine. This is in particular how the process engine can implement rules that decide which service to use to perform a given activity (coordinated choreography).
[Figure: the engine, backed by a repository of process definitions, mediating all message exchanges between Service 1, Service 2 and Service 3]

Figure 7: Business process engine as an observer

The second possible architecture is based on WS-CAF and as such requires that most of the services be WS-CAF compliant in order to participate in the business process.
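The observer architecture can be sketched as follows, with all names hypothetical: the engine centralizes every exchange, records the observed choreography, and applies routing rules to select the service that performs a given activity.

```python
# Sketch of the observer architecture (hypothetical names).
class ObserverEngine:
    def __init__(self, routing_rules):
        self.routing_rules = routing_rules  # activity name -> service callable
        self.log = []                       # the observed choreography

    def send(self, activity, message):
        # rule-based selection of the service performing this activity
        service = self.routing_rules[activity]
        self.log.append((activity, message))
        return service(message)

# a generic sales tax calculation service, as in Figure 4;
# the 7% rate is an arbitrary example value
engine = ObserverEngine({
    "sales tax": lambda msg: {"tax": round(msg["amount"] * 0.07, 2)},
})
reply = engine.send("sales tax", {"amount": 100.0})
```

Because every message passes through `send`, the engine sees the whole choreography and can enforce state-based constraints without the services knowing about each other.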
[Figure: a coordinator, backed by a repository, propagating a shared activity context (Ctx) to Service 1, Service 2 and Service 3]

Figure 8: Distributed process engine based on WS-CAF

In this case, there is no central decision maker. All services are responsible for decisions based on a shared context. The coordinator might help propagate state to the appropriate service(s).
CONCLUSION

The software industry has long searched for a computing model where business or work processes would be explicit and where customers could change the business processes without significant coding projects. Programming languages like WS-BPEL, Service Orientation and web service technologies represent a major architectural advance toward creating a new generation of business process engines that can integrate with a wide variety of business functions and across business boundaries, going far beyond the original concepts of business process orchestration that were defined in the late nineties34 and have hardly evolved since then. This new generation of process engines is expected to manage end-to-end business processes while being far more
flexible, far more business savvy and far more integrated with all aspects of IT, as laid out in the business vision of the past twenty years. These concepts are poised to revolutionize software engineering and the way we build business applications.
1. T. Reenskaug, “The Model-View-Controller (MVC), its past and present,” University of Oslo, 2003.
2. T. Bray et al., “Extensible Markup Language (XML) 1.0,” W3C Recommendation, February 10, 1998.
3. J.J. Dubray et al., “An extensible object model for business-to-business e-commerce systems,” OOPSLA ’99, Business Object Component Workshop.
4. D. Box et al., “Simple Object Access Protocol (SOAP) 1.1,” W3C Note, May 8, 2001.
5. E. Christensen et al., “Web Services Description Language (WSDL) 1.1,” W3C Note, March 15, 2001.
6. M. Papazoglou et al., “Service Oriented Computing,” Communications of the ACM, Vol. 46, No. 10, October 2003, p. 25.
7. WfMC, http://www.wfmc.org
8. BPMI, http://www.bpmi.org
9. OMG, http://www.omg.org
10. OASIS, http://www.oasis-open.org
11. W3C, http://www.w3.org
12. R. Milner, “Communicating and Mobile Systems: the π-calculus,” Cambridge University Press, 1999, ISBN 0-521-65869-1.
13. A. Ferrara, “Web Services: a Process Algebra Approach,” accepted for publication, Proceedings of ICSOC 2004.
14. M.P. Singh, “Synthesizing Coordination Requirements for Heterogeneous Autonomous Agents,” Autonomous Agents and Multi-Agent Systems, Vol. 3, No. 2, June 2000, pp. 107–132.
15. Nick K, “Reference on Orchestration vs Choreography.”
16. L.F. Cabrera et al., “Coordinating Web Services Activities with WS-Coordination, WS-AtomicTransaction, and WS-BusinessActivity,” MSDN, January 28, 2004.
17. R. Chinnici et al., “Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language,” W3C Working Draft, 3 August 2004.
18. S. Askary et al., “Web Services Business Process Execution Language,” OASIS Working Draft 01, 8 September 2004.
19. D. Box et al., “Web Services Addressing (WS-Addressing),” W3C Note, 10 August 2004.
20. Compiere, Inc., http://www.compiere.com/
21. Open Applications Group Inc., http://www.openapplications.org
22. J.J. Dubray et al., “ebXML Business Process Specification Schema v1.1,” OASIS & UN/CEFACT, August 2003.
23. N. Kavantzas et al., “Web Services Choreography Description Language Version 1.0,” W3C Working Draft, 27 April 2004.
24. M. Blow et al., “BPELJ: BPEL for Java,” a joint white paper by BEA and IBM, March 2004.
25. N. Benton et al., “Modern Concurrency Abstractions for C#,” Microsoft Research, http://research.microsoft.com/Comega/
26. A. Kropp et al., “Web Services for Remote Portlets Specification v1.0,” OASIS, August 2003.
27. S. White et al., “Business Process Modeling Notation v1.0,” BPMI.org, May 2004.
28. M. Papazoglou, “Agent-oriented technology in support of e-business: Enabling the development of ‘intelligent’ business agents for adaptive, reusable software,” Communications of the ACM, Vol. 44, No. 4, April 2001, pp. 71–77.
29. J. Sutherland et al., “Enterprise Application Integration Encounters Complex Adaptive Systems: A Business Object Perspective,” Proceedings of the 35th Hawaii International Conference on System Sciences, 2002.
30. “UML Profile for Enterprise Collaboration Architecture Specification,” Version 1.0, formal/04-02-05, OMG, February 2004.
31. L.F. Cabrera et al., “An Introduction to the Web Services Architecture and Its Specifications, Version 1.0,” Microsoft, MSDN, September 2004.
32. E. Newcomer, “Web Services Composite Application Framework,” presentation to the OASIS WS-CAF Technical Committee, March 10, 2004.
33. D. Bunting et al., “Web Services Composite Application Framework (WS-CAF) Ver 1.0,” OASIS, July 28, 2003.
34. S. Thatte, “XLANG: Web Services for Business Process Design,” Microsoft, 2001, published on DotNetGuru.org.
Workflow and Service-Oriented Architecture (SOA)
Arnaud Bezancon, Advantys, France

ANOTHER ACRONYM FOR A NEW TECHNOLOGY?

Information Technology is becoming more than ever a science in its own right; new architectures for designing information systems use quasi-mathematical formulae which are difficult for neophytes to understand:

SOA = (EDI + EAI + XML + BPM) x WEB

However, the needs expressed by organisations are very simple: how can we organise and optimise our business processes? Just a few years ago, the answer to this was also very simple: Workflow. Today, however, several solutions to this question are possible:
• Standardise exchanges of information between different information systems with EDI (Electronic Data Interchange) and XML documents, using EAI (Enterprise Application Integration) solutions.
• Automate the circulation of information and optimise processing with a BPM (Business Process Management) tool.
• Integrate these changes with Web standards such as the HTTP protocol and Web Services.

As if this were not enough for companies to think about, it is now recommended that companies re-organise their system architecture in the form of “Services”. Moreover, to complicate this task further, more than twenty more or less competing ‘standards’ are available… After 10 years of laborious migrations to object-oriented applications and 3-tier architecture, must companies really start again from scratch? Are SOA and Workflow synonymous, complementary or competitive? How can the needs for optimisation of processes be met immediately, whilst also integrating the latest technologies?
PRESENTATION OF SOA

In basic terms, Service-Oriented Architecture is an approach to implementing applications based on the use of ‘services’. The ‘service’ is a re-usable component available either on the internal or the external company network. This component takes the form of a Web Service, which means the architecture can be fully modular and independent of the service’s host platform, and enables it to be based on standards such as HTTP, XML and SOAP. The Web Services provider can therefore easily offer ‘consumer’ applications, either internal or external to the company, easily integrated functionalities regardless of the medium: Rich Client (Windows), Thin Client (Web) or Mobile Client (PDA). The application which consumes web services can “cherry pick” the supplier’s functionalities, independently of where they are located on the network, or of the technologies and databases used.
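To make the Web Service building block concrete, here is a sketch, using only the Python standard library, of the SOAP envelope such a consumer sends over HTTP; the operation name and target namespace below are invented for illustration.

```python
# Sketch: constructing a SOAP 1.1 request envelope.
# Operation name and target namespace are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(operation, target_ns, params):
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{target_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{target_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = soap_request("getOrderStatus", "urn:example:orders",
                       {"orderId": 42})
```

The consumer posts this XML to the provider's endpoint over plain HTTP, which is why the architecture stays independent of both parties' platforms.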
The provision of a directory of Web services means that the most appropriate services for specific needs can easily be found. This concept is not a new one; the ‘object brokers’ DCOM and CORBA already provided these types of services. SOA is materialising through the large-scale adoption of software solutions for standards such as XML and Web Services, and these technologies have made it possible to create complete applications. SOA’s goal is therefore to make the creation and consumption of web services a systematic feature of the company’s IT system architecture. It is recommended that services be processed individually in order to improve their re-usability in different, more complex work processes, also called business processes. It is therefore important that all necessary tools and standards be acquired rapidly in order to model and organise the execution of the various services.

We thus arrive at the heart of the problem: is SOA a re-invention of Workflow?

From a marketing point of view: the concept of ‘orchestrating’ web services is a little too abstract for companies; optimising business processes is far more explicit. EAI software designers oriented towards XML and Web services have naturally turned to SOA for a clear answer to a customer need. However, most of this software still specialises in the workflow of automatic processes, i.e. completely automated synchronous or asynchronous processes. We are therefore still in the realm of EAI.

From a technical point of view: SOA is an impressive platform for Workflow software. In fact, most BPM solutions can “consume” web services to execute automatic processes. From the opposite perspective, a process managed by a workflow engine can be seen as an asynchronous web service by another application. These functionalities are enabled using WfMC and OASIS standards such as ASAP and Wf-XML 2.0.
An organisation which implements a Service-Oriented Architecture will therefore considerably optimise the processes managed by its Workflow solution. Should an SOA be implemented before using a Workflow solution? This is not necessary; indeed, it is not recommended. In fact, the Workflow project enables the company processes to be accurately modelled and also highlights the automatic processes which will become the future Web Services of your SOA. The implementation of workflow software is a natural step towards evolving your information system into SOA. Furthermore, from the point of view of the end-user, a web service remains something which is difficult to materialise. Chronologically, you can therefore respond to immediate needs for optimisation of your processes with a workflow engine, then optimise your automatic processes by building the SOA.
Figure 1: Workflow Server and SOA complementarity
MANAGING YOUR WORKFLOW PROJECT WITH SOA IN MIND

It is still too early to imagine the implementation of fully automated work processes. The fact is, collaborative project management and outsourcing both entail a significant number of human operations (especially of the approval type). In most administrative processes, automatable operations in the form of web services are still rare exceptions. However, the most significant productivity increases will be generated by such operations. For example, the implementation of a workflow system for a purchase request validation process will result in a considerable increase in productivity. Automating the operation of registering a validated purchase request in the ERP using a web service, for example, saves even more precious time and eliminates the risk of input errors. Finally, responsiveness is a key element; the processes must therefore be adaptable according to internal or external changes in the organisation. It is therefore particularly important to integrate these parameters into your workflow implementation method whilst also focusing on building your SOA architecture. Here are a few of the important steps to ensuring success in this type of project:

Make an inventory of your processes

Most of the time, a workflow project is limited to the implementation of a single process. However, it is very useful to look objectively at all your processes (by department, field, etc.). This step will enable you to identify the automatic processing operations that are common to different processes. Thus, when you are creating these ‘services’ you will be able to anticipate their re-usability, avoiding situations where, for example, they would be too highly specialised with regard to the consumer process.
Work by process version

In some cases, web service creation can represent a substantial sub-project in terms of your workflow. This can noticeably lengthen the workflow creation process, for example due to a lack of resources for analysing and developing the web service. Automating a process with a workflow engine can lead to a significant increase in productivity and quality of service. Wherever possible, it is therefore appropriate to launch a Version One (as part of the pilot project, for example) with simple web services. The production of this Version One can also incorporate any improvements or changes compared with the initial plans for the web service.

Design a SOA Manager

The number of web services is set to increase sharply, along with the number of consumers. As with databases, it is important that your library of web services be organised efficiently. First, a person should be designated as responsible for updating the directory of available web services (description, version, security, etc.), including their use internally and possibly externally to the company. This is also the occasion for ensuring a level of quality monitoring, especially in terms of standardisation parameters, performance, availability and security.

To summarise, a pragmatic approach to these new technologies will enable you to achieve your workflow objectives whilst also evolving towards the SOA scenario. The efficient implementation of SOA requires a certain measure of corporate maturity vis-à-vis automation of company processes. However, in the current context, aiming to carry out all tasks simultaneously, with a single tool and using the same developer profiles, seems particularly difficult.
SOA STANDARDS OR WORKFLOW STANDARDS: WHAT’S BEST?

As mentioned above, Service-Oriented Architecture constitutes a particularly powerful platform for accelerating and optimising the management of these Workflows. Here we are talking about different levels of modelling and execution. However, there are many common points, or even overlaps, between BPM/Workflow and SOA. The problem today actually lies with the relative competition between standardisation organisations. In fact, web service, XML, EAI and EDI standards converge in particular towards BPM and workflow. Historically, the WfMC was the precursor in this field, in particular with XPDL, which defines processes, and Wf-XML for inter-application exchanges. The association with OASIS to create the ASAP protocol constitutes a cross-over point with the other Web service type standards. In the field of BPM, the “standard stack” is becoming increasingly difficult to understand. The positioning of BPEL in relation to XPDL and BPML is a perfect example of this. This multiplication of standards (BP.., WS..), whilst being presented as SOA, BPM or Workflow standards, poses a real problem when choosing a standard or, more likely, a combination of standards in order to build the technical solutions for managing the company’s processes. In the
long term, the arrival of major actors on the market will resolve these problems de facto. Meanwhile, increasing process automation needs must be met. A conservative approach is to use the Workflow and SOA “core” standards, that is, the Workflow model defined by the WfMC and Web Services through SOAP. More specifically, SOAP for inter-application communications, XPDL for defining processes, BPEL for organising web services and ASAP for managing asynchronous services are standards which make workflow implementations easier to integrate into an SOA architecture.
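As an illustration of XPDL's role among those core standards, a minimal process definition for the purchase request example might look like this (identifiers are invented; only the element structure follows the XPDL 1.0 schema):

```xml
<!-- Illustrative sketch; Ids and names are hypothetical. -->
<Package Id="purchasing" xmlns="http://www.wfmc.org/2002/XPDL1.0">
  <WorkflowProcesses>
    <WorkflowProcess Id="purchase_request">
      <Activities>
        <!-- a human approval step -->
        <Activity Id="approve" Name="Approve purchase request"/>
        <!-- an automatic step, a candidate web service for the SOA -->
        <Activity Id="register" Name="Register request in ERP"/>
      </Activities>
      <Transitions>
        <Transition Id="t1" From="approve" To="register"/>
      </Transitions>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
```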
TECHNICAL CHALLENGES OF SOA

The major issue with SOA architecture is the level of service offered by the web services. SOAP is simultaneously over-simplistic and over-complicated: too simplistic to handle issues of security, addressing and availability, yet too complex to be implemented rapidly. Two distinct trends can therefore be noted: on one hand, a profusion of new specifications and standards (WS…) to provide the functional elements lacking in SOAP; on the other, simpler alternatives such as REST. Should we conclude therefore that this architecture is still in its infancy? Conceptually speaking, no, because the creation of collaborative and shared applications responds to the realities of today’s information systems. The increasing use of technologies and Web standards is undeniable. From a technical point of view, it is clear that packaged solutions are emerging on standards that are currently still under development. The best way to avoid future problems related to this choice is to re-organise your SOA project within a functional framework, that is, to optimise business processes and therefore the Workflow project. Current Workflow solutions include numerous pre-packaged, stable functionalities for automating your processes. They enable immediate use of web services for carrying out automatic processes without the need to acquire additional tools.
CONCLUSION Service-Oriented Architecture is clearly the solution for organising information systems, responding on various levels to new challenges in application development and communication. The work involved in system migration, and choosing the appropriate moment to effect this migration, are the main obstacles to rapid implementation in companies. To restrict SOA implementation to a purely technical migration project could both impede its acceptability on budgetary grounds and run the risk of choosing badly between solutions still in their infancy. On the other hand, integrating Web services in Workflow projects to perform automatic processes is the most pragmatic solution for gradual and efficient implementation of SOA in terms of ROI. The expertise and work of the WfMC have contributed significantly to this field. Collaboration with other standardisation organisations has enabled integration of the full functional and technical aspects. However, the emerging
form of competition must not be allowed to reduce the number of similar standards by squeezing them out of the market. Workflow, BPM and SOA are therefore not competitors, but the proliferation of marketing and techniques surrounding process automation is such that solutions are particularly difficult to understand from the client company’s point of view. In this particular context, those solutions presenting tools which are easiest to implement and use will almost certainly have the highest rate of success.
A Comparison of XML Interchange Formats for Business Process Modelling¹ Jan Mendling and Gustaf Neumann, Vienna University of Economics and Business Administration; Markus Nüttgens, Hamburg University of Economics and Politics ABSTRACT This paper addresses the heterogeneity of business process metamodels and related interchange formats. We first present different approaches to interchange format design and the effects of interchange format specification. Moreover, we derive the superset of metamodel concepts from 15 currently available XML-based specifications for business process modelling. These concepts are used as a framework for comparing the 15 specifications.
1. INTRODUCTION Heterogeneity of Business Process Modelling (BPM) techniques is a notorious problem for business process management. Although standardization has been discussed for more than ten years, the lack of a commonly accepted interchange format is still the main encumbrance to business process management [Delphi 2003]. The reason why interchange is still a problem can be attributed not least to the different perspectives of business analysts and system engineers on business processes [MR 2004]. Recently, various new specifications for Web Service-based BPM, Web Service composition, and Web Service choreography have been proposed. At least in the short run, they contribute to a further increase in the heterogeneity of XML interchange formats for business process modelling. Yet, the interrelation of these formats is still poorly understood. This paper tries to identify the superset of concepts covered in the metamodels of the various proposals. We propose to use this set of concepts as a framework for the comparison of BPM interchange formats. It might serve as a first step towards a reference model for BPM that unifies the different perspectives on BPM. The rest of the chapter is structured as follows. Section 2 gives an overview of interchange formats, their rationale, and general design criteria. Section 3 introduces a framework for the comparison of different XML interchange formats for BPM based on concepts derived from the metamodels of 15 BPM specifications. In Section 4 these specifications are compared against the framework and briefly described. In Section 5 related work is discussed before Section 6 concludes the chapter with an outlook on future research.
¹ An earlier version of this paper has been published in F. Feltz, A. Oberweis, B. Otjacques, eds.: Proc. of EMISA 2004, Luxembourg, Vol. 56 of Lecture Notes in Informatics, pages 129–140, Oct. 2004. Copyright Gesellschaft für Informatik (GI), Bonn, Germany.
COMPARISON OF XML INTERCHANGE FORMATS
2. INTERCHANGE FORMAT SPECIFICATION The specification and standardization of interchange formats is a widespread strategy for achieving inter-operability of applications (see e.g. [Koegel 1992]). In essence, an interchange format defines the structure of a file via a grammar or a schema that represents data relevant for a certain application domain. Independent software components can then consume data files that other applications produce. As a consequence, a standardized interchange format provides for simple integration of applications (see e.g. [HW 2004]). According to a survey of experience reports on interchange format design projects, three general effects of interchange format standardization can be distinguished: a pragmatic effect, an economic effect, and an effect of conceptual consolidation [Mendling 2004].
• The pragmatic effect establishes inter-operability between heterogeneous applications of the same or related domains. This simplifies collaboration between people who work with different applications. An agreed-upon interchange format avoids discontinuity of media. Furthermore, the interchange format can be used as an intermediary format for translations between multiple applications, reducing the number of translation programs from O(n²) to O(n) [WHB 2002].
• The economic effect refers to positive network effects. These network effects caused by the standardization of an interchange format might leverage competition between software vendors, because interchangeability of application data reduces vendor lock-in. It becomes cheaper to change vendor or to buy complementary software that uses the same interchange format [Crawford 1984]. This might motivate the development of new tools. Moreover, the specification of an interchange format might even create a market: multimedia applications are a good example of this case (cf. e.g. [Koegel 1992]).
• The effect of conceptual consolidation is triggered by the standardization process of an interchange format. In order to be successful, the interchange format has to reflect at least the commonly used concepts of a certain domain. Accordingly, the specification of an interchange format may be regarded as a special kind of reference modelling that leverages the explication of concepts and the consolidation of terminology of a given domain [OMGM 1998].
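The translation-program arithmetic in the pragmatic effect above can be made concrete: with n tools and pairwise converters, each ordered pair of distinct tools needs its own direction-specific translator, i.e. n(n−1) programs, whereas a shared intermediary format needs only one importer and one exporter per tool, i.e. 2n. A small sketch:

```python
def pairwise_converters(n: int) -> int:
    # Each ordered pair of distinct tools needs its own translator: O(n^2).
    return n * (n - 1)

def hub_converters(n: int) -> int:
    # With a standard intermediary format, each tool needs one importer
    # and one exporter: O(n).
    return 2 * n

for n in (3, 5, 10):
    print(n, pairwise_converters(n), hub_converters(n))
# With 10 tools: 90 pairwise translators versus 20 with a hub format.
```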
All three of these effects may be regarded as beneficial. Standardization bodies like the Workflow Management Coalition have established standardization procedures in order to make these benefits effective. For a discussion of standardization processes in practice see e.g. [MNS 2005]. The specification of interchange formats involves three interrelated aspects: the metamodel, the serial representation, and the mappings between the two (see Figure 1, grey area).
[Figure 1 shows a Metamodel that maps to an Interchange Format; a Model instantiates the Metamodel, an Interchange Format Instance instantiates the Interchange Format, and the Interchange Format Instance represents the Model.]
Figure 1: Relationship between Metamodels and Interchange Formats.
The metamodel is used to define the modelling language for a certain domain [KK 2002]. Various techniques are available for the definition of metamodels, including ER diagrams [Chen 1976], UML class diagrams [OMG 2004], graphs [Winter 2002], or XML Schema [BLMM 2001, BM 2001]. In order to build the foundation of an interchange format, a respective metamodel should meet certain design criteria. These design criteria include simplicity, completeness, generality, unambiguity, and extensibility [Mendling 2004].
• Simplicity refers to freedom from complexity [SDSK 2000] in order to provide a compact metamodel. This metamodel should be easy to understand for domain experts. In the context of XML this criterion might advocate not using concepts like substitution groups.
• On the other hand, completeness demands that a sufficient set of concepts is included in order to provide the expressive power needed for representing all relevant aspects of the domain [Crawford 1984]. The representation of control flow is an example of a concept that a BPM metamodel has to include, among others, in order to be complete.
• Generality has to be offered by the interchange format in order to be applicable in all scenarios that are relevant to the domain (see e.g. [Crawford 1984]). Especially those concepts should be taken into account that are included in existing tools (see e.g. [Eurich 1986]). This implies that a general BPM metamodel should not be designed only with, e.g., supply chain scenarios in mind.
• Moreover, the interchange format has to offer an unambiguous view of the domain. Precise terms need to be chosen and the related semantics have to be defined formally. By this means an interchange format might prove valuable for the consolidation of terminology in the respective domain (see e.g. [OMGM 1998]). The Glossary of the Workflow Management Coalition illustrates the need for precise definition of terms [WfMC 1999].
• Extensibility belongs to the most prominent criteria of interchange formats (see e.g. [Crawford 1984, Koegel 1992, SDSK 2000]). It provides for the inclusion of additional information in a predefined way. This is especially desirable because future developments, new requirements, and changing technology might motivate unanticipated revisions of the format in a priori unknown directions. Extensibility grants a smooth integration of such new aspects. XPDL, for example, offers so-called ExtendedAttributes to capture additional information.
Models complying with the metamodel of an interchange format need to be expressed in a serial representation. Such a serial representation may follow e.g. a byte encoding, a plain text encoding, or XML [BPSM 2000]. The structure of the serial representation is defined via a schema. Furthermore, XML-based techniques like RDF [Beckett 2004] or GXL [Winter 2002] can be customized for business process modelling as well. A serial representation of an interchange format should also meet certain design criteria. These include readability, ease of implementation, platform independence, efficiency, free availability, and support of standards [Mendling 2004]. The equivalence of metamodel and serial representation is important in order to avoid loss of information [SDSK 2000]. Formally, this implies that isomorphic mappings between them must be available. There are different approaches to specifying the metamodel, the interchange format, and the respective mappings. These include the following:
• Interchange Format Only: Some interchange formats like BPEL4WS [ACDG 2003] provide only an XML Schema. This schema can be regarded as a metamodel. Thus, no mappings need to be defined between metamodel and interchange format.
• Mappings Only: Another approach is taken by XMI [OMG 2003b]. In order to offer an interchange format for UML models, the XMI specification defines production rules (mappings) from the Meta-Object Facility (MOF) [OMG 2002] meta-metamodel of UML to XML and XML Schema representations. Actually, XMI does not define the interchange format for UML models, but the rules to derive an interchangeable representation of models. As a consequence, XMI implicitly defines a set of interchange formats that correspond to a set of UML (meta)models.
• Joint Specification: Frequently, a joint specification of a metamodel and a respective interchange format is given. For example, the Petri Net Markup Language (PNML) [BCHK 2003] defines a metamodel via a UML class diagram and an XML interchange format via a schema.
Although an interchange format should be isomorphic to the metamodel, actual software applications and tools use a proprietary internal model. This is frequently similar, but not identical, to the standardized metamodel. Accordingly, the import and export of interchange-format-compliant files is a homomorphic mapping to and from the proprietary model. It is therefore important for a metamodel and the related interchange format to meet the design criterion of completeness. An interchange format is more likely to gain acceptance when a complete set of modelling concepts is supported. The following section aims to identify the superset of concepts used in the various metamodels of BPM interchange formats, which is then used as a framework for comparing the different approaches.
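The isomorphism requirement between metamodel and serial representation can be illustrated with a toy round trip: a model is serialized to XML and parsed back, and a lossless format makes this round trip the identity. All names in this sketch (the `Task` element and its attributes) are invented for the example, not drawn from any of the formats discussed here:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Task:
    # Toy metamodel element: a named task with an optional successor.
    name: str
    next: Optional[str]

def to_xml(tasks: List[Task]) -> str:
    """Serial representation: one <task> element per metamodel Task."""
    root = ET.Element("process")
    for t in tasks:
        attrs = {"name": t.name}
        if t.next is not None:
            attrs["next"] = t.next
        ET.SubElement(root, "task", attrs)
    return ET.tostring(root, encoding="unicode")

def from_xml(doc: str) -> List[Task]:
    """Inverse mapping back to metamodel instances."""
    root = ET.fromstring(doc)
    return [Task(e.get("name"), e.get("next")) for e in root.findall("task")]

model = [Task("check-order", "ship"), Task("ship", None)]
assert from_xml(to_xml(model)) == model  # lossless: the mappings are isomorphic
```

A tool with a richer proprietary internal model would only achieve a homomorphic mapping here: exporting and re-importing would lose the attributes the format cannot carry.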
3. METAMODEL CONCEPTS OF BUSINESS PROCESS MODELLING PROPOSALS Recently, Business Process Modelling has become the subject of various specification and standardization efforts. Different consortia, including the Object Management Group (OMG), the Organization for the Advancement of Structured Information Standards (OASIS), the Business Process Management Initiative (BPMI), the United Nations Centre for Trade Facilitation and Electronic Business
(UN/CEFACT), the World Wide Web Consortium (W3C), and the Workflow Management Coalition (WfMC), as well as individual software vendors and academic groups, have proposed metamodels and related interchange formats for Business Process Modelling. From the analysis of 15 specifications we gathered a list of 13 high-level concepts that are included in these metamodels. These include the following:
• Task I/O: In this paper we use the term task to refer to basic units of work whose temporal and logical relationships are modelled in a process. The input and output (I/O) of these tasks may be modelled using simple or complex XML types.
• Task Address: The address specifies where or how a service can be located to perform a task. The address can be modelled directly via a URI reference of a service or indirectly via a query that identifies a service address.
• Quality Attributes: When a set of potential services is generated via a query, quality attributes may be used to identify the “best” service.
• Task Protocol: The protocol defines a set of conventions to control interaction with a service performing a task. Web Services use SOAP as a protocol.
• Control Flow: The control flow defines the temporal and logical relationships between different tasks. Control flow can be specified via directed graphs, block-oriented nesting of control instructions, or process algebra.
• Data Handling: Data handling specifies which variables are used in a process instance and how the actual values of these variables are calculated.
• Instance Identity: This concept addresses how a process instance and related messages are identified. Correlation uses a set of message elements that are unique for a process instance in order to route messages to process instances. The generation of a unique identifier which is included in the message exchange is an alternative approach.
• Roles: Roles provide an abstraction of participants in a process. Roles are assigned to tasks, and users to roles. A staff resolution mechanism can then allocate tasks of a process instance to users.
• Events: Events represent real-world changes. Respective event handlers provide the means to respond to them in a predefined way.
• Exceptions: Exceptions or faults describe errors during the execution of a process. In case of exceptions, dedicated exception handlers undo unsuccessful tasks or terminate the process instance.
• Transactions: ACID transactions define a short-running set of operations that have all-or-nothing semantics. They have to be rolled back when one partial operation fails. Business transactions represent long-running transactions. In case of failure, the effects of a business transaction are erased by a compensation process.
• Graphic Position: The graphical presentation of a business process model contributes to its comprehensibility. The attachment of graphical position information can be an explicit part of the metamodel.
• Statistical Data: Performance analysis of a business process builds on statistical data such as costs or duration of tasks.
This list of concepts can be used to compare different BPM specifications. In the subsequent section we will use it to benchmark 15 BPM interchange formats for their completeness.
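The correlation mechanism described under Instance Identity above can be sketched as a router that maps a tuple of message element values to a process instance. The field names and instance identifiers in this sketch are invented for the example:

```python
class CorrelationRouter:
    """Route incoming messages to process instances via a correlation set."""

    def __init__(self, correlation_fields):
        # The message elements whose combined values are unique per instance,
        # e.g. ("customerId", "orderId") in a BPEL-style correlation set.
        self.fields = tuple(correlation_fields)
        self.instances = {}  # correlation key -> process instance id

    def key(self, message: dict) -> tuple:
        return tuple(message[f] for f in self.fields)

    def register(self, message: dict, instance_id: str) -> None:
        # Bind this instance to the correlation values of its first message.
        self.instances[self.key(message)] = instance_id

    def route(self, message: dict) -> str:
        # Deliver a later message to the instance with matching values.
        return self.instances[self.key(message)]

router = CorrelationRouter(("customerId", "orderId"))
router.register({"customerId": "C7", "orderId": "O42"}, instance_id="proc-1")
target = router.route({"customerId": "C7", "orderId": "O42", "status": "shipped"})
```

The alternative approach mentioned above, carrying a generated unique identifier in every message, would replace the correlation key with that single identifier.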
4. A COMPARISON OF BUSINESS PROCESS MODELLING PROPOSALS The 13 metamodel concepts gathered in the previous section are now considered for comparing the completeness of the 15 BPM interchange format proposals. The interchange formats are used in at least four different areas of application:
• Composition: Composition refers to the definition of the internal implementation of executable business processes. Web Service composition defines executable business processes that are built from a set of Web Services.
• Choreography: Choreography defines the externally observable behavior of a business process. Web Service choreography refers to the correct content and order of messages that two parties exchange in a business process.
• Business Analysis: Business analysis refers to the presentation of business processes to managers. It builds on visualization of processes and annotation with statistics.
• Formal Analysis: This application refers to the verification of different formal quality criteria. These include e.g. soundness [van der Aalst 2000].
Figure 2 gives an overview of the findings. A plus sign indicates that the concept named at the left of the row is included in the metamodel of the proposal named at the top of the column; a minus sign denotes that the concept is not included. The figure shows that none of the specifications addresses all 13 concepts. BPEL4WS, BPMN, and WSFL yield good results, each lacking only three concepts. BPDM, which is still in the process of specification, achieves the best score, missing only two concepts. In this context it is important to mention that a plus sign for a concept does not imply that the languages offer similar primitives to capture that high-level concept. Although control flow is the only concept supported by all specifications, there may be huge differences in the set of control flow primitives available in different languages (see [AHKB 2003]). We will now discuss each proposal in detail.
[Figure 2 is a matrix comparing the 15 proposals (columns: BPDM, BPEL4WS, BPML, BPMN, BPSS, EPML, OWL-S, PNML, UML Act.D., WS-CDL, WSCI, WSCL, WSFL, XLANG, XPDL) against the 13 metamodel concepts (rows: Task I/O, Task Address, Quality Attributes, Protocol, Control Flow, Data Handling, Instance Identity, Roles, Events, Exceptions, Transactions, Graphic Position, Statistical Data); each cell holds a plus sign if the concept is supported and a minus sign otherwise.]
Figure 2: Overview of BPM Interchange Formats
• BPDM: OMG’s Business Process Definition Metamodel (BPDM) is still in the process of standardization. BPDM will be MOF compliant. As a consequence, the respective BPDM interchange format will rely on XMI production rules. According to the Request for Proposals [OMG 2003a], BPDM is expected to support implementational aspects like task input and output, address, and protocol. Furthermore, BPDM will include procedural and rule-based control flow concepts. Data handling, instance identification, and roles are also supported, as well as events, exceptions, and transaction compensation. The inclusion of audit information is also a requirement. Yet, graphic position information of objects in a visual model is not mentioned.
• BPEL4WS: Business Process Execution Language for Web Services (BPEL4WS or BPEL) [ACDG 2003] has moved from a consortium of major software vendors to OASIS. BPEL is specified as an interchange format only, via an XML Schema. BPEL models tasks as calls to Web Services whose input and output are specified by messages and whose address is identified via Uniform Resource Identifiers (URI) of WSDL port types. SOAP is used as the communication protocol. Control flow in BPEL can be modelled block-oriented or graph-oriented; data handling is expressed via variables and related operations. The identification of process instances is achieved via correlation sets. Roles of process participants are defined via so-called partner link types. Furthermore, BPEL supports handling of events and faults as well as compensation of transactions. BPEL can be used to describe
executable Web Service composition as well as Web Service choreography.
• BPML: The Business Process Modeling Language [Arkin 2002] proposed by BPMI is very similar to BPEL [MM 2003]. The main difference is that BPML allows multiple processes to be specified in one XML document, together with the related communication between those processes. Furthermore, BPML is not tied to WSDL. Accordingly, the communication protocol is left to a BPML-compliant implementation.
• BPMN: The Business Process Modeling Notation [White 2004], also developed by BPMI, aims to unify the different graphical notations for business processes. The specification also provides a mapping to BPEL. Therefore, its metamodel reflects most of BPEL’s concepts except message correlation. Additional specifications will define a BPMN metamodel based on MOF. This will permit serialization with XMI production rules for XML interchange.
• BPSS: The Business Process Specification Schema [CCKH 2001] is part of OASIS and UN/CEFACT’s work on ebXML. It includes a metamodel and an XML Schema for Web Service choreography. Accordingly, it does not address implementational aspects like data handling or process instance identification. It supports the definition of roles, exceptions, and transactions in an inter-organizational message exchange.
• EPML: The Event-Driven Process Chain (EPC) Markup Language (EPML) [MN 2005] is an academic proposal. It captures the control flow elements of EPCs. Further aspects can be defined via extensions. As EPML aims to facilitate graphical model interchange, it includes graphical position information for each EPC model object.
• OWL-S: OWL-Services (OWL-S) [APSS 2003] is an academic proposal for a service metamodel represented in OWL. OWL-S builds on an input-output-preconditions-effects quadruple to describe services. It also allows the definition of resources, which we categorized as roles in Figure 2. OWL-S permits the definition of so-called groundings, which are similar to a WSDL binding to a protocol and related endpoints.
• PNML: The Petri Net Markup Language [BCHK 2003] is an academic proposal for an XML interchange format for Petri Net models. It supports the basic Petri Net syntax elements and can be extended to represent arbitrary Petri Net types. The eXchangeable Routing Language (XRL) [Norta 2003] is based on PNML and can be executed on a dedicated infrastructure.
• UML 2 Activity Diagram: Activity Diagrams of the Unified Modeling Language (UML) [OMG 2004] can be exchanged using XMI. Their metamodel includes concepts to model input and output of tasks, control flow, data handling, roles, events, exceptions, and graphical information.
• WS-CDL: W3C’s Web Service Choreography Description Language [KBRY 2004] is so far available only as a last call working draft. It builds on WSDL and SOAP and provides different algebraic control flow primitives. It also supports data handling, role definition, as well as exception and transaction modelling.
• WSCI: W3C’s Web Service Choreography Interface [AAFK 2002] provides a set of extensions to WSDL in order to describe the process behavior of message exchanges. Beyond input and output message types, WSDL bindings, and correlation, WSCI also supports roles, exception handling, and transactions.
• WSCL: Hewlett-Packard’s Web Service Conversation Language [BBBC 2002] defines a minimal set of concepts in order to describe Web Service choreographies, including message types, protocol, and service location. The specification contains a metamodel and a related XML Schema.
• WSFL: IBM’s Web Services Flow Language [Leymann 2001] is one of the predecessors of BPEL. It includes most of the concepts except transaction support, graphical position information, and statistical data. Control flow in WSFL is modelled via directed graphs.
• XLANG: Microsoft’s XLANG [Thatte 2001] is the second predecessor of BPEL. It defines WSDL extensions to describe the process behavior of a Web Service, similar to WSCI. Additionally, it provides means for defining message correlation, roles, event and exception handling, as well as transaction declaration.
• XPDL: The XML Process Definition Language [WfMC 2002] is a standardized interchange format for business process models proposed by the WfMC. It includes various concepts like task input/output and address, control flow, data handling, roles, events, and exceptions. It is also the only specification that addresses process statistics like durations and costs.
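The contrast between graph-oriented control flow (as in WSFL or XPDL) and block-oriented control flow (as in XLANG or parts of BPEL) noted in the descriptions above can be sketched with a toy example: the same process expressed once as a directed graph of control links executed in topological order, and once as nested sequence/flow blocks. The task names are invented:

```python
from graphlib import TopologicalSorter

# Graph-oriented style: tasks plus directed control links
# (each task maps to the set of tasks that must precede it).
links = {"ship": {"check"}, "bill": {"check"}, "archive": {"ship", "bill"}}
graph_order = list(TopologicalSorter(links).static_order())
# "check" runs first, "archive" last; "ship" and "bill" may run in parallel.

# Block-oriented style: nested sequence/flow constructs.
block = ("sequence", ["check", ("flow", ["ship", "bill"]), "archive"])

def flatten(node):
    """Linearize a block structure (a flow's branches in listed order)."""
    if isinstance(node, str):
        return [node]
    _kind, children = node
    return [t for child in children for t in flatten(child)]

block_order = flatten(block)
```

Both styles here describe the same ordering constraints; the workflow-pattern analysis cited in Section 5 examines where the two styles genuinely differ in expressiveness.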
5. RELATED WORK A lot of work on business process model interchange formats and related metamodels is dedicated to the comparison of only two or three proposals. Examples include comparisons of BPEL and BPML [MM 2003]; DAML-S (the predecessor of OWL-S) and BPEL [MM 2002]; and XPDL, BPEL, and BPML [Shapiro 2002]. Other approaches define metamodels or lists and use them as a framework for comparison (see e.g. [BKKR 2003], [SAJP 2002], [RG 2002], or [zur Muehlen 2004]). Our approach complements this work by providing a list of concepts that are extracted from actual specifications. To the best of our knowledge, our list of XML-based business process modelling specifications is exhaustive at the time this chapter is written. It extends the list of proposals gathered at the XML4BPM workshop [NM 2004] and those listed on the Cover Pages [Cover 2003]. Another approach is taken by [AHKB 2003], who identify workflow patterns for control flow semantics. A similar approach seems well suited for each of the high-level metamodel concepts identified in this paper in order to build the foundation of a reference model for business process management. This will be subject to future research.
6. CONCLUSION AND FUTURE WORK In this chapter we discussed interchange format specification in the context of BPM. Furthermore, we presented a framework for comparing XML-based business process modelling specifications that builds on the superset of concepts extracted from the metamodels of 15 BPM specifications. Moreover, we applied this framework to compare the 15 BPM specifications. With our work we aim to contribute to a better comparison of heterogeneous approaches to BPM. This may finally result in a BPM reference metamodel and a related general interchange format for BPM. Yet, the high-level metamodel concepts identified in this chapter need further in-depth analysis, similar to the workflow pattern analysis reported in [AHKB 2003]. Such analysis will be subject to future research.
REFERENCES
[AAFK 2002] Arkin, A., Askary, S., Fordin, S., Kawaguchi, K., Orchard, D., Pogliani, S., Riemer, K., Struble, S., Takacsi-Nagy, P., Trickovic, I., and Zimek, S.: Web Service Choreography Interface (WSCI) 1.0. W3C Note 8 August. World Wide Web Consortium. 2002.
[ACDG 2003] Andrews, T., Curbera, F., Dholakia, H., Goland, Y., Klein, J., Leymann, F., Liu, K., Roller, D., Smith, D., Thatte, S., Trickovic, I., and Weerawarana, S.: Business Process Execution Language for Web Services, Version 1.1. BEA Systems, IBM Corp., Microsoft Corp., SAP AG, Siebel Systems. 2003.
[AHKB 2003] van der Aalst, W. M. P., ter Hofstede, A. H. M., Kiepuszewski, B., and Barros, A. P.: Workflow Patterns. Distributed and Parallel Databases. 14(1):5–51. July 2003.
[APSS 2003] Ankolenkar, A., Paolucci, M., Srinivasan, N., Sycara, K., Solanki, M., Lassila, O., McGuinness, D., Denker, G., Martin, D., Parsia, B., Sirin, E., Payne, T., McIlraith, S., Hobbs, J., Sabou, M., and McDermott, D.: OWL-S: Semantic Markup for Web Services (Version 1.0). OWL Services Coalition. 2003.
[Arkin 2002] Arkin, A.: Business Process Modeling Language (BPML). BPMI.org. 2002.
[BBBC 2002] Banerji, A., Bartolini, C., Beringer, D., Chopella, V., Govindarajan, K., Karp, A., Kuno, H., Lemon, M., Pogossiants, G., Sharma, S., and Williams, S.: Web Service Conversation Language (WSCL) 1.0. W3C Note 14 March. World Wide Web Consortium. 2002.
[BCHK 2003] Billington, J., Christensen, S., van Hee, K. E., Kindler, E., Kummer, O., Petrucci, L., Post, R., Stehno, C., and Weber, M.: The Petri Net Markup Language: Concepts, Technology, and Tools. In: W. M. P. van der Aalst and E. Best, eds., Applications and Theory of Petri Nets 2003, 24th International Conference, ICATPN 2003, Eindhoven, The Netherlands. Volume 2679 of Lecture Notes in Computer Science. pp. 483–505. 2003.
[Beckett 2004] Beckett, D.: RDF/XML Syntax Specification (Revised). W3C Recommendation 10 February. World Wide Web Consortium. 2004.
[BKKR 2003] Bernauer, M., Kappel, G., Kramler, G., and Retschitzegger, W.: Specification of Interorganizational Workflows - A Comparison of Approaches. In: Proceedings of the 7th World Multiconference on Systemics, Cybernetics and Informatics. pp. 30–36. 2003.
[BLMM 2001] Beech, D., Lawrence, S., Moloney, M., Mendelsohn, N., and Thompson, H. S.: XML Schema Part 1: Structures. W3C Recommendation 02 May. World Wide Web Consortium. 2001.
[BM 2001] Biron, P. V. and Malhotra, A.: XML Schema Part 2: Datatypes. W3C Recommendation 02 May. World Wide Web Consortium. 2001.
[BPSM 2000] Bray, T., Paoli, J., Sperberg-McQueen, C. M., and Maler, E.: Extensible Markup Language (XML) 1.0 (Second Edition). W3C Recommendation 6 October. World Wide Web Consortium. 2000.
[CCKH 2001] Clark, J., Casanave, C., Kanaskie, K., Harvey, B., Smith, N., Yunker, J., and Riemer, K.: ebXML Business Process Specification Schema Version 1.01. UN/CEFACT and OASIS. 2001.
[Chen 1976] Chen, P.: The Entity-Relationship Model - Towards a Unified View of Data. ACM Transactions on Database Systems (TODS). 1(1):9–36. 1976.
[Cover 2003] Cover, R.: Standards for Business Process Modeling, Collaboration, and Choreography. Website (last modified: November 20, 2003). Cover Pages. http://xml.coverpages.org/bpm.html. 2003.
[Crawford 1984] Crawford, J. D.: An electronic design interchange format. In: Proceedings of the 21st Conference on Design Automation, 25-27 June 1984, Albuquerque, United States. pp. 683–685. 1984.
[Delphi 2003] Delphi Group: BPM 2003 – Market Milestone Report. White Paper. 2003.
[Eurich 1986] Eurich, J. P.: A Tutorial Introduction to the Electronic Design Interchange Format. In: Proceedings of the 23rd ACM/IEEE Design Automation Conference, June 1986, Las Vegas, NV, United States. pp. 327–333. 1986.
[HW 2004] Hohpe, G. and Woolf, B.: Enterprise Integration Patterns. Addison Wesley. 2004.
[KBRY 2004] Kavantzas, N., Burdett, D., Ritzinger, G., and Lafon, Y.: Web Services Choreography Description Language Version 1.0. W3C Working Draft 12 October 2004. World Wide Web Consortium. October 2004.
[KK 2002] Karagiannis, D. and Kühn, H.: Metamodelling Platforms. Invited Paper. In: K. Bauknecht, A. Min Tjoa, and G. Quirchmayer, eds.: Proceedings of the 3rd International Conference EC-Web 2002 - Dexa 2002, Aix-en-Provence, France. Volume 2455 of Lecture Notes in Computer Science. pp. 182–196. 2002.
[Koegel 1992] Koegel, J. F.: On the Design of Multimedia Interchange Formats. In: Proceedings of the 3rd International Workshop on Network and Operating System Support for Digital Audio and Video, 12-13 November 1992, La Jolla, California, United States. pp. 262–271. 1992.
[Leymann 2001] Leymann, F.: Web Services Flow Language (WSFL). IBM Corp. 2001.
[Mendling 2004] Mendling, J.: A Survey on Design Criteria for Interchange Formats. Technical Report JM-2004-06-02. Vienna University of Economics and Business Administration - Department of Information Systems. http://wi.wu-wien.ac.at/~mendling/publications/TR04-Interchange.pdf. Vienna, Austria, 2004.
[MM 2002] McIlraith, S. and Mandell, D.: Comparison of DAML-S and BPEL4WS. Stanford University. http://www.ksl.stanford.edu/projects/DAML/Webservices/DAMLSBPEL.html. September 2002.
[MM 2003] Mendling, J. and Müller, M.: A Comparison of BPEL4WS and BPML. In: Tolksdorf, R. and Eckstein, R., eds.: Proceedings of Berliner XML-Tage. pp. 305–316. 2003.
[MN 2005] Mendling, J. and Nüttgens, M.: EPC Markup Language (EPML) - An XML-Based Interchange Format for Event-Driven Process Chains (EPC). Technical Report JM-2005-03-10. Vienna University of Economics and Business Administration - Department of Information Systems. http://wi.wu-wien.ac.at/~mendling/publications/TR05-EPML.pdf. Vienna, Austria, 2005.
[MNS 2005] zur Muehlen, M., Nickerson, J. V., and Swenson, K. D.: Developing Web Services Choreography Standards – The Case of REST vs. SOAP. Decision Support Systems 38 (2005), in press.
[MR 2004] zur Muehlen, M. and Rosemann, M.: Multi-Paradigm Process Management. In: Proc. of the Fifth Workshop on Business Process Modeling, Development, and Support - CAiSE Workshops. 2004.
[NM 2004] Nüttgens, M. and Mendling, J., eds.: XML4BPM 2004, Proceedings of the 1st GI Workshop XML4BPM - XML Interchange Formats for Business Process Management at the 7th GI Conference Modellierung 2004, Marburg,
196
COMPARISON OF XML INTERCHANGE FORMATS Germany. http://wi.wu-wien.ac.at/˜mendling/XML4BPM/xml4bpm2004-proceedings.pdf. March 2004. [Norta 2003] Norta, A.: Web Supported Enactment of Petri-Net Based Workflows with XRL/flower. Technical report. Eindhoven University of Technology. http://tmitwww.tm.tue.nl/staff/anorta/XRL/documentation/ATPN04.pdf 2003. [OMG 2002] OMG, ed.: Meta Object Facility. Version 1.4. Object Management Group. 2002. [OMG 2003a] Koethe, M. R.: Business Process Definition Metamodel. Request for Proposals (bei/2003-01-06). Object Management Group. 2003. [OMG 2003b] OMG, ed.: XML Metadata Interchange (XMI). Version 2.0. Object Management Group. May 2003. [OMG 2004] OMG, ed.: Unified Modeling Language. Version 2.0. Object Management Group. 2004. [OMGM 1998] Ohno-Machado, L., Gennari, J. H., Murphy, S. N., Jain, N. L., Tu, S. W., Oliver, D. E., Pattison-Gordon, E., Greenes, R. A., Shortliffe, E. H., and Barnett, G. O.: The GuideLine Interchange Format. Journal of the American Informatics Association. 5(4):357–372. July 1998. [RG 2002] Rosemann, M. and Green, P.: Developing a meta model for the BungeWand-Weber ontological constructs. Information Systems. 27:75–91. 2002. [SAJP 2002] Söderström, E., Andersson, B., Johannesson, P., Perjons, E., and Wangler, B.: Towards a Framework for Comparing Process Modelling Languages. In: Banks Pidduck, A., Mylopoulos, J., Woo, C. C., and Özsu, M. T., eds.: Proceedings of the 14th International Conference on Advanced Information Systems Engineering (CAiSE). volume 2348 of Lecture Notes in Computer Science. pp. 600–611. 2002. [SDSK 2000] St-Denis, G., Schauer, R., and Keller, R. K.: Selecting a Model Interchange Format - The Spool Case Study. In: Proceedings of the 33rd Annual Hawaii International Conference on System Sciences (HICSS-33), 4-7 January, 2000, Maui, Hawaii. 2000. [Shapiro 2002] Shapiro, R.: A Comparison of XPDL, BPML and BPEL4WS. Draft version 1.4. Cape Visions. http://xml.coverpages.org/Shapiro-XPDL.pdf. 
2002. [Thatte 2001] Thatte, S.: XLANG: Web Services for Business Process Design. Microsoft Corp. 2001.
[van der Aalst 2000] van der Aalst, W. M.: Workflow Verification: Finding Control-Flow Errors Using Petri-Net-Based Techniques. In: van der Aalst, W., Desel, J., and Oberweis, A., eds.: Business Process Management. Volume 1806 of Lecture Notes in Computer Science. pp. 161–183. Springer Verlag. 2000.
[WHB 2002] Wüstner, E., Hotzel, T., and Buxmann, P.: Converting Business Documents: A Classification of Problems and Solutions using XML/XSLT. In: Proceedings of the 4th International Workshop WECWIS. 2002.
[White 2004] White, S. A.: Business Process Modeling Notation. Specification. BPMI.org. 2004.
[Winter 2002] Winter, A.: GXL – Overview and Current Status. In: Proceedings of the International Workshop on Graph-Based Tools (GraBaTs), Barcelona, Spain. 2002.
[WfMC 1999] Workflow Management Coalition: Terminology & Glossary. Document Number WFMC-TC-1011, Issue 3.0, February 1999. http://www.wfmc.org/standards/docs/TC-1011_term_glossary_v3.pdf.
[WfMC 2002] Workflow Management Coalition: Workflow Process Definition Interface - XML Process Definition Language. Document Number WFMC-TC-1025, October 25, 2002, Version 1.0. Workflow Management Coalition. 2002.
[zur Muehlen 2004] zur Muehlen, M.: Workflow-based Process Controlling. Logos Verlag. 2004.
How to Measure the Control-flow Complexity of Web Processes and Workflows
Jorge Cardoso, Department of Mathematics and Engineering, University of Madeira, Portugal
SUMMARY
Several Web process and workflow specification languages and systems have been developed to ease the task of modeling and supporting business processes. In a competitive e-commerce and e-business market, organizations want Web processes and workflows to be simple, modular, easy to understand, easy to maintain, and easy to re-engineer. To achieve these objectives, one can calculate the complexity of processes. The complexity of processes is intuitively connected to characteristics such as readability, understandability, effort, testability, reliability, and maintainability. While these characteristics are fundamental in the context of processes, no methods exist that quantitatively evaluate the complexity of processes. The major goal of this chapter is to describe a measurement for analyzing the control-flow complexity of Web processes and workflows. The measurement is to be used at design time to evaluate the complexity of a process design before implementation.
INTRODUCTION
The emergence of e-commerce has changed the foundations of business, forcing managers to rethink their strategies. Organizations are increasingly faced with the challenge of managing e-business systems, Web services, Web processes, and workflows. Web services and Web processes promise to ease several current infrastructure challenges, such as data, application, and process integration. With the emergence of Web services, a workflow management system becomes essential to support, manage, and enact Web processes, both between enterprises and within the enterprise (Sheth, Aalst, & Arpinar, 1999). The effective management of any process requires modeling, measurement, and quantification. Process measurement is concerned with deriving a numeric value for attributes of processes. Measures, such as Quality of Service measures (Cardoso, Miller, Sheth, Arnold, & Kochut, 2004), can be used to improve process productivity and quality. To achieve effective management, one fundamental area of research that needs to be explored is the complexity analysis of processes. Process complexity can be viewed as a component of a Quality of Service (QoS) model for processes, since complex processes are more prone to errors. For example, in software engineering it has been found that program modules with high complexity indices have a higher frequency of failures (Lanning & Khoshgoftaar, 1994). Surprisingly, in spite of the fact that there is a vast literature on software complexity measurement, Zuse (Zuse, 1997) has found hundreds of different software metrics proposed and described, while no research on process complexity measurement has yet been carried out.
A Web process is composed of a set of Web services put together to achieve a final goal. As the complexity of a process increases, it can lead to poor quality and make the process difficult to reengineer. High complexity in a process may result in limited understandability and more errors, defects, and exceptions, so that processes need more time to develop, test, and maintain. Therefore, excessive complexity should be avoided. For instance, critical processes, in which failure can result in the loss of human life, require a unique approach to development, implementation, and management. For this type of process, typically found in healthcare applications (Anyanwu, Sheth, Cardoso, Miller, & Kochut, 2003), the consequences of failure are severe. The ability to produce processes of higher quality and less complexity is a matter of endurance.
Our work borrows some techniques from the branch of software engineering known as software metrics, namely McCabe's cyclomatic complexity (MCC) (McCabe, 1976). A judicious adaptation and usage of this metric during development and maintenance of Web process applications can result in better quality and maintainability. Based on MCC, we propose a control-flow complexity metric to be used during the design of processes. Web process control-flow complexity is a design-time metric. It can be used to evaluate the difficulty of producing a Web process before its implementation. When control-flow complexity analysis becomes part of the process development cycle, it has a considerable influence on the design phase of development, leading to further optimized processes. This control-flow complexity analysis can also be used in deciding whether to maintain or redesign a process.
Throughout this chapter, we will use the term “process” to refer to a Web process or a workflow and we will use the term “activity” to refer to a Web service or a workflow task.
CHAPTER STRUCTURE
This chapter is structured as follows. The first section presents related work. We will see that while a significant amount of work has been carried out in the software engineering field to quantify the complexity of programs, the literature on complexity analysis for Web processes and workflows is nonexistent. In the next section, we discuss the analysis of process complexity. We start by giving a definition of Web process complexity. We then enumerate a set of properties that are highly desirable for a model and theory to calculate the complexity of processes. In this section, we also motivate the reader towards a greater understanding of the importance and use of complexity metrics for processes. The next section gives an overview of McCabe's cyclomatic complexity. This overview is important since our approach borrows some of McCabe's ideas to evaluate complexity. Subsequently, we discuss process control-flow complexity. We begin this section by giving the semantics of process structure and representation. Once the main elements of a process are identified and understood, we show how control-flow complexity can be calculated for processes. Finally, the last section presents our conclusions and future work.
RELATED WORK
While a significant amount of research on the complexity of software programs has been done in the area of software engineering, the work found in the literature on complexity analysis for Web processes and workflows is nonexistent. Since research on process complexity is nonexistent, in this section we discuss the progress made in the area of software complexity. The last 30 years have seen a large amount of research aimed at determining measurable properties to capture the notion of software complexity. The earliest measures were based on analysis of software code, the most fundamental being a basic count of the number of Lines of Code (LOC). Despite being widely criticized as a measure of complexity, it continues to have widespread popularity, mainly due to its simplicity (Azuma & Mole, 1994). An early measure, proposed by McCabe (McCabe, 1976), viewed program complexity as related to the number of control paths through a program module. This measure provides a single number that can be compared to the complexity of other programs. It is also one of the more widely accepted software metrics, and it is intended to be independent of language and language format. The search for theoretically based software measures with predictive capability was pioneered by Halstead (Halstead, 1977). His complexity measurement was developed to measure a program module's complexity directly from source code, with emphasis on computational complexity. The measures were developed as a means of determining a quantitative measure of complexity based on program comprehension as a function of program operands (variables and constants) and operators (arithmetic operators and keywords which alter program control flow). Henry and Kafura (Henry & Kafura, 1981) proposed a metric based on the impact of the information flow on a program's structure. The technique suggests identifying the number of calls to a module (i.e., the flows of local information entering: fan-in) and the number of calls from a module (i.e., the flows of local information leaving: fan-out). The measure is sensitive to the decomposition of the program into procedures and functions, and to the size and flow of information into and out of procedures.
A recent area of research involving Web processes, workflows, and Quality of Service can also be considered related to the work in this chapter. Organizations operating in modern markets, such as e-commerce activities and distributed Web services interactions, require QoS management. Appropriate quality control leads to the creation of quality products and services; these, in turn, fulfill customer expectations and achieve customer satisfaction. Quality of service can be characterized along various dimensions. For example, Cardoso et al. (Cardoso, Sheth, & Miller, 2002) have constructed a QoS model for processes composed of three dimensions: time, cost, and reliability. Another dimension that could be considered relevant under the QoS umbrella is the complexity of processes. Therefore, the complexity dimension could be added and integrated into the QoS model already developed (Cardoso, Miller et al., 2004).
PROCESS COMPLEXITY ANALYSIS
The overall goal of process complexity analysis is to improve the comprehensibility of processes. The graphical representation of most process specification languages provides the user with the capability to recognize complex areas of processes. Thus, it is important to develop methods and measurements to automatically identify complex processes and complex areas of processes. Afterwards, these processes can be reengineered to reduce the complexity of related activities. One key to reengineering is the availability of a metric that characterizes complexity and provides guidance for restructuring processes.
Definition of Process Complexity
Several definitions have been given to describe the meaning of software complexity. For example, Curtis (Curtis, 1980) states that complexity is a characteristic of the software interface which influences the resources another system will expend or commit while interacting with the software. Card and Agresti (Card & Agresti, 1988) define relative system complexity as the sum of structural complexity and data complexity divided by the number of modules changed. Fenton (Fenton, 1991) defines complexity as the amount of resources required for a problem's solution. After analyzing the characteristics and specific aspects of Web processes and workflows, we believe that the definition best suited to describe process complexity can be derived from (IEEE, 1992). Therefore, we define process complexity as the degree to which a process is difficult to analyze, understand, or explain. It may be characterized by the number and intricacy of activity interfaces, transitions, conditional and parallel branches, the existence of loops, roles, activity categories, the types of data structures, and other process characteristics.
Process Complexity Measurement Requirements
The development of a model and theory to calculate the complexity associated with a Web process or workflow needs to conform to a set of basic but important properties. The metric should be easy to learn, computable, consistent, and objective.
Additionally, the following properties are also highly desirable (Tsai, Lopex, Rodriguez, & Volovik, 1986; Zuse, 1990): • Simplicity. The metric should be easily understood by its end users, i.e., process analysts and designers. • Consistency. The metric should always yield the same value when two independent users apply the measurement to the same process, i.e., they should arrive at the same result. • Automation. It must be possible to automate the measurement of processes. • Additivity. If two independent structures are put into sequence, then the total complexity of the combined structures is at least the sum of the complexities of the independent structures. • Interoperability. Due to the large number of existing specification languages, both in academia and industry, the measurements should be independent of the process specification language. A particular complexity value should mean the same thing whether it was calculated from a process written in BPEL (BPEL4WS, 2002), WSFL (Leymann, 2001), BPML (BPML, 2004), YAWL (Aalst & Hofstede, 2003), or some other specification language. The objective is to be able to set complexity standards and interpret the resultant numbers uniformly across specification languages.
These properties will be taken into account in the next sections when we introduce our model to compute the complexity of processes.
Uses of Complexity
Analyzing the complexity at all stages of process design and development helps avoid the drawbacks associated with high-complexity processes. Currently, organizations have not implemented complexity limits as part of their business process management projects. As a result, it may happen that simple processes come to be designed in a complex way. For example, important questions that can be asked about the process illustrated in Figure 1 (Anyanwu et al., 2003) are: "can the Eligibility Referral workflow be designed in a simpler way?", "what is the complexity of the workflow?" and "what areas or regions of the workflow are more complex and therefore more prone to errors?"
Figure 1. Eligibility Referral Workflow
The use of complexity analysis will aid in constructing and deploying Web processes and workflows that are simpler, more reliable, and more robust. The following benefits can be obtained from the use of complexity analysis: • Quality assessment. Process quality is most effectively measured by objective and quantifiable metrics. Complexity analysis allows calculating insightful metrics and thereby identifying complex and error-prone processes. • Maintenance analysis. The complexity of processes tends to increase as they are maintained over time (Figure 2). By measuring the complexity before and after a proposed change, we can minimize the risk of the change. • Reengineering. Complexity analysis provides knowledge of the structure of processes. Reengineering can benefit from the proper application of complexity analysis by reducing the complexity of processes. • Dynamic behavior. Processes are not static applications. They are constantly undergoing revision, adaptation, change, and modification to meet end users' needs. The complexity of these processes and their continuous evolution makes it very difficult to assure their stability and reliability. In-depth analysis is required for fixing defects in portions of processes of high complexity (Figure 2).
Figure 2. Process Complexity Analysis and Process Reengineering (complexity plotted against time: complexity rises during process adaptation and modification, and drops after complexity analysis and process reengineering)
OVERVIEW OF MCCABE'S CYCLOMATIC COMPLEXITY
Since our work to evaluate processes' complexity borrows some ideas from McCabe's cyclomatic complexity (MCC) (McCabe, 1976) to analyze software complexity, we start by describing the importance of MCC and illustrating its usage. This metric was chosen for its reliability as a complexity indicator and its suitability for our research. Since its development, McCabe's cyclomatic complexity has been one of the most widely accepted software metrics and has been applied to tens of millions of lines of code in both Department of Defense (DoD) and commercial applications. The resulting base of empirical knowledge has allowed software developers to calibrate measurements of their own software and arrive at some understanding of its complexity. Software metrics are often used to give a quantitative indication of a program's complexity. However, they are not to be confused with algorithmic complexity measures (e.g., Big-Oh "O"-notation), whose aim is to compare the performance of algorithms. Software metrics have been found to be useful in reducing software maintenance costs by assigning a numeric value to reflect the ease or difficulty with which a program module may be understood. McCabe's cyclomatic complexity is a measure of the number of linearly independent paths in a program. It is intended to be independent of language and language format (McCabe & Watson, 1994). MCC is an indication of a program module's control-flow complexity. Derived from a module's control graph representation, MCC has been found to be a reliable indicator of complexity in large software projects (Ward, 1989). This metric is based on the assumption that a program's complexity is related to the number of control paths through the program. For example, a 10-line program with 10 assignment statements is easier to understand than a 10-line program with 10 if-then statements.
MCC is defined for each module to be e - n + 2, where e and n are the number of edges and nodes in the control flow graph, respectively. Control flow graphs describe the logic structure of software modules. The nodes represent computational statements or expressions, and the edges represent transfer of control between nodes. Each possible execution path of a software module has a corresponding path from the entry to the exit node of the module’s
control flow graph. For example, in Figure 3, the MCC of the control flow graph for the Java code described is 14 - 11 + 2 = 5.
Figure 3. Example of a Java program and its corresponding flowgraph
Our major objective is to develop a metric that can be used in the same way as the MCC metric but to evaluate processes' complexity. One of the first important observations that can be made from the MCC control flow graph shown in Figure 3 is that it is extremely similar to Web processes and workflows. One major difference is that the nodes of an MCC control flow graph have identical semantics, while process nodes (i.e., Web services or workflow tasks) can have different semantics (e.g., AND-splits, XOR-splits, OR-joins, etc.). Our approach will tackle this major difference.
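As a minimal sketch of how MCC can be computed, the function below applies the e - n + 2 formula to an explicit edge list. The graph used here is hypothetical (an if-then-else followed by a loop), not the Java example of Figure 3, and the function name is illustrative.

```python
# Sketch (not from the chapter): McCabe's cyclomatic complexity
# MCC = e - n + 2, computed from an edge list of a control flow graph.
def cyclomatic_complexity(edges):
    """Return e - n + 2 for a single-entry, single-exit control flow graph."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical graph: an if-then-else that merges and then loops.
edges = [
    (1, 2), (1, 3),  # decision node 1: if-branch and else-branch
    (2, 4), (3, 4),  # both branches merge at node 4
    (4, 1),          # loop back-edge
    (4, 5),          # exit
]
print(cyclomatic_complexity(edges))  # 6 edges, 5 nodes -> 6 - 5 + 2 = 3
```

The two decision points (the branch at node 1 and the loop test at node 4) yield an MCC of 3, matching the intuition that MCC counts linearly independent paths.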
PROCESS CONTROL-FLOW COMPLEXITY
Complexity metrics provide valuable information concerning the status and quality of process development projects. Access to this information is vital for accurately assessing overall process quality, identifying areas that need improvement, and focusing development and testing efforts. In this section, we describe the structure and representation of Web processes and discuss how control-flow complexity is defined and computed for a Web process.
PROCESS STRUCTURE AND REPRESENTATION
Control flow graphs can be used to describe the logic structure of Web processes. A Web process is composed of Web services and transitions. Web services are represented using circles and transitions are represented using arrows. Transitions express dependencies between Web services. A Web service with more than one outgoing transition can be classified as an AND-split, OR-split, or XOR-split. AND-split Web services enable all their outgoing transitions after completing their execution. OR-split Web services enable one or more outgoing transitions after completing their execution. XOR-split Web services enable only one outgoing transition after completing their execution. AND-split Web services are represented with a '•', OR-split Web services with an 'O', and XOR-split Web services with a '⊕'. A Web service with more than one incoming transition can be classified as an AND-join, OR-join, or XOR-join. AND-join Web services start their execution when all their incoming transitions are enabled. OR-join Web services start their execution when a subset of their incoming transitions is enabled. XOR-join Web services are executed as soon as one of the incoming transitions is
enabled. As with AND-split, OR-split and XOR-split Web services, AND-join, OR-join and XOR-join Web services are represented with the symbols '•', 'O' and '⊕', respectively. An example of a Web process is shown in Figure 4. The process has been developed by the Fungal Genome Resource (FGR) laboratory in an effort to improve the efficiency of their processes (Cardoso, Miller et al., 2004). One of the reengineered processes was the DNA sequencing workflow, since it was considered to be beneficial for the laboratory's daily activities.
Figure 4. The DNA Sequencing Workflow
Semantics of Processes
The complexity of a Web process or workflow can be analyzed according to different perspectives. In our work we are interested in evaluating the complexity of processes from a control-flow perspective. In Web processes and workflows, the control-flow logic is captured in a process model, and function logic is captured in the applications, data, and people the model invokes. A process model includes basic constructs such as transitions, roles, Web services or tasks, XOR-splits, OR-splits, AND-splits, XOR-joins, OR-joins, AND-joins, and networks (sub-processes). Our approach uses the idea introduced by McCabe. Numerous studies and experience in software projects have shown that the MCC measure correlates very closely with errors in software modules. The more complex a module is, the more likely it is to contain errors. Our goal is to adapt McCabe's cyclomatic complexity to be applied to processes. As stated previously, one interesting remark is that all the nodes of MCC flowgraphs have identical semantics. Each node represents one statement in a source code program. On the other hand, the nodes in Web processes and workflows can assume different semantics. Thus, we consider three constructs with distinct semantics present in process models: XOR-split, OR-split, and AND-split. The three constructs have the following semantics: • XOR-split. A point in the process where, based on a decision or process control data, one of several transitions is chosen. It is assumed that only one of the alternatives is selected and executed, i.e., it corresponds to a logical exclusive OR. • OR-split. A point in the process where, based on a decision or process control data, one or more transitions are chosen. Multiple alternatives are chosen from a given set of alternatives. It is assumed that one or more of the alternatives are selected and executed, i.e., it corresponds to a logical OR. • AND-split. This construct is required when two or more activities need to be executed in parallel. During the execution of a process, when an AND-split is reached the single thread of control splits into multiple threads of control which are executed in parallel, thus allowing activities to be executed at the same time or in any order. It is assumed
that all the alternatives are selected and executed, i.e., it corresponds to a logical AND.
Definition and Measurement of Control-flow Complexity
The control-flow behavior of a process is affected by constructs such as splits and joins. Splits define the possible control paths that exist through the process. Joins have a different role; they express the type of synchronization that should be made at a specific point in the process. Since we are interested in calculating the complexity of processes' control-flow, the formulae that we present evaluate the complexity of the XOR-split, OR-split, and AND-split constructs. We call this measurement of complexity Control-flow Complexity (CFC). Each formula computes the number of states that can be reached from one of the three split constructs. The measure is based on the relationship between the mental discriminations needed to understand a split construct and its effects. This type of complexity has been referred to as psychological complexity. Therefore, the more possible states follow a split, the more difficult it is for the designer or business process engineer to understand that section of the process, and thus the process itself. In processes, McCabe's cyclomatic complexity cannot be used successfully since the metric ignores the semantics associated with the nodes of the graph. While the nodes (i.e., activities) of processes have distinct semantics associated with them, the nodes of a program's flowgraph are undifferentiated. We now introduce several definitions that will constitute the basis for CFC measurement.
Definition 1 (Process measurement). Process measurement is concerned with deriving a numeric value for an attribute of a process. Examples of attributes include process complexity, duration (time), cost, and reliability (Cardoso, Miller et al., 2004).
Definition 2 (Process metric). Any type of measurement related to a process.
Process metrics allow attributes of processes to be quantified.
Definition 3 (Activity fan-out). Fan-out is the number of transitions going out of an activity. The fan-out is computed using the function fan-out(a), where a is an activity.
Definition 4 (Control-flow induced mental state). A mental state is a state that has to be considered when a designer is developing a process. Splits introduce the notion of mental states in processes. When a split (XOR, OR, or AND) is introduced in a process, the business process designer has to mentally create a map or structure that accounts for the number of states that can be reached from the split. The notion of mental state is important since there are theories (Miller, 1956) showing that complexity beyond a certain point defeats the human mind's ability to perform accurate symbolic manipulations, and hence results in error.
Definition 5 (XOR-split Control-flow Complexity). XOR-split control-flow complexity is determined by the number of mental states that are introduced with this type of split. The function CFCXOR-split(a), where a is an activity, computes the control-flow complexity of the XOR-split a. For XOR-splits, the control-flow complexity is simply the fan-out of the split.
CFCXOR-split(a) = fan-out(a)
In this particular case, the complexity is directly proportional to the number of activities that follow an XOR-split and that a process designer needs to consider, analyze, and assimilate. The idea is to associate the complexity of an XOR-split with the number of states (Web services or workflow tasks) that follow the split. This rationale is illustrated in Figure 5. Please note that in this first case the computation and result bear a strong similarity to McCabe's cyclomatic complexity.
Figure 5. XOR-split control-flow complexity
Definition 6 (OR-split Control-flow Complexity). OR-split control-flow complexity is also determined by the number of mental states that are introduced with the split. For OR-splits, the control-flow complexity is 2^n - 1, where n is the fan-out of the split.
CFCOR-split(a) = 2^fan-out(a) - 1
This means that when a designer is constructing a process he needs to consider and analyze 2^n - 1 states that may arise from the execution of an OR-split construct.
Figure 6. OR-split control-flow complexity
Mathematically, it might appear more obvious that 2^n states can be reached after the execution of an OR-split. But since a process that has started its execution has to finish, it cannot be the case that no transition is activated after the execution of an OR-split, i.e., that no Web service or workflow task is executed. Therefore, this state cannot occur.
Definition 7 (AND-split Control-flow Complexity). For an AND-split, the complexity is simply 1.
CFCAND-split(a) = 1
The designer constructing a process needs only to consider and analyze one state that may arise from the execution of an AND-split construct, since it is assumed that all the outgoing transitions are selected and followed.
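Taken together, Definitions 5, 6, and 7 can be sketched as a single function. The function name and the string tags below are illustrative, not part of any standard.

```python
# Sketch of Definitions 5-7: per-split control-flow complexity.
# Names are illustrative only.
def cfc_split(split_type, fan_out):
    if split_type == "XOR":
        return fan_out             # CFC_XOR-split(a) = fan-out(a)
    if split_type == "OR":
        return 2 ** fan_out - 1    # CFC_OR-split(a) = 2^fan-out(a) - 1
    if split_type == "AND":
        return 1                   # CFC_AND-split(a) = 1
    raise ValueError(f"unknown split type: {split_type!r}")

print(cfc_split("XOR", 3))  # 3
print(cfc_split("OR", 3))   # 2^3 - 1 = 7
print(cfc_split("AND", 4))  # always 1, regardless of fan-out
```

Note how the OR-split dominates: its complexity grows exponentially with fan-out, while the XOR-split grows linearly and the AND-split stays constant.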
Figure 7. AND-split control-flow complexity
The higher the value of CFCXOR-split(a), CFCOR-split(a), and CFCAND-split(a), the more complex a process's design is, since developers have to handle all the states between control-flow constructs (splits) and their associated outgoing transitions and activities. Each formula to calculate the complexity of a split construct is based on the number of states that follow the construct.
CONTROL-FLOW COMPLEXITY OF PROCESSES

Mathematically, the control-flow complexity metric is additive. Thus, it is very easy to calculate the complexity of a process: simply add the CFC of all its split constructs. The control-flow complexity is calculated as follows, where p is a Web process or workflow.
CFC(p) = Σ_{ws ∈ XOR-splits of p} CFC_XOR-split(ws)
       + Σ_{ws ∈ OR-splits of p} CFC_OR-split(ws)
       + Σ_{ws ∈ AND-splits of p} CFC_AND-split(ws)
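The definitions above can be sketched directly in code. The representation of a process as plain lists of split fan-outs is a simplification made for illustration; the metric itself is defined on process graphs.

```python
# CFC of each split type, as defined above.

def cfc_xor_split(fan_out):
    # One reachable state per outgoing branch (cf. cyclomatic complexity).
    return fan_out

def cfc_or_split(fan_out):
    # Any non-empty subset of the n branches may fire: 2^n - 1 states.
    return 2 ** fan_out - 1

def cfc_and_split(fan_out):
    # All branches are always taken: a single state.
    return 1

def cfc(xor_fan_outs, or_fan_outs, and_fan_outs):
    # CFC is additive: sum the CFC of every split construct in the process.
    return (sum(cfc_xor_split(n) for n in xor_fan_outs)
            + sum(cfc_or_split(n) for n in or_fan_outs)
            + sum(cfc_and_split(n) for n in and_fan_outs))
```

For the loan application process discussed below (XOR-splits with fan-outs 3, 3, and 2; one OR-split with fan-out 3; one AND-split), `cfc([3, 3, 2], [3], [2])` yields 3 + 3 + 2 + (2^3 − 1) + 1 = 16.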
The greater the value of the CFC, the greater the overall architectural complexity of a process. CFC analysis seeks to evaluate complexity without direct execution of processes.

Example of CFC Calculation

As an example, let us take the Web process shown in Figure 8 and calculate its CFC. The process has been developed by a bank that has adopted a workflow management system (WfMS) to support its business processes. Since the bank supplies several services to its customers, the adoption of a WfMS has enabled the logic of bank processes to be captured in Web process schemas. As a result, all the services available to customers are stored and executed under the supervision of the workflow system. One of the services supplied by the bank is the loan application process depicted in Figure 8. The Web process is composed of 21 Web services, 29 transitions, three XOR-splits (Check Loan Type, Check Home Loan, Check Car Loan), one OR-split (Archive Application), and one AND-split (Check Education Loan).
Figure 8. The Loan Application Process
It was decided that before placing the Web process in a production environment, a process complexity analysis was required to evaluate the risk involved with the reengineering effort. The results of the control-flow complexity analysis carried out are shown in Table 1.

CFC_XOR-split(Check Loan Type) = 3
CFC_XOR-split(Check Home Loan) = 3
CFC_XOR-split(Check Car Loan) = 2
CFC_OR-split(Archive Application) = 2^3 − 1 = 7
CFC_AND-split(Check Education Loan) = 1
CFC(Loan Application) = 16

Table 1. CFC metrics for the Web process from Figure 8

From these values the control-flow complexity can easily be calculated: it suffices to add the CFC of each split. The resulting CFC value is 16 (i.e., 3 + 3 + 2 + (2^3 − 1) + 1). Since the CFC analysis gave a value considered to be low, it was determined that the Web process has low complexity and that its implementation presented a low risk for the bank. The Web process was therefore deployed and implemented in a production environment.

The CFC is a good indicator of the complexity of a process. As further research is conducted in this area it will become clear that in many cases it is necessary to limit the CFC of Web process applications. Overly complex processes are more prone to errors and are harder to understand, test, and adapt. One important question that needs to be investigated is what a given metric means (for example, what is the significance of the CFC of 16 obtained in our example) and what precise number to use as a CFC limit in process development. This answer will come from empirical results, once organizations have successfully implemented complexity limits as part of their process development projects. For example, with McCabe's complexity metric, the original limit of 10 indicates a simple program without much risk; a complexity between 11 and 20 designates a more complex program with moderate risk; a complexity between 21 and 50 denotes a complex program with high risk; and a complexity greater than 50 denotes an untestable program with very high risk. We expect that limits for the CFC will be obtained and set in the same way, using empirical and practical results from research and from real-world implementations.

Verification

To test the validity of our metric, we have designed a small set of processes. A group of students rated each process according to its perceived complexity.
The students had previously received a 15-hour course on process design and implementation. We then used our CFC measurement to calculate the complexity of each process design. Preliminary analysis of the collected data led to some interesting results: a correlation was found between the perceived complexity and the control-flow complexity measure.
Based on these preliminary but interesting results, we are now starting a project whose objective is the development of a large set of empirical experiments involving process designs. The purpose is to find the degree of correlation between the perceived complexity that designers and business engineers experience when studying and designing a process and the results obtained from applying our control-flow complexity measure.
CONCLUSIONS AND FUTURE WORK

Business Process Management Systems (BPMS) (Smith & Fingar, 2003) provide a fundamental infrastructure to define and manage business processes, Web processes, and workflows. BPMS, such as Workflow Management Systems (Cardoso, Bostrom, & Sheth, 2004), have become a serious competitive factor for many organizations. Our work presents an approach to carrying out process complexity analysis. We have delineated the first steps towards using a complexity measurement to provide concrete Web process and workflow design guidance. The approach and the ideas introduced are worth exploring further, since Web processes are becoming a reality in e-commerce and e-business activities. In this chapter we propose a control-flow complexity measurement to be used during the design of processes. Process control-flow complexity is a design-time measurement. It can be used to evaluate the difficulty of producing a Web process design before implementation. When control-flow complexity analysis becomes part of the process development cycle, it has a considerable influence on the design phase of development, leading to further optimized processes. The control-flow complexity analysis can also be used in deciding whether to maintain or redesign a process. As is known from software engineering, it is more cost-effective to fix a defect early in the design lifecycle than later. To enable this, we introduce the first steps to carry out process complexity analysis. Future directions of this work are to validate the complexity measurement to ensure that clear and confident conclusions can be drawn from its use. In addition, although the validity of the proposed complexity measurement was tested using a few empirical studies that formed the basis for its development, further work is required to validate its usability in contexts other than the ones in which the method was developed.
In order to achieve these goals, it is necessary to evaluate a variety of processes and produce automated tools for measuring complexity features.
REFERENCES

Aalst, W. M. P. v. d., & Hofstede, A. H. M. t. (2003). YAWL: Yet Another Workflow Language (Revised Version). QUT Technical Report FIT-TR-2003-04. Brisbane: Queensland University of Technology.
Anyanwu, K., Sheth, A., Cardoso, J., Miller, J. A., & Kochut, K. J. (2003). Healthcare Enterprise Process Development and Integration. Journal of Research and Practice in Information Technology, Special Issue in Health Knowledge Management, 35(2), 83-98.
Azuma, M., & Mole, D. (1994). Software Management Practice and Metrics in the European Community and Japan: Some Results of a Survey. Journal of Systems and Software, 26(1), 5-18.
BPEL4WS. (2002). Web Services. IBM. Retrieved from http://www-106.ibm.com/developerworks/webservices/
BPML. (2004). Business Process Modeling Language. Retrieved 2004 from http://www.bpmi.org/
Card, D., & Agresti, W. (1988). Measuring Software Design Complexity. Journal of Systems and Software, 8, 185-197.
Cardoso, J., Bostrom, R. P., & Sheth, A. (2004). Workflow Management Systems and ERP Systems: Differences, Commonalities, and Applications. Information Technology and Management Journal, Special Issue on Workflow and E-Business (Kluwer Academic Publishers), 5(3-4), 319-338.
Cardoso, J., Miller, J., Sheth, A., Arnold, J., & Kochut, K. (2004). Quality of Service for Workflows and Web Service Processes. Web Semantics: Science, Services and Agents on the World Wide Web Journal, 1(3), 281-308.
Cardoso, J., Sheth, A., & Miller, J. (2002). Workflow Quality of Service. Paper presented at the International Conference on Enterprise Integration and Modeling Technology and International Enterprise Modeling Conference (ICEIMT/IEMC'02), Valencia, Spain.
Curtis, B. (1980). Measurement and Experimentation in Software Engineering. Proceedings of the IEEE, 68(9), 1144-1157.
Fenton, N. (1991). Software Metrics: A Rigorous Approach. London: Chapman & Hall.
Halstead, M. H. (1977). Elements of Software Science, Operating, and Programming Systems Series (Vol. 7). New York, NY: Elsevier.
Henry, S., & Kafura, D. (1981). Software Structure Metrics Based on Information Flow. IEEE Transactions on Software Engineering, 7(5), 510-518.
IEEE. (1992). IEEE 610, Standard Glossary of Software Engineering Terminology. New York: Institute of Electrical and Electronic Engineers.
Lanning, D. L., & Khoshgoftaar, T. M. (1994). Modeling the Relationship Between Source Code Complexity and Maintenance Difficulty. Computer, 27(9), 35-41.
Leymann, F. (2001). Web Services Flow Language (WSFL 1.0). IBM Corporation. Retrieved from http://www4.ibm.com/software/solutions/webservices/pdf/WSFL.pdf
McCabe, T. (1976). A Complexity Measure. IEEE Transactions on Software Engineering, SE-2(4), 308-320.
McCabe, T. J., & Watson, A. H. (1994). Software Complexity. Crosstalk, Journal of Defense Software Engineering, 7(12), 5-9.
Miller, G. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. The Psychological Review, 63(2), 81-97.
Sheth, A. P., Aalst, W. v. d., & Arpinar, I. B. (1999). Processes Driving the Networked Economy. IEEE Concurrency, 7(3), 18-31.
Smith, H., & Fingar, P. (2003). Business Process Management (BPM): The Third Wave. Meghan-Kiffer Press.
Tsai, W. T., Lopez, M. A., Rodriguez, V., & Volovik, D. (1986). An Approach to Measuring Data Structure Complexity. Paper presented at COMPSAC 86.
Ward, W. (1989). Software Defect Prevention Using McCabe's Complexity Metric. Hewlett Packard Journal, 40(2), 64-69.
Zuse, H. (1990). Software Complexity Measures and Models. New York, NY: de Gruyter & Co.
Zuse, H. (1997). A Framework of Software Measurement. Berlin: Walter de Gruyter.
An Example of Using BPMN to Model a BPEL Process

Dr. Stephen A. White, IBM Corp., United States

ABSTRACT

The Business Process Modeling Notation (BPMN) has been developed to enable business users to develop readily understandable graphical representations of business processes. BPMN is also supported by appropriate graphical object properties that enable the generation of executable BPEL (Business Process Execution Language). Thus, BPMN creates a standardized bridge for the gap between business process design and process implementation. This paper presents a simple, yet instructive example of how a BPMN diagram can be used to generate a BPEL process.
INTRODUCTION

The Business Process Modeling Notation (BPMN) has been developed to enable business users to develop readily understandable graphical representations of business processes. BPMN is also supported by appropriate graphical object properties that enable the generation of executable BPEL. This paper presents a simple, yet instructive example of how a BPMN diagram can be used to generate a BPEL process. When mapping a BPMN diagram to BPEL (version 1.1)1, a decision must be made as to the basic structure of the BPEL document. That is, will the BPEL format be based on the BPEL graph structure (the flow element) or the BPEL block structure (the sequence element)? This choice affects how much of the BPMN Sequence Flow will map to BPEL link elements. Using a block structure as the foundation for mapping, link elements are used only when there is a specific section of the Process where parallel activities occur. Using the graph structure as the foundation for mapping, most Sequence Flow will map to link elements, as the entire BPEL process is contained within a flow element. Since the BPMN 1.0 specification2 defines the mapping of BPMN elements to BPEL elements mainly through the use of block structures, this paper will take the approach of using the graph structure for the mapping.
THE EXAMPLE: TRAVEL BOOKING PROCESS

The example used in this paper is a basic version of a travel booking process. This example will illustrate a few situations that occur within BPMN diagrams and how they map to BPEL, such as parallel flow and loops. Figure 1 shows the original representation of the BPEL Process.3
1. http://www-106.ibm.com/developerworks/webservices/library/ws-bpel/
2. http://www.bpmi.org/bpmn-spec.htm
3. http://publib.boulder.ibm.com/infocenter/adiehelp/index.jsp?topic=/com.ibm.etools.ctc.bpel.doc/samples/travelbooking/travelBooking.html
USING BPMN TO MODEL A BPEL PROCESS
Figure 1. Travel Booking Process with WebSphere Studio

Figure 2 shows how the same process can be modeled with BPMN. Note that Figure 2 shows the process model laid out horizontally, going left to right, while the original diagram in Figure 1 shows the process laid out vertically, going top to bottom. Process tools that create BPEL directly tend to lay out diagrams vertically. Although not universal, the difference between the two layouts tends to separate business analysts, who tend to opt for horizontal diagrams, from IT specialists or software developers, who tend to opt for vertical diagrams. While BPMN does not require any specific directionality, most BPMN diagrams flow horizontally.
Figure 2. Travel Booking Process with BPMN

The Process begins with the receipt of a request for a travel booking. After a check on the credit card, reservations are made for a flight, a hotel, and a car. The car reservation may take more than one attempt before it is successful. After all three reservations are confirmed, a reply is sent.
Setting up the BPEL Information

BPMN Diagrams, such as the one seen in Figure 2, can be used within many methodologies and for many purposes, from high-level descriptive modeling to detailed modeling intended for process execution. When one of the purposes of the process model is to define process execution and create the BPEL file, then the process model will have to be developed with a modeling tool designed for this purpose. The diagram itself will not display all the information required to create a valid BPEL file; a diagram with all that information would be too cluttered to be readable. A BPMN diagram is intended to display the basic structure and flow of activities and data within a business process. Therefore, a modeling tool is necessary to capture the additional information about the process that is needed to create an executable BPEL file. In the sections that follow, the details behind the diagram objects will be highlighted, and it will be shown how they provide the content for the mapping to BPEL. The modeling tool will need to define some basic types of information about the Process itself to fill in the attributes of the BPEL process element of the BPEL document. Example 1 displays how basic information collected about the BPMN Diagram and the "Travel Booking Process" within the diagram will be mapped to set up the preliminary BPEL information.

(BPMN Object/Attribute → BPEL Element/Attribute)
Business Process Diagram → mapped to attributes of a process element (see next row)
ExpressionLanguage = "Java" → expressionLanguage="Java"
Business Process → the process element
Name = "Travel Booking Process" → name="travelBookingProcess"
ProcessType = "Private" → abstractProcess="no" (or not included)
SuppressJoinFailure = "Yes" → suppressJoinFailure="yes"

Example 1. Mapping Basic Attributes of the Business Process

The tool that creates the BPEL file will need to define parameters, such as targetNamespace, the location of the supporting WSDL files, and other parameters, including those specific to the operating environment, to enable the BPEL file to function properly. This information will be based on the configuration of the environment for that modeling tool. The partnerLink elements are defined prior to the definition of the process, but the information about these elements will be found in the Properties of the Tasks of the BPMN Process. The Tasks of the Process that are of type Service and implemented as a Web service will define the Participant of the Web service. The Participants and their Properties will map to the partnerLink elements. Example 2 displays two examples of how the Properties defined for the implementation of a Web service for a Task will map to the partnerLink elements that will be defined at the head of the BPEL document.

(BPMN Object/Attribute → BPEL Element/Attribute)
Implementation = "Web service" → invoke, but some properties (below) will map to a partnerLink
Participant = "ProcessStarter" → name="ProcessStarter", partnerLinkType="ProcessStarterPLT"
BusinessRole = "TravelProcessRole" → myRole="TravelProcessRole"
Participant = "HotelReservationService" → name="HotelReservationService", partnerLinkType="HotelReservationServicePLT"
BusinessRole = "HotelReservationRole" → myRole="HotelReservationRole"

Example 2. Mapping Web Service Properties to partnerLink

The BPEL code for the partnerLink definitions can be seen in Example 3.
Example 3. Setting up the partnerLink Elements for the process.
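As a sketch of how a modeling tool might generate the partnerLink section from the participant properties in Example 2, the following builds the corresponding elements with Python's ElementTree. Element and attribute names follow BPEL 1.1; the Python representation is illustrative only, and actual tool output may differ.

```python
import xml.etree.ElementTree as ET

# Participant properties taken from Example 2.
participants = [
    {"name": "ProcessStarter",
     "partnerLinkType": "ProcessStarterPLT",
     "myRole": "TravelProcessRole"},
    {"name": "HotelReservationService",
     "partnerLinkType": "HotelReservationServicePLT",
     "myRole": "HotelReservationRole"},
]

# Each Participant becomes one partnerLink element.
partner_links = ET.Element("partnerLinks")
for attrs in participants:
    ET.SubElement(partner_links, "partnerLink", attrs)

print(ET.tostring(partner_links, encoding="unicode"))
```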
The variable elements are also defined prior to the definition of the process. These will be created from Properties associated with a Process within a BPMN diagram. A modeling tool will allow the modeler to define these Properties. Properties of type structure will be used to group sets of Properties into packages that map to the BPEL message elements. The message elements will actually be defined in a WSDL document that supports the BPEL document. The variable elements will be defined in the BPEL document and will reference the message elements. Example 4 displays two examples of how the Properties defined for a Process will map to the variable and message elements that will be defined at the head of the BPEL document and in supporting WSDL documents.

(BPMN Object/Attribute → BPEL Element/Attribute)
Property, Name = "input" → for the BPEL variable: name="input" messageType="input"; for the WSDL message: name="input"
Property, Type = "structure" → the sub-Properties of the structure map to the WSDL message elements
Property, Name = "airline", Type = "string" → for the WSDL message, in the part element: name="airline" type="xsd:string"
Property, Name = "arrival", Type = "string" → for the WSDL message, in the part element: name="arrival" type="xsd:string"
Eleven more sub-Properties are included → eleven more part elements are included
Ten more structure Properties are included → ten more BPEL variable elements and WSDL message elements will be included

Example 4. Mapping Process Properties to BPEL variable and message

The BPEL code for the variable definitions can be seen in Example 5.
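The shape of the generated variable declarations can be sketched as follows, based on the "input" Property from Example 4. The second variable name is taken from the credit card mapping later in the paper; the remaining ten variables mentioned in the text are omitted, and the lack of a namespace prefix on messageType is an assumption.

```python
import xml.etree.ElementTree as ET

# Structure Properties from the BPMN Process become BPEL variable
# elements that reference WSDL message definitions of the same name.
variables = ET.Element("variables")
ET.SubElement(variables, "variable",
              {"name": "input", "messageType": "input"})
ET.SubElement(variables, "variable",
              {"name": "checkCreditCardRequest",
               "messageType": "checkCreditCardRequest"})

print(ET.tostring(variables, encoding="unicode"))
```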
Example 5. Setting up the variable Elements for the process.
The WSDL code for the message definitions can be seen in Example 6.
Example 6. Setting up the message Elements for the WSDL Document

All the Sequence Flow seen in Figure 2, except four, will map to BPEL link elements. In addition, the process will require three link elements that are not represented by Sequence Flow within Figure 2. These exceptions will be explained below. When the flow element for the process is defined, the link definitions will precede the definitions of the process activities. The BPEL code for the link definitions can be seen in Example 7. The name for each link will be automatically generated by the modeling tool.
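The link declarations that head the flow can be sketched as below. Only the links explicitly named in this paper are listed; the tool would generate the complete set, with names of its own choosing.

```python
import xml.etree.ElementTree as ET

# Link names referenced in the text; the full generated set is larger.
link_names = ["link1", "link2", "link3", "link4", "link6", "link7", "link9"]

links = ET.Element("links")
for name in link_names:
    ET.SubElement(links, "link", {"name": name})

print(ET.tostring(links, encoding="unicode"))
```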
Example 7. Setting up the link Elements for the flow.
The Start of the Process

The Process is started with the receipt of a message request for the booking of a travel itinerary through a Message Start Event (see Figure 3). After the request has been received, a check of the validity of the submitted credit card information is performed. As can be seen above in Figure 2, the Check Credit Card Task has an Error Intermediate Event attached to its boundary. This Event will be used for handling an incorrect credit card number. The mapping for this type of fault handling will be shown in the section entitled "Error (Fault) Handling" below.
Figure 3. The Beginning of the Travel Booking Process

The Receive Message Start Event is the mechanism that initiates the Process through the receipt of a message. This will map to a BPEL receive element. Example 8 displays the properties of the Receive Message Start Event and how these properties map to the attributes of the receive element.

(BPMN Object/Attribute → BPEL Element/Attribute)
Start Event (EventType: Message) → receive
Name = "Receive" → name="Receive"
Instantiate = "True" → createInstance="yes"
Message = "input" → variable="input"
Implementation = "Web service" → see next three rows
Participant = "ProcessStarter" → partnerLink="ProcessStarter"
Interface = "travelPort" → portType="wsdl0:travelPort"
Operation = "book" → operation="book"

Example 8. Mapping of the Message Start Event

Example 9 displays the resulting BPEL code that will be generated for the Receive Message Start Event.
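A sketch of the Example 8 mapping: the Start Event's properties become the attributes of the receive element, and the outgoing Sequence Flow becomes a link source ("link1"). The dictionary-based property representation is an assumption made for illustration.

```python
import xml.etree.ElementTree as ET

# Properties of the Receive Message Start Event, from Example 8.
start_event = {
    "Name": "Receive",
    "Instantiate": "True",
    "Message": "input",
    "Participant": "ProcessStarter",
    "Interface": "travelPort",
    "Operation": "book",
}

receive = ET.Element("receive", {
    "name": start_event["Name"],
    "createInstance": "yes" if start_event["Instantiate"] == "True" else "no",
    "variable": start_event["Message"],
    "partnerLink": start_event["Participant"],
    "portType": "wsdl0:" + start_event["Interface"],
    "operation": start_event["Operation"],
})
# The Sequence Flow leaving the Start Event maps to link "link1".
ET.SubElement(receive, "source", {"linkName": "link1"})

print(ET.tostring(receive, encoding="unicode"))
```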
Example 9. BPEL code for the "Receive" Start Event

Note that the Check Credit Card Task in Figure 3 contains small icons in the upper right corner of its shape. These icons are used to provide a graphical indication of the type of Task that will be performed. Figure 3 and the figures below will display these types of icons. The icons are not part of the standard BPMN notation, but are part of the extensibility of BPMN. It is expected
that process modeling tools will utilize such icons as fit the capabilities and expected usage of the tool. The location of the icons within the Task shape is at the discretion of the modeler or modeling tool. For the purposes of this paper, these icons will aid in showing how the diagram maps to BPEL. The Check Credit Card Task follows the Start Event through a Sequence Flow connection. The Sequence Flow indicates a dependency relationship between the BPEL elements mapped from the Start Event and from the Task. Since all the BPEL activities are contained within a flow, the dependency will take the form of a BPEL link element ("link1" seen in Example 7). The link element will be connected to the activities through the inclusion of a source element added to the receive element (Example 9) and a target element added to the first element mapped from the Check Credit Card Task (an assign activity in Example 11 below). The Check Credit Card Task will map to a BPEL invoke element. But this Task, as seen in Figure 3, has two icons in the upper right corner. The first icon (two blue bars stacked) indicates that a data mapping is required for the main Task to be performed. Some of the data that has been received from the input to the Process (the Message Start Event) will be mapped to the data structure of the message for the credit card checking service. As can be seen in Figure 1, the original process included a separate activity each time that a data mapping was required. However, many business process methodologies do not consider such data mapping as a separate Task that would warrant its own shape and space on the diagram. Such data mapping is usually included as a pre- or post-activity function of a business Task, although it may appear as a stand-alone Task in some situations, as we shall see later in the process.
As a group, this data mapping will be defined as properties of the Task and will be mapped to a BPEL assign element. The individual property mappings will be combined as sets of copy elements within the assign element. The second icon (purple and green arrows pointing in different directions) indicates that the main function of the Task will be a Service type of Task, implemented through a Web service, which maps to a BPEL invoke element. Example 10 displays the properties of the Check Credit Card Task and how these properties map to the attributes of the assign and the invoke.

(BPMN Object/Attribute → BPEL Element/Attribute)
Task (TaskType: Service) → invoke
Name = "Check Credit Card" → name="checkCreditCardRequest"
InMessage → inputVariable="checkCreditCardRequest"
OutMessage → outputVariable="checkCreditCardResponse"
Implementation = Web service → see next three rows
Participant → partnerLink="creditCardCheckingService"
Interface → portType="wsdl4:creditCardCheckingServiceImpl"
Operation → operation="doCreditCardChecking"
Assignment → assign (the name attribute is automatically generated by the tool creating the BPEL document)
From = input.cardNumber → within a copy element, paired with the next row: from part="cardNumber" variable="input"
To = checkCreditCardRequest.cardNumber → within a copy element, paired with the previous row: to part="cardNumber" variable="checkCreditCardRequest"
AssignTime = Start → this means that the assign element will precede the invoke
From = input.cardType → within a copy element, paired with the next row: from part="cardType" variable="input"
To = checkCreditCardRequest.cardType → within a copy element, paired with the previous row: to part="cardType" variable="checkCreditCardRequest"
AssignTime = Start → this means that the assign element will precede the invoke; all Assignments with an AssignTime of "Start" will be combined into one assign element

Example 10. Mapping of the "Check Credit Card" Task

Example 11 displays the resulting BPEL code that will be generated for the Check Credit Card Task.
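The Example 10 mapping can be sketched as follows: the "Start"-time assignments become copy elements in one assign, which is sequenced before the invoke by "link2"; the three parallel branches leave the invoke as sources of "link3", "link6", and "link9". The assign name is a placeholder for the tool-generated one.

```python
import xml.etree.ElementTree as ET

# The Task's Start-time assignments become one assign element.
assignments = [
    ("cardNumber", "input", "cardNumber", "checkCreditCardRequest"),
    ("cardType", "input", "cardType", "checkCreditCardRequest"),
]

assign = ET.Element("assign", {"name": "assign1"})  # name is tool-generated
for from_part, from_var, to_part, to_var in assignments:
    copy = ET.SubElement(assign, "copy")
    ET.SubElement(copy, "from", {"part": from_part, "variable": from_var})
    ET.SubElement(copy, "to", {"part": to_part, "variable": to_var})
ET.SubElement(assign, "target", {"linkName": "link1"})  # from the Start Event
ET.SubElement(assign, "source", {"linkName": "link2"})  # sequences the invoke

# The Service Task itself becomes an invoke element.
invoke = ET.Element("invoke", {
    "name": "checkCreditCardRequest",
    "partnerLink": "creditCardCheckingService",
    "portType": "wsdl4:creditCardCheckingServiceImpl",
    "operation": "doCreditCardChecking",
    "inputVariable": "checkCreditCardRequest",
    "outputVariable": "checkCreditCardResponse",
})
ET.SubElement(invoke, "target", {"linkName": "link2"})
for ln in ("link3", "link6", "link9"):  # the three parallel branches
    ET.SubElement(invoke, "source", {"linkName": ln})

print(ET.tostring(assign, encoding="unicode"))
print(ET.tostring(invoke, encoding="unicode"))
```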
Example 11. BPEL code for the “Check Credit Card” Task
There is a sequential relationship in which the assign element must precede the invoke element, as determined by the assignment properties of the Task. This relationship will result in a BPEL link element ("link2"). In this case, there is no corresponding Sequence Flow in the BPMN diagram, as there was for the "link1" link.
Creating Parallel Flow

After the Check Credit Card Task, three main activities will occur. They involve the checking of car, hotel, and flight reservations (see Figure 4). These activities are not dependent on each other, so they can be performed at the same time, in parallel. The checking of the car reservation is more complicated and will be dealt with in the next section; only a data mapping activity, which precedes the check for the car, will be seen in this section.
Figure 4. Parallel Flow within the Process

The parallelism in the Process is indicated by the three outgoing Sequence Flow from the Check Credit Card Task. The three target activities for these Sequence Flow are available to be performed at the same time. The three Sequence Flow will result in three BPEL link elements ("link3," "link6," and "link9"). The three link elements will be included within source elements in the "checkCreditCard" invoke element (see Example 11). There will be a corresponding target element in each of the three assign elements defined below in this section (see Example 12, Example 13, and Example 15). Starting from the bottom of Figure 4, the mapping of the Check Flight Reservation Task and its Properties is very similar to the mapping of the Check Credit Card Task (see Example 10). In this case too, the mapping results in an assign element that precedes an invoke element. A link element ("link4") that does not have a corresponding Sequence Flow must be added to create the sequential dependency between the assign and invoke. Example 12 displays the resulting BPEL code that will be generated for the Check Flight Reservations Task.
Example 12. BPEL code for the "Check Flight Reservations" Task

Moving up the stack of Tasks in Figure 4, the mapping of the Check Hotel Reservation Task and its Properties is also very similar to that of the Check Flight Reservation Task, in that an assign element will also precede an invoke element. Again, a link element ("link7") must be added to establish the sequential dependency between the assign and invoke. Example 13 displays the resulting BPEL code that will be generated for the Check Hotel Reservations Task.
Example 13. BPEL code for the "Check Hotel Reservations" Task

The Task at the top of Figure 4 is there to prepare data for the Check Car Reservation Task (see Figure 5). The other Tasks in Figure 4 have hidden the data mapping, as indicated by the icon in the upper right of the shape. This is not possible for the car reservation, since the Task for checking the reservation is included within a loop. The data mapping is only needed once, while the checking of the reservation may happen many times. Thus, the data mapping is placed in a separate Task that does nothing except the data mapping. Example 14 displays the properties of the Data Map Task and how these properties map to the attributes of the assign element.
(BPMN Object/Attribute → BPEL Element/Attribute)
Task (TaskType: None) → none, but assignment properties will create a mapping to an assign; otherwise a BPEL empty element would have been created
Name = "Data Map" → none
Assignment → assign (the name attribute is automatically generated by the tool creating the BPEL document)
From = input.carCompany → within a copy element, paired with the next row: from part="carCompany" variable="input"
To = carReservationRequest.company → within a copy element, paired with the previous row: to part="company" variable="carReservationRequest"
AssignTime = Start → this doesn't have any direct effect since the Task is of type "None"; all Assignments with an AssignTime of "Start" will be combined into one assign element
There are four other From/To Assignments that are not shown → these will map to additional from and to elements within a copy

Example 14. Mapping of the "Data Map" Task

Example 15 displays the resulting BPEL code that will be generated for the Data Map Task.
Example 15. BPEL code for the “Data Map” Task
Mapping a Loop

A loop occurs in the part of the Process where the car reservation is checked and then evaluated (see Figure 5).
Figure 5. A Loop within the Process The two Tasks can be done mutliple times if the evaluation determines that the reservation does not meet the specified critera. The loop is constructed by a decision Gateway that splits the flow based on the evaluation results. The Check Again Sequence Flow starts at the Gateway and then connects to an upstream object, creating the loop. It is the configuration of this section of the Process, its connections throght Sequence Flow, which determines how it is mapped to BPEL, rather than the strict dissection of the objects and mapping them, as has been shown above. For Gateways that are not involved in a loop – and this Process does not have such as example – the mapping to BPEL will depend on whether the mapping is based on a graph structure (flow) or block structure (sequence). For a block structure mapping, the Gateway would be mapped to a switch. Each of the outgoing Sequence Flow from the Gateway would map to a case element within the switch and the Expression for the Sequence Flow would map to the condition of the case. For a graph structure mapping, each of the outgoing Sequence Flow from the Gateway will map to separate link elements (the incoming Sequence Flow to the Gateway does not map to a BPEL element). The Condition for each of these Sequence Flow will be the transitionCondition of the source element in the activity that precedes the Gateway. In this case, it would have been the mapped invoke from Evaluate Reservation Result Task. However, since a loop is created by the Sequence Flow from the Gateway and, due to the acyclical nature of the flow, link elements cannot be used in a target element that is in an upstream activity within a flow, this means that a while element must be created to handle the loop. The contents of the while will be determined by the boundaries set by the Gateway and the target activity that is upstream from the Gateway. 
As can be seen in Figure 5, the Check Car Reservation and Evaluate Reservation Result Tasks are within the loop and will map to the contents of the BPEL while. Consistent with the decision for mapping the whole Process, the contents of the while will be mapped to the graph structured elements. This means that the main element of the while will be a flow, and the BPMN Task mappings will fit within that flow.
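A rough sketch of such a while, with a flow as its main element, might look as follows; the shape of the condition extension, the placeholder attributes, and the element names beyond those given in the text are assumptions for illustration, not the exact generated code:

```xml
<while name="CheckReservationLoop">
  <!-- extension element holding the Java branching condition -->
  <wpc:condition>
    <wpc:javaCode> /* Java condition code */ </wpc:javaCode>
  </wpc:condition>
  <flow>
    <links>
      <link name="link13"/>
    </links>
    <!-- Check Car Reservation Task -->
    <invoke name="carReservationRequest">
      <source linkName="link13"/>
    </invoke>
    <!-- Evaluate Reservation Result Task -->
    <invoke name="evaluateReservationRequest">
      <target linkName="link13"/>
    </invoke>
  </flow>
</while>
```

The two invokes are connected by the "link13" link described later in the text, keeping the loop body itself acyclic.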
The BPMN Check Again Sequence Flow, which connects to the upstream Check Car Reservation Task, has a branching Condition. This Condition would typically map to the condition attribute of the while. In this case, however, the condition is written in the Java programming language, and an extension in the form of a condition element is used to hold the Java code. Example 16 displays the resulting BPEL code that will be generated for the Loop from the Gateway.
Example 16. BPEL code for the Loop from the Gateway

The Check Car Reservation Task and its Properties have a straightforward mapping to an invoke element. Example 17 displays the resulting BPEL code that will be generated for the Check Car Reservation Task.
Example 17. BPEL code for the “Check Car Reservation” Task

The Evaluate Reservation Result Task is different from the previous Tasks in this Process. For BPMN, it is a Task of type Script. This means that when the Task is reached in the Process, a service will not be called; instead, the engine that is executing the Process will perform a script that has been defined for the Task. In this case, the script is written in the Java programming language. The script will check the results of the three reservation checks (flight, hotel, and car) and determine whether all three were successful or whether the trip could not be booked as planned.
To enable the performance of the script, the BPEL invoke activity will be extended to hold the Java code that the Process engine will perform. The extension will be the addition of a script element that contains a javaCode element that holds the script code. Example 18 displays the properties of the Evaluate Reservation Result Task and how these properties map to the attributes of the invoke element.

BPMN Object/Attribute → BPEL Element/Attribute
- Task (TaskType: Script) → invoke. Since the TaskType is “Script,” these attributes are automatically set: partnerLink="null", portType="wpc:null", operation="null"; inputVariable and outputVariable are not used.
- Name = “Evaluate Reservation Result” → name="evaluateReservationRequest"
- Script = [Java Script] → the invoke element is extended to add the wpc:script element, which contains a wpc:javaCode element. The Java code is included within the wpc:javaCode element.

Example 18. Mapping of the “Evaluate Reservation Result” Task

Example 19 displays the resulting BPEL code that will be generated for the Evaluate Reservation Result Task.
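A sketch of such an extended invoke, consistent with the mapping just described, might look like this (the Java comment and the binding of the wpc prefix are placeholders, not the actual generated listing):

```xml
<invoke name="evaluateReservationRequest"
        partnerLink="null" portType="wpc:null" operation="null">
  <!-- extension: the script the process engine will perform -->
  <wpc:script>
    <wpc:javaCode>
      /* Java code checking the flight, hotel and car results */
    </wpc:javaCode>
  </wpc:script>
</invoke>
```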
Example 19. BPEL code for the “Evaluate Reservation Result” Task

In Figure 5, there is a Sequence Flow between the Check Car Reservation Task and the Evaluate Reservation Result Task. This Sequence Flow will map to a link element ("link13"), which will be named in the source element of the "carReservationRequest" invoke and in the target element of the "evaluateReservationRequest" invoke. The Sequence Flow between the Evaluate Reservation Result Task and the Gateway marks the end of the loop, and thus the end of the while. Therefore, a link element is not required.
Synchronizing Parallel Flow

The Process has three parallel paths that follow the Check Credit Card Task. These three paths converge and are synchronized before the Confirmation Task, as represented by the Parallel Gateway (see Figure 6). This synchronization means that all three paths must complete at that point before the flow of the Process can continue.
Figure 6. Flow Synchronization within the Process

The Confirmation Task is another Task that defines a script written in Java. Thus, the mapping to BPEL will be similar to the mapping of the Evaluate Reservation Result Task and its Properties (see Example 18). Example 20 displays the resulting BPEL code that will be generated for the Confirmation Task.
Example 20. BPEL code for the “Confirmation” Task

Within the BPEL code for the process, the actual synchronization of the paths will occur in the "Confirmation" invoke. There is not a separate synchronization element, such as the Parallel Gateway in BPMN. BPEL uses the link elements within a flow to create dependencies, including synchronization, between activities. Because the "Confirmation" invoke has three target elements (for "link9," "link10," and "link11"; see Example 20), this invoke must wait until it receives a signal from all three link elements before it can be performed. The source elements for these links are within the
"flightReservationRequest" invoke, the "hotelReservationRequest" invoke, and the while activity, respectively. The lack of a joinCondition for the "Confirmation" invoke means that there must be at least one positive signal; but since all three signals, positive or negative, must arrive, all three preceding activities must have completed, thus synchronizing the flow.

It should be noted that the Done Sequence Flow from the Gateway, which maps to the "link12" link element, has a Condition that is used for the branching from the Gateway. Thus, the source element that names "link12" could have a transitionCondition defined. This is not really needed, however, since the "link12" link will not be triggered until the while, its source, has completed. This means that any transitionCondition for that link will always be true when it is triggered. If there had been another outgoing Sequence Flow from the Gateway, then the Conditions for the Sequence Flow would have an effect on the flow of the Process.
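The synchronization just described might be sketched as follows; everything other than the link and activity names given in the text is an assumption for illustration:

```xml
<flow>
  <links>
    <link name="link9"/>
    <link name="link10"/>
    <link name="link11"/>
  </links>
  <!-- each preceding activity signals one link on completion -->
  <invoke name="flightReservationRequest">
    <source linkName="link9"/>
  </invoke>
  <invoke name="hotelReservationRequest">
    <source linkName="link10"/>
  </invoke>
  <while name="CheckReservationLoop">
    <source linkName="link11"/>
    <!-- loop contents omitted -->
  </while>
  <!-- no joinCondition: waits for all three signals,
       at least one of which must be positive -->
  <invoke name="Confirmation">
    <target linkName="link9"/>
    <target linkName="link10"/>
    <target linkName="link11"/>
  </invoke>
</flow>
```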
The End of the Flow

After the “Confirmation” Task, the Process ends with a reply being sent back to the initiator of the Process (see Figure 7). The reply is bundled into a Message End Event.
Figure 7. The Conclusion of the Process

The Message End Event will map to a reply element. Also, the Sequence Flow from the Confirmation Task to the Reply End Event will map to a link element ("link12"), with the source and target naming the link in the respective BPEL activities. Example 21 displays the properties of the Reply Message End Event and how these properties map to the attributes of the reply element.
BPMN Object/Attribute → BPEL Element/Attribute
- End Event (EventType: Message) → reply
- Name = “Reply” → name="Reply"
- Message = “output” → variable="output"
- Implementation = “Web service” → see the next three rows
- Participant = “ProcessStarter” → partnerLink="ProcessStarter"
- Interface = “travelPort” → portType="wsdl0:travelPort"
- Operation = “book” → operation="book"

Example 21. Mapping of the “Reply” End Event

Example 22 displays the resulting BPEL code that will be generated for the Reply Message End Event.
Example 22. BPEL code for the “Reply” Message End Event
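A reply element consistent with the mapping in Example 21 might look roughly like this; the names follow the mapping table, while the surrounding context and target element placement are assumptions:

```xml
<reply name="Reply"
       partnerLink="ProcessStarter"
       portType="wsdl0:travelPort"
       operation="book"
       variable="output">
  <target linkName="link12"/>
</reply>
```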
Error (Fault) Handling

As can be seen in Figure 8, the Check Credit Card Task has an Error Intermediate Event attached to its boundary. The Intermediate Event will react to a specific error (fault) trigger, interrupt the Task, and direct the flow to its outgoing Sequence Flow. The error will be triggered if the entered credit card number is invalid.
Figure 8. Fault Handling for the Process

If the error occurs, then the Handle Fault Task will process the information and prepare the message that will be sent back. The Reply Message End Event will send the message. This means that the Process will end and all the other activities in the Process will not occur. Thus, the Intermediate Event, as it leads to an End Event without merging back into the rest of the
Process, interrupts the entire Process.

The BPEL mechanism for interrupting the Process will be a faultHandlers element within a scope. This scope will envelop the entire contents of the process, which means that the main process flow (see Example 7) will actually be contained within the scope, along with the faultHandlers. The flow will run within the scope until it completes normally, unless the faultHandlers is triggered, thereby interrupting the flow. The contents of the faultHandlers will be the activities that will be performed if the faultHandlers is triggered. This means that the Handle Fault Task and the Reply Message End Event mappings will be placed within the faultHandlers.

The Handle Fault Task is a script Task, and its mapping to BPEL will be similar to that of the Evaluate Reservation Result Task (see Example 18). The Reply End Event will be mapped the same way as the Reply End Event of the main part of the Process (see Example 22). Consistent with the way that the main Process and the loop were mapped to BPEL, the fault handling section will be placed within a flow (within the faultHandlers). The Sequence Flow from the Error Intermediate Event to the Handle Fault Task will not be reflected with a link element, since the mapping of the Handle Fault Task will be the first activity within the flow of the faultHandlers. However, the Sequence Flow from the Handle Fault Task to the Reply End Event will map to a link ("link14"), with the source and target naming the link in the respective BPEL activities. Example 23 displays the resulting BPEL code that will be generated for the Process Fault Handling, including the code for the Handle Fault Task and the Reply End Event.
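The overall shape of this fault handling might be sketched as follows; the catch attributes, activity names, and elided contents ("...") are assumptions for illustration:

```xml
<scope>
  <faultHandlers>
    <catch faultName="..." faultVariable="...">
      <flow>
        <links>
          <link name="link14"/>
        </links>
        <!-- Handle Fault: a script Task mapped like Example 18 -->
        <invoke name="handleFault">
          <source linkName="link14"/>
        </invoke>
        <reply name="Reply">
          <target linkName="link14"/>
        </reply>
      </flow>
    </catch>
  </faultHandlers>
  <!-- the main process flow (Example 7) runs inside the scope -->
  <flow>
    <!-- ... -->
  </flow>
</scope>
```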
Example 23. Mapping of the Fault Handling for the Process
CONCLUSION

This paper provides an example of how a BPMN Business Process Diagram can be used to represent an executable process. To create the executable process, the diagram objects and their properties are dissected and then mapped into the appropriate BPEL elements. Although this paper is not intended to cover all aspects of mapping BPMN diagrams to BPEL, it provides a step-by-step illustration of the specific objects of a travel booking process and their mapping to BPEL. Thus, this example shows how a BPMN diagram can serve the dual purpose of providing a business-level view of a business process and allowing the generation of executable process code through BPEL.
A Simple and Efficient Algorithm for Verifying Workflow Graphs

Sinnakkrishnan Perumal1 and Ambuj Mahanti, Indian Institute of Management Calcutta, India

INTRODUCTION

A workflow depicts how a set of tasks of a business process gets executed within the associated business constraints when it is triggered by a business event. A complete workflow process definition comprises task definitions, resource requirements, execution dependencies between the tasks, temporal constraints on executing the tasks, data flows across the various tasks, and application mapping of the tasks. Process modeling errors must be removed to prevent the workflow system from malfunctioning because of them. Although process modeling errors could occur in the other workflow process definition components mentioned above, here we are concerned only with the modeling errors associated with execution dependencies between the tasks.

Execution dependencies between tasks depict the control flow of the workflow process. In ad-hoc workflow processes, control flow does not strictly follow the process structure. Errors in the process structure of a workflow process whose control flow strictly follows the process structure are termed structural conflicts of that workflow process. Executing a workflow process that contains structural conflicts may lead to business loss, reduce customer satisfaction, increase the workload of employees, create a negative brand image, reduce profits, and consume substantial managerial time. Hence, identifying and eliminating structural conflicts in a workflow process has significant business importance.

Workflow processes should be represented conceptually in an appropriate process definition language so that they can be analyzed and reviewed before being deployed in a real-world business environment. Such a representation is also useful in communicating the workflow process among designers, users, knowledge engineers, managers and technical personnel.
Further, process models so represented can be verified using approaches appropriate to the process definition language used. Conceptual representations can be made using Workflow Nets (WF-nets), Workflow Graphs, Object Coordination Nets (OCoNs), Adjacency Matrix, Unified Modeling Language (UML) diagrams, the Evolution Workflow Approach and Propositional Logic. Currently, workflow verification algorithms exist for WF-nets, Workflow Graphs, UML diagrams, Propositional Logic and Adjacency Matrix representations. Of these, algorithms based on WF-nets and Workflow Graphs are popular. WF-nets are based on Petri nets, and many formal analysis techniques of Petri nets have been used to derive theoretical solutions to the issues faced in designing WF-nets. Although many complicated process language constructs that are useful in the business environment can be implemented using WF-nets, the Workflow Management Coalition (WfMC) adopts only six basic process language constructs. The WfMC has adopted this approach to keep the modeling simple and lucid.
1. This research was partially supported by Infosys Technologies Limited, Bangalore, under the Infosys Fellowship Award.
For execution of a business process (such as an order request) triggered by a business event, a subset of the tasks in the workflow process is executed, as per the object data (customer data, environment data, business process related data and business domain data) given as part of the business execution and the execution dependencies between the tasks. This subset of tasks, along with the control flow used for the business process execution, is together called an instance.

So far, most Workflow Management Systems (WfMSs) provide only simulation tools for validating workflow models using the trial-and-error method2. These simulation tools can be used to execute a subset of the instances of a workflow process to check for structural conflicts that could arise in the corresponding scenarios. However, a large workflow process could have many instances, and it becomes tedious to verify all instances of the workflow process. Verification of a workflow process for structural conflicts is a computationally complex problem, and many approaches can be used to do this. However, the approach adopted for workflow verification should correspond to the process definition language. Due to the computational complexity of the problem, only a few approaches succeed in doing the workflow verification within reasonable time limits for all kinds of workflow graphs.

The Woflan tool was created by H.W.M. Verbeek and W.M.P. van der Aalst for verifying structural conflict errors in WF-nets3. Apart from verifying structural conflict errors, Woflan can also be used for inheritance checking. The Flowmake tool was created by Wasim Sadiq and Maria E. Orlowska, based on their Graph Reduction algorithm, for verifying structural conflict errors in workflow graphs4.
This algorithm is not complete, as it fails to detect verification problems in a special kind of workflow graph called overlapped workflow graphs. It is necessary to have a verification algorithm that checks all kinds of workflow graphs, as it is impossible to exclude overlapped graphs while designing business workflow processes. For this purpose, Hao Lin et al. defined an algorithm to verify all kinds of workflow graphs5. However, as explained in a later section, this algorithm uses complicated rules and hence becomes difficult to comprehend visually.

The main contribution of this chapter is a new workflow verification algorithm, named the Mahanti-Sinnakkrishnan algorithm, proposed to verify structural conflict errors in workflow graphs. This algorithm is presented along with a visual step-by-step trace, correctness and completeness proofs, and complexity proofs. The algorithm has several advantages over the existing algorithms: (a) it is much simpler to comprehend visually; (b) it consumes much less time than the existing algorithms; (c) it is easier to detect any errors that could be committed during its implementation; (d) it is based on well-known graph analysis techniques; and (e) it does not disturb the original workflow graph structure while doing the verification. This algorithm has a limitation
2. Henry H. Bi and J. Leon Zhao: Applying Propositional Logic to Workflow Verification. Information Technology and Management 5(3-4): 293–318 (2004).
3. H. M. W. (Eric) Verbeek, Wil M. P. van der Aalst: Woflan 2.0: A Petri-Net-Based Workflow Diagnosis Tool. ICATPN 2000: 475-484.
4. Wasim Sadiq, Maria E. Orlowska: Analyzing Process Models Using Graph Reduction Techniques. Inf. Syst. 25(2): 117-134 (2000).
5. Hao Lin, Zhibiao Zhao, Hongchen Li, Zhiguo Chen: A Novel Graph Reduction Algorithm to Identify Structural Conflicts. HICSS 2002: 289.
that it cannot be used for cyclic workflow graphs. However, this limitation exists in other “workflow graph”-based verification algorithms as well. We intend to modify this algorithm in the future so that it can verify cyclic graphs as well.
WORKFLOW GRAPH REPRESENTATION

Workflow graphs are represented using directed graphs. These graphs consist of a set of nodes V and a set of edges E. Edges are also called flows, in which case they are denoted as F. Workflow graphs cannot have more than one directed edge between any pair of nodes. Nodes in a workflow graph can be either condition nodes (denoted as C) or task nodes (denoted as T). Condition nodes represent the OR-split and OR-join process language constructs; correspondingly, a condition node can be either an OR-split node or an OR-join node. Task nodes represent the Sequence, AND-split and AND-join process language constructs; correspondingly, a task node can be a Sequence node, an AND-split node or an AND-join node. OR-split nodes and AND-split nodes are termed split nodes. AND-join nodes and OR-join nodes are termed merge nodes.

In the workflow graph representations used in the literature, a node can be a sequence node, a split node, or a merge node. A sequence node has one inward edge and one outward edge. A split node has one inward edge and more than one outward edge. A merge node has more than one inward edge and one outward edge. The start node and the end node are two special nodes. The start node of the workflow graph does not have any inward edge. The end node of the workflow graph does not have any outward edge. Without loss of generality, it can be safely assumed that a workflow graph has only one start node and only one end node.

A path is a sequence of nodes in the workflow graph such that any two consecutive nodes in the path are connected through a directed edge from the parent node to the child node. An OR-split node is used to create one or more mutually exclusive paths passing through that node. Hence, while traversing the workflow graph for an instance, only one child node of an OR-split node will be chosen. An OR-join node is used to synchronize such mutually exclusive paths merging at that node.
“OR” in the OR-split and OR-join terminology is a misnomer, in that the paths splitting at an OR-split node are mutually exclusive, and similarly, the paths merging at an OR-join node are mutually exclusive. Hence, XOR-split and XOR-join would have been the more appropriate terms for the node types described above. However, since the literature uses the terminology of OR-split and OR-join nodes, we will use this terminology in this chapter unless otherwise explicitly mentioned.

An AND-split node is used to create two or more concurrent paths passing through that node. Hence, while traversing the workflow graph for an instance, all child nodes of an AND-split node will be chosen. An AND-join node is used to synchronize such concurrent paths merging at that node.

While executing an instance of a business process, the workflow graph is traversed starting from the start node. During this traversal, only a subset of the nodes in the workflow graph is activated. The subgraph formed by the traversed nodes and edges, for executing an instance of a business process, is called an instance subgraph. An OR-split node activates only one of its child nodes during the graph traversal. Similarly, an AND-split node activates all its child nodes. An OR-join node expects only one mutually exclusive path on its input side (i.e., an OR-join node should be activated by only one of its parent nodes) while executing an instance subgraph. If an OR-join node gets more than one path on its input side for any instance subgraph, then the workflow graph is said to have a Lack of Synchronization structural conflict at this OR-join node. A lack of synchronization structural conflict leads to undesired multiple executions of the paths following the OR-join node having this structural conflict, in at least one scenario of workflow process execution. Similarly, an AND-join node expects all concurrent paths on its input side (i.e., an AND-join node should be activated by all of its parent nodes) while executing an instance subgraph. If an AND-join node does not get all of the paths on its input side when executing an instance subgraph, then the workflow graph is said to have a Deadlock structural conflict at this AND-join node. During execution, the workflow process waits indefinitely at any AND-join node that has a deadlock structural conflict, as the AND-join node waits to be activated by all of its parent nodes. Deadlock and lack of synchronization are the structural conflicts detected by workflow verification algorithms.
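To make the two conflict definitions concrete, here is a small illustrative check (not the algorithm proposed in this chapter; the data structures and function name are assumptions) that simulates the activation of one instance subgraph and reports a deadlock or a lack of synchronization:

```python
from collections import deque

def verify_instance_subgraph(start, children, parents, node_type):
    """Check one instance subgraph for structural conflicts.

    children:  dict node -> child nodes in the *instance subgraph*
    parents:   dict node -> parent nodes in the *full* workflow graph
    node_type: dict node -> "or-join", "and-join", or absent for others
    Returns a (conflict, node) pair, or None if the instance is clean.
    """
    visits = {}                       # how often each node was activated
    queue = deque([start])
    while queue:
        n = queue.popleft()
        visits[n] = visits.get(n, 0) + 1
        if node_type.get(n) == "or-join" and visits[n] > 1:
            return ("lack of synchronization", n)   # activated twice
        # an AND-join fires only once all of its parents have signalled
        if node_type.get(n) == "and-join" and visits[n] < len(parents[n]):
            continue
        for c in children.get(n, []):
            queue.append(c)
    # an AND-join that was activated, but never by all parents, deadlocks
    for n, count in visits.items():
        if node_type.get(n) == "and-join" and count < len(parents[n]):
            return ("deadlock", n)
    return None
```

An OR-join reached twice signals lack of synchronization; an AND-join that is activated but never by all of its parents signals deadlock. Note that this sketch checks a single instance subgraph only, whereas the chapter's algorithm decides which instance subgraphs need to be checked.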
EXISTING ALGORITHM–1 FOR WORKFLOW GRAPH VERIFICATION

Wasim Sadiq and Maria E. Orlowska proposed a workflow verification algorithm based on workflow graphs6. This algorithm is based on graph reduction rules that reduce the complexity of the workflow verification for the resulting graph. These rules do not alter the structural characteristics of the workflow graph: if the original workflow graph had a “Lack of Synchronization” structural conflict, then the resulting graph after graph reduction will also have the same structural conflict. Similarly, if the original graph had a “Deadlock” structural conflict, then the resulting graph will also have the same structural conflict.

The graph reduction rules are applied to the workflow graph one after another. If none of the graph reduction rules can reduce the workflow graph any further, then the verification algorithm stops. If the resulting graph is an empty graph, then the workflow graph does not have any structural conflict. If, on the other hand, the resulting graph is not an empty graph, then the workflow graph has at least one structural conflict. For applying each graph reduction rule, each node of the graph is visited and checked to see whether any pattern corresponding to that graph reduction rule is applicable to the chosen node. If the graph reduction rule is applicable, then it is applied to reduce the graph, and the resulting graph is used for further graph reduction. The graph reduction rules are shown individually in Figure 1 and illustrated through an example in Figure 2. The various graph reduction rules used in this algorithm are as follows.
6. Wasim Sadiq, Maria E. Orlowska: Applying Graph Reduction Techniques for Identifying Structural Conflicts in Process Models. CAiSE 1999: 195-209.
Figure 1: Graph Reduction rules are shown individually

Adjacent Reduction Rule

This rule has four sub-rules for four different patterns in the workflow graphs:
(i) If the node visited is a terminal node having only one directed edge associated with it, then the node and the edge can be removed from the graph.
(ii) If the node visited is a sequential node having only one incoming edge and only one outgoing edge, then the head node of its outgoing edge is changed to its preceding node, and the node and its incoming edge are removed from the graph.
(iii) If the node visited is not removed by the first two sub-rules, then the node must be either a split node or a merge node. If the node and its preceding node are split nodes of the same type (a node type can be either condition node or task node), then the head node of its outgoing edges is changed to its preceding node, and the node and its incoming edge are removed from the graph.
(iv) If the node and its succeeding node are merge nodes of the same type, then the tail node of its incoming edges is changed to its succeeding node, and the node and its outgoing edge are removed from the graph.
Figure 2: Graph Reduction rules explained through an example

Closed Reduction Rule

The workflow graph resulting after graph reduction may have more than one directed edge between a pair of nodes. This rule removes all but one directed edge between such nodes.

Overlapped Reduction Rule

This rule reduces a special pattern in the workflow graph structure. This pattern has four levels. Level 1 contains an OR-split node. Level 2 has AND-split nodes that have only the Level 1 OR-split node as their parent, and all child nodes of these nodes are OR-join nodes. Level 3 has OR-join nodes, each of which has all Level 2 AND-split nodes as its parent nodes and is connected to the Level 4 AND-join node. Level 4 contains an AND-join node.

The complexity of this algorithm is O(N²). However, this algorithm does not verify all kinds of workflow graphs.
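Sub-rule (ii) of the Adjacent Reduction Rule, for example, can be sketched as follows; the edge-set representation and function name are assumptions for illustration, not the Flowmake implementation:

```python
def reduce_sequential_nodes(edges):
    """Repeatedly remove nodes with exactly one incoming and one
    outgoing edge, reconnecting their predecessor to their successor.

    edges: set of (tail, head) pairs; returns the reduced edge set.
    """
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        nodes = {n for e in edges for n in e}
        for n in nodes:
            incoming = [e for e in edges if e[1] == n]
            outgoing = [e for e in edges if e[0] == n]
            if len(incoming) == 1 and len(outgoing) == 1:
                (p, _), (_, s) = incoming[0], outgoing[0]
                edges -= {incoming[0], outgoing[0]}
                edges.add((p, s))       # bypass the sequential node
                changed = True
                break                   # node set changed; rescan
    return edges
```

Applying it to a simple chain start → a → b → end reduces the graph to the single edge (start, end). Because the edges are kept in a set, a duplicate edge produced by the rewiring collapses automatically, which mirrors the effect of the Closed Reduction Rule.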
EXISTING ALGORITHM–2 FOR WORKFLOW GRAPH VERIFICATION

Hao Lin et al. presented three graph reduction rules in lieu of the “Overlapped Reduction Rule” described in the previous algorithm, for verifying all kinds of workflow graphs7. They also presented an algorithm to use these rules. The complexity of this algorithm is O((N+E)²·N²). This algorithm cannot handle workflow graphs with cycles.
MAHANTI-SINNAKKRISHNAN WORKFLOW GRAPH VERIFICATION ALGORITHM

The algorithm proposed in this chapter is a search-based algorithm using concepts from Depth-First Search and the AO* algorithm. The AO* algorithm is used for processing AND-OR graphs8,9. A workflow graph can have five types of nodes, namely AND-join, AND-split, OR-join, OR-split and sequence nodes. In our algorithm, for the purpose of uniformity, we treat a sequence node as an AND-split node having a single child node.

An instance subgraph can be defined as follows. The start node of the workflow graph belongs to every instance subgraph. If a node n of the graph belongs to an instance subgraph and n is an OR-split node, then exactly one child node of n and the edge connecting n to this child node belong to the instance subgraph. If a node n of the graph belongs to an instance subgraph and n is not an OR-split node, then all child nodes of n and the edges connecting n to these child nodes belong to the instance subgraph.

If a workflow graph does not have any OR-split node, then the workflow graph contains only one instance subgraph. Such a workflow graph is structural conflict free if and only if the instance subgraph is structural conflict free. However, if a workflow graph contains many OR-split nodes, then the various combinations of the paths emanating from these OR-split nodes lead to various instances of the workflow graph. If a brute force method is used for workflow verification, then all the instance subgraphs have to be checked for structural conflicts; hence, such a method is computationally expensive. In the Mahanti-Sinnakkrishnan workflow verification algorithm, we check only a subset of the instance subgraphs of the workflow graph for structural conflicts. Later, we prove that if these instance subgraphs are free of structural conflicts, then the complete workflow graph is structural conflict free.
Definitions

G – the implicit graph, i.e., the original complete workflow graph
G’ – at any moment during the execution of the algorithm, the explicit graph G’ is defined as the portion of the implicit graph G that has been traversed so far
Gi’ – the explicit graph after generating the ith instance
I – the total number of instance subgraphs generated by the algorithm
IGi – the ith instance subgraph
V – the set of nodes of graph G
E – the set of edges of graph G

7. Hao Lin, Zhibiao Zhao, Hongchen Li, Zhiguo Chen: A Novel Graph Reduction Algorithm to Identify Structural Conflicts. HICSS 2002: 289.
8. A. Mahanti, A. Bagchi: AND/OR Graph Heuristic Search Methods. J. ACM 32(1): 28-51 (1985).
9. Nils J. Nilsson: Principles of Artificial Intelligence. Springer 1982.
N – the number of nodes in V

This algorithm is iteration-based, and each iteration has two phases: one for creating an instance subgraph, called Create_Instance_Subgraph (denoted CIS), and one for verifying an instance subgraph, called Verify_Instance_Subgraph (denoted VIS). In CIS, a new instance subgraph is created by marking the edges of the workflow graph. Marking of the workflow graph refers to marking its edges such that for an OR-split node exactly one edge connecting it to its child nodes is marked, and for an AND-split node all the edges connecting it to its child nodes are marked. An instance subgraph can be obtained by traversing just the marked edges of the explicit graph starting from the start node. VIS traverses the marked edges of the explicit graph generated by CIS and verifies the new instance subgraph for structural conflicts. If an instance subgraph has been verified as free of structural conflicts, then the Prepare_for_Next_Instance procedure is used to prepare the data structures for creating a new instance using CIS in the next iteration. However, if VIS finds that the instance subgraph has a structural conflict error, then the error is reported and the algorithm stops.

The Mahanti-Sinnakkrishnan algorithm traverses the workflow graph by generating new instances in a defined order. Any instance subgraph of a workflow graph can be generated by choosing exactly one child node of any OR-split node and all child nodes of any other type of node. Hence, instance subgraphs vary only due to the choice made while choosing a child node of an OR-split node. The first instance subgraph is created in the Mahanti-Sinnakkrishnan algorithm by choosing the left-most child node of any OR-split node that is expanded.
In subsequent iterations, an instance subgraph is created by using the same choice of child nodes as in the immediately previous iteration for all OR-split nodes except for a special OR-split node called the PED-OR node. A PED-OR node refers to the Partially Explored, Deepest OR-split node along a path. For this OR-split node, at least one of its child nodes (through the edge connecting this node to the child node) has been considered for instance subgraph creation in previous iterations. In the new iteration, a new unexplored child node of this PED-OR node is chosen in the left-to-right order. The use of PED-OR nodes is explained through examples in a later section.

Data structures used in the algorithm:

G – used to represent the implicit graph G
G’ – used to represent the explicit graph G’
Z – stack for storing the nodes that have to be expanded for creating a new instance subgraph
OR_Split_Stack – stack for storing the partially explored OR-split nodes such that the top node of the stack is the PED-OR node for the next iteration
Y – stack used in VIS for storing the nodes that have to be visited for verifying the instance subgraph created by CIS

In the first iteration, CIS expands from the start node of the workflow graph. In subsequent iterations, CIS expands from the new child node chosen for the PED-OR node corresponding to that iteration. CIS ignores any already
expanded node and the subgraph below it. For an OR-split node that is not yet expanded, CIS labels it as expanded and chooses its first child node for further expansion. For any other node that is not yet expanded, CIS labels it as expanded and considers all of its child nodes for further expansion. CIS expands the nodes in a left-to-right, depth-first manner. Nodes expanded by CIS are installed in G’ and all traversed edges are marked.

VIS verifies the instance subgraph generated by CIS by traversing just the marked edges starting from the start node. If an OR-join node is visited more than once for an instance subgraph, then a structural conflict error is reported and the algorithm stops. If, after all the nodes have been visited, the number of visits to any visited AND-join node does not match its number of parent nodes, then a structural conflict error is reported and the algorithm stops.

The Prepare_for_Next_Instance procedure removes the top node of OR_Split_Stack if all child nodes of this node have been chosen for instance subgraph creation in previous iterations. If OR_Split_Stack is not empty after this operation, then its top node is used as the PED-OR node for the next instance subgraph creation.

Algorithm Verify_Workflow(Graph G)
  Initialization:
    Initialize a stack Z containing only the start node.
    Initialize a stack called OR_Split_Stack to NIL.
    Initialize the explicit graph G’ by installing the start node in it.
    Label the start node as not expanded in G’.
  Do
    Call the procedure Create_Instance_Subgraph(G, G’, Z, OR_Split_Stack).
    Call the procedure Verify_Instance_Subgraph(G’).
    Call the procedure Prepare_for_Next_Instance(G, G’, Z, OR_Split_Stack).
  While OR_Split_Stack is not empty

Procedure Create_Instance_Subgraph(G, G’, Z, OR_Split_Stack)
  While Z is not empty do
    Pop the top node from Z. Let this node be called “q”.
    If q is not already expanded in G’ then
      In G’, label q as expanded
      If q is an OR-split node then
        Install the first child node of q in G’ if it is not already present in G’.
        Install the edge to this child node of q in G’ and mark this edge.
        Push this child node to the top of Z.
        Push q to OR_Split_Stack.
      Else
        Install all the child nodes of q in G’ if they are not already present in G’.
        Install the edges to these child nodes of q in G’ and mark these edges.
        Push these child nodes to the top of Z in, say, right-to-left order such that the left-most child node is on the top of Z.
  End while
End Procedure

Procedure Verify_Instance_Subgraph(G’)
  Label all nodes of G’ as “not visited”.
  Set VisitCount to zero for all AND-join nodes in G’.
  Initialize a stack Y containing only the start node.
  While Y is not empty do
    Pop the top node q from Y
    If q is not visited already then
      Push the marked child nodes of q to the top of Y in, say, right-to-left order such that the left-most child node is on the top of Y.
    If q is an already visited OR-join node then
      Report “Structural Conflict: Lack of Synchronization” Error and Exit
    If q is an AND-join node then
      Increment the VisitCount of q.
    Label q as “visited”
  End while
  If the number of parents (i.e., MergeCount) does not match the VisitCount for any visited AND-join node in G’ then
    Report “Structural Conflict Error: Deadlock” and Exit.
End Procedure

Procedure Prepare_for_Next_Instance(G, G’, Z, OR_Split_Stack)
  If all child nodes of the top node of OR_Split_Stack have already been considered for creating an instance subgraph then
    Pop the top node from OR_Split_Stack.
  If OR_Split_Stack is not empty then
    For the top node p of OR_Split_Stack, generate the next child node.
    Install this generated child node in G’ if it is not already present in G’.
    Install the edge from p to this child node in G’.
    Shift the marking below p to the edge connecting this child node.
    Push this child node to the top of Z.
End Procedure
Figure 3: Mahanti-Sinnakkrishnan algorithm for workflow verification
[Figure: implicit graph G of the Check Issue process, with nodes Payment Request, C1 (branches US$ / A$), Approval from Finance Director, C2 (branches Approved / Rejected), Prepare Check for ANZ Bank, Prepare Check for CITIBANK, Reject Request, C3, C4, Signatures from Finance Director, Update Account Database, Issue Check, C5 and File Payment Request. Implicit Graph G.]
[Figure: panels (a)–(d) showing the explicit graphs generated for successive instances of the Check Issue process; the final panel is labelled “Explicit Graph G3'. Instance is Valid.”]
Figure 4: Check Issue process showing the trace of the Mahanti-Sinnakkrishnan algorithm. The instance subgraph is obtained by following the marked edges starting from the start node in the explicit graph.

Work-out of the Proposed Algorithm

A Check Issue business process is used to trace the execution of the algorithm10 and the corresponding workflow graph is given in figure 4. Acronyms used to represent the various nodes of this workflow graph are given in table 1(a). Table 1(b) gives, for each iteration, the content of stacks Z and OR_Split_Stack after each node expansion in CIS and the content of stack Y after each node is visited in VIS.

Node no.  Node name                        Node Notation
0         Payment Request                  PR
1         C1                               C1
2         Approval from Finance Director   AfFD
3         C2                               C2
4         Prepare Check for ANZ Bank       PCfAB
5         Prepare Check for CITIBANK       PCfC
6         Reject Request                   RR
7         C3                               C3
8         C4                               C4
9         Signatures from Finance Director SfFD
10        Update Account Database          UAD
11        Issue Check                      IC
12        C5                               C5
13        File Payment Request             FPR

Table 1(a): Acronyms used for representing the various tasks of the workflow process presented in figure 4

10 Check Issue business processes depicted in figures 4(a), 5(a), 7(a) and 8(a) are adopted from the paper: Wasim Sadiq, Maria E. Orlowska: Analyzing Process Models Using Graph Reduction Techniques. Inf. Syst. 25(2): 117-134 (2000).
Iteration 1 (PED-OR node: -)
  CIS, order of node expansion: PR, C1, PCfAB, C3, UAD, IC, C5, FPR, C4, SfFD
  OR_Split_Stack after CIS: C1
  VIS, order of node visit: PR, C1, PCfAB, C3, UAD, IC, C5, FPR, C4, SfFD, IC

Table 1(b): Iteration-by-iteration trace of the algorithm for the workflow graph given in figure 4, showing the order in which nodes are expanded in CIS, the content of OR_Split_Stack, and the order in which nodes are visited in VIS
Iteration 2 (PED-OR node: C1)
  CIS, order of node expansion: AfFD, C2, PCfC
  OR_Split_Stack after CIS: C1, C2 (C2 on top)
  VIS, order of node visit: PR, C1, AfFD, C2, PCfC, C3, UAD, IC, C5, FPR, C4, SfFD, IC

Iteration 3 (PED-OR node: C2)
  CIS, order of node expansion: RR
  OR_Split_Stack after CIS: C1, C2 (C2 on top)
  VIS, order of node visit: PR, C1, AfFD, C2, RR, C5, FPR
Table 1(b) (continued): Iteration-by-iteration trace of stacks Z and OR_Split_Stack (CIS) and stack Y (VIS) for the workflow graph given in figure 4

Correctness and Completeness Proof

We define the “workflow path” below any node n of the workflow graph as a hyper-path containing exactly one child node of any OR-split node and all the child nodes of any AND-split node, together with the corresponding edges. A maximal path below any node n in a workflow graph is defined as one of the different paths connecting that node with the end node. An OR-split node n is said to be the deepest OR-split node of the workflow graph G if there is no other OR-split node below it as a successor. There could be many deepest OR-split nodes in a workflow graph. Consider one such deepest OR-split node n. Let there be k maximal workflow paths below this node. Let an instance subgraph IGp passing through node n be free of structural conflicts. Let IGm1 … IGmk be the instance subgraphs obtained by modifying the instance subgraph IGp by choosing the various workflow paths below node n. If
the instance subgraphs IGm1 … IGmk are free of structural conflicts, then it can be assumed that there is only one maximal workflow path present below node n. In other words, the choice of the child node of node n for any instance subgraph of the workflow graph will not affect the structural conflict properties of that instance subgraph (i.e., if an instance subgraph was free of structural conflicts when the jth child node of node n was chosen, then a new instance subgraph obtained by choosing any other child node will also remain free of structural conflicts). We call this property the “deepest OR-split invariance property”. If node n has this property, then for subsequent steps node n can be safely ignored while computing the deepest OR-split nodes of the workflow graph. Iteratively, this property can be applied to various OR-split nodes of the workflow graph so that the number of effective maximal workflow paths of the workflow graph becomes just one. The workflow graph will have structural conflicts if and only if any of the instance subgraphs generated during this process is found to have structural conflicts. Thus, through this intuitive reasoning we establish that workflow verification is performed correctly by our algorithm and errors are reported, if any.

Termination Proof

For creating an instance subgraph, at least one new edge will be traversed. Exactly one instance is created in any iteration. Thus, the algorithm will terminate after finitely many iterations, since there are only E edges in the workflow graph G.

Complexity Proof

For creating each instance subgraph, a maximum of O(E) computations will be made in CIS, as no edge of the instance subgraph is traversed more than once while expanding the nodes in CIS for that instance subgraph.
For verifying each instance subgraph, a maximum of O(E) computations will be made in VIS, as no edge of the instance subgraph is traversed more than once while visiting the nodes in VIS for that instance subgraph. For creating each instance subgraph, a new child node of the PED-OR node (which is a special OR-split node) is chosen as the starting node for CIS. Hence, the number of instances generated is less than the sum of the numbers of child nodes of all OR-split nodes in G. Let E_OSi denote the number of child nodes of the i-th OR-split node and let N_OS be the number of OR-split nodes in the workflow graph G. Then, the complexity of verifying workflow graphs using the Mahanti-Sinnakkrishnan algorithm is

O(E) × Σ(i = 1 … N_OS) E_OSi = O(E) × O(E) = O(E²).
Work-out of the Proposed Algorithm for Various Workflow Graphs

Figure 5(a) depicts the workflow graph for a Check Issue business process. In this process, the payment request can be either for Australian dollars (A$) or for U.S. dollars (US$). If the payment request is for A$, then a check has to be prepared for ANZ Bank and signatures have to be obtained from the Manager. If, on the other hand, the payment request is for US$, then a check has to be prepared for CITIBANK. After preparing the check for CITIBANK, signatures have to be obtained from the Finance Director and funds have to be transferred to the US$ account, and these two activities can happen independently of each other. In the implicit graph for this process, we find that both the “Signatures
from Finance Director” node and the “Transfer funds to US$ account” node are connected to the OR-join node C2, while they are the child nodes of the AND-split node “Prepare check for CITIBANK”. This leads to a lack of synchronization structural conflict. Explicit graphs obtained after generating various instance subgraphs of this workflow graph are shown along with the edge markings in figures 5(b) and 5(c).

Figure 6(a) depicts the workflow graph for a Complaints processing business process. This process has two subparts, and both subparts have to be executed within the stipulated time limits for the process to complete successfully. If the process completes successfully, then it is notified that the processing is OK. If even one of the subparts gets timed out, then it is notified that the processing is not OK. In the implicit graph for this process, we find that the “Processing-1” and “Processing-2” nodes are connected to the AND-join node “Processing-OK”, while “Timeout-1” and “Timeout-2” are connected to the OR-join node “C3”. This leads to a lack of synchronization structural conflict. Explicit graphs obtained after generating various instance subgraphs of this workflow graph are shown along with the edge markings in figures 6(b) and 6(c).

Figure 7(a) depicts the workflow graph for another Check Issue business process. In this process, if the payment request is for US$ and it is approved by the Finance Director, then the “Issue Check” node gets activated only by the “Signature from Finance Director” node. Since “Issue Check” is an AND-join node, it also waits to be activated from the “Update Accounts Database” node. This leads to a deadlock structural conflict. Explicit graphs obtained after generating various instance subgraphs of this workflow graph are shown along with the edge markings in figures 7(b) and 7(c).
Figure 8(a) depicts the workflow graph in which a dummy AND-join node is introduced before the node “C2” of figure 5(a) to remove the lack of synchronization structural conflict. Explicit graphs obtained after generating various instance subgraphs of this workflow graph are shown along with the edge markings in figures 8(b) and 8(c). Figure 9 is a toy problem depicting a complicated, overlapped, forty-two-node workflow graph. In this problem, there are multiple levels of overlapping. In an overlapped workflow graph, an AND-split node is directly connected to an OR-join node in a peculiar manner such that no structural conflict is introduced by this. Table 2 gives the trace of the Mahanti-Sinnakkrishnan algorithm for this workflow graph. In this table, for each iteration the PED-OR node from which CIS starts expanding is given along with the nodes expanded by CIS and the nodes visited by VIS. Figure 10 is another toy problem depicting a complicated overlapped workflow graph. Table 3 gives the trace of the Mahanti-Sinnakkrishnan algorithm for this workflow graph.
[Figure 5 panels: (a) implicit graph with nodes Payment Request, C1 (branches A$ / US$), Prepare Check for ANZ Bank, Signatures from Manager, Prepare Check for CITIBANK, Signatures from Finance Director, Transfer funds to US$ Account, C2, Update Accounts Database and Issue Check; (b) and (c) explicit graphs, one labelled “Explicit graph G2'. [VisitCount(C2)=2]>[MergeCount(C2)=1]. Lack of Synchronization at C2.”]
Figure 5: Check Issue process showing lack of synchronization structural conflict
[Figure 6 panels: (a) implicit graph with nodes Register, C1, Processing-1, C2, Timeout-1, Processing-2, C3, Processing-OK, Processing-NOK and C4; (b) and (c) explicit graphs, one labelled “Explicit Graph G2'. [VisitCount(C4)=2]>[MergeCount(C4)=1]. Lack of Synchronization at C4.”]
Figure 6: Complaints processing process showing lack of synchronization structural conflict11
11 Implicit graph shown in this figure is adopted from the paper: Wil M. P. van der Aalst: The Application of Petri Nets to Workflow Management. Journal of Circuits, Systems, and Computers 8(1): 21-66 (1998).
Figure 7: Check Issue process showing deadlock structural conflict
[Figure 8 panels: (a) implicit graph with nodes Payment Request, C1 (branches A$ / US$), Prepare Check for ANZ Bank, Signatures from Manager, Prepare Check for CITIBANK, Signatures from Finance Director, Transfer funds to US$ Account, a dummy node before C2, Update Accounts Database and Issue Check; (b) and (c) explicit graphs, one labelled “Explicit graph G2'. Instance is Valid.”]
Figure 8: Check Issue process showing how a dummy node can be used to remove structural conflicts
Figure 9: Toy problem with multiple level overlapping and no structural conflicts12

Iteration 1 (PED-OR node: -)
  CIS, sequence of node expansion: C1, T1, C2, T5, C10, T17, C18, T21, C11, T13, C19, C7, T7, C12, T18, C20, C13, T14, C21
  VIS, sequence of node visit: C1, T1, C2, T5, C10, T17, C18, T21, C11, T13, C19, C7, T7, C12, T18, C20, C13, T14, C21

Iteration 2 (PED-OR node: C2)
  CIS, sequence of node expansion: C6, T6
  VIS, sequence of node visit: C1, T1, C2, C6, T6, C10, T17, C18, T21, C11, T13, C19, C7, T7, C12, T18, C20, C13, T14, C21

Iteration 3 (PED-OR node: C1)
  CIS, sequence of node expansion: T2, C3
  VIS, sequence of node visit: C1, T2, C6, T6, C10, T17, C18, T21, C11, T13, C19, C3, C7, T7, C12, T18, C20, C13, T14, C21

Iteration 4 (PED-OR node: C3)
  CIS, sequence of node expansion: T8
  VIS, sequence of node visit: C1, T2, C6, T6, C10, T17, C18, T21, C11, T13, C19, C3, T8, C12, T18, C20, C13, T14, C21

Iteration 5 (PED-OR node: C1)
  CIS, sequence of node expansion: T3, C4, T9, C14, T15, T19, C15, C9, T11, C16, T16, T20, C17
  VIS, sequence of node visit: C1, T3, C4, T9, C14, T15, C18, T21, T19, C19, C15, C9, T11, C16, T16, C20, T20, C21, C17

Iteration 6 (PED-OR node: C4)
  CIS, sequence of node expansion: C8, T10
  VIS, sequence of node visit: C1, T3, C4, C8, T10, C14, T15, C18, T21, T19, C19, C15, C9, T11, C16, T16, C20, T20, C21, C17

Iteration 7 (PED-OR node: C1)
  CIS, sequence of node expansion: T4, C5
  VIS, sequence of node visit: C1, T4, C8, T10, C14, T15, C18, T21, T19, C19, C15, C5, C9, T11, C16, T16, C20, T20, C21, C17

Iteration 8 (PED-OR node: C5)
  CIS, sequence of node expansion: T12
  VIS, sequence of node visit: C1, T4, C8, T10, C14, T15, C18, T21, T19, C19, C15, C5, T12, C16, T16, C20, T20, C21, C17

Table 2: Trace of the Mahanti-Sinnakkrishnan algorithm for figure 9

12 Implicit graphs shown in figures 9 and 10 are adopted from Hao Lin, Zhibiao Zhao, Hongchen Li, Zhiguo Chen: A Novel Graph Reduction Algorithm to Identify Structural Conflicts. HICSS 2002: 289.
Figure 10: Toy problem with multiple level overlapping and no structural conflicts
Iteration 1 (PED-OR node: -)
  CIS, sequence of node expansion: C1, F1, C2, F3, M5, S3, M3, S2, M6, M4
  VIS, sequence of node visit: C1, F1, C2, F3, M5, S3, M3, S2, M6, M4

Iteration 2 (PED-OR node: C2)
  CIS, sequence of node expansion: F4, M1, S1, M2
  VIS, sequence of node visit: C1, F1, C2, F4, M1, S1, M5, S3, M2, M3, S2, M6, M4

Iteration 3 (PED-OR node: C1)
  CIS, sequence of node expansion: F2, C3, F5
  VIS, sequence of node visit: C1, F2, M1, S1, M5, S3, C3, F5, M2, M3, S2, M6, M4

Iteration 4 (PED-OR node: C3)
  CIS, sequence of node expansion: F6
  VIS, sequence of node visit: C1, F2, M1, S1, M5, S3, C3, F6, M2, M6

Table 3: Trace of the Mahanti-Sinnakkrishnan algorithm for figure 10
OTHER METHODS OF WORKFLOW VERIFICATION

Propositional Logic

This method uses the following process language constructs: Sequence, AND-split, AND-join, XOR-split, XOR-join, OR-split, OR-join and cycle. The XOR-split and XOR-join constructs used in this method correspond to the OR-split and OR-join constructs used in workflow graphs. Using the OR-split construct, more than one process path can be executed concurrently. The OR-join construct synchronizes such concurrent executions initiated by OR-split constructs. This method uses logical deduction to do the workflow verification. The algorithm for this method and its correctness are given by Henry H. Bi and J. Leon Zhao13. The complexity of this method is O(N²), where N refers to the number of tasks in the workflow process. This method is not complete, as it cannot detect structural conflicts in all kinds of overlapped workflow structures.

WF-nets

WF-nets are based on Petri nets. Petri nets are directed graphs that have two types of nodes, called “places” (P) and “transitions” (T). Places and transitions are together called nodes. Places in Petri nets act as the intermediate states of the process. Places and transitions are connected by edges (F). Edges are also called flows. Edges do not connect any place directly to another place. Similarly, edges do not connect any transition directly to another transition. The set of places that have a directed edge to a transition are called the input places of that transition. Similarly, the set of places that have a directed edge from a transition are called the output places of that transition. In the same way, input transitions and output transitions can be defined. WF-nets are Petri nets that have a single place with no input transitions, called the source place, a single place with no output transitions, called the sink place, and all the other places and transitions connected between the source place and the sink place. A place can contain one or more tokens. At any
13 Henry H. Bi and J. Leon Zhao: Applying Propositional Logic to Workflow Verification. Information Technology and Management 5(3-4): 293–318 (2004).
moment, the tokens in the various places determine the state of the system. A transition is enabled if all its input places have at least one token each, and it is then ready to fire. If the transition fires, it takes one token from each of its input places and places one token in each of its output places. The initial state of a WF-net comprises one token in its source place. This method can verify WF-nets with loops. This method of verifying is complete, as it can verify all kinds of WF-nets. However, this method is very complicated. Further, getting a visual comprehension of the trace of this algorithm is difficult. The function Petrify14 converts a workflow graph into a Petri net. The complexity of verifying such a converted WF-net is O(k²·l), where k refers to the sum of the cardinality of the set of condition nodes in the workflow graph and the cardinality of the set of edges in the workflow graph, and l refers to the sum of the cardinality of the set of task nodes in the workflow graph and the cardinality of the set of edges in the workflow graph.

Matrix Algebra

This method uses an adjacency matrix representation of workflow processes to do the verification. An inline block is a set of nodes that presents a common set of inward edges, comprising all the edges that are directed at the various nodes in the inline block from the nodes outside the inline block. Similarly, it has a common set of outward edges. If an inline block has all its inward edges only from the start node and has all its outward edges directed to the end node, then the inline block is said to have the “Blocked transition property”. This property is used to reduce the complexity of the workflow verification by identifying inline blocks in the workflow process model. Cyclic workflows can be verified in this method by dividing the cycle into main path(s) and feedback path(s) and then verifying them separately as acyclic workflows.
This approach also provides workflow abstraction, and it can verify complex workflow structures (including overlapping workflow structures and cycles) by analyzing all instance flows within each inline block. The computational complexity of this method is not given explicitly in the referred literature; it amounts to O((N+E)²·N²). This method is given by Yongsun Choi15.
14 Function Petrify is described in the paper: Wil M. P. van der Aalst, Alexander Hirnschall, H. M. W. (Eric) Verbeek: An Alternative Way to Analyze Workflow Graphs. CAiSE 2002: 535-552.

IMPLEMENTATION DETAILS

We implemented the proposed algorithm in the C language for the Linux platform. The data structure used for representing each node of a workflow graph had the node type (which can be AND-join node, AND-split node, OR-join node, or OR-split node), the number of child nodes, the child node numbers, the number of parent nodes, the parent node numbers, a Boolean for expanded status, and a Boolean for visited status. The graph was represented as an array of nodes and the array index of each node formed its node number. The input graph was read from a file, and this file contained, for each node, its node number, node type and the various child nodes. We tested our algorithm using test graphs presented in the literature. Moreover, we tested our algorithm by using a random workflow
15 Yongsun Choi: A Two Phase Verification Algorithm for Cyclic Workflow Graphs. ICEB 2004: 137-143.
graph generator, which was coded in the C language for the Linux platform. The random workflow graph generator takes the number of nodes in the graph as input from the user and generates a random workflow graph accordingly.
CONCLUSION

Workflow verification has significant business importance, as the structural conflicts in a business process can lead to business loss, reduce customer satisfaction, increase the workload of employees, create a negative brand image, reduce profits and consume substantial managerial time. This problem has been solved in a simple and elegant manner by the proposed algorithm. The algorithm is much easier to understand, as it uses search-based techniques like depth-first search, and it has significant advantages in terms of time complexity when compared to other workflow verification algorithms available in the literature. We hope that, due to these advantages, workflow verification will increasingly be made available in process modelling tools and workflows will increasingly be verified before being deployed in real-world business environments.
ACKNOWLEDGEMENT

We acknowledge the help provided by Mousumi Das and Tanusree Bag in drawing the workflow graphs.
ASAP/Wf-XML 2.0 Cookbook—Updated

Keith D Swenson, Fujitsu Software Corporation, United States

OVERVIEW

Wf-XML is a protocol for process engines that makes it easy to link engines together for interoperability. Wf-XML 2.0 is an updated version of this protocol, built on top of the Asynchronous Service Access Protocol (ASAP), which is in turn built on the Simple Object Access Protocol (SOAP). This article is for those who have a process engine of some sort and wish to implement a Wf-XML interface. At first, this may seem like a daunting task because the specifications are thick and formal. But, as you will see, the basic capability can be implemented quickly and easily. This article will take you through the basics of what you need to know in order to quickly set up a foundation and demonstrate the most essential functions. The rest of the functionality can rest on this foundation. The approach is to do a small part of the implementation in order to understand how your particular process engine will fit with the protocol.
ASSUMPTIONS

It is assumed that you have a Business Process Management System (BPMS) onto which you are trying to fit this protocol. The specific design of that BPMS, the philosophy behind it, and the technology it is built on do not matter. Wf-XML defines a standard web service interface to your engine, which is an abstraction of what is going on inside. This article deals with two levels of implementation: Level 1 is the implementation at the ASAP level of the protocol. At this level, there are “service instances” which correspond to process instances, but there is limited information about how those processes run and how to configure the server to have more or fewer process definitions. Level 2 is the Wf-XML level, where the service is not only an asynchronous service but a particular kind of asynchronous service that can provide a process definition and additional process-oriented details about the service. Level 1 (ASAP) must be implemented before you can consider implementing Level 2. All BPM systems are assumed to have a collection of process definitions that can be referred to by a unique name or ID. At Level 1, the definition of those processes does not matter, nor does it matter how those definitions are stored and retrieved. The process definition may, but does not need to, have an external representation. All that is necessary is that the definition can be addressed using a unique ID. The web service that you access in order to invoke a process from a given definition is called a factory, and so you can think of the factory ID as being the same as the process definition ID. It is assumed that there is a way to start a process instance for a particular process definition and that there is a set of data values that needs to be provided at the time the process is started. It is assumed that each process instance has a unique ID of some sort that can be used to access the current state of the process.
Again, it does not matter how that process instance is stored, or what technology is used to retrieve it. All that is necessary is that there is a unique ID that can be used to access the current state.
It is assumed that the system in question has a way to send and receive SOAP messages. The above assumptions are sufficient for implementing the basic factory/instance roles of the protocol. It is assumed that BPM systems will have a way to send a SOAP message and go into a persistent wait state such that SOAP messages at any point in the future could reactivate and continue the process. This will allow the BPM system to start a remote process instance and wait for its completion in the observer role of ASAP. The above is the minimum capability that is necessary to interact via ASAP or Wf-XML. These interactions can then be extended with additional parts of the protocol depending upon the capabilities of the BPM system.

• If the BPM system offers a way to list all the instances, then there is a command to allow the observer (or other client) to browse the currently running process instances.

• If the BPM system offers a way to list all the process definitions, then Wf-XML can be used for browsing the process definitions.

• For a Level 2 implementation you must have an external representation of the process definition. This can be XPDL. Several dozen process tools have found a way to map their process definitions to XPDL, so it would be a good choice for interoperability, but any XML-based process description (e.g. BPEL) can be used so long as it can be understood on both ends of the connection. For Level 2, the assumption is that submitting new process definitions can create new factories.

• If the process instance is composed of a set of currently running activities, then Wf-XML can provide a way to list these activities and to access their currently running state.

These latter capabilities are optional extensions that can be made, but are not required for the basic interaction.
CAVEAT

This article is being written before either ASAP or Wf-XML 2.0 has been formally ratified. That means that the specs may change in ways that make this article obsolete. Keep in mind as you read this article that the names of the operations may be slightly different and the names and arrangement of the tags may change from what is presented here. But the general concepts are sure to be the same and you will still find the general implementation strategy to be helpful. Before attempting any interoperability tests, please check with the official specifications to be sure that your implementation is valid.
ASAP PRIMER

The Asynchronous Service Access Protocol (ASAP) has two sides. In the diagram below, the client side (Observer) is on the left, while the service side (Factory and Instance) is on the right. You will probably want to implement both sides, but it is recommended that you start by implementing the service side first. This immediately makes your BPM system a service for other systems.
[Diagram: the Observer sends CreateInstance to the Factory; the Instance, when finished, sends Completed back to the Observer. XML data flows as IN, IN/OUT and OUT.]
First implement the factory operation CreateInstance, which accepts details from the observer (with an immediate response back to the observer). The service runs for a while and, when finished, makes a Completed request back to the observer (accepting an immediate response back). If you are using a process engine, this is enough to demonstrate the basic ability to invoke remotely via ASAP and to return the result of the operation later.
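Concretely, a CreateInstance request is just a SOAP message posted to the factory's URI, along the lines of the sketch below. Every element name, namespace and URI here is a placeholder of our own; as the caveat above warns, the authoritative names must come from the ratified specification:

```
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <!-- Illustrative element names only -->
    <CreateInstanceRq>
      <!-- Where the Completed (and Notify) requests are sent later -->
      <ObserverKey>http://observer.example.com/asap/obs1</ObserverKey>
      <!-- Process-specific input data -->
      <ContextData>
        <amount>100</amount>
      </ContextData>
    </CreateInstanceRq>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

The immediate response would carry the unique key (URI) of the new instance, which the observer keeps in order to correlate the Completed request that arrives later.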
[Diagram: the Observer sends CreateInstance to the Factory and GetData/SetData to the Instance; the Instance sends Notify and Completed requests back to the Observer. XML data flows as IN, IN/OUT and OUT.]
Next, we add some additional functions to the Instance resource. The GetData method will allow the observer to retrieve the current state of the instance, allowing for polling interactions. The SetData method will allow the observer to change the data values later. Since many services may not allow changing of data after the start, it is acceptable to simply return a fault message (exception). Finally, if the Service Instance goes through a number of state changes before it completes, it could make Notify operation requests back to the observer.
WF-XML 2.0
Wf-XML extends the ASAP protocol by adding some additional capabilities between business process management systems. Such engines usually have a way to install and remove process definitions (factories). Wf-XML therefore adds a new resource called a container resource. On this resource you can invoke the ListFactories operation in order to discover the factories that are installed on the container. You can then create new factories by supplying a new process definition. The factory resource is extended with the ability to retrieve the process definition and the ability to change the process definition to a new one as supplied. A version-numbering scheme is provided to keep track of the changes.

[Diagram: the Observer can call ListFactories and CreateFactory on the Container, ListInstances on a Factory, and ListActivities and Get/SetData on an Instance; each Activity represents a currently running step of the Instance.]
Wf-XML also adds the concept of an activity resource in order to give additional information about the process instance that is running. With ASAP we know only that the process is open and running. When ListActivities is called, a list of the currently active activities is returned, telling you what step the process is currently at, so that you can monitor the advancement of the process.
LEVEL 1 (SERVER) – CREATE AND COMPLETE
This section describes how to implement the server side of the CreateInstance to Completed cycle.
FACTORY RESOURCE ADDRESS
Each process definition is represented as a factory resource. If you have 113 process definitions, you will have 113 factory resources. SOAP requests are made to the factory resource in order to do things like start instances and list all instances. Those new to the ASAP protocol might find this a little uncomfortable at first. You might ask, “Why not just make a single operation that you pass the name of the process definition to?” In fact, this is what you will be doing, but we represent the factory as a web resource for a very important reason.
Consider how the Web works. When you access a document on the Web, you can use a single URL value. Encoded into the URL are the name of the machine to make the request to and the address of the document on the machine. Some addresses will invoke servlets that retrieve the document from a special place, or generate it on the fly. You, as the receiver of the document, do not need to be concerned with how the URL was composed. All that is important is that everything the server needs to know in order to deliver the document is encoded into the URL. A link can be placed in a page. That link contains the URL, and by clicking on the link the document is retrieved. When process engines are linked using ASAP, the URL of the factory will contain all the information necessary to locate the factory. For a BPM engine, this is easy because each process definition has a unique name or ID. It does not matter whether the ID is a numeric value or simply a unique name, as long as there is a string of characters that uniquely identifies the particular process definition. Some systems may need to combine two or more values to make a unique ID. For example, a system that offers process definition versions may need to combine the name and the version in order to make a unique ID. The address of a factory can be composed two different ways. In both ways, there is a ‘base URL’ which can be thought of as the address of the handler that receives the SOAP message, and the process definition ID as an extension. The process definition ID can be encoded as a URL parameter. Consider the case below, where the address of a servlet is given, followed by examples of factory addresses with process definition IDs:

1  Servlet:  http://server:8080/bpm/factory.jsp
2  Factory:  http://server:8080/bpm/factory.jsp?id=84352
3  Factory:  http://server:8080/bpm/factory.jsp?id=Purchase+Order
4  Factory:  http://server:8080/bpm/factory.jsp?id=Expense&version=1.3
One advantage of using URL parameters is that you can easily use multiple values, and the typical servlet engine will parse the values automatically. Because the order of the values does not matter, you gain flexibility. The second way is to map the handler as part of the path and then extend the path with the details of the unique ID. For example:

1  Servlet:  http://server:8080/bpm/factory/
2  Factory:  http://server:8080/bpm/factory/84352
3  Factory:  http://server:8080/bpm/factory/Purchase+Order
4  Factory:  http://server:8080/bpm/factory/Expense/1.3
The interoperability demonstrations in June 2004 between Fujitsu, TIBCO, Handysoft, Advantys, and a couple of open source projects validated that this approach is compatible with .Net (C# and VB), Java (AXIS and custom Java), C++, and other technologies that underlie web service implementations today. It does not really matter how you construct the URL, so pick a technique that makes the most sense for the technology used to send and receive SOAP messages.
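As a concrete illustration, the URL-parameter style can be composed and parsed with a few lines of code. This sketch uses Python's standard library for brevity (the same logic applies in Java or C#); the base address and helper names are illustrative, not part of any specification.

```python
from urllib.parse import urlencode, urlparse, parse_qs

BASE = "http://server:8080/bpm/factory.jsp"  # illustrative handler address

def factory_url(**ids):
    # Encode one or more identifying values as URL parameters.
    # The order of the parameters does not matter.
    return BASE + "?" + urlencode(ids)

def factory_ids(url):
    # Recover the identifying values from a factory URL.
    query = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in query.items()}

url = factory_url(id="Expense", version="1.3")
assert factory_ids(url) == {"id": "Expense", "version": "1.3"}
```

Note that `urlencode` also takes care of escaping, so an ID such as "Purchase Order" round-trips correctly as `Purchase+Order`.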
PROCESS INSTANCE ADDRESS
The CreateInstance operation will be called on the factory resource and will cause a process instance to be created. In order to allow the process instance to be accessed later to check status or perform other operations, the process instance needs a URL. Since BPM systems are designed to access
process instances on demand, they always have some form of unique ID for retrieving them. Again, you will have some form of handler at a base address, which is then extended with the specifics for the process instance ID.

1  Servlet:   http://myserver:8080/bpm/instance.jsp
2  Instance:  http://myserver:8080/bpm/instance.jsp?id=84352
You may find yourself asking the question: “Why form a whole URL for the instance? Why not just use the factory address and pass the instance ID as a field in the message?” There are two answers. The first is one of packaging. We want to provide a single value back to the caller. Any number of different values can be packed into a single URL, using well-defined rules. So you may, if you wish, extend the factory address with the process instance ID. But you may also decide to do it other ways. If the server takes on the full responsibility of packing the values together by returning a complete URL, and of course parsing the values apart when used, then the server can use any scheme to encode any amount of data into the URL. The second reason is that the instances and the factory may be served from a different host machine. This may be used as a primitive form of load balancing, or there may be a technical reason for running a process on a particular host. An infinite number of URL addresses are readily available for free so we use this approach to hide the technical requirements of the system from the caller.
TRANSPORT STRATEGY
The above examples use ‘http’ addressing, meaning that the XML of the SOAP message will be passed over http. ASAP and Wf-XML are not limited to http addresses; they can be delivered in any way that a SOAP message can be delivered. That being said, initial implementations typically use http because it is easy to work with and debug. Also, all implementations of Wf-XML are sure to support http transport of SOAP messages, hence interoperability with other systems is assured if you support this transport. The experience of the June 2004 interoperability demonstrations reinforced this conclusion. Some features are optional when using http. This document is simplified somewhat by the assumption that you will be using http. Just keep in mind that if you use a different transport you may need to implement the protocol in a slightly different way. Please check the specification for details.
CONTEXT DATA AND RESULT DATA
Before implementing the first operation, you need to decide on a structure for the context data and result data. The context data is a structure used for sending data to the factory or process instance. Think of it as the input data. If your BPM engine supports updating the process variables, then this structure is also used for the SetProperties operation. The result data is an XML structure for communicating data in the other direction. The Completed and Notify operations contain the result data XML structure. Most BPM services store the context data in a set of process variables. How the BPM system stores the data does not matter. The ASAP clients of your server will never see your variables directly. They only see the XML representation of them. Therefore it is common to describe the context data XML structure as being the process variables, but your job is to accept this XML and then save the values as process variables. Furthermore, at the end of the process, or when requested, you need to be able to read the process variables and produce the result data XML structure.
Why are there two different structures: context and result? There is a need to have values that can be set, values that can be read, and values that can be both set and read. Using XML Schema to define structure, there is no convenient way to indicate which parts of the structure are IN variables, OUT variables, or IN/OUT variables. IN variables are in the context data structure. OUT variables are in the result data structure. IN/OUT variables are in both structures, and by convention will be found at the same XPath location in both structures. It is possible that all variables are IN/OUT, so the context and result structures can be identical. The ASAP specification leaves it up to the implementation to define these XML structures. Any XML structure that can be described by XML Schema is allowed. Most BPM systems allow different process definitions to define different sets of variables, so it is insufficient to define a single context data structure for the entire BPM system. Many BPM systems have, for each process, a set of name/value pairs. The simplest way to construct the context data XML structure is to generate tags, named after the variables, that contain the values of the variables. If your BPM system offers typed variables, then be sure to consider which XML Schema type the value most closely matches, or use String if type checking is not needed. If the variable itself contains a complex record structure, you may want to think about how best to map this structure into XML. If the variable contains XML directly, be sure to put the XML in the message as XML, and not encoded into a string, so that the variable can be extracted without having to parse the result multiple times. You will find it most convenient to parse the request or response message into a DOM tree. The context data branch of the tree can be easily iterated through. For each tag, look for a corresponding variable and set the value.
For generating response data, iterate through your variables and generate a DOM element with the name of the variable and the contents as the value of the variable. If you write this in a generalized way it will work for all process definitions automatically. Be careful though: it is possible that your BPM system allows variables to be named in ways that are not allowed as XML tag names. For example, you may allow space characters in variable names, but XML tag names may not contain spaces. This means that you need to somehow encode the variable name into an acceptable tag name. Experience has shown that it is often possible to just use a simplified version of the variable name where any offending characters are simply stripped out. This is not a reversible conversion, since it is possible for more than one variable name to be simplified to the same tag name. Human nature is such that variables usually have names that are different enough that the simplified names are still unique within the process. Generally, it is sufficient to validate the process definition by iterating through the variables when it is first installed, checking that the simplified names are all unique within the process—simply throw an exception if they are not, and let the process designer fix the problem.
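The variable-to-XML round trip described above can be sketched in a few stdlib functions. This is a Python illustration, not spec code: the root tag, the allowed-character rule, and the helper names are all assumptions for the example.

```python
import re
from xml.dom.minidom import Document, parseString

def simplify_name(variable_name):
    # Strip characters that are not acceptable in an XML tag name.
    # Not reversible, so uniqueness must be checked at install time.
    return re.sub(r"[^A-Za-z0-9_.-]", "", variable_name)

def check_unique_tags(variable_names):
    # Validate a process definition when it is installed: every
    # simplified name must still be unique within the process.
    seen = {}
    for name in variable_names:
        tag = simplify_name(name)
        if tag in seen:
            raise ValueError("'%s' and '%s' both map to <%s>" % (name, seen[tag], tag))
        seen[tag] = name

def variables_to_xml(variables):
    # Generate one element per process variable for the result data.
    doc = Document()
    root = doc.createElement("ResultData")
    doc.appendChild(root)
    for name, value in variables.items():
        el = doc.createElement(simplify_name(name))
        el.appendChild(doc.createTextNode(str(value)))
        root.appendChild(el)
    return doc.documentElement.toxml()

def xml_to_variables(xml_text):
    # Walk the context-data branch and collect tag -> value pairs.
    root = parseString(xml_text).documentElement
    return {child.tagName: (child.firstChild.data if child.firstChild else "")
            for child in root.childNodes if child.nodeType == child.ELEMENT_NODE}

check_unique_tags(["Customer Name", "Amount"])   # ok: simplified names are unique
```

For example, a process with both "Total Amount" and "TotalAmount" would fail `check_unique_tags`, since both simplify to the same tag.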
SECURITY
Security of information systems involves authentication, authorization and privacy. One must take care that a Wf-XML or ASAP implementation does not become a way around the built-in security mechanisms of a BPM system.
Privacy is the simplest requirement to meet. Those requiring privacy must be able to send and receive the SOAP messages over an SSL link—usually by using HTTPS (but SOAP allows other ways to send a message, and privacy can be ensured by using SSL). If you are sending SOAP messages over SMTP, then you may need to encrypt the contents of the email message to ensure privacy. All of these solutions are out of the scope of the protocol.
Authentication: The first rule is that every SOAP request MUST be authenticated. Never allow unauthenticated requests into the server. Usually server 1 (client) authenticates to server 2 (service) through the normal means, so that server 2 is assured that the call is coming only from server 1.
Authorization: Server 2 then makes available only the information that server 1 needs to know. Once you have this set up, you need to be concerned about who is using server 1 and can cause server 1 to invoke a service on server 2. If server 2 gives privileged information to server 1, then you must assume that any user of server 1 can get that privileged information. It is best if server 2 only gives information to server 1 that would be acceptable to give to any user of server 1. This is not always possible. The server is, after all, acting on behalf of another, and may need access to restricted information to accomplish its task. If server 2 allows server 1 to access information that is not generally accessible, then you must be sure that server 1 guards the information with the same rules that server 2 guards it with. It is not my goal to solve the problem in this article, but simply to raise the issue: if you have access control on information, you need to think carefully about how to maintain the same access control now that you have two servers cooperating.
This has very little to do with the protocol, but the implementation needs to be careful that the protocol does not become a backdoor to privileged information.
RECEIVING: CREATEPROCESS (ON FACTORY)
The first operation you will create will be the one that creates a process instance within a BPM system. An example message is given below. Implementing the CreateProcess handler entails receiving this message, parsing the XML into a DOM, and then doing the appropriate operations with the elements.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Request>
5        <asap:SenderKey>http://sender.com/observer.jsp</asap:SenderKey>
6        <asap:ReceiverKey>http://rec.com/factory.jsp?id=213</asap:ReceiverKey>
7        <asap:RequestID>abc123</asap:RequestID>
8        <asap:ResponseRequired>Yes</asap:ResponseRequired>
9      </asap:Request>
10   </SOAP-ENV:Header>
11   <SOAP-ENV:Body>
12     <asap:CreateInstanceRq>
13       <asap:StartImmediately>Yes</asap:StartImmediately>
14       <asap:ObserverKey>http://sender.com/observer.jsp</asap:ObserverKey>
15       <asap:Name>Expense Approval for Jones</asap:Name>
16       <asap:Subject>Expense Approval for Jones</asap:Subject>
17       <asap:Description>This is a …</asap:Description>
18       <asap:ContextData xmlns:pd="http://rec.com/schema.jsp?id=213">
19         <pd:EmployeeName>Jenny B Jones</pd:EmployeeName>
20         <pd:Amount>$317.45</pd:Amount>
21         <pd:City>San Jose</pd:City>
22         <pd:PostalCode>95134</pd:PostalCode>
23       </asap:ContextData>
24     </asap:CreateInstanceRq>
25   </SOAP-ENV:Body>
26 </SOAP-ENV:Envelope>
Line 5: The sender may or may not be the same value as the observer. Currently, the only reason the Sender Key is needed is to include it in the response message. Check the latest specification, because this tag may be affected by the evolving WS-Addressing specification.
Line 6: This is the address that the sender was trying to deliver to, so this should be the address of the factory. If you wish, you can compare this value to the assumed source URL of the SOAP messages and, if different, indicate some kind of error. Initially, though, use this value only to copy into the response. Check the latest specification, because this tag may be affected by the evolving WS-Addressing specification.
Line 7: The request ID is needed only to copy back into the response so that the response can be accurately matched with the request. This really should not be necessary when using the http protocol, but if you receive a request ID, simply copy that value into the response message. This too may be affected by WS-Addressing.
Line 8: The easiest implementation is to always send a response to every request. When using http this is the desired implementation anyway. For initial implementations, ignore this value and always return a response. Later, when you are getting ready to certify a completed implementation, revisit this value.
Line 13: The ‘StartImmediately’ option is for the relatively rare case that someone might want to create a process instance without starting it, and then use a ChangeState operation to start it later. Most of the time, systems will simply wait until they are ready to start the process instance, and perform the create and start in the same call. So most of the time this will be ‘Yes’. For the initial implementation, ignore this value, or, if you wish to be proper, check that the value is ‘Yes’ and return a fault (exception) if it is anything else.
Line 14: This is the observer key and it MUST be stored someplace, even for the simplest implementation of the protocol. Ultimately, the BPM engine should have a special place to record this for every process instance. On the assumption that you are creating this Wf-XML interface for an existing engine, and that you do not have the luxury of changing the BPM engine, consider using a process variable to store this string value. If this is not possible, then the interface layer can store it by maintaining a persistent map from process instance ID to observer ID. This option is less desirable because it is more effort to keep the process instances and observers consistent. However you store it, it is absolutely required that, at process end, this observer key can be retrieved in order to call the Completed operation on the observer.
Line 15: If your BPM engine has a way to name process instances, use this value for the name; otherwise you can ignore it.
Line 16: If your BPM engine has a subject for each process instance, use this value for the subject; otherwise you can ignore it.
Line 17: If your BPM engine has descriptions for the process instances, use this value for the description; otherwise you can ignore it.
Line 18: This is the context data that is the initial data for the process. See the section on context data above. Note that the context data tags are in a different namespace from the other tags. This must be done in order to prevent name clashes and to allow variables with any name to be represented. The namespace is often associated with an XML Schema definition, and in this case that is the schema of the context data we discussed before. Remember that each factory (each process definition) has a unique definition of the context data. In this example, the namespace identifier includes the ID of the process definition. This is not necessary, but it is a reminder that the schema depends on the factory. Similarly, the URL includes the ID in such a way that it will be possible later to implement a servlet to return the XSD file for that process definition for a validating parser. It is not necessary to actually generate the XSD in this initial version, but now the pattern is set.
Lines 19-22: This data must be parsed out of the XML and converted into whatever form is suitable for starting a process instance on your BPM system according to that particular process definition.
That should be all the information that you need to create and start a process instance. Make the appropriate calls on your BPM engine to instantiate the process before generating the response. A sample response is given below, with a similar discussion after it.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Response>
5        <asap:SenderKey>http://sender.com/observer.jsp</asap:SenderKey>
6        <asap:ReceiverKey>http://rec.com/factory.jsp?id=213</asap:ReceiverKey>
7        <asap:RequestID>abc123</asap:RequestID>
8      </asap:Response>
9    </SOAP-ENV:Header>
10   <SOAP-ENV:Body>
11     <asap:CreateInstanceRs>
12       <asap:InstanceKey>http://rec.com/instance.jsp?id=456</asap:InstanceKey>
13     </asap:CreateInstanceRs>
14   </SOAP-ENV:Body>
15 </SOAP-ENV:Envelope>
Line 5: Copy this from the request.
Line 6: Copy this from the request.
Line 7: Copy this from the request.
Line 12: Generate this URL for the instance resource using the process instance ID and whatever encoding style you decided upon (according to the section on the process instance address).
And there you are: you are done with your first ASAP operation.
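To make the receiving side concrete, here is a small sketch in Python (the same logic applies to a Java or C# servlet). It is a simplified illustration, not spec code: namespace prefixes are omitted, `create_instance` stands in for whatever call starts an instance in your BPM engine, and only the ObserverKey and context data are extracted.

```python
from xml.dom.minidom import parseString

def handle_create_instance(soap_xml, create_instance):
    # Parse the CreateInstance request into a DOM, pull out the
    # observer key and the context data values, start the instance,
    # and return the body of the response. Namespace prefixes are
    # omitted here for brevity.
    doc = parseString(soap_xml)
    observer = doc.getElementsByTagName("ObserverKey")[0].firstChild.data
    context = {n.tagName: (n.firstChild.data if n.firstChild else "")
               for n in doc.getElementsByTagName("ContextData")[0].childNodes
               if n.nodeType == n.ELEMENT_NODE}
    instance_url = create_instance(observer, context)  # hypothetical BPM call
    return ("<CreateInstanceRs><InstanceKey>%s</InstanceKey></CreateInstanceRs>"
            % instance_url)

request = ("<Envelope><Body><CreateInstanceRq>"
           "<ObserverKey>http://sender.com/observer.jsp</ObserverKey>"
           "<ContextData><Amount>317.45</Amount></ContextData>"
           "</CreateInstanceRq></Body></Envelope>")
response = handle_create_instance(
    request, lambda obs, ctx: "http://rec.com/instance.jsp?id=456")
assert "http://rec.com/instance.jsp?id=456" in response
```

A real handler would also wrap the returned body in a full SOAP envelope and echo the header keys, as shown in the sample response above.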
SENDING: COMPLETED (TO OBSERVER)
Upon receiving an ASAP request to start an asynchronous service (process instance), you make a commitment to send a Completed message when that process instance completes. Different BPM systems will offer different degrees of flexibility in making this come about. Your system might offer a way for an external program to register a “call back” upon the termination of a process instance, in which case you can call the code to send the response. If this
is not available, you might have to include an activity at the end of the process that invokes the code to send the Completed message. If there is no way to cause the BPM system to proactively do something at the end of the process, you might have to resort to having the Wf-XML interface layer poll the process instance and, when it detects that the status has changed, send the Completed message. Ultimately, upon completion of a process, the BPM system should somehow automatically check to see if an observer URL exists, and send a Completed message to it. Once you have decided how to invoke it, the coding of the Completed message is quite straightforward. Next is an example of the Completed message, followed by line descriptions.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Request>
5        <asap:SenderKey>http://rec.com/instance.jsp?id=456</asap:SenderKey>
6        <asap:ReceiverKey>http://sender.com/observer.jsp</asap:ReceiverKey>
7      </asap:Request>
8    </SOAP-ENV:Header>
9    <SOAP-ENV:Body>
10     <asap:CompletedRq>
11       <asap:InstanceKey>http://rec.com/instance.jsp?id=456</asap:InstanceKey>
12       <asap:ResultData xmlns:pd="http://rec.com/schema.jsp?id=213">
13         <pd:Amount>$317.45</pd:Amount>
14         <pd:Approved>Yes</pd:Approved>
15       </asap:ResultData>
16     </asap:CompletedRq>
17   </SOAP-ENV:Body>
18 </SOAP-ENV:Envelope>
Line 5: The Sender Key will be the URL of the process instance.
Line 6: The Receiver Key will be the observer URL you recorded for this process instance. Note that there is no reason to specify ResponseRequired, since the default for this tag is ‘Yes’. And there is no reason to specify a request ID, since you are using http and the response will be synchronous.
Line 11: It is critical that the instance URL be specified here. The observer system may be observing many different remote subprocesses. This field is used to determine which one of them is now completed.
Lines 13-14: The result data must be read from the process variables and put into the XML as per the mapping decided upon in the topic “Context Data and Result Data” above.
Send this XML to the observer URL and receive a response like the one below.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Response>
5        <asap:SenderKey>http://rec.com/factory.jsp?id=213</asap:SenderKey>
6        <asap:ReceiverKey>http://sender.com/observer.jsp</asap:ReceiverKey>
7      </asap:Response>
8    </SOAP-ENV:Header>
9    <SOAP-ENV:Body>
10     <asap:CompletedRs/>
11   </SOAP-ENV:Body>
12 </SOAP-ENV:Envelope>
The only thing that you need to do is to make sure that the CompletedRs tag exists, as shown on line 10. Merely the fact that a response was successfully
received should be sufficient to assume the message was delivered. If you receive an error, or for any other reason do not receive a successful response, you should keep retrying the send until it is successful. This retrying behavior is best used in conjunction with the headers from the WS-Reliability specification from OASIS, to avoid sending duplicate messages by mistake. By implementing the above, you are ready for the Level 1 (Server) interoperability demonstration as a service.
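The keep-retrying behavior can be sketched as a small loop. This Python sketch is illustrative only: `send` stands in for whatever function posts the SOAP message and reports whether a successful CompletedRs came back, and the backoff schedule is an arbitrary choice, not something the specification mandates.

```python
import time

def deliver_completed(send, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    # Keep retrying until the observer acknowledges the Completed
    # message. 'send' returns True on a successful CompletedRs;
    # failures and I/O errors are retried with exponential backoff.
    for attempt in range(max_attempts):
        try:
            if send():
                return True
        except IOError:
            pass  # network error: treat like an unsuccessful response
        sleep(base_delay * (2 ** attempt))
    return False

flaky = iter([False, False, True])          # fails twice, then succeeds
assert deliver_completed(lambda: next(flaky), sleep=lambda s: None)
```

In production, the retry loop should be persistent (survive a server restart) and paired with WS-Reliability style message IDs so the observer can discard duplicates.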
LEVEL 1 (CLIENT) — REMOTE SUBPROCESS
Level 1 (Client) implements the client side of the CreateInstance to Completed cycle—providing the capability to invoke a remote asynchronous service and wait for it to complete. To do this, you need a process engine of some sort that is able to send a SOAP message and then wait for a SOAP request that can cause the process to continue. It is assumed that this will be done as part of a process and that the details are stored in a process instance. The process instance, then, plays the observer role in the protocol. In order to keep the concepts straight, we will call the observer process instance (the waiting process) the “parent” process instance. The new process instance (the one invoked in the other service) will be called the “child” process instance, or the sub-process instance. When configuring a process to be able to create a subprocess, the process designer must provide two things: (1) the URL of the factory of the subprocess, and (2) a way to map data from the parent into the child and vice versa.
DATA MAPPING
The parent process instance holds some data in process variables, and it needs to give some data to the remote factory in order to create the subprocess. We cannot assume that the schema of these is the same. There will need to be a translation mechanism. Values from the parent process instance should be collected and transformed to produce values for the subprocess. How this is done is completely up to you. One simple approach is to have the creator of the parent-to-child link provide an XSLT transform script that produces the “context data” part of the message, which can then be sent in the CreateProcess. If you are expecting data to come back from the subprocess, then another transform will need to be provided. There are graphical mapping tools which, given two schemas, can generate mappings both ways.
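XSLT is one way to express the translation; as a language-neutral sketch of the same idea, the following Python fragment uses a simple mapping table in place of an XSLT script. All the names here (the parent variables, the child-side tags, and the helper) are invented for the illustration.

```python
def map_context_data(parent_vars, mapping):
    # Produce the child's context data values from the parent's process
    # variables. 'mapping' plays the role of the XSLT script: it pairs
    # each child-side tag with a rule over the parent variables.
    return {child_tag: rule(parent_vars) for child_tag, rule in mapping.items()}

# Hypothetical parent variables and a parent-to-child mapping:
parent = {"employee": "Jenny B Jones", "expense_total": 317.45}
to_child = {
    "EmployeeName": lambda v: v["employee"],
    "Amount":       lambda v: "$%.2f" % v["expense_total"],
}
context = map_context_data(parent, to_child)
assert context == {"EmployeeName": "Jenny B Jones", "Amount": "$317.45"}
```

A second table in the opposite direction would handle the result data coming back from the subprocess.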
OBSERVER ADDRESS
As far as the service side of the protocol is concerned, you can create the observer URL in any way you wish. What you need to keep in mind when defining this URL is that you are going to receive the Completed SOAP request at this address. It will be highly convenient to include the process instance ID of the parent process that is waiting for this event. Then, when the request comes, the handler can easily gain access to the correct parent process instance and make use of information stored there to correctly handle the request. In a very real sense, the parent process instance is really the observer of the invoked process, so the observer URL should be the process instance URL.
In some cases, you can have the process instance observing multiple subprocesses simultaneously. It might then be convenient to code more information into the observer URL. For instance, if there is a particular node in the process that represents the remote sub-process request, you could include the ID of that node in the observer URL. With ASAP and Wf-XML you have complete freedom to include any information in the observer URL, whatever you need in order to process the arriving requests.
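Packing and unpacking the observer URL follows the same pattern as the factory URL. In this stdlib Python sketch, the base address, the `parent` and `node` parameter names, and the helper names are all assumptions for the example.

```python
from urllib.parse import urlencode, urlparse, parse_qs

OBSERVER_BASE = "http://sender.com/observer.jsp"  # illustrative handler address

def observer_url(parent_instance_id, node_id=None):
    # Pack everything the handler will need into the observer URL:
    # the waiting (parent) process instance, and optionally the node
    # in that process which represents the remote sub-process.
    params = {"parent": parent_instance_id}
    if node_id is not None:
        params["node"] = node_id
    return OBSERVER_BASE + "?" + urlencode(params)

def parse_observer_url(url):
    # On receiving Completed, recover the parent instance (and node).
    q = parse_qs(urlparse(url).query)
    return q["parent"][0], q.get("node", [None])[0]

url = observer_url("44", node_id="7")
assert parse_observer_url(url) == ("44", "7")
```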
SENDING: CREATEPROCESS (TO FACTORY)
This is, of course, the same command described at the top of the article, but now it is described from the client perspective, and includes considerations for constructing this message and receiving the response. Again, a sample message follows.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Request>
5        <asap:SenderKey>http://sender.com/observer.jsp?parent=44</asap:SenderKey>
6        <asap:ReceiverKey>http://rec.com/factory.jsp?id=213</asap:ReceiverKey>
7      </asap:Request>
8    </SOAP-ENV:Header>
9    <SOAP-ENV:Body>
10     <asap:CreateInstanceRq>
11       <asap:ObserverKey>http://sender.com/observer.jsp?parent=44</asap:ObserverKey>
12       <asap:Name>Expense Approval for Jones</asap:Name>
13       <asap:Subject>Expense Approval for Jones</asap:Subject>
14       <asap:Description>This is a …</asap:Description>
15       <asap:ContextData xmlns:pd="http://rec.com/schema.jsp?id=213">
16         <pd:EmployeeName>Jenny B Jones</pd:EmployeeName>
17         <pd:Amount>$317.45</pd:Amount>
18         <pd:City>San Jose</pd:City>
19         <pd:PostalCode>95134</pd:PostalCode>
20       </asap:ContextData>
21     </asap:CreateInstanceRq>
22   </SOAP-ENV:Body>
23 </SOAP-ENV:Envelope>
Line 5: This is the URL of your handler.
Line 6: This is the URL of the factory specified by the process designer. There is no need to specify a request ID. Omit ResponseRequired, which is ‘Yes’ by default, and omit StartImmediately, which is also ‘Yes’ by default.
Line 12: Part of the data mapping needs to include a specification of the name of the subprocess.
Line 13: Part of the data mapping needs to include a specification of the subject of the subprocess.
Line 14: Part of the data mapping needs to include a specification of the description of the subprocess.
Lines 15-20: The data mapping needs to be able to generate the context data part of the message from the parent process variables.
You should anticipate the following response.
1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Response>
5        <asap:SenderKey>http://sender.com/observer.jsp?parent=44</asap:SenderKey>
6        <asap:ReceiverKey>http://rec.com/factory.jsp?id=213</asap:ReceiverKey>
7      </asap:Response>
8    </SOAP-ENV:Header>
9    <SOAP-ENV:Body>
10     <asap:CreateInstanceRs>
11       <asap:InstanceKey>http://rec.com/instance.jsp?id=456</asap:InstanceKey>
12     </asap:CreateInstanceRs>
13   </SOAP-ENV:Body>
14 </SOAP-ENV:Envelope>
The only item of data that is important is line 11, the InstanceKey. This needs to be saved in the parent process instance.
RECEIVING: COMPLETED (ON OBSERVER)
You need to be able to receive this at any time. The handler will load the process instance, transform the result data, set the appropriate process variables, and deliver the proper event to satisfy the wait condition so that the process instance will continue processing.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Request>
5        <asap:SenderKey>http://rec.com/instance.jsp?id=456</asap:SenderKey>
6        <asap:ReceiverKey>http://sender.com/observer.jsp?parent=44</asap:ReceiverKey>
7      </asap:Request>
8    </SOAP-ENV:Header>
9    <SOAP-ENV:Body>
10     <asap:CompletedRq>
11       <asap:InstanceKey>http://rec.com/instance.jsp?id=456</asap:InstanceKey>
12       <asap:ResultData xmlns:pd="http://rec.com/schema.jsp?id=213">
13         <pd:Amount>$317.45</pd:Amount>
14         <pd:Approved>Yes</pd:Approved>
15       </asap:ResultData>
16     </asap:CompletedRq>
17   </SOAP-ENV:Body>
18 </SOAP-ENV:Envelope>
Line 11: Check that the URL is from the anticipated process instance. Match it to the process instance URL given at the time of creation. Completed events from earlier invocations are possible; they need to be ignored.
Lines 12-15: This is the result data that must be transformed and stored in variables in the parent process instance.
Send a message like this to confirm receipt:

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap">
3    <SOAP-ENV:Header>
4      <asap:Response>
5        <asap:SenderKey>http://rec.com/factory.jsp?id=213</asap:SenderKey>
6        <asap:ReceiverKey>http://sender.com/observer.jsp?parent=44</asap:ReceiverKey>
7      </asap:Response>
8    </SOAP-ENV:Header>
9    <SOAP-ENV:Body>
10     <asap:CompletedRs/>
11   </SOAP-ENV:Body>
12 </SOAP-ENV:Envelope>
With both the server and client sides of Level 1 implemented, you can demonstrate a full round trip: a process in one engine reaches a node that starts a second process in another engine; the second process reaches its end and sends a completion event; and the first process receives it and continues. Because the exchange is based on an open XML specification, you can choose from a variety of vendor products on either side of the interchange.
LEVEL 2 (SERVER) – INTROSPECTION
Once remote sub-process invocation is functional, the next important step for improving interoperability is to make it easier to set up the process that invokes the sub-process. The assumption is that the process is designed by a “process designer” who is not a programmer. The average process designer is not expected to be able to write an XSLT script to transform the data from one process schema to the other. Many process design tools offer GUI-based features for transforming data. A tool that offers these capabilities needs to be able to retrieve the schema of the remote service. This section concentrates on adding capabilities that support the design tool.
THE CONTAINER RESOURCE URL
The container resource is needed to represent the fact that a single BPM system can contain many different process definitions. The container then represents the BPM system as a whole. It will have a fixed URL address that can be used to ask questions of the system as a whole. For example: what are the process definitions (factories) that are already present in the system? This is also the resource that you use for adding new process definitions to the collection. There is not much thought that needs to be given to the URL of the container. Every BPM system installation will only need a single fixed URL. An example is given below:

1  Container: http://rec.com/container.jsp
The idea is to give the URL for the container to the process design tool. Using that URL the design tool can list all the factories for selection. Once a given factory is selected, a request can be made to retrieve the factory properties (including the schema of the context data structure and the result data structure) so that the process designer can pick the data mapping from a data transform tool. The design tool can also request a process definition in a standard format to let the process designer see what the process looks like.
RECEIVING: LISTFACTORIES (ON CONTAINER)
Here is an example of a request message you might receive:

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap" xmlns:wfxml="http://www.wfmc.org/wfxml">
3    <SOAP-ENV:Header>
4      <asap:Request>
5        <asap:SenderKey>http://sender.com/observer.jsp</asap:SenderKey>
6        <asap:ReceiverKey>http://rec.com/container.jsp</asap:ReceiverKey>
7        <asap:RequestID>abc123</asap:RequestID>
8        <asap:ResponseRequired>Yes</asap:ResponseRequired>
9      </asap:Request>
10   </SOAP-ENV:Header>
11   <SOAP-ENV:Body>
12     <wfxml:ListFactoriesRq/>
13   </SOAP-ENV:Body>
14 </SOAP-ENV:Envelope>
Line 12: The first child of the Body tag always indicates the operation being requested.
Generate a response similar to the one shown following.

1  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
2      xmlns:asap="http://www.oasis-open.org/asap" xmlns:wfxml="http://www.wfmc.org/wfxml">
3    <SOAP-ENV:Header>
4      <asap:Response>
5        <asap:SenderKey>http://sender.com/observer.jsp</asap:SenderKey>
6        <asap:ReceiverKey>http://rec.com/container.jsp</asap:ReceiverKey>
7        <asap:RequestID>abc123</asap:RequestID>
8      </asap:Response>
9    </SOAP-ENV:Header>
10   <SOAP-ENV:Body>
11     <wfxml:ListFactoriesRs>
12       <wfxml:FactoryKey>http://rec.com/factory.jsp?id=10</wfxml:FactoryKey>
13       <wfxml:FactoryKey>http://rec.com/factory.jsp?id=21</wfxml:FactoryKey>
14       <wfxml:FactoryKey>http://rec.com/factory.jsp?id=22</wfxml:FactoryKey>
15       <wfxml:FactoryKey>http://rec.com/factory.jsp?id=108</wfxml:FactoryKey>
16       <wfxml:FactoryKey>http://rec.com/factory.jsp?id=184</wfxml:FactoryKey>
17       <wfxml:FactoryKey>http://rec.com/factory.jsp?id=213</wfxml:FactoryKey>
18     </wfxml:ListFactoriesRs>
19   </SOAP-ENV:Body>
20 </SOAP-ENV:Envelope>
Given that your BPM system has a way to list the installed process definitions, you need only iterate through the process definitions, and generate a line for each one that includes the URL to the Factory. Please check the specification because at the time of this writing the format of the factory tag is not well defined.
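The iteration can be sketched as follows. This is a hypothetical helper, not spec text; the ListFactoriesRs and Factory tag names are placeholders, precisely because the factory tag format was not final at the time of writing.

```python
# Minimal sketch of generating the ListFactories response body by
# iterating the BPM system's installed process definitions. Tag names
# are assumptions to be checked against the specification.
import xml.etree.ElementTree as ET

def build_list_factories_rs(definition_ids, base_url="http://rec.com/factory.jsp"):
    """Emit one Factory entry per installed process definition."""
    rs = ET.Element("ListFactoriesRs")
    for def_id in definition_ids:
        factory = ET.SubElement(rs, "Factory")
        factory.text = f"{base_url}?id={def_id}"
    return ET.tostring(rs, encoding="unicode")

print(build_list_factories_rs([10, 21, 22]))
```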
RECEIVING: GETPROPERTIES (ON FACTORY) The GetProperties command is structured the same for all resources, but the properties of the Factory are different from the properties of the instance. A detailed description follows.
[Request listing: sender http://sender.com/observer.jsp, receiver http://rec.com/factory.jsp?id=213, request id abc123, ResponseRequired Yes; the body carries the GetPropertiesRq operation]
Lines 3-10: Handle the header in the normal fashion.
Line 13: The only significant thing in the message is the presence of the GetPropertiesRq tag.
A response should be constructed along the lines of the following example.
[Response listing: header echoing the request; the properties include the factory key http://rec.com/factory.jsp?id=213, name Expense Approval, subject Expense Approval, a description, the context data schema, the result data schema, and the expiration P90D]
Line 3-9: Handle the header as before, echoing these values from the request.
Line 12: Return the URL of the factory here, as a second check.
Line 13: The name of the process definition.
Line 14: The subject of the process definition, if there is one.
Line 15: The description of the process definition, if there is one.
Line 16-25: Context data schema. You have determined how to map the incoming requirements for a process into XML; this is the XML Schema of that XML. In this very simple example, each incoming variable is marked as a string type. If you have strong typing and can make a better match than string, then use it. The more meta-information that is given about the structure requirements, the more the design tool can do to assure that your system gets what it expects. By marking each element with minOccurs="0" you make each element optional: if present it will be used, if not, it will be ignored.
Line 26-33: Result data schema. Like the context data schema, but it describes the data that will be returned in the Completed message.
Line 34: Expiration. How long do you guarantee that process instance information will be available after the Completed message is delivered? Some systems keep this information forever, so you can use the 90-day setting provided in this example. If it is unlikely that details of the process instance will still be available 90 days after completion, read the spec and XML Schema to determine the correct setting here. The client may need additional information from the process instance after receiving the Completed message. The factory specifies how long that data is guaranteed to remain available (at a minimum); after this time, there is no guarantee that the service instance address will be good.
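Generating the advertised context-data schema can be sketched like this. The helper is hypothetical; it simply emits each process variable as an element with minOccurs="0", defaulting to xsd:string unless the engine knows a stronger type.

```python
# Sketch: deriving the context-data XML Schema advertised by the factory.
# Each process variable becomes an optional (minOccurs="0") element; pass
# a stronger XSD type than "string" where the engine knows one.
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"

def context_schema(variables):
    """variables is a list of (name, xsd_type) pairs for the process."""
    ET.register_namespace("xsd", XSD)
    schema = ET.Element(f"{{{XSD}}}schema")
    ctype = ET.SubElement(schema, f"{{{XSD}}}complexType", name="ContextData")
    seq = ET.SubElement(ctype, f"{{{XSD}}}sequence")
    for name, xsd_type in variables:
        ET.SubElement(seq, f"{{{XSD}}}element",
                      name=name, type=f"xsd:{xsd_type}", minOccurs="0")
    return ET.tostring(schema, encoding="unicode")

print(context_schema([("EmployeeName", "string"), ("Amount", "decimal")]))
```

The result-data schema would be produced the same way from the variables returned in the Completed message.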
RECEIVING: GETDEFINITION (ON FACTORY) Wf-XML adds a GetDefinition operation to the factory resource that returns the process definition in the specified format.
[Request listing: sender http://sender.com/observer.jsp, receiver http://rec.com/factory.jsp?id=213, request id abc123, ResponseRequired Yes; the body carries the GetDefinitionRq operation with Format XPDL]
The header is handled as in the previous samples. The child of the Body tag is the operation name, and the Format tag holds the format. XPDL and BPEL are the formats known at this point in time; more will surely be developed in the future. The response is shown next.
[Response listing: header echoing sender, receiver and request id abc123; the body returns the process definition in the requested format]
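The server-side dispatch for GetDefinition reduces to a lookup keyed by definition id and requested format. The sketch below is an illustration of that logic, not spec text; the fault is modeled as a raised exception, which the SOAP layer would translate into a fault message.

```python
# Sketch of the server side of GetDefinition: look up the stored process
# definition and return it in the requested format, or fault when the
# format is unsupported. XPDL and BPEL are the formats known today.
def get_definition(definitions: dict, def_id: str, fmt: str) -> str:
    """definitions maps a definition id to {format-name: serialized text}."""
    available = definitions.get(def_id)
    if available is None:
        raise KeyError(f"unknown process definition {def_id!r}")
    if fmt not in available:
        raise ValueError(f"format {fmt!r} not available; have {sorted(available)}")
    return available[fmt]

definitions = {"213": {"XPDL": '<Package Id="213"/>'}}
print(get_definition(definitions, "213", "XPDL"))
```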
After implementing Level 3, it is possible for a design tool to introspect the process definition and to help link a parent process to a sub-process.
FURTHER EXTENSION – PROPERTIES AND NOTIFICATIONS (SERVER) Level 4 adds support for getting and setting properties, and for notifications. This enables changes in data to be propagated from parent process to sub-process while both processes are running.
STATUS MAPPING You need to decide how to expose the status of process instances as defined by your BPM system in terms of the status values defined by the spec. Here is the list of the values defined:
open.notrunning.suspended
open.running
closed.completed
closed.abnormalCompleted
closed.abnormalCompleted.terminated
closed.abnormalCompleted.aborted
It is not necessary for your system to support all of these states. Instead, consider all of the states that your BPM system naturally supports, and figure out which of these values best expresses each of them. Most of the time the process instance is in the ‘open.running’ state, and if your system does not have suspend, then this is the only state that you need to report. If there is a suspended state that holds up operations, which can later be resumed, then indicate it with the ‘open.notrunning.suspended’ value. Commands for ‘suspend’ and ‘resume’ are not defined by the standard; instead you use the ‘ChangeState’ operation. A ChangeState to ‘open.notrunning.suspended’ is the same as a ‘suspend’ operation, while a ChangeState to ‘open.running’ is equivalent to a ‘resume’ operation. The states starting with ‘closed’ are terminal states: there can be no transitions back to an open state. Clearly, if the process ends normally, it will be in the ‘closed.completed’ state. Other terminal states should be mapped as well as possible. Read the specification for more help on this.
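As a concrete illustration, the mapping can be captured in a small table. The internal state names on the left are hypothetical; substitute whatever your engine actually uses.

```python
# One plausible mapping from a hypothetical engine's internal states to
# the ASAP state values listed above; adapt it to your own system.
INTERNAL_TO_ASAP = {
    "RUNNING":   "open.running",
    "SUSPENDED": "open.notrunning.suspended",
    "FINISHED":  "closed.completed",
    "KILLED":    "closed.abnormalCompleted.terminated",
    "ERROR":     "closed.abnormalCompleted",
}

def asap_state(internal_state: str) -> str:
    """Translate an engine state into the ASAP state value to report."""
    try:
        return INTERNAL_TO_ASAP[internal_state]
    except KeyError:
        raise ValueError(f"no ASAP mapping for internal state {internal_state!r}")

print(asap_state("SUSPENDED"))
```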
RECEIVING: GETPROPERTIES GetProperties is used to retrieve properties as well as the current status of the process instance.
[Request listing: sender http://sender.com/observer.jsp, receiver http://rec.com/instance.jsp?id=456, request id abc123, ResponseRequired Yes; the body carries the GetPropertiesRq operation]
Lines 3-10: Handle the header as in previous examples.
Line 12: The presence of the GetPropertiesRq tag indicates the request type.
[Response listing: header echoing the request; the properties include the instance key http://rec.com/instance.jsp?id=456, state open.running, name Expense Approval for Jones, subject Expense Approval for Jones, a description, the factory http://rec.com/factory.jsp?id=213, the observer http://sender.com/observer.jsp, and result data including the amount $317.45]
Lines 3-9: Handle the header as before. If a RequestId is sent, then return it.
Line 12: Fill in the instance key.
Line 13: Map the internal process state into the state value as per the section above.
Line 14: If the process instance has a name, fill it in here; otherwise omit the tag.
Line 15: If the process instance has a subject, put it here; otherwise omit.
Line 16: If the process instance has a description, put it here; otherwise omit.
Line 17: Fill in the URL of the factory (process definition) that this instance belongs to.
Line 18-20: When subscribe and unsubscribe are supported, there could be multiple observers; otherwise there will only be a single observer. Initially, it is not necessary to support subscribe and unsubscribe.
Line 22-23: Result data encoded as described in the Context and Result Data section.
RECEIVING: SETPROPERTIES Used by clients to change context data. The structure is roughly similar to that of the CreateInstance command, and should be treated in a similar manner.
[Request listing: sender http://sender.com/observer.jsp, receiver http://rec.com/instance.jsp?id=456, request id abc123, ResponseRequired Yes; the body carries the SetPropertiesRq operation with subject Expense Approval for Jones, a description, priority 4, and context data including Jenny B Jones, $317.45, San Jose and 95134]
Line 3-10: Handle the header as in previous examples.
Line 13: If present, a new value for the subject.
Line 14: If present, a new value for the description.
Line 15: If present, a new value for the priority.
Line 16-21: New values for the context data; if changes are allowed, copy these into the process instance variables in the same manner as the CreateInstance operation.
Send back a response that is exactly the same as the response to GetProperties (so it is not duplicated here).
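The copy-the-supplied-values behavior can be sketched as follows, with the instance modeled as a plain dictionary for illustration; absent fields are left unchanged, as the line-by-line notes above describe.

```python
# SetProperties sketch: copy only the supplied values onto the instance,
# in the same manner as CreateInstance; fields absent from the request
# are left unchanged. The dict-based instance model is illustrative.
def apply_set_properties(instance: dict, updates: dict) -> dict:
    for field in ("subject", "description", "priority"):
        if field in updates:
            instance[field] = updates[field]
    # Merge any supplied context variables over the existing ones.
    instance.setdefault("context", {}).update(updates.get("context", {}))
    return instance
```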
RECEIVING: CHANGESTATUS ChangeState can be used to transition a process instance into another state. There are no commands for the transitions themselves, such as ‘suspend’ and ‘resume’; instead you change the state to the destination state that you want. There is no guarantee that you can transition from any state into any other state, so if asked to make a transition that is not allowed, you need only return a fault message indicating the failure. Here is an example request message.
[Request listing: sender http://sender.com/observer.jsp, receiver http://rec.com/instance.jsp?id=456, request id abc123, ResponseRequired Yes; the body requests a change to the state open.notrunning.suspended]
Lines 3-10: Handle the header as in previous examples.
Line 13: This tag is the only value you need to read. Determine what internal state the specified state maps to. If there is no corresponding internal state, return a fault message. If it is not possible to transition to that state from the current state, return a fault message. Otherwise, execute the command to transition into the desired state.
Below is a response message.
[Response listing: header echoing sender, receiver and request id abc123; the body reports the resulting state open.running]
Lines 3-9: Handle the header in the normal manner.
Line 12: Specifies the actual state that was transitioned to. Note that this may not be the same as was requested, because it may be a more detailed (specific) state.
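The transition check can be sketched as a small table of allowed moves. The table below is an illustrative assumption (your engine defines the real transitions); what matters is that closed states are terminal and disallowed transitions produce a fault.

```python
# Sketch of the ChangeState check. The transition table is hypothetical;
# the state names come from the spec. A raised AsapFault stands in for
# the SOAP fault message the server would return.
ALLOWED = {
    "open.running": {"open.notrunning.suspended", "closed.completed",
                     "closed.abnormalCompleted.terminated"},
    "open.notrunning.suspended": {"open.running",
                                  "closed.abnormalCompleted.aborted"},
}

class AsapFault(Exception):
    pass

def change_state(current: str, requested: str) -> str:
    if current.startswith("closed"):
        raise AsapFault(f"{current} is terminal; no transitions allowed")
    if requested not in ALLOWED.get(current, set()):
        raise AsapFault(f"cannot move from {current} to {requested}")
    return requested  # report the state actually reached

print(change_state("open.running", "open.notrunning.suspended"))
```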
SENDING: STATECHANGED This is a SOAP request from the instance resource back to the observer resource, notifying it that the state has changed. If your BPM system has the ability to proactively notify the interface layer on state changes, then consider sending this message to the observer.
[Notification listing: sender http://rec.com/instance.jsp?id=456, receiver http://sender.com/observer.jsp; the body reports the current state open.running and the previous state open.notRunning.suspended]
Line 11: The current state. Line 12: The state that it changed from, if available. You may receive a response from this, but it can be safely ignored. There need be no guarantee of the delivery of this message.
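A best-effort sender might look like the sketch below. The StateChangedRq, Key, NewState and OldState element names and the plain HTTP POST transport are assumptions for illustration; the essential point is that failures are swallowed, since delivery is not guaranteed and any response can be ignored.

```python
# Best-effort sketch of sending StateChanged to the observer. Element
# names and the raw HTTP POST are illustrative assumptions; a real
# implementation would wrap the body in a proper SOAP envelope.
import urllib.request
import xml.etree.ElementTree as ET

def state_changed_body(instance_key, new_state, old_state=None) -> bytes:
    """Build the notification body carrying the new and old states."""
    rq = ET.Element("StateChangedRq")
    ET.SubElement(rq, "Key").text = instance_key
    ET.SubElement(rq, "NewState").text = new_state
    if old_state is not None:
        ET.SubElement(rq, "OldState").text = old_state
    return ET.tostring(rq)

def notify_observer(observer_url, instance_key, new_state, old_state=None):
    payload = state_changed_body(instance_key, new_state, old_state)
    try:
        req = urllib.request.Request(
            observer_url, data=payload, headers={"Content-Type": "text/xml"})
        urllib.request.urlopen(req, timeout=5).close()
    except OSError:
        pass  # no delivery guarantee; failures are intentionally ignored
```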
INTEROPERABILITY DEMONSTRATIONS, 2004-2005 An interoperability demonstration was planned for June 2004, and the first version of this article was written to help support it. A demo client and server needed to be developed. Jeff Cohen, a software developer with immense talent, volunteered his time for this. He was personally interested in using Microsoft’s .Net framework as the means of generating and responding to SOAP messages. By taking this approach, he demonstrated that the ASAP protocol can be implemented using the SOAP tools and libraries that are readily available today. Jeff’s implementation was chosen as the reference implementation against which to test all the others, because he does not work for a BPM vendor, and his approach is a pure
implementation of a simple client, without any preconceived notions that might be introduced by implementations on top of an existing BPM engine. He started by using the .Net capability to generate C# code from our existing WSDL definition of the services. This C# source is available for download from the OASIS ASAP TC website. In early April 2004 an announcement went out inviting participants to implement the protocol and to demonstrate the ability to interoperate with other implementations. John Fuller had already been developing his EasyASAP open source project, and had also been a major driving force in the closure of many of the open technical issues with the spec. Fujitsu, HandySoft, and TIBCO accepted the invitation, bringing the number of participants to five: three BPM vendors and two open source initiatives. Building the five implementations exposed some gaps in the specification: grey areas that had been interpreted differently by different people. Getting the systems to talk required working out the details and filling in the gaps in the specification. Mayilraj Krishnan of Cisco Systems helped tremendously by editing and maintaining the specification document through this period. In the days leading up to June 23 the five implementations were tried in almost every combination to make sure all possibilities were covered. Participants had to host a machine on the Internet with their implementation running on it, at their locations in Virginia, California, Nebraska, and South Africa. The exact addresses were specified for the exact configurations to be run. Fujitsu hosted one reference client in California, while TIBCO hosted a redundant backup client in England. HandySoft was able to demonstrate basic interoperability within three weeks of declaring their intent to participate in the demo. The only thing left was the actual demonstration.
On June 23, from 8:30 to 9:15, Brainstorm Group kindly hosted a plenary session at their BPM conference. Internet connections were brought to the podium for the demonstration, as well as to display slides to remote attendees. A bridge was set up between the podium microphone and the telephone system for phone conference attendees. The only surprise was that the last-minute volume of attendees exceeded some of the preset limits, so some people were turned away from the live demonstration. A repeat demonstration was hosted a week later to allow more people to attend. Following in the footsteps of this demonstration, a second demonstration was planned at the Fall WfMC meeting, held in Pisa, Italy, where we hosted a local BPM Workshop open to the first 100 local attendees, as well as another 600 online. For this demonstration, a 6th implementation was added: Advantys implemented both the client and server parts of the protocol, and was able to fully interoperate with the previous 5. Then, in February 2005, a third live interoperability demonstration was held, and a 7th implementation was added to the previous 6: the Enhydra open source project, supported by Together Teamlösungen GmbH, with their Shark workflow engine and the JaWE process editor. There, for the first time, we demonstrated Level 2 interoperability between the design tool and three different workflow engines. The success of the protocol is undeniable at this point.
CONCLUSION The purpose of this article is to help those with a BPM system start implementing a Wf-XML interface. Hopefully, this article has given you an understanding of the concepts, and a strategy for quickly implementing the necessary interactions. Level 1 is sufficient for an interoperability demonstration, which should always be done first. After the demonstration, an evaluation of the approach should be performed. Armed with the knowledge of how the protocol fits with your system, the implementation of the rest of the protocol should be relatively straightforward. It is worth reflecting on the value of implementing the standard. A standard asynchronous service linking tool can browse to your service; it can pick up the details of what variables are expected to be sent in upon starting, and also find out what values will be returned at the end; using this, it can allow the user to map into and out of these structures; then the protocol allows the service to be started. At any point in time, this external program can ask for the status of your service. When anything changes in your process, you can send a message keeping that program up to date. Finally, when your service is completed, you can send the final notification. Without a standard, every asynchronous service would do these things differently. Linking up would not just be a matter of mapping the data values, but of mapping the eight or nine key operations. Without the standard, the semantics of those operations might not be exactly the same. Small interface routines would have to be written to translate the meanings correctly. This is the approach that some vendors are promoting: they simply stop at the point where they have a complete description of their interface, on the assumption that a programmer will be around to write the interface logic. ASAP and Wf-XML are designed to be used by nonprogrammers. This is the key.
Please look to the ASAP and WfMC sites for testing tools and support. ASAP can be found on the OASIS website at: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=asap Wf-XML can be found at WfMC, and at the following discussion forum: http://www.wfmc.org/ http://www.workflow-research.de/Forums/ index.php?s=1cbbcce3cc5da8944dd340191c817831&act=SF&f=7 or from http://www.workflow-research.de/Forums/ select Workflow Research Forums-> Workflow Management Coalition-> WfMC Technical Committee
ACKNOWLEDGEMENTS Many people have helped in the formation of the specifications, and also in this document. I want to thank John Fuller for attending all the meetings as the secretary of the ASAP technical committee. Also Sameer Predhan for his helpful comments on an early release. Finally, all the members of WfMC Workgroup 4 for their unending support for completion of this project.
Section 3
Appendices
WfMC Structure and Membership Information WHAT IS THE WORKFLOW MANAGEMENT COALITION? The Workflow Management Coalition, founded in August 1993, is a nonprofit, international organization of workflow vendors, users, analysts and university/research groups. The Coalition’s mission is to promote and develop the use of workflow through the establishment of standards for software terminology, interoperability and connectivity between workflow products. Comprising more than 250 members spread throughout the world, the Coalition is the primary standards body for this software market.
WORKFLOW STANDARDS FRAMEWORK The Coalition has developed a framework for the establishment of workflow standards. This framework includes five categories of interoperability and communication standards that will allow multiple workflow products to coexist and interoperate within a user’s environment. Technical details are included in the white paper entitled, “The Work of the Coalition,” available at www.wfmc.org.
ACHIEVEMENTS The initial work of the Coalition focused on publishing the Reference Model and Glossary, defining a common architecture and terminology for the industry. A major milestone was achieved with the publication of the first versions of the Workflow API (WAPI) specification, covering the Workflow Client Application Interface, and the Workflow Interoperability specification. The Audit Data specification was added in 1997, followed by the Process Definition Import/Export specification. A further version of WAPI covers Application Invocation APIs, completing the Coalition’s initial deliverables across the five interface functions. Further work includes the completion of a common object model with object bindings for IDL and OLE, interoperability extensions for security, and additional interoperability models. The Coalition has validated the use of its specifications through international demonstrations and prototype implementations. In direct response to growing user demand, live demonstrations of a workflow interoperability scenario have shown how business can successfully exchange and process work across multiple workflow products using the Coalition’s specifications.
WORKFLOW MANAGEMENT COALITION STRUCTURE The Coalition is divided into three major committees, the Technical Committee, the External Relations Committee, and the Steering Committee. Small working groups exist within each committee for the purpose of defining workflow terminology, interoperability and connectivity standards, conformance requirements, and for assisting in the communication of this information to the workflow user community. The Coalition’s major committees meet three times per calendar year for three days at a time, with meetings usually alternating between a North
MEMBERSHIP STRUCTURE AND DETAILS
American and a European location. The working group meetings are held during these three days, and as necessary throughout the year. Coalition membership is open to all interested parties involved in the creation, analysis or deployment of workflow software systems. Membership is governed by a Document of Understanding, which outlines meeting regulations, voting rights etc. Membership material is available at www.wfmc.org.
COALITION WORKING GROUPS The Coalition has established a number of Working Groups, each working on a particular area of specification. The working groups are loosely structured around the “Workflow Reference Model” which provides the framework for the Coalition’s standards program. The Reference Model identifies the common characteristics of workflow systems and defines five discrete functional interfaces through which a workflow management system interacts with its environment—users, computer tools and applications, other software services, etc. Working groups meet individually, and also under the umbrella of the Technical Committee, which is responsible for overall technical direction and co-ordination.
WORKFLOW REFERENCE MODEL WORKING GROUPS In order to progress the Coalition’s objectives, the following working groups have been established:
Reference Model & Glossary: Specify a framework for workflow systems, identifying their characteristics, functions and interfaces. Development of standard terminology for workflow systems.
Process Definition Tools Interface (1): Definition of a standard interface between process definition and modeling tools and the workflow engine(s).
Workflow Client Application Interface (2): Definition of APIs for client applications to request services from the workflow engine to control the progression of processes, activities and workitems.
Invoked Application Interface (3): A standard interface definition of APIs to allow the workflow engine to invoke a variety of applications, through common agent software.
Workflow Interoperability Interface (4): Definition of workflow interoperability models and the corresponding standards to support interworking.
Administration & Monitoring Tools Interface (5): Definition of monitoring and control functions. To develop the Coalition’s policy on product conformance against its specifications and agree an approach to vendor certification.
WORKFLOW REFERENCE MODEL DIAGRAM
WHY YOU SHOULD JOIN Being a member of the Workflow Management Coalition gives you the unique opportunity to participate in the creation of standards for the workflow industry as they develop. Your contributions to our community ensure continued progress in the adoption of royalty-free workflow and process standards. WfMC work this past year included:
• exciting progress on Interface 4 with the introduction of Wf-XML 2.0, with several live demonstrations of ASAP-Wf-XML interoperability
• new activity in Interface 5: the WfMC Audit Data Specification
• increased adoption of Interface 1: XML Process Definition Language (XPDL)
• publication of the Workflow Handbook 2004 in April and its CDROM Companion in October, each time with your company listed as a valuable member.
We had three face-to-face member meetings, two joint meetings with the Business Process Management Initiative (BPMI.org) and many, many teleconferences. Efforts with BPMI.org have concentrated on the simplification of the standards stack and future work required for standards convergence.
MEMBERSHIP CATEGORIES The Coalition has three major categories of membership, per the membership matrix following. All employees worldwide are welcome to attend all meetings, and will be permitted access to the Members Only area of our web site. Full Membership is appropriate for Workflow and Business Process Management (BPM) vendors, analysts and consultants. You may include up to three active members from your organization on your application and these may be replaced at any time by notifying us accordingly.
Annual fee: Full Member $3500; Associate/Academic Member $1500; Individual Member $500; Fellow (by election only) $0; Visitor $100 per day
Hold office: Full Yes; Associate/Academic Yes; Individual Yes; Fellow Yes; Visitor No
Nominate somebody for office: Full Yes; Associate/Academic Yes; Individual No; Fellow No; Visitor No
Committee membership: Full Yes; Associate/Academic Yes; Individual Yes; Fellow Yes; Visitor Observer
Voting right on standards: Full Yes; Associate/Academic Yes; Individual Active Participants only; Fellow Active Participants only; Visitor No
Voting right on WfMC.org business: Full Yes; Associate/Academic Only current officers; Individual Only current officers; Fellow Only current officers; Visitor No
Company reps in meetings without visitor fee: Full 4 (transferable); Associate/Academic 1 (transferable); Individual individual only; Fellow individual only; Visitor Fee required
FULL MEMBERSHIP This corporate category offers exclusive visibility in this sector at events and seminars across the world, enhancing your customers’ perception of you as an industry authority: on our web site, in the Coalition Handbook and CDROM, through speaking opportunities, access to the Members Only area of our web site, attendance at Coalition meetings and, most importantly, within the workgroups where, through discussion and personal involvement and using your voting power, you can contribute actively to the development of standards and interfaces. Full member benefits include:
• Financial incentives: 50 percent discount on all “brochure-ware” (such as our annual CDROM Companion to the Workflow Handbook, and advertising on our sister-site www.e-workflow.org), and a $500 credit toward next year’s fee for at least 60 percent meeting attendance per year or if you serve as an officer of the WfMC.
• Web Visibility: a paragraph on your company services/products with links to your own company website.
• User RFIs: Requests for Information are an exclusive privilege of full members. We often have queries from user organizations looking for specific workflow solutions. These valuable leads can result in real business benefits for your organization.
• Publicity: full members may choose to have their company logos, including collaterals, displayed along with WfMC material at conferences/expos we attend. You may also list corporate events and press releases (relating to WfMC issues) on the relevant pages of the website, and have a company entry in the annual Coalition Workflow Handbook.
• Speaking Opportunities: We frequently receive calls for speakers at industry events because many of our members are recognized experts in their fields. These opportunities are forwarded to Full Members for their direct response to the respective conference organizers.
ASSOCIATE AND ACADEMIC MEMBERSHIP Associate and Academic Membership is appropriate for those (such as IT user organizations) who need to keep abreast of workflow developments, but who are not workflow vendors. It allows voting on decision-making issues, including the publication of standards and interfaces, but does not provide anything near the visibility or incentives of Full Membership. You may include up to three active members from your organization on your application and these may be replaced at any time by notifying us accordingly.
INDIVIDUAL MEMBERSHIP Individual Membership is appropriate for self-employed persons or small user companies. Employees of workflow vendors, academic institutions or analyst organizations are not typically eligible for this category. Individual membership is held in one person's name only, is not a corporate membership, and is not transferable within the company. The password to the 'members only' area is given to the individual only and is not transferable. If three or more people within a company wish to participate in the WfMC, it would be cost-effective to upgrade to corporate Associate Membership whereby all employees worldwide are granted membership status.
FELLOWS The WfMC recognizes individuals from within its existing membership who have made sustained and outstanding contributions to WfMC objectives far and above that expected from normal member representation. Fellows are nominated by voting members and then elected into this category at committee meetings.
VISITORS We welcome visitors at our meetings; it is an excellent opportunity for you to observe first hand the process of creating standards and to network with
members of the Coalition. Your role will be as an observer only, and you are not eligible for a password, or for special offers available to WfMC members. You must pre-register and prepay your Visitor attendance fee. If you decide to join WfMC within 30 days of the meeting, your membership dues will be credited with your visitor fee.
HOW TO JOIN Complete the form on the Coalition’s website, or contact the Coalition Secretariat, at the address below. All members are required to sign the Coalition’s “Document of Understanding” which sets out the contractual rights and obligations between members and the Coalition.
THE SECRETARIAT Workflow Management Coalition (WfMC) email:
[email protected] URL: www.wfmc.org 2436 North Federal Highway #374, Lighthouse Point, FL 33064, United States Phone +1 954 782 3376, Fax +1 954 782 6365
Workflow Management Coalition Membership Directory The WfMC’s membership comprises a wide range of organizations. All members in good standing as of February 2005 are listed here. There are currently two main classes of paid membership: Full Members and Associate Members, which includes Academic membership. Fellows are elected by the voting members for outstanding contributions to the WfMC and pay no membership fee. They are listed separately under the Officers and Fellows Appendix. Each company has only one primary point of contact for purposes of the Membership Directory, but has the right to appoint a representative to each of the Steering, External Relations and the Technical Committees. Within this Directory, many Full Members have used their privilege to include information about their organization or products. The current list of members and membership structure can be found on our website.
ADOBE SYSTEMS INC. Full Member 2001 Butterfield Road Downers Grove, IL 60515 United States Steve Rotter Global Accounts Receivable Manager Tel: [1] 408-536-6000
[email protected]
ADVANTYS Full Member 1250 Rene Levesque West, Suite 2200 Montreal, Quebec, H3B 4W8 www.advantys.com Alain Bezancon President Tel: [1] 514 989 3700 Fax:[1] 514 989 3705
[email protected] Established in 1995, ADVANTYS is a leading ISV offering the Smart Enterprise Suite (SES). The SES provides a modular yet fully integrated set of solutions. Within a single environment, organizations of all sizes benefit from coherent access to features encompassing content management, collaborative work, workflow and development components. ADVANTYS’s practical approach to technology benefits its broad base of customers worldwide through a range of reliable, scalable, affordable and easy-to-use products. The SES full-web environment uses industry-proven technologies, allowing SMEs as well as major companies from all sectors to quickly and easily build their web information systems. As an active member of international organizations like the Workflow Management Coalition and the OASIS group, ADVANTYS demonstrates its technological leadership and its ability to quickly and practically integrate standards as soon as they become mature and beneficial to users. This leadership led Gartner to list ADVANTYS in the Smart Enterprise Suite Magic Quadrant 2004.
AIIM INTERNATIONAL Full Member 1100 Wayne Avenue, Suite 1100 Silver Spring, MD, 20910 United States www.aiim.org Betsy Fanning Director, Standards & Content Development Tel: [1] 240-494-2682 Fax:[1] 301-587-2711
[email protected] AIIM International is the global authority on Enterprise Content Management (ECM): the technologies, tools and methods used to capture, manage, store, preserve and deliver information to support business processes. AIIM promotes the understanding, adoption, and use of ECM technologies through education, networking, marketing, research, standards and advocacy programs.
AMERICAN FAMILY INSURANCE Associate Member 6000 American Parkway, Mailstop Q18T Madison, WI 53783-0001 United States www.amfam.com Mary L. Williams I/S Application Technology Manager – Workflow Tel: [1] 608-242-4100
[email protected]
ARMA INTERNATIONAL Associate Member 13725 West 109th Street, Suite 101 Lenexa, KS 66215 United States Peter R Hermann Executive Director & CEO Tel: [1] 913-217-6025 Fax: [1] 913-341-3742
[email protected]
BANCTEC / PLEXUS Full Member Jarman House, Mathisen Way, Poyle Road Colnbrook, SL3 0HF, United Kingdom www.banctec.com Marlon Driver Tel: [44] (175) 377-8875
[email protected] For over 10 years Plexus has pioneered the development of workflow automation and currently supports some of the largest workflow implementations in the world. We specialize in providing core business process automation technology from small scale up to distributed enterprise wide deployment. Our products are available across a range of Unix, Linux and Windows platforms and support for the database market leaders ensures flexible integration into any environment. Our global presence enables us to partner and provide solutions for a wide range of business cultures and requirements. Our technology partners work alongside us to stimulate the continued product evolution necessary to supply our users with the best tools to harness their information systems as part of the business process.
BEA SYSTEMS Full Member 2315 North First St. San Jose, California, 95131 United States www.bea.com Yaron Y Goland External Standards Coordination Tel: [1] 408-570-8000 Fax:[1] 408-570-8901
[email protected]
BIZMANN SYSTEM (S) PTE LTD Associate Member 73 Science Park Drive #02-05, CINTECH I Singapore Science Park I Singapore 118254 http://www.bizmann.com Ken Loke Director
Tel: [65] 6271 1911
[email protected] Bizmann System (S) Pte Ltd is a Singapore-based company with development offices in Singapore and Malaysia, developing business process management (BPM) solutions and providing business process consultation services within the Asia region. Bizmann develops and implements business improvement solutions based on leading development engines such as the award-winning BPM software BizFlow. To further increase functionality and to provide complete end-to-end deliverables, Bizmann enhances the BizFlow development engine by developing additional intelligent features and integration connectors. Bizmann System has set up a regional Process Knowledge Hub for the Asia market. Bizmann introduces best practices through the Process Knowledge Hub and emphasizes quick deployment. All business process designs and templates are developed by Bizmann as well as imported from the United States and other advanced countries to facilitate cross-knowledge transfer. Bizmann develops and implements BPM applications across all industries. Unlike conventional solutions, BPM solutions address the fundamental process challenges that all companies face. They allow companies to automate and integrate real and disparate business processes safely and securely, and to extend processes to a wide variety of users via the Web. Bizmann BPM solutions rapidly accelerate time-to-value with configure-to-fit process templates and Bizmann's best-in-class business services, designed to address the unique challenges that companies face.
BOC ITC LTD Full Member 80 Haddington Road Dublin 4, Ireland www.boc-eu.com Tobias Rausch Tel: [353] 1 6375 240 Fax: [353] 1 6375 241
[email protected] BOC is a software development house and a strategic consultant in business process and knowledge management projects. It was founded in 1995 in Vienna as a spin-off from the Department of Knowledge Engineering at the University of Vienna. BOC is the developer of the BPM toolkit ADONIS; it assists its customers in identifying their IT potential, optimising their business processes, better utilising their knowledge assets and optimally deploying their human and IT resources. Continuously anticipating market needs, BOC has developed products for Balanced Scorecard Management (ADOscore), IT Service and Architecture Management (ADOit), Supply Chain Management (ADOlog), E-learning, and Knowledge Management, and has successfully implemented a number of large re-organisation projects in Europe in the banking, insurance, telecommunications, health care and public administration sectors. The advanced architecture of ADONIS allows integration with various WFM and ERP systems and CASE tools, and offers a number of different modeling methodologies. In the insurance sector BOC is the market and technology leader in the field of business process management. Customers and partners include some of the largest financial institutions and telecommunication companies in Europe as well as software houses and consulting companies. BOC currently employs 108 employees and freelancers, with companies in Athens, Berlin, Dublin, Madrid, Vienna and Warsaw.
BPMI.ORG Association Member 1155 S. Havana Street, #11-311 Aurora, CO 80012 United States Philip Lee Tel: 303-355-0692 Fax: 303-333-4481
[email protected]
BPM KOREA SOFTWARE INDUSTRY ASSOCIATION [KOSA] Full Member Green B/D. 11F
79-2, Garakbon-Dong, Songpa-Gu, Seoul 138-711, South Korea www.sw.or.kr Kwang-Hoon Kim Tel: [82] (2) 405-4535 Fax: [82] (2) 405-4501
[email protected]
CACI PRODUCTS COMPANY Advanced Simulation Lab 1455 Frazee Road, Suite #700 San Diego, CA 92108 Mike Engiles SIMPROCESS Product Manager Tel: [1] 703-679-3874
[email protected]
CCLRC Associate Member Rutherford Appleton Laboratory Chilton Didcot Oxon OX11 0QX United Kingdom www.cclrc.ac.uk Trudy Hall Solutions Developer Tel: [44] 1235-821900
[email protected]
CONSOLIDATED CONTRACTORS INTL. COMPANY Associate Member 62B Kifissias Marroussi Athens Attiki 15125 Greece www.ccc.gr Aref Boualwan Product Manager Tel: [30] 6932415177
[email protected]
CORPORATE STREAMLINING COMPANY INC. Associate Member 146 West Beaver Creek Unit 2 Richmond Hill Ontario L4B 1C2 Canada www.corporatestreamlining.com Ron Lutka President Tel: [1] 416 243-7143 Fax:[1]416 243-6461
[email protected]
DST SYSTEMS, INC. Full Member 330 W. 9th Street, 7th Floor Kansas City, Missouri 64105 United States www.dstsystems.com www.dstawd.com
[email protected] Tracy Shelby Systems Officer Tel: [1] 816 843-8194 Fax:[1] 816 843-8190
[email protected] AWD (Automated Work Distributor) is a comprehensive business process management, imaging, workflow, and customer management solution designed to improve productivity
and reduce costs. AWD captures all communication channels, streamlines processes, provides real-time reporting, and enables world-class customer service. AWD clients include banking, brokerage, healthcare, insurance, mortgage, mutual funds, and video/broadband companies. DST has a unique perspective among software vendors: With more than 9,000 AWD users throughout our business process outsourcing (BPO) centers and affiliate companies, AWD is a critical component of our success in the software and BPO markets. DST Technologies is a wholly owned subsidiary of DST Systems, Inc.
EFCON A.S. Associate Member Jaselska 25, 602 00 Brno, Czech Republic www.efcon.cz Miroslav Vavera Sales and Marketing Director Tel: [420] 5-4142-5611 Fax: [420] 5-4142-5613
[email protected]
FILENET CORPORATION Full Member 3565 Harbor Blvd. Costa Mesa, CA, 92626, United States www.filenet.com Carl Hillier Product Manager, Business Process Management Technologies Tel: [1] 714 327 5707 Fax:[1] 714-327-3490
[email protected] FileNET Corporation (NASDAQ: FILE) provides The Substance Behind eBusiness by delivering Business Process Management software solutions. FileNET enables organizations around the globe to increase productivity, customer satisfaction and revenue by linking customers, partners and employees through efficient and flexible eBusiness processes. Headquartered in Costa Mesa, Calif., the company markets its innovative solutions in more than 90 countries through its own global sales, professional services and support organizations, as well as via its ValueNET(r) Partner network of resellers, system integrators and application developers.
FISERV LENDING SOLUTIONS Full Member 901 International Parkway, Suite 100 Lake Mary, Florida 32746 www.fiservlendingsolutions.com Chris Berg Tel: 800-748-2572, ext. 4224 Fax: 407-829-4270
[email protected] Fiserv Lending Solutions is a leading provider of workflow and process management software for the lending industry. Working as a collaborative business partner, we offer comprehensive loan management solutions that enable lenders to increase productivity, integrate key systems, and implement process automation.
FLOWRING TECHNOLOGY CO. LTD. Full Member 12F, #6, Lane 99, Puting Rd., Hsinchu 300, Taiwan http://www.flowring.com/ Chi-Tsai Yang VP and CTO Tel: [886] 3-5753331 Fax: [886] 3-5753292
[email protected]
FORNAX CO Associate Member Taltos u. 1. Budapest 1123 Hungary http://www.fornax.hu Mr. Zoltan Varszegi Business Development Consultant Tel: [36] 1-457-3000 Fax:[36] 1-212-0111
[email protected]
FUJITSU SOFTWARE CORPORATION Full Member 3055 Orchard Drive San Jose, CA, 95134-2022, United States www.i-flow.com Keith Swenson Chief Architect Tel: [1] 408-456-7963 Fax:[1] 408-456-7821
[email protected] Fujitsu Software Corporation, based in San Jose, California, is a wholly owned subsidiary of Fujitsu Limited. Fujitsu Software Corporation leverages Fujitsu's international scope and expertise to develop and deliver comprehensive technology solutions. The company's products include INTERSTAGE(tm), an e-Business infrastructure platform that includes the INTERSTAGE Application Server and i-Flow(tm); and Fujitsu COBOL. i-Flow streamlines, automates and tracks business processes to help enterprises become more productive, responsive, and profitable. Leveraging such universal standards as J2EE and XML, i-Flow delivers business process automation solutions that are easy to develop, deploy, integrate and manage. i-Flow has a flexible architecture that integrates seamlessly into existing environments, allowing you to leverage your IT infrastructure investments and to adapt easily to future technologies.
GITUS, S.R.O. Associate Member Pod Arealem 302 Praha 10, 102 00 Czech Republic www.gitus.cz Jan Kolac Project Manager Tel: [420] 26-6799201 Fax:[420] 26-6799200
[email protected]
GLOBAL 360, INC. Full Member One Tara Blvd, Suite 200 Nashua, NH 03062, USA www.global360.com Ken Mei Director, International Sales Support Tel: [1] 603-459-0924
[email protected] Global 360 is a leading provider of Business Process Management and Analysis Solutions. Global 360 gives you a 360-degree view of enterprise processes and more importantly, the ability to efficiently manage your complete process lifecycle by leveraging our core technologies - content management, process management, goal management, process modeling, forecasting, simulation, analysis, reporting and optimization solutions.
HANDYSOFT CORPORATION Full Member 1952 Gallows Road, Suite 100 Vienna, VA 22182, USA www.handysoft.com Julie Miller Product Manager Tel: [1] 703-442-5674
[email protected] HandySoft Global Corporation is the premier provider of configurable software solutions that simplify and automate business processes; capture and enforce best practices; improve productivity and quality while reducing costs; integrate information technology; and foster collaboration among employees, customers, and partners. The foundation for HandySoft's industry and departmental solutions is BizFlow, the award-winning platform for business process management, automated workflow, and collaboration. BizFlow offers complete capabilities for building and managing automated business processes, including tools for designing and monitoring the processes, presenting and accessing work, integrating existing IT systems, and administering the platform itself. As a global solutions provider, HandySoft has headquarters locations in Vienna (USA), Seoul (Korea) and London, UK, with strategic partner representation throughout the world. HandySoft has implemented business process management solutions at hundreds of sites worldwide. For more information call +1-703-442-5600, email
[email protected] or visit www.handysoft.com.
HITACHI LTD. SOFTWARE DIVISION Full Member 5030 Totsuka-Chou, Totsuka-Ku, Yokohama 244-8555, Japan Ryoichi Shibuya Senior Manager Tel: [81] 45 826 8370 Fax: [81] 45 826 7812
[email protected] Hitachi offers a wide variety of integrated products for groupware systems such as e-mail and document information systems. One of these products is Hitachi's workflow system, Groupmax. The powerful Groupmax engine effectively automates office business such as the circulation of documents. Groupmax provides the following tools and facilities: a visual status monitor that shows the route taken and the present location of each document in a business process definition; cooperative facilities between servers that support wide-area workflow systems; and support for application processes such as consultation, send back, withdrawal, activation, transfer, stop and cancellation. Groupmax is rated the most suitable workflow system for typical business processes in Japan and has provided a high level of customer satisfaction. Groupmax workflow supports WfMC Interface 4.
HYLAND SOFTWARE INC. Full Member 28500 Clemens Road Westlake, OH. 44145 United States www.onbase.com Darrell Boynton Product Marketing Manager Tel : [1] 440-788-5863
[email protected]
IBM CORPORATION Full Member Mail Point 206, Hursley Park Winchester Hampshire, SO21 2JN, United Kingdom www.software.ibm.com/ts/mqseries/workflow
Klaus Deinhart Worldwide Brand Market Management Tel: [44] (196) 281-6788 Fax:[44] (196) 281-8338
[email protected] Process Integration with IBM WebSphere: The IBM WebSphere® process integration product family allows you to model and automate business processes across disparate systems and organizations. Process integration is often at the heart of business and technology initiatives such as connecting to a B2B exchange, taking a product or service online, standardizing customer information or integrating a newly purchased application. The bottom line is that integrated processes make it easier to implement business strategy. Process automation and workflow management: Companies want to integrate and manage high-level business processes that involve multiple people and applications across functional areas. There are two parts to this: first, the ability to model processes, analyze them and identify ways to improve them (cut costs or time); second, a way to take these process models and actually automate the process (generate to-do lists for employees, route documents for approval). Key functions: model, analyze, monitor and optimize business processes. Critical features: state management, data persistence, manual intervention on events, centralized execution, intelligent assignment of tasks, event dependencies. Products: IBM WebSphere MQ Workflow provides capabilities to design, document, execute, control, improve and optimize business processes, so you can focus on your company's business goals. HOLOSOFX BPM Suite enables you to rapidly define and model business processes, as well as execute processes across people, departments, and systems in a consistent and cost-effective way. IBM CrossWorlds provides sophisticated business object management and process automation capabilities. IBM CrossWorlds' patented Common Object Model, industry templates, extensive connectivity and object management runtime environment enable faster and easier integration.
Use it to quickly automate individual steps within a process as well as streamline processes for competitive advantage.
IMAGE INTEGRATION SYSTEMS, INC. Associate Member 885 Commerce Drive, Suite B Perrysburg, OH 43551 United States www.iissys.com Bradley T. White President Tel: [1] 419-872-1930 Fax:[1] 419-872-1643
[email protected]
INTEGIC CORPORATION Associate Member 14585 Avion Parkway Chantilly, Virginia, 20151, United States www.integic.com Steve Kruba Chief Technologist Tel: [1] 703-502-1366 Fax: [1] 703-222-2840
[email protected]
IPI SCRITTURA Full Member 18 E 41st St, 18th Floor New York, NY, 10017, United States www.ipiscrittura.com Linda Watson Tel: [1] 212-213-5056 Fax:[1] 212-213-5352
[email protected]
IPI Scrittura delivers an integrated Java-language, web-based suite of Business Process Management (BPM), workflow and document management components designed to optimize and streamline the legal, trading and operations areas of financial services institutions that are burdened with high levels of complex contractual documentation.
IVYTEAM – SORECO GROUP Associate Member Alpenstrasse 9, P.O. Box Zug, 6304, Switzerland www.ivyteam.com Heinz Lienhard Founder; Consultant Tel: [41] 41 710 80 20 Fax:[41] 41 710 80 60
[email protected]
KAISHA-TEC Full Member c/o G. Long, Kaisha-Tec, Mitaka Sangyo Plaza Annex, 3-32-3 Shimo Renjaku, Mitaka-shi, Tokyo 181-0013, Japan www.kaisha.com Tel: [81] 422 47 2397 Fax:[81] 422 47 2396
[email protected] ActiveModeler/ActiveFlow is a unique combined modeling and workflow product based on the No. 1 selling process modeler in Japan. Process visualization is used to create optimized workflows and to speed well-defined development. ActiveFlow provides industrial-strength workflow on a Microsoft-centric platform, again with some unique features. The workflow satisfies even the demanding Japanese workflow market, including back-office and e-commerce integration.
LUCENT TECHNOLOGIES Associate Member 600 Mountain Ave, Room 2B-308A Murray Hill, NJ 07974-0636 United States Vatsan Rajagopalan Technical Manager/Architecture Tel: [1] 908-582-8137 Fax:[1] 908-582-5180
[email protected]
METODA S.P.A. Associate Member Via San Leonardo, 52 Salerno 84131 Italy Raffaello Leschiera Tel: [39] 0893067-111 Fax: [39] 0893067-112
[email protected]
NEC SOFT LTD. Full Member 1-18-6, Shinkiba, Koto-ku Tokyo, 136-8608, JAPAN www.nec.com Japan Country Chair Yoshihisa Sadakane Sales and Marketing Senior Manager Tel: [81]3-5569-3399 Fax: [81]3-5569-3286
[email protected]
NEUSOFT CO., LTD Associate Member Neusoft Park, Hun Nan Industrial Area, Shenyang, Liaoning 110179, P.R.C. http://www.neusoft.com Zhao Dazhe Prof. / President Assistant, Director of Scientific Tel: [86] 2-483-665401
[email protected]
OBJECT MANAGEMENT GROUP (OMG) Association Member First Needham Place, 250 First Avenue, Suite 201 Needham, MA 02494 United States Jamie Nemiah Director of Liaisons Programs www.omg.org Tel: [1] 781-444-0404 Fax:[1] 781-444-0320
[email protected]
OPEN TEXT Full Member Werner-von-Siemens-Ring 20 Technopark 2 85630 Grasbrunn, Germany http://www.opentext.com Michael Cybala Manager PDMS & BPM Segment Marketing Tel: [49] (0) 831 960450-802 Fax: [49] (0) 89 4629-33-2700
[email protected]
ORACLE CORPORATION Full Member 500 Oracle Parkway Redwood City, CA 94065 United States www.oracle.com Mark Craig Senior Product Manager, Oracle Tel: [1] 650 506 7000 Fax:[1] 650 506 7000
[email protected] Oracle Workflow is a complete business process management system that supports business-event and business-process based automation, integration, and collaboration. Its technology enables the modeling, automation, and continuous improvement of business processes, routing information of any type according to user-defined business rules. Oracle Workflow is a scalable, production workflow system tuned for the high volumes associated with enterprise applications and long-lived transactions. Oracle Workflow supports traditional application workflows, such as business document approvals, as well as systems integration workflows. Oracle Workflow is leveraged across the Oracle9i Database, Oracle9i Application Server, Oracle9i Collaboration Suite, and Oracle9i Internet Developer Suite to support business process management and integration requirements. It is a core part of the Oracle E-Business Suite 11i Technology Stack, enabling customers to standardize on one workflow product for prepackaged applications including Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Human Resources Management Systems (HRMS) as well as custom-built solutions. Oracle Warehouse Builder, utilizing among others Oracle Workflow and XPDL, is a data integration tool mainly aimed at the
enterprise data warehouse market. The tool is able to define and design the enterprise ETL process, including the schemas that hold the integrated data. Based on a versatile and scalable metadata repository, Warehouse Builder generates SQL code utilizing the Oracle9i database as the storage and transformation platform. Its proven scalability and reliability, combined with its openness, make Warehouse Builder a tool of choice for many corporations facing the challenge of integrating data.
PEGASYSTEMS INC Full Member 101 Main Street Cambridge, MA 02142 United States www.pegasystems.com Laura E. Sudnik Sr. PR Manager Tel: [1] 617-374-9600 ext. 6278
[email protected]
PERSHING LLC Associate Member One Pershing Plaza, 8th Fl Jersey City, NJ 07399 United States Regina DeGennaro VP - Workflow Solutions Tel: 201-413-4588
[email protected]
SAVVION, INC. Full Member 5104 Old Ironsides Drive Suite: 205 Santa Clara, CA 95054 United States http://www.savvion.com Don Nanneman Vice President, Marketing
[email protected] Tel: [1] 408 330 3400 Fax: [1] 408 330 3444 Savvion's award-winning Business Process Management system, Savvion BusinessManager(tm), enables organizations to automate and manage critical business processes by integrating the people and systems that execute those processes. From supply-chain to service management, help desk and employee self-service applications, Savvion enables an organization to define and deploy desktop and mobile solutions to execute a company's business strategies. A GUI designer and a business rules and event management engine empower business and IT staff to collaborate in defining business processes. BusinessManager automatically deploys those applications over fixed and wireless networks to desktop and wireless devices. Infrastructure, such as SAP, PeopleSoft, Siebel and i2 systems, is then integrated to deliver a complete desktop and mobile solution. Managers gain real-time visibility into the status of each process and can quickly take action to resolve bottlenecks or reassign tasks.
SIEMENS MEDICAL SOLUTIONS Associate Member 51 Valley Stream Parkway, Mail Stop B9C Malvern, PA 19355 United States www.siemens.com Anup Raina VP of Global Marketing & Strategy Tel: 610-219-6300
[email protected]
SOURCECODE TECHNOLOGY HOLDINGS Full Member Gateview House A2, Constantia Park
Gauteng 2000, South Africa http://www.k2workflow.com Adriaan van Wyk Tel: [27] 11 675 1175 Fax:[27] 11 675 1664
[email protected]
STRATOS S.R.L. Full Member via Pavia 9/a, Rivoli (Torino) 10098, Italy Luca Oglietti Tel: [39] 011 9500000
[email protected]
TIBCO SOFTWARE, INC. Full Member 3303 Hillview Avenue Palo Alto, CA 94304 USA http://www.tibco.com/software/process_management/default.jsp Justin Brunt Research Director Tel: [44] (0) 1793 441300 Fax : [44] (0) 1793 441333
[email protected] TIBCO Software Inc is the leading independent business integration software company in the world, demonstrated by market share and analyst reports. In addition, TIBCO is a leading enabler of Real-Time Business, helping companies become more cost-effective, more agile and more efficient. TIBCO has delivered the value of Real-Time Business to over 2,000 customers around the world. TIBCO provides one of the most complete offerings for enterprise-scale BPM, with powerful software that is capable of solving not just the challenges of automating routine tasks and exception handling scenarios, but also the challenges of orchestrating sophisticated and long-lived activities and transactions that involve people and systems across organizational and geographical boundaries.
TOGETHER TEAMLÖSUNGEN GMBH Associate Member Elmargasse 2-4 Wien, A-1191, Austria www.together.at Alfred Madl Managing Director Tel: [43] 5 04 04 122 Fax: [43] 5 04 04 11 122
[email protected]
UNIVERSITY OF FINANCE AND MANAGEMENT IN BIALYSTOK Academic Member ul. Ciepla 40, Bialystok Podlaskie 15-472 Poland www.wsfiz.edu.pl Tomasz Matwiejczuk, PhD Tel: 48 85 6785823 Fax: 48 85 6750088
[email protected]
UNIVERSITY OF MUENSTER Academic Member Department of Information Systems, Leonardo-Campus 3 Muenster, 48149, Germany Tobias Rieke Tel: [49] 251 833-8100
Tel: [49] 251 833-8109
[email protected]
VIGNETTE CORPORATION Full Member 1601 South MoPac Expressway, Building 2 Austin, TX, 78746-5776, United States www.vignette.com Clay Johnson Staff Engineer Tel: [1] 512-741-1133 Fax:[1] 512-741-4500
[email protected] Vignette is the leading provider of content management solutions used by the most successful organizations in the world to interact online with their customers, employees and partners. By combining content management with integration and analysis applications, Vignette enables organizations to deliver personalized information wherever it is needed, integrate online and enterprise systems and provide real-time analysis of the customer experience. Vignette products are focused around three core capabilities that meet the needs of today's e-business organizations: Content Management - the ability to manage and deliver content to every electronic touch-point. Content Integration - the ability to integrate a variety of e-business applications within and across enterprises. Content Analysis - the ability to provide actionable insight into the customer's relationship to a business.
VIZYON NET ARASTIRMA Full Member Tubitak MAM Kampüsü Teknoloji Gelistirme Bölgesi B Blok No:22 Kocaeli Gebze 41470 Turkey www.livechainworkflow.com Fahri Kaan Toker IT Manager Tel: [90] (216) 411 17 39 Fax: [90] (216) 363 13 80
[email protected] Vizyon Net Ltd is a software company with development offices in Istanbul and Tubitak Technological Development Zone, developing workflow, business process management, and document management solutions. Live Chain Workflow Studio is a web-based workflow management system, which provides assurance of business process standardization and automation, developed by Vizyon Net. The primary use areas of Live Chain Workflow Studio are managing internal purchase requests, order tracking, time-off & travel requests, production planning, budgeting, project tracking and management, and ISO 9000 types of applications. Live Chain Workflow Studio provides delivery of the right information to the right person and to the right application at the right time. It is modularly structured to allow flexibility and expandability to fit client needs.
W4 (WORLD WIDE WEB WORKFLOW) Full Member 4 rue Emile Baudot 91873 Palaiseau Cedex, France www.w4global.com Jean Faget Chairman Tel: [33] 1 64 53 19 12 Fax:[33] 1 64 53 28 98
[email protected] Created in 1996, W4 is now a leading company on the European market with an installed base of more than 80 major accounts in France and worldwide. Founded by Jean Faget, Chairman of the French Chapter of the Workflow Management Coalition, W4 publishes the Internet-native W4®Enterprise solution. W4®Enterprise provides our clients with the means to enhance their reactivity and competitiveness, to master their work organization and to
optimize their processes, aiming at productivity, quality, tracking and agility. W4®Enterprise combines a workflow product with the expertise, advice and related services needed to gain better mastery of the evolution of work processes and of the urbanization of the company's information system; it thereby globally meets the market's requirements. W4®Enterprise is used to create any application requiring mastery of process engineering and of business processes, and integrates naturally into new e-business solutions: CRM, e-business, e-procurement, market places… With W4®Enterprise you have at your disposal a stable, high-performing and scalable workflow product which enables you to capitalize on investments already made. W4's references include France Telecom, Sofres, and Groupama.
WEBMETHODS Full Member 504 Tumbling Hawk Acton, MA 01718 United States www.webMethods.com Oleg Levin Tel: [1] 978-549-9041
[email protected]
WORK MANAGEMENT EUROPE Associate Member Barbizonlaan 94 2980 ME Capelle aan den IJssel The Netherlands www.wmeonline.com Cor H. Visser Tel: [31] (10) 207 5454 Fax: [31] (10) 207 5401
[email protected]
WORKFLOW & GROUPWARE STRATEGIES Full Member 37 rue Bouret 75019 Paris, France Martin Ader Analyst Tel: [33] (1) 42 38 08 15 Fax:[33] (1) 42 38 08 02
[email protected] W&GS provides consulting services to assist enterprises in deploying the proper work management technologies (Workflow, Groupware, Knowledge Management) according to their activity profiles and corporate priorities. Assistance covers projects from initial opportunity analysis up to product selection, project planning and auditing. Clients include France Telecom, l'Oréal, Danone, Bouygues Telecom, and regional government. Martin Ader, the W&GS founder, has 16 years of workflow experience covering research, development, marketing, and application deployment. He works both as a consultant and as an international industry analyst in the workflow area. He is the author of the Workflow Comparative Study (comparing 12 workflow engines in detail), which has been sold in more than 25 countries. He has conducted several missions for workflow vendors related to product positioning, requirements analysis, and development strategies.
WfMC Officer Positions 2005

STEERING COMMITTEE
Chairman: Jon Pyke, Fellow
Jean Faget, W4
Keith Swenson, Fujitsu Software
Yoshihisa Sadakane, NEC Soft

TECHNICAL COMMITTEE
Chairman and Co-chair: David Hollingsworth and Keith Swenson, Fujitsu
Vice Chairman (Europe): Justin Brunt, TIBCO
Vice Chairman (Americas): Mike Marin, FileNet
Vice Chairman (Asia-Pacific): Ryoichi Shibuya, Hitachi Ltd

EXTERNAL RELATIONS COMMITTEE
Chairman: Betsy Fanning, AIIM International
Vice Chairman (Europe): Martin Ader, W&GS
Vice Chairman (Americas): Bob Puccinelli, DST Systems
Vice Chairman (Asia-Pacific): Dr Kwang-Hoon Kim, BPM Korea Forum

SECRETARY / TREASURER: Cor Visser, Work Management Europe
INDUSTRY LIAISON CHAIR: Betsy Fanning, AIIM International
USER LIAISON CHAIR: Charlie Plesums, Fellow
WfMC Country Chairs

AUSTRALIA & NEW ZEALAND
Carol Prior, MAESTRO BPE Limited
Tel: +61 2 9844 8222
[email protected]

BRAZIL
Alexandre Melo, Officeware Ltda
Tel: +55 11 816 3439
[email protected]

CANADA
Ron Lutka, Corporate Streamlining Co Inc.
Tel: +1 416 243 7143
[email protected]

FRANCE
Jean Faget, W4 Global
Tel: +33 1 64 53 17 65
[email protected]

GERMANY
Tobias Rieke, University of Muenster
Tel: +49 251 833-8100
[email protected]

ITALY
Luca Oglietti, Stratos
Tel: +39 011 9500000
[email protected]

JAPAN
Yoshihisa Sadakane, NEC Soft
Tel: +81-3-5569-3399
[email protected]

KOREA
Dr. Kwang-Hoon Kim, BPM Korea Forum
Tel: +82-31-249-9679
[email protected]

POLAND
Tomasz Matwiejczuk, University of Finance and Management in Bialystok Technology Centre
Tel: +48 85 6785823
[email protected]

RUSSIA
Maria Camennova, Business Logic
Tel: +7-095-7851131
[email protected]

SINGAPORE & MALAYSIA
Ken Loke, Bizmann System (S) Pte Ltd
Tel: +65 6271 1911
[email protected]

SOUTH AFRICA
Mark Ehmke, Staffware South Africa
Tel: +27 11 467 1440
[email protected]

SPAIN
Elena Rodríguez Martín, Fujitsu Software
Tel: +34 91 784 9565
[email protected]

TAIWAN
Erin Yang, Flowring Technology Co. Ltd.
Tel: +886-3-5753331 ext. 316
[email protected]

THE NETHERLANDS
Fred van Leeuwen, DCE Consultants
Tel: +31 20 44 999 00
[email protected]

UNITED KINGDOM
Sharon L. Boyes-Schiller, Skyscape Solutions
Tel: +44 (0)1462 892 101
[email protected]

UNITED STATES
Bob Puccinelli, DST Systems
Tel: +1 816-843-8148
[email protected]
Betsy Fanning, AIIM International
Tel: +1 301 755 2682
[email protected]
WfMC Technical Committee Working Group Chairs 2005

WG1—Process Definition Interchange Model and APIs
Chair: Robert Shapiro, Fellow. Email: [email protected]

WG2/3—Client / Application APIs
Chair: open

WG4—Workflow Interoperability
Chair: Keith Swenson, Fujitsu. Email: [email protected]

WG5—Administration & Monitoring
Chair: Michael zur Muehlen, Stevens Institute of Technology. Email: [email protected]

WG on OMG
Chair: Ken Mei, Global 360. Email: [email protected]

Conformance WG
Chair: Michael zur Muehlen, Stevens Institute of Technology. Email: [email protected]

WGRM—Reference Model
Chair: Dave Hollingsworth, Fujitsu. Email: [email protected]

WG9—Resource Model
Chair: Michael zur Muehlen, Stevens Institute of Technology. Email: [email protected]

BPMI-WfMC Joint Working Group
Chair: Mike Marin, FileNet. Email: [email protected]
WfMC Fellows
The WfMC recognizes individuals who have made sustained and outstanding contributions to WfMC objectives, far above those expected from normal member representation.
WfMC Fellow—Factors:
• To be considered as a candidate, the individual must have participated in the WfMC for a period of not less than two years and be elected by majority vote within the nominating committee.
Rights of a WfMC Fellow: Receives the guest-member level of email support from the Secretariat; pays no fee when attending WfMC meetings; may participate in the work of the WfMC (workgroups, etc.); and may hold office.
Robert Allen, United Kingdom
Wolfgang Altenhuber, Austria
Mike Anderson, United Kingdom
Richard Bailey, United States
Emmy Botterman, United Kingdom
Katherine Drennan, United States
Mike Gilger, United States
Michael Grabert, United States
Shirish Hardikar, United States
Hideshige Hasegawa, Japan
Dr. Haruo Hayami, Japan
Paula Helfrich, United States
Nick Kingsbury, United Kingdom
Klaus-Dieter Kreplin, Germany
Emma Matejka, Austria
Dan Matheson, United States
Akira Misowa, Japan
Roberta Norin, United States
Sue Owen, United Kingdom
Charles Plesums, United States
Jon Pyke, United Kingdom
Harald Raetzsch, Austria
Michele Rochefort, Germany
Joseph Rogowski, United States
Michael Rossi, United States
Sunil Sarin, United States
Robert Shapiro, United States
Dave Shorter (Chair Emeritus), United States
David Stirrup, United Kingdom
Keith Swenson, United States
Tetsu Tada, United States
Austin Tate, United Kingdom
Rainer Weber, Germany
Alfons Westgeest, Belgium
Marilyn Wright, United States
Dr. Michael zur Muehlen, United States
Appendix—Author Biographies ARNAUD BEZANCON (
[email protected]) Chief Technical Officer ADVANTYS, Canada 1250 Rene Levesque West, Suite 2200 Montreal, Quebec H3B 4W8 Arnaud Bezancon, IT Engineer (Orsay, France), is the CTO of ADVANTYS (France), which he co-founded in 1995 with his brother Alain (BBA, HEC Montreal). ADVANTYS specializes in the implementation of high-added-value solutions in the field of corporate Inter-, Intra- and Extranets. As CTO, Arnaud manages the software division of ADVANTYS and launched the aspSmart line of components, which are used daily by thousands of web developers worldwide. Arnaud also designed a content management solution (PubliGen), a groupware tool (GroupGen), and the workflow management system WorkflowGen. Arnaud has developed a pragmatic approach to technology based on real-life experience. ADVANTYS solutions are used by major corporations (ACCOR, AREVA, CEA, DELOITTE, LEGRAND, SAINT GOBAIN, etc.).
ROBERTA BORTOLOTTI (
[email protected]) 3225 Warder St NW Washington DC 20010 United States Roberta Bortolotti is a consultant and systems analyst with wide experience in applying technologies to real-world business solutions in different markets, including South and North America. She participated as an analyst in a project for the DCRA of the District of Columbia, jointly with SDDM Technology, in which web services were implemented to make the business process for issuing business licenses more efficient and reliable. Her consulting expertise comprises applying both .Net and Java technologies using web services in web-based and distributed applications. Ms. Bortolotti also holds a Master’s Degree in Information Systems from Strayer University.
JORGE CARDOSO (
[email protected]) Professor University of Madeira Portugal Departamento de Matemática e Engenharias Funchal 9000-390 Portugal Prof. Dr. Jorge Cardoso currently holds an assistant professor position at the University of Madeira (Portugal). His research concentrates on business process management, process complexity, workflow QoS management, and the semantic composition of Web processes and workflows. Last year, he organized the first International Workshop on Semantic Web Services and Web Processes Composition (SWSWPC 2004). Recently he co-edited a book entitled "Semantic Web Process: powering next generation of processes with Semantics and Web services". He has published several book chapters, journal and conference papers, and has served as a program committee member for over twenty conferences. More information can be found at http://dme.uma.pt/jcardoso. Jorge Cardoso received a B.A. (1995) and an M.S. (1998) in Computer Science from the University of Coimbra (Portugal), and a Ph.D. (2002), also in Computer Science, from the University of Georgia (USA).
JOSEPH M. DEFEE (
[email protected]) Senior Vice President CACI Products Company Advanced Simulation Lab 1455 Frazee Road, Suite #700 San Diego, CA 92108 United States Joe DeFee is Senior Vice President and manager of the Advanced Systems Division Group at CACI. He has 24 years of experience in information systems design, software development, enterprise architecture development, and business process reengineering. For the last 12 years, he has focused on business process reengineering, business process simulation technology, software reengineering, and aligning information technology to business objectives for customers. He is the co-author of CACI's RENovate methodology, a formal methodology for modernizing customers' business processes and information technology. CACI is a member of the Workflow Management Coalition.
JEAN-JACQUES DUBRAY (
[email protected]) Senior Technical Architect Attachmate 3617 131st Ave Bellevue, WA 98006 United States Jean-Jacques is a Senior Technical Architect at Attachmate. He is a graduate of Ecole Centrale de Lyon and earned his Ph.D. at the Faculty of Science of Luminy. In 1998 he designed and led the implementation of an XML- and Web Services-based business process engine at NEC Systems Labs. Since then he has contributed to several BPM standards: BPML, ebXML BPSS, and WS-CDL. For the past four years Jean-Jacques has focused on developing a model-driven application model, compatible with business process management, for building information systems.
LAYNA FISCHER (
[email protected]) General Manager and Executive Director WARIA, WfMC, BPMI.org 2436 North Federal Highway, #374, Lighthouse Point, FL 33064 USA As WfMC General Manager, Layna Fischer works closely with the WfMC Committees to promote the mission of the WfMC and is tasked with the overall management of membership logistics, meetings, conferences, publications and websites. Ms Fischer is also an Executive Director of the Business Process Management Initiative (BPMI.org), handling similar duties as for the WfMC, and chairs WARIA (Workflow And Reengineering International Association), a position she has held since 1994. She is also the director of the annual Global Excellence Workflow Awards. As president and CEO of Future Strategies Inc., Ms Fischer is the publisher of the business book series New Tools for New Times, as well as the annual Excellence in Practice volumes of award-winning case studies and the annual Workflow Handbook, published in collaboration with the WfMC, together with their companion CD-ROM series.
PAUL HARMON (
[email protected]) Executive Editor Business Process Trends 1819 Polk #334 San Francisco CA 94109 United States In addition to his role as Executive Editor and Founder of Business Process Trends, Paul Harmon is Chief Consultant and Founder of Enterprise Alignment, a professional services company providing educational and consulting services to managers interested in understanding and implementing business process change. Paul is a noted consultant, author and analyst concerned with applying new technologies to real-world business problems. He is the author of Business Process Change: A Manager's Guide to Improving, Redesigning, and Automating Processes (2003), and previously coauthored Developing E-business Systems and Architectures (2001), Understanding UML (1998), and Intelligent Software Systems Development (1993). Mr. Harmon has served as a senior consultant and head of Cutter Consortium's Distributed Architecture practice. Between 1985 and 2000 he wrote Cutter newsletters, including Expert Systems Strategies, CASE Strategies, and Component Development Strategies. Paul has worked on major process redesign projects with Bank of America, Wells Fargo, Security Pacific, Prudential, and Citibank, among others. He is a member of ISPI and a Certified Performance Technologist. Paul is a widely respected keynote speaker and has developed and delivered workshops and seminars on a wide variety of topics to conferences and major corporations throughout the world.
ROBERT J KEARNEY (
[email protected]) Vice President, Sales & Marketing Image Integration Systems 885 Commerce Drive, Suite B Perrysburg, OH 43551 United States Mr. Kearney joined Image Integration Systems in 1995 and is an IIS partner. Prior to that, he spent over 20 years in a variety of management and consulting positions focused on the application of information technology and quantitative methodologies to improve business processes and operating performance. He holds a BS in Mathematics from the University of Buffalo and an MS in Operations Research from Case Western Reserve University. IIS provides automated workflow, content management and document imaging to improve business processes, increase productivity and visibility, and reduce transactional costs. The IIS DocuSphere® product family is standards-based, scalable, web-enabled, and has been certified for use with ERP systems from J.D. Edwards, PeopleSoft and SAP. Customers range from mid-sized to multinational corporations in a variety of industries.
DR. SETRAG KHOSHAFIAN (
[email protected]) VP of BPM Technology Pegasystems Inc. 101 Main Street Cambridge, MA. 02142 United States
Dr. Khoshafian is Vice President of BPM for Pegasystems Inc., the leader in rules-driven business process management. He is a recognized expert not only in BPM, but also in XML, object orientation, databases, and Web services technologies. He has held senior-level positions in the BPM industry for more than 15 years, and is the lead author of seven books on technology, as well as numerous articles on e-business, Web-centric process management, databases, object orientation, and distributed object computing. He also presents frequently at seminars and conferences for both technical and business audiences. Prior to joining Pegasystems, Dr. Khoshafian served as Senior Vice President of Technology for Savvion. He holds MS and Ph.D. degrees in computer science from the University of Wisconsin-Madison.
URS-MARTIN KÜNZI (
[email protected]) R&D Soreco-ivyteam Alpenstrasse 9, P.O. Box Zug CH-6304, Switzerland Urs-Martin Künzi is a mathematician; his subjects of interest include mathematical logic and theoretical computer science. He worked at the universities of Zürich (where he earned his PhD), Bonn, Freiburg i.Br. and Bern, and at the Academy of Science in Novosibirsk. He is now a lecturer at the University of Applied Sciences in Rapperswil (Switzerland) and also works at ivyTeam, which he joined in 1996. He is responsible for the architecture and development of Xpert.ivy (previously called ivyGrid), standard software for Web-based workflow and intranet/internet solutions.
CHRIS LAWRENCE (
[email protected]) Business Architect Old Mutual SA Mutual Park PO Box 66 Cape Town 8000 South Africa Chris Lawrence has designed and implemented business solutions in the UK, US and Southern Africa over a 25-year career in Financial Services IT ranging from systems analysis and design to business architecture, by way of project and systems management, business analysis, data analysis and design, quality training, and process re-engineering. If he has a current specialty, it is where process architecture meets holistic delivery and transition methodologies. In 1996 he moved from the UK to Cape Town to co-found a strategic business-enablement competency (eventually titled Global Edge), employing a version of Sherwood International's Amarta architecture to support Old Mutual’s international expansion. He developed Global Edge's delivery methodology and process-architectural approach, and is currently playing a similar role on a grander scale for Old Mutual South Africa’s administration and service organisation (OMSTA). Chris has Philosophy degrees from both Cambridge and London Universities, and is married with one son.
HEINZ LIENHARD (
[email protected]) Founder of ivyTeam Soreco-ivyteam Alpenstrasse 9, P.O. Box Zug CH-6304, Switzerland Heinz Lienhard is the founder of ivyTeam. He lives and works in Switzerland at the lovely Lake of Zug. With ivyTeam he has successfully brought together the web application and workflow worlds. He received a Master's degree in electrical engineering from the ETH (Switzerland), a Master's degree in mathematical statistics from Stanford University (California, USA), and a Dr. h.c. from the informatics department of the ETH Lausanne (Switzerland). For many years he headed the central R&D labs of Landis & Gyr Corp., now part of the Siemens group, where he built up important R&D activities in system theory, automatic control, informatics and microtechnology.
KEN LOKE (
[email protected]) Director / Founder Bizmann System (S) Pte Ltd 73 Science Park Drive #02-05 CINTECH I Singapore Science Park I Singapore 118254, Singapore Ken Loke is the Director and Founder of Bizmann System, responsible for regional business development, research and development for BPM/workflow solutions, and partnership development within ASEAN. He has more than 15 years of experience and success in IT solutions marketing and partnership development. His strong understanding and in-depth knowledge of workflow automation, on both the hardware and software sides, are his key strengths in introducing business process management solutions in ASEAN. Through his enthusiastic efforts in introducing business process/workflow and BI automation, Ken initiated the setting up of an enterprise solutions centre in the heart of Singapore's business district, where a showroom displays workflow automation in an office mock-up environment. The idea is probably the first of its kind in Asia Pacific: visitors can witness how business processes and workflow are automated, and see the benefits, in a simulated office environment. His idea has attracted BPM vendors such as HandySoft (BizFlow). Prior to founding Bizmann System, Ken was an Assistant Director with Canon, where he pioneered the launch of Canon's first digital multi-function copier in Singapore in 1995. Ken holds a Bachelor's Degree in Business Administration, with a major in Marketing Management.
DR. AMBUJ MAHANTI (
[email protected]) Professor, Management Information Systems, Dean (Planning & Administration), Indian Institute of Management Calcutta, Diamond Harbour Road, Joka, Kolkata - 700104, West Bengal, India. Dr. Ambuj Mahanti works as Professor of Management Information Systems at Indian Institute of Management Calcutta. He did Master’s level studies in Statistics from the University of Calcutta, Kolkata and in Computer Science from Indian Statistical Institute, Kolkata. He did superior doctoral work in
Computer Science from the University of Calcutta and earned the degree Doctor of Science (D.Sc.) in Computer Science. He was a United Nations Fellow to the U.S.A. on several occasions. During 1984-1988, he executed the UNDP project on Computer Aided Management at Indian Institute of Management Calcutta in the capacity of Associate Project Coordinator. In 1989, he was a National Fellow and represented India in the international seminar Top Management Forum: Strategic Management & Information Technology, Tokyo. He has consulted for many reputed organizations, including the Tea Board of India, the Royal Norwegian Embassy, Hindustan Copper Limited, and Indian Oil Corporation. As Visiting Associate Professor, he taught at the Department of Computer Science, University of Maryland at College Park (UMCP) during 1990-1992. He has published extensively in international journals and conferences, including several publications in the Journal of the ACM, Theoretical Computer Science, Artificial Intelligence, Information Processing Letters, etc. He has guided many doctoral students, and his current research interests include Workflow Management, Combinatorial Auctions, Network Optimization, Data Mining, Web Mining, e-Business Security and Artificial Intelligence.
JAN MENDLING (
[email protected]) Department of Information Systems and New Media Vienna University of Economics and Business Administration, Austria Jan Mendling is a research assistant at the Department of Information Systems and New Media at the Vienna University of Economics and Business Administration. Together with Markus Nüttgens, he is an author of the EPC Markup Language (EPML). Jan studied at the University of Trier (Germany), UFSIA Antwerpen (Belgium) and the University of Munich (Germany). He received a diploma degree in Business Computer Science (Dipl.-Wirt.-Inf.) and a diploma degree in Business Administration (Dipl.-Kfm.). Currently, he is working on his PhD thesis.
DEREK MIERS (
[email protected]) Industry Analyst Enix Consulting Ltd 9 Lonsdale Rd London W4 1ND United Kingdom Derek Miers founded Enix in 1992 to provide strategy and technology consulting to astute blue-chip commercial organizations, vendors and systems integrators. Over the years he has developed unique perspectives on the use of BPM and other process-oriented technologies, ranging from understanding and modeling processes to actively supporting them through modern BPMS environments, enterprise content management and Web Services. From a consulting perspective, Derek has enjoyed engagements with some of the world's best-known brands, providing a range of training, executive-level facilitation and technology selection services. He is the author and publisher of Process Product Watch, a detailed, evaluation-level guide to BPM tools and technologies. Derek recently undertook a major evaluation of the leading BPM Suites for BP Trends (available free of charge at www.bptrends.com). As part of his contributing work, Derek has been a regular conference presenter
and visiting lecturer at several European universities and business schools. In recognition of his insights and perspectives, Derek was elected a Director of BPMI.org and was instrumental in the development of the BPM Think Tank held in Miami in March 2005.
DR. GUSTAF NEUMANN (
[email protected]) Univ.-Prof. Department of Information Systems and New Media WU Wien Vienna University of Economics and Business Administration, Austria Prof. Dr. Gustaf Neumann has been Chair of Information Systems and New Media at the Vienna University of Economics and Business Administration (WU) in Vienna, Austria, since October 1999. Before joining WU he was a full professor at the University of Essen, Germany (1995 to 1999) and a visiting scientist at IBM's T.J. Watson Research Center in Yorktown Heights, NY (1985-1986 and 1993-1995). In 1987 he was awarded the Heinz-Zemanek award of the Austrian Association of Computer Science (OCG) for best dissertation. Professor Neumann has published books and papers in the areas of program transformation, data modeling, and information systems technology, with a focus on e-learning applications. He is a founding member of the Virtual Global University, head of the EC IST project UNIVERSAL and the IST project Elena, a member of the steering board of the Network of Excellence ProLearn, and technical director of the learn@wu project, one of the most intensively used e-learning platforms worldwide. Gustaf Neumann is the author of several widely used open-source software packages, such as the TeX-dvi converter dvi2xx, diac, the graphical frontend package Wafe, the Web browser Cineast, and the object-oriented scripting language XOTcl.
DR. MARKUS NÜTTGENS (
[email protected]) Univ.-Prof. Department of Information Systems and New Media Vienna University of Economics and Business Administration, Austria Markus Nüttgens is a full professor of information systems at the University of Hamburg, Germany. Prior to joining the University of Hamburg, Markus was a teaching assistant at the CIM-Technology Transfer Center (CIM-TTZ), an assistant professor at the Department of Law and Business Administration, and deputy director of the Institute of Information Systems at the University of Saarland, Germany. He has conducted various research projects focused on information systems architecture and business process management in the industrial, service, and public sectors. His research interests include methods and tools for business process modelling, analysis, and optimization. He was initially involved in the development of the modelling technique "Event-driven Process Chain (EPC)" and is head of the BPM Laboratory at the University of Hamburg, Germany. He is a member of the steering committee of the German special interest group on information systems (German Society of Informatics e.V.). Markus holds a Ph.D. and a Master's degree in Business Administration from the University of Saarland, Germany.
SINNAKKRISHNAN PERUMAL (
[email protected]) Doctoral Student Management Information Systems, Indian Institute of Management Calcutta, Diamond Harbour Road, Joka, Kolkata - 700104, West Bengal, India. Sinnakkrishnan Perumal is a doctoral student of Management Information Systems at Indian Institute of Management Calcutta. His research is partially supported by Infosys Technologies Limited, Bangalore, India under the Infosys Fellowship Award. He received a Bachelor's degree in Computer Science and Engineering from Government College of Technology, Coimbatore, India. He worked as a Senior Software Engineer at WIPRO Technologies, Bangalore for several years in the area of telecommunication switches and routers, and protocols such as Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH). His current research interests include Workflow Verification, Administrative Workflow in Government Sectors, Internet-based Workflow Management, and e-Governance.
CHARLES A. PLESUMS (
[email protected]) Fellow, WfMC www.plesums.com 5702 Puccoon Cove Austin, TX 78759 United States Charlie Plesums is a Fellow of the WfMC. He has been involved in automated workflow management since the early 1980s, in the context of large-scale document image systems in the insurance industry. He then spent about 10 years with a multinational consulting firm as their principal consultant in imaging and workflow management, assisting dozens of companies in the installation and conversion of their work management systems. Earlier in his career, Charlie was a professor of computer science, a consultant, and a chief information officer.
JON PYKE (
[email protected]) WfMC, Chair CTO TheProcessFactory Faris Lane, Woodham Surrey KT15 3DN United Kingdom Jon was the Chief Technology Officer and a main board director of Staffware Plc from August 1992 until it was acquired by TIBCO in 2004. He demonstrates an exceptional blend of business and people management skills, and is a technician with a highly developed sense of where technologies fit and how they should be utilized. Jon is a world-recognized industry figure, an exceptional public speaker, and a seasoned quoted-company executive. As CTO of Staffware Plc, Jon was responsible for a team of 70 people, geographically split across two countries and four locations. His primary responsibility was directing the product development cycle. He also had overall executive responsibility for product strategy, positioning, public speaking, etc. Finally, as a main board director he was heavily involved in PLC board activities, including
mergers and acquisitions, corporate governance, and service as a board director of several subsidiaries. Jon has written and published a number of articles on the subjects of office automation, BPM and workflow technology. These publications include work for the British Computer Society entitled "Office Automation – The Good News" and the "Workflow Report", published by Cambridge Market Intelligence. Jon is currently writing a book on Business Process Management, scheduled for publication by Cambridge University Press in Spring 2004. Jon co-founded and is the Chair of the Workflow Management Coalition. He is an AIIM Laureate for Workflow, and was awarded the Marvin Manheim Award for Excellence in Workflow in 2003.
DR PALLAB SAHA (
[email protected]) Project Specialist Institute of Systems Science National University of Singapore, Singapore Prior to joining ISS, Dr Pallab Saha was instrumental in managing Baxter's Environmental Health and Safety operations in Bangalore as its Head of Projects & Development. During his Ph.D., he was awarded the PDA, TCI Award for Excellence in Research and the Best Ph.D. Thesis Award (for doctoral work). Pallab has research and consulting interests in object-oriented technologies, business engineering, business dynamics, and the use of Six Sigma within the realms of software development and process improvement.
KEITH D. SWENSON (
[email protected]) Vice Chair Steering Committee, Co-Chair Technical Committee, WfMC Chief Architect Fujitsu Software Corporation 1250 E. Arques Avenue, Sunnyvale, CA 94085 United States Keith Swenson is currently Chief Architect and Director of Development at Fujitsu Software Corporation for the Interstage family of products. He is known as a pioneer in web services, and has helped develop standards such as WfMC Interface 2, OMG Workflow Interface, SWAP, Wf-XML, and AWSP; he is currently working on standards such as ASAP and Wf-XML 2.0. In 2004 he was awarded the Marvin Manheim Award for significant contributions to the field of workflow. He has led efforts to develop software products to support work teams at MS2, Netscape, and Ashton-Tate.
MODRÁK VLADIMÍR (
[email protected]) Associate Professor of Manufacturing Engineering Technical University of Košice Bayerova 1 Prešov 080 01 Slovakia Vladimír Modrák is Associate Professor of Manufacturing Engineering and head of the Department of Manufacturing Management at the Technical University of Košice in Slovakia. He obtained a Ph.D. in Manufacturing Technology, and his research interests include Business Process Modeling, Logistics and Quality Management. Dr. Modrák has also been active as a Visiting Lecturer at the University of Applied
Sciences in Wildau (Germany). He is a Vice-Editor-in-Chief of the Slovak journal on Manufacturing Engineering and an active member of the Information Resources Management Association (USA).
ZACHARY WHEELER (
[email protected]) Owner/Software Architect SDDM Technology 3225 Warder St NW Washington DC 20010 United States Mr. Wheeler is a graduate of Howard University, located in the District of Columbia. He is the founder and owner of SDDM Technology, a small business enterprise located in the District of Columbia specializing in information technology. He has a particular interest in business process modeling, process automation, and process reengineering.
STEPHEN A. WHITE (
[email protected]) BPM Architect IBM 600 Anton Blvd. Floor 5, Costa Mesa, CA 92626 United States Stephen White is currently a BPM Architect at IBM. He has 20 years' experience with process modeling, ranging from model development, consulting, training, and modeling-tool design to product management and standards development. He served on the BPMI.org Board of Directors in 2003 and 2004 and chairs the BPMI Notation Working Group, which is developing BPMN.
Index
Administration & Monitoring, 284
Asynchronous Service Access Protocol (ASAP), 180, 257, 278
asynchronous service, 280
Audit Data specification, 283
BPA tools, 103-111
BPDM, 190
BPEL-J, 171
BPM deployments, 23, 24
BPM interchange formats, 185
BPM reference metamodel, 194
BPML, 192
BPMN Business Process Diagram, 232
BPSS, 192
Business Activity Monitoring (BAM), 55
Business Intelligence (BI), 55, 59, 145
business process analysis (BPA), 69, 103
business process complexity, 82
Business Process Execution Language (BPEL), 154, 163, 183, 213
Business Process Management (BPM), 18, 141, 179
Business Process Management Initiative (BPMI), 188
Business Process Management Notation (BPMN), 104, 109, 111, 140, 153, 172, 190, 192
Business Process Management Systems (BPMS), 211, 257
Business Process Modelling (BPM), 185
Business Process Outsourcing (BPO), 49, 154
Business Process Reengineering (BPR), 75
Business Rule Engines, 130
Business Rule Management (BRM), 129, 140
Business Rules (BR), 42-46, 116, 129
business-to-business (B2B), 109
complexity, 187, 199
complexity of processes, 199
control-flow complexity, 199
coordination service, 175
customer relationship management (CRM), 91
data access layer, 127
definition, 283
Department of Consumer and Regulatory Affairs (DCRA), 113
EasyASAP open source, 279
ebXML Business Process Specification Schema, 168
enterprise transaction processing, 91
Extreme Programming (Scrum), 104
extrinsic workflow, 50
factory resource, 260
Flow Diagrams, 77
Gartner Group, 55
IDL, 283
independent workflow system, 92
integrated function and workflow (IFW), 31
integrated process (IP), 76
intelligent business process management, 106
Invoked Application, 284
layered approach, 48
McCabe’s cyclomatic complexity, 200
membership, 285
metametarules, 136
metamodel, 185, 188
metarules, 136
model-driven architecture (MDA), 110
Model-View-Controller pattern, 159
OASIS, 188
Object Management Group (OMG), 188
OLE, 283
Petri nets, 254
Poisson distribution, 64
process complexity analysis, 201
Process Definition, 284
process definitions, 260
Quality of service (QoS), 199, 201
Reference Model & Glossary, 284
Sarbanes-Oxley Act (SOX), 27, 153
service level agreements (SLA), 154
Service Oriented Architecture (SOA), 113, 117, 153, 160, 179
Shared Services Center (SSC), 94
Simple Object Access Protocol (SOAP), 257
SIMPROCESS, 64
Simulation, 53-70, 74
Small Medium Business (SMB), 143
standards
structural complexity, 89
subprocesses, 34, 36, 40, 41, 44, 47, 74
supply chain management (SCM), 91
swim lane diagrams, 95
unified BPM, 104, 111
Unified enterprise process (UEP), 76
Unified Modeling Language (UML), 115, 192
Unified Software Development Process, 104
Web process, 199
Web Service Definition Language, 163
Web Services (WS), 113, 200
Web Services standards, 163
WfMC Interface 1, 20
WfMC Interface 2, 20
WfMC Interface 3, 21
WfMC Interface 4, 21
WfMC Interface 5, 21
WF-XML 2.0, 180, 257, 280
work management, 41
Workflow API (WAPI), 283
Workflow Client, 284
Workflow Management Coalition reference model, 20, 285
Workflow Management Systems (WfMSs), 234
workflow management tools, 17, 18
workflow pattern analysis, 194
workflow system, dedicated enterprise, 19
workflow verification algorithm, 234
WS-Addressing, 163
WS-BPEL (see Business Process Execution Language (BPEL))
WS-CAF, 161, 175
WS-CDL, 192
WSCI, 193
WS-Remote Portlets, 172
XPDL, 182, 193
Additional Workflow and BPM Resources
NON-PROFIT ASSOCIATIONS AND STANDARDS RESEARCH ONLINE
• AIIM (Association for Information and Image Management) http://www.aiim.org
• AIS Special Interest Group on Process Automation and Management (SIGPAM) http://www.sigpam.org
• BPR On-Line Learning Center http://www.prosci.com
• Business Process Management Initiative http://www.bpmi.org
• IEEE (Institute of Electrical and Electronics Engineers, Inc.) http://www.ieee.org
• Institute for Information Management (IIM) http://www.iim.org
• ISO (International Organization for Standardization) http://www.iso.ch
• Object Management Group http://www.omg.org
• Organization for the Advancement of Structured Information Standards http://www.oasis-open.org
• Society for Human Resource Management http://www.shrm.org
• Society for Information Management http://www.simnet.org
• The Open Document Management Association http://nfocentrale.net/dmware
• The Workflow Management Coalition (WfMC) http://www.wfmc.org
• Wesley J. Howe School of Technology Management http://attila.stevens.edu/workflow
• Workflow And Reengineering International Association (WARIA) http://www.waria.com
• Workflow Comparative Study http://www.waria.com/books/study-2003.htm
• Workflow Portal http://www.e-workflow.org