Enterprise Information Systems: 12th International Conference, ICEIS 2010, Funchal-Madeira, Portugal, June 8-12, 2010, Revised Selected Papers (Lecture Notes in Business Information Processing)
Lecture Notes in Business Information Processing, Volume 73
Series Editors: Wil van der Aalst (Eindhoven Technical University, The Netherlands), John Mylopoulos (University of Trento, Italy), Michael Rosemann (Queensland University of Technology, Brisbane, Qld, Australia), Michael J. Shaw (University of Illinois, Urbana-Champaign, IL, USA), Clemens Szyperski (Microsoft Research, Redmond, WA, USA)
Joaquim Filipe José Cordeiro (Eds.)
Enterprise Information Systems 12th International Conference, ICEIS 2010 Funchal-Madeira, Portugal, June 8-12, 2010 Revised Selected Papers
Volume Editors
Joaquim Filipe, Department of Systems and Informatics, Polytechnic Institute of Setúbal, Rua do Vale de Chaves - Estefanilha, 2910-761 Setúbal, Portugal, E-mail: [email protected]
José Cordeiro, Department of Systems and Informatics, Polytechnic Institute of Setúbal, Rua do Vale de Chaves - Estefanilha, 2910-761 Setúbal, Portugal, E-mail: [email protected]
ISSN 1865-1348 e-ISSN 1865-1356 ISBN 978-3-642-19801-4 e-ISBN 978-3-642-19802-1 DOI 10.1007/978-3-642-19802-1 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011922508 ACM Computing Classification (1998): J.1, H.3, H.4, I.2, H.5
The present book includes extended and revised versions of a set of selected papers from the 12th International Conference on Enterprise Information Systems (ICEIS 2010), held in Funchal, Portugal, during June 8–12, 2010. The conference was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and was held in cooperation with AAAI, WfMC, ACM SIGMIS, ACM SIGCHI, ACM SIGART and IEICE/SWIM. The conference was organized in five simultaneous tracks: Databases and Information Systems Integration, Artificial Intelligence and Decision Support Systems, Information Systems Analysis and Specification, Software Agents and Internet Computing, and Human-Computer Interaction. The book is based on the same structure. ICEIS 2010 received 448 paper submissions from 58 countries on all continents. After a blind review process, only 62 were accepted as full papers, of which 39 were selected for inclusion in this book, based on the classifications provided by the Program Committee. The selected papers reflect state-of-the-art research that is often oriented toward real-world applications and highlight the benefits of information systems and technology for industry and services, thus bridging the academic and enterprise worlds. These high-quality standards will be maintained and reinforced at ICEIS 2011, to be held in Beijing, China, and in future editions of this conference. Furthermore, ICEIS 2010 included five plenary keynote lectures given by Michel Chein (LIRMM, University of Montpellier 2, France), David L. Olson (University of Nebraska, USA), Anind K. Dey (Carnegie Mellon University, USA), Runtong Zhang (Beijing Jiaotong University, China) and Robert P. W. Duin (TU Delft, The Netherlands). We would like to express our appreciation to all of them and in particular to those who took the time to contribute a paper to this book. On behalf of the conference Organizing Committee, we would like to thank all participants: first of all the authors, whose quality work is the essence of the conference, and also the members of the Program Committee, who helped us with their expertise and diligence in reviewing the papers. As we all know, producing a conference requires the effort of many individuals. We also wish to thank all the members of our Organizing Committee, whose work and commitment were invaluable. October 2010
José Cordeiro
Joaquim Filipe
Organization
Conference Chair Joaquim Filipe
Polytechnic Institute of Setúbal / INSTICC, Portugal

Program Chair
José Cordeiro
Polytechnic Institute of Setúbal / INSTICC, Portugal

Organizing Committee
Patrícia Alves, Sérgio Brissos, Helder Coelhas, Vera Coelho, Andreia Costa, Patricia Duarte, Bruno Encarnação, Mauro Graça, Raquel Martins, Liliana Medina, Elton Mendes, Carla Mota, Vitor Pedrosa, Daniel Pereira, Filipa Rosa, José Varela, and Pedro Varela (all Portugal)
Senior Program Committee Senén Barro, Spain Jean Bézivin, France Albert Cheng, USA Bernard Coulette, France Jan Dietz, The Netherlands Schahram Dustdar, Austria António Figueiredo, Portugal
Nuno Guimarães, Portugal Dimitris Karagiannis, Austria Pericles Loucopoulos, UK Andrea de Lucia, Italy Kalle Lyytinen, USA Yannis Manolopoulos, Greece José Legatheaux Martins, Portugal
Masao Johannes Matsumoto, Japan Marcin Paprzycki, Poland Alain Pirotte, Belgium Klaus Pohl, Germany José Ragot, France Colette Rolland, France Narcyz Roztocki, USA Abdel-badeeh Salem, Egypt
Bernadette Sharp, UK Timothy K. Shih, Taiwan Alexander Smirnov, Russian Federation Ronald Stamper, UK Antonio Vallecillo, Spain François Vernadat, France Frank Wang, UK
Program Committee Michael Affenzeller, Austria Miguel Angel Martinez Aguilar, Spain Patrick Albers, France Abdullah Alnajim, Saudi Arabia Bernd Amann, France Vasco Amaral, Portugal Andreas Andreou, Cyprus Gustavo Arroyo-Figueroa, Mexico Wudhichai Assawinchaichote, Thailand Juan C. Augusto, UK Cecilia Baranauskas, Brazil Bernhard Bauer, Germany Lamia Hadrich Belguith, Tunisia Noureddine Belkhatir, France Nadia Bellalem, France Orlando Belo, Portugal Manuel F. Bertoa, Spain Peter Bertok, Australia Minal Bhise, India Felix Biscarri, Spain Oliver Bittel, Germany Danielle Boulanger, France Jean-Louis Boulanger, France St´ephane Bressan, Singapore Luis M. Camarinha-Matos, Portugal Olivier Camp, France Roy Campbell, USA Gerardo Canfora, Italy Ang´elica Caro, Chile Jose Jesus Castro-schez, Spain Luca Cernuzzi, Paraguay Sergio de Cesare, UK Maiga Chang, Canada
Daniela Barreiro Claro, Brazil Jose Eduardo Corcoles, Spain Antonio Corral, Spain Karl Cox, UK Sharon Cox, UK Alfredo Cuzzocrea, Italy Jacob Cybulski, Australia Bogdan Czejdo, USA Mohamed Dahchour, Morocco Suash Deb, India Vincenzo Deufemia, Italy Kamil Dimililer, Cyprus Jos´e Javier Dolado, Spain Juan C. Due˜ nas, Spain Hans-Dieter Ehrich, Germany Jo˜ ao Faria, Portugal Antonio Fari˜ na, Spain Massimo Felici, UK Antonio Fern´ andez-Caballero, Spain Edilson Ferneda, Brazil Maria Jo˜ ao Silva Costa Ferreira, Portugal Paulo Ferreira, Portugal Filomena Ferrucci, Italy Barry Floyd, USA Rita Francese, Italy Ana Fred, Portugal Mariagrazia Fugini, Italy Jose A. Gallud, Spain Ana Cristina Bicharra Garcia, Brazil Marcela Genero, Spain Joseph Giampapa, USA Ra´ ul Gir´ aldez, Spain Pascual Gonzalez, Spain
Robert Goodwin, Australia Silvia Gordillo, Argentina Virginie Govaere, France Janis Grabis, Latvia Maria Carmen Penad´es Gramaje, Spain Juan Carlos Granja, Spain Sven Groppe, Germany Maki K. Habib, Egypt Sami Habib, Kuwait Yaakov Hacohen-Kerner, Israel Sven Hartmann, Germany Christian Heinlein, Germany Ajantha Herath, USA Suvineetha Herath, USA Wladyslaw Homenda, Poland Jun Hong, UK Wei-Chiang Hong, Taiwan Jiankun Hu, Australia Kaiyin Huang, The Netherlands Akram Idani, France Arturo Jaime, Spain Ivan Jelinek, Czech Republic Paul Johannesson, Sweden Michail Kalogiannakis, Greece Nikos Karacapilidis, Greece Nikitas Karanikolas, Greece Stamatis Karnouskos, Germany Marite Kirikova, Latvia Alexander Knapp, Germany Stan Kurkovsky, USA Rob Kusters, The Netherlands Alain Leger, France Daniel Lemire, Canada Joerg Leukel, Germany Qianhui Liang, Singapore Matti Linna, Finland Stephane Loiseau, France Gabriel Pereira Lopes, Portugal Jo˜ ao Correia Lopes, Portugal Maria Filomena Cerqueira de Castro Lopes, Portugal V´ıctor L´opez-Jaquero, Spain Pericles Loucopoulos, UK Miguel R. Luaces, Spain
Christof Lutteroth, New Zealand Mark Lycett, UK Cristiano Maciel, Brazil Rita Suzana Pitangueira Maciel, Brazil Edmundo Madeira, Brazil Laurent Magnin, France S. Kami Makki, USA Mirko Malekovic, Croatia Nuno Mamede, Portugal Broy Manfred, Germany Pierre Maret, France Tiziana Margaria, Germany Herve Martin, France Katsuhisa Maruyama, Japan David Martins de Matos, Portugal Andreas Meier, Switzerland Michele Missikoff, Italy Ghodrat Moghadampour, Finland Pascal Molli, France Francisco Montero, Spain Carlos Le´on de Mora, Spain Paula Morais, Portugal Fernando Moreira, Portugal Nathalie Moreno, Spain Haralambos Mouratidis, UK Pietro Murano, UK Tomoharu Nakashima, Japan Ana Neves, Portugal Engelbert Mephu Nguifo, France Rocco Oliveto, Italy Hichem Omrani, Luxembourg Samia Oussena, UK Sietse Overbeek, The Netherlands Tansel Ozyer, Canada Eric Pardede, Australia Rodrigo Paredes, Chile Vicente Pelechano, Spain Massimiliano Di Penta, Italy Dana Petcu, Romania Leif Peterson, USA Ramalingam Ponnusamy, India Abdul Razak Rahmat, Malaysia Jolita Ralyte, Switzerland Srini Ramaswamy, USA T. Ramayah, Malaysia
Pedro Ramos, Portugal Hajo A. Reijers, The Netherlands Ulrich Reimer, Switzerland Marinette Revenu, France Yacine Rezgui, UK Nuno de Magalh˜ aes Ribeiro, Portugal Michele Risi, Italy Alfonso Rodriguez, Chile Daniel Rodriguez, Spain Oscar M. Rodriguez-Elias, Mexico Jose Raul Romero, Spain Agostinho Rosa, Portugal Gustavo Rossi, Argentina Francisco Ruiz, Spain Roberto Ruiz, Spain Danguole Rutkauskiene, Lithuania Ozgur Koray Sahingoz, Turkey Priti Srinivas Sajja, India Belen Vela Sanchez, Spain Manuel Filipe Santos, Portugal Jurek Sasiadek, Canada Daniel Schang, France Sissel Guttormsen Sch¨ ar, Switzerland Jianhua Shao, UK Alberto Silva, Portugal Spiros Sirmakessis, Greece Chantal Soule-Dupuy, France Martin Sperka, Slovak Republic
Marco Spruit, The Netherlands Martin Stanton, UK Chris Stary, Austria Janis Stirna, Sweden Renate Strazdina, Latvia Stefan Strecker, Germany Chun-Yi Su, Canada Ryszard Tadeusiewicz, Poland Sotirios Terzis, UK Claudine Toffolon, France Theodoros Tzouramanis, Greece ˆ Jos´e Angelo Braga de Vasconcelos, Portugal Michael Vassilakopoulos, Greece Christine Verdier, France Aurora Vizcaino, Spain Bing Wang, UK Hans Weghorn, Germany Gerhard Weiss, The Netherlands Wita Wojtkowski, USA Viacheslav Wolfengagen, Russian Federation Robert Wrembel, Poland Mudasser Wyne, USA Haiping Xu, USA Lili Yang, UK Lin Zongkai, China
Auxiliary Reviewers Laden Aldin, UK Mohammad A.L. Asswad, UK Evandro Baccarin, Brazil Matthew Bardeen, Chile Ana Cerdeira-Pena, Spain José Antonio Cruz-Lemus, Spain Félix Cuadrado, Spain Andrea Delgado, Spain Hugo A. Parada G., Spain Boni García, Spain Rodrigo Garcia-Carmona, Spain Yiwei Gong, The Netherlands Carmine Gravino, Italy Andreas Kupfer, Germany
Bernardi Mario Luca, Italy Francisco Martinez-Alvarez, Spain Isabel Nepomuceno, Spain Antonio De Nicola, Italy Oscar Pedreira, Spain Beatriz Pontes, Spain Rabie Saidi, France Federica Sarro, Italy Diego Seco, Spain Clare Stanier, UK Sarah Tauscher, Germany Luigi Troiano, Italy Mark Vella, Australia Philip Windridge, UK
Invited Speakers Robert P.W. Duin, TU Delft, The Netherlands Runtong Zhang, Beijing Jiaotong University, China David Olson, University of Nebraska, USA Michel Chein, LIRMM, University of Montpellier 2, France Anind K. Dey, Carnegie Mellon University, USA
Table of Contents

Part I: Databases and Information Systems Integration

Multi-flow Optimization via Horizontal Message Queue Partitioning
   Matthias Boehm, Dirk Habich, and Wolfgang Lehner

An XML-Based Streaming Concept for Business Process Execution . . . 60
   Steffen Preissler, Dirk Habich, and Wolfgang Lehner

A Framework to Assist Environmental Information Processing . . . 76
   Yuan Lin, Christelle Pierkot, Isabelle Mougenot, Jean-Christophe Desconnets, and Thérèse Libourel

Using Visualization and a Collaborative Glossary to Support Ontology Conceptualization . . . 90
   Elis C. Montoro Hernandes, Deysiane Sande, and Sandra Fabbri

A Strategy to Support Software Planning Based on Piece of Work and Agile Paradigm . . . 104
   Deysiane Sande, Arnaldo Sanchez, Renan Montebelo, Sandra Fabbri, and Elis Montoro Hernandes

Evaluating the Quality of Free/Open Source Systems: A Case Study . . . 119
   Lerina Aversano and Maria Tortorella

Business Object Query Language as Data Access API in ERP Systems . . . 135
   Vadym Borovskiy, Wolfgang Koch, and Alexander Zeier

Part II: Artificial Intelligence and Decision Support Systems

Knowledge-Based Engineering Template Instances Update Support
   Olivier Kuhn, Thomas Dusch, Parisa Ghodous, and Pierre Collet

Part III: Information Systems Analysis and Specification

Process Mining for Job Nets in Integrated Enterprise Systems . . . 299
   Shinji Kikuchi, Yasuhide Matsumoto, Motomitsu Adachi, and Shingo Moritomo

Identifying Ruptures in Business-IT Communication through Business Models . . . 311
   Juliana Jansen Ferreira, Renata Mendes de Araujo, and Fernanda Araujo Baião

A Business Process Driven Approach to Manage Data Dependency Constraints . . . 326
   Joe Y.-C. Lin and Shazia Sadiq

Using Cases, Evidences and Context to Support Decision Making . . . 340
   Expedito Carlos Lopes, Vaninha Vieira, Ana Carolina Salgado, and Ulrich Schiel

An Adaptive Optimisation Method for Automatic Lightweight Ontology Extraction . . . 357
   Fabio Clarizia, Luca Greco, and Paolo Napoletano

Automating the Variability Management, Customization and Deployment of Software Processes: A Model-Driven Approach . . . 372
   Fellipe Araújo Aleixo, Marília Aranha Freire, Wanderson Câmara dos Santos, and Uirá Kulesza

A Formalization Proposal of Timed BPMN for Compositional Verification of Business Processes . . . 388
   Luis E. Mendoza Morales, Manuel I. Capel Tuñón, and María A. Pérez

From Coding to Automatic Generation of Legends in Visual Analytics . . . 404
   Guillaume Artignan and Mountaz Hascoët

Part IV: Software Agents and Internet Computing

Improving QoS Monitoring Based on the Aspect-Orientated Paradigm . . . 421
   Mario Freitas da Silva, Itana Maria de Souza Gimenes, Marcelo Fantinato, Maria Beatriz Felgar de Toledo, and Alessandro Fabricio Garcia

Directed Retrieval and Extraction of High-Quality Product Specifications . . . 436
   Maximilian Walther, Ludwig Hähne, Daniel Schuster, and Alexander Schill

Using XML Schema Subtraction to Compress Electronic Payment Messages . . . 451
   Stefan Böttcher, Rita Hartel, and Christian Messinger

Enhancing the Selection of Web Sources: A Reputation Based Approach
   Donato Barbagallo, Cinzia Cappiello, Chiara Francalanci, and Maristella Matera

Simulation Management for Agent-Based Distributed Systems
   Ante Vilenica and Winfried Lamersdorf
Enterprise Information System Trends
David L. Olson (Department of Management, University of Nebraska – Lincoln, Lincoln, NE 68588-0491, Nebraska, U.S.A., [email protected]) and Subodh Kesharwani (Indira Gandhi National Open University, New Delhi-110068, India, [email protected])
Abstract. Enterprise information systems (EIS) reflect the expansion of enterprise resource planning to include supply chain management and customer relationship management functionality. The evolution of EIS development is reviewed. The general state of EIS research is considered. The features expected of the next generation of EIS software are forecast. Emerging issues of EIS risk management and open source opportunities are discussed. Conclusions discuss inferences the authors make concerning the development and expansion of these important systems to better support organizations of all types. Keywords: Enterprise information systems, Risk management, Open source.
software product, which they expanded to include a slate of other modules. Prior to entering the ERP market directly, Oracle was the leading database software vendor. This paper reviews the evolution of enterprise information systems and synthesizes survey results on adoption motivations. The evolution of EIS is discussed in Section 2. Section 3 discusses the authors' views on the next generation of EIS. Emerging issues related to risk management and developments in open source EIS software are discussed as important research areas in Section 4. Section 5 provides conclusions.
2 Development of EIS

In the early 1970s, business computing relied upon centralized mainframe computer systems. These systems proved their value by providing a systematic way to measure what businesses did financially. The reports these systems delivered could be used for analysis of variance with budgets and plans, and served as a place to archive business data. Computing provided a way to keep records much more accurately, and on a massively larger scale than was possible through manual means. But from our perspective at the beginning of the 21st century, that level of computer support was primitive. Business computing systems were initially applied to those functions that were easiest to automate and that called for the greatest levels of consistency and accuracy. Payroll and accounting functions were an obvious initial application. Computers can be programmed to generate accurate paychecks, considering tax and overtime regulations of any degree of complexity. They can also implement accounting systems for tax, cost, and other purposes; because these functional applications tend to have precise rules that cover almost every case, computers can be entrusted to take care of everything related to these functions automatically and rapidly. Developments of critical importance over the years reflect the evolution of systems, through MRP to ERP and EIS.
• 1960s: The focus of manufacturing systems in this era was on inventory control. Software packages were designed to handle inventory based on traditional inventory concepts.
• 1970s: MRP (Material Requirements Planning) systems emerged in this era. These systems translated the master schedule built for the end items into time-phased net requirements for the planning and procurement of sub-assemblies, components, and raw materials (a small netting sketch follows this list).
• 1980s: The concept of MRP II evolved, an extension of MRP to shop floor and distribution management activities.
• 1990s: MRP II was extended into a new form known as ERP, which covered areas such as engineering, finance, human resources, and project management as a whole. Enterprise resource planning is a technological approach for EIS.
• 2000s: ERP-II or MRP III (Money Resource Planning) originated with the motive of emphasizing the planning of money in an optimal manner.
• 2005: The term EIS was formed to express a contemporary view that includes web-enabled features, full integration, and multi-enabled systems.
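The time-phased netting step mentioned in the 1970s MRP bullet can be illustrated with a short sketch. This is only an illustration: the item quantities, lead time, and implicit lot-for-lot policy below are hypothetical, and real MRP engines add lot-sizing rules, multi-level bill-of-material explosion, and pegging of scheduled receipts.

```python
# Minimal MRP netting sketch: gross requirements from the master schedule are
# netted against on-hand inventory and scheduled receipts, then offset by the
# lead time to give planned order releases. Quantities are hypothetical.

def mrp_net(gross_requirements, on_hand, scheduled_receipts, lead_time):
    """Return (net_requirements, planned_order_releases) per period."""
    periods = len(gross_requirements)
    available = on_hand
    net = [0] * periods
    for t in range(periods):
        available += scheduled_receipts[t]           # receipts arrive first
        shortfall = gross_requirements[t] - available
        if shortfall > 0:
            net[t] = shortfall                        # demand not covered by stock
            available = 0
        else:
            available -= gross_requirements[t]        # consume stock
    # Planned order releases are net requirements shifted earlier by the lead time.
    releases = [0] * periods
    for t, qty in enumerate(net):
        if qty:
            releases[max(t - lead_time, 0)] += qty
    return net, releases

if __name__ == "__main__":
    gross = [0, 40, 0, 60, 30, 0]      # from the master production schedule
    receipts = [0, 20, 0, 0, 0, 0]     # already-ordered quantities
    net, releases = mrp_net(gross, on_hand=35, scheduled_receipts=receipts, lead_time=2)
    print("net requirements     :", net)
    print("planned order release:", releases)
```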
Prior to 2000, ERP systems catered to very large firms, which could afford the rather high costs of purchasing ERP systems. Even focusing on a selected few modules would typically cost firms $5 million and up for software. After 2000, demand dropped noticeably, in part because Y2K concerns had motivated many ERP system acquisitions before 2000, and that driver disappeared once 2000 came and went. Vendors reacted in a number of ways. The market consolidated, with Oracle purchasing PeopleSoft (which had earlier acquired JD Edwards). Microsoft acquired a number of smaller ERP software products, consolidating them into Microsoft Dynamics, which caters to a lower-priced market, thus filling a gap in ERP coverage for small-to-medium sized businesses. Notably, SAP advertises that it can serve small businesses too, but it appears to be more valuable in the large-scale enterprise market. There are many other systems, including open-source systems (at least for acquisition) like Compiere in France. Many countries, such as China and India, have thriving markets for EIS systems designed specifically for local conditions, although SAP and Oracle have customers all over the globe. The first generation of ERP software was characterized by procedural legacy code, with a bloated support infrastructure that required many consultants and drained profit from organizations. The emergence of object-oriented architecture and associated design and development tools, including visual design methodologies such as UML, created a paradigm shift in computer program design and development.
3 The Next Generation of Enterprise Software

The next generation (EIS) involved enhancement of the system by use of context-relevant pop-up and pull-down menus. Productivity was enhanced, and errors and frustration were lowered along with the training time required. EIS emphasizes Web-enablement to support e-business, with integration of server-side databases, use of standards such as XML and J2EE, and addition of supply chain management, customer relationship management, and business intelligence. There is also improved security made available through standard approaches such as data encryption and digital certificates. Current business circumstances demand innovative forms of enterprises. The next generation enterprise system will be a powerful two-way, real-time system applying current technology to systematically deal with innovative changes in strategy and execution at high speed without error, persistently focused on customer needs. Next generation enterprise systems will operate on demand. By on demand, we mean responding to an enterprise's customer requests in real time. Some of the next generation enterprise system features are as follows:
• The ability to modify the present system in response to dynamic organizational business needs.
• BAAN started Intelligence Resource Planning and MRP-III to meet changing needs. New generation ERP systems include component architecture that is very much modular in character.
• Future trends include backward compatibility, generalized solutions, friendlier interfaces, object orientation, and Web enablement.
• User preferences like cost understanding, specificity and faster solutions are provided.
• Extension to SCM, e-commerce, intelligent applications and customization are prime focuses in the future.
The Internet represents the foremost new technology enabler, enabling speedy supply chain management between multiple operations and trading partners. Most EIS/ERP systems are enhancing their products to become "Internet enabled" so that customers worldwide can have direct access to the supplier's enterprise system. Enterprise systems are building in workflow management functionality, which provides a means to deal with and control the flow of work by monitoring logistic aspects like workload and the processing times of various supply chain members (a small monitoring sketch follows below). Recognizing the need to go beyond MRP II and EIS/ERP capabilities, vendors are adding to their product assortment. BAAN has introduced concepts like IRP (Intelligence Resource Planning) and MRP-III (Money Resources Planning) and strategic technologies like visual product configuration, product data management and finite scheduling.
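As a rough sketch of the workload and processing-time monitoring mentioned in the workflow paragraph above, the following code logs completed workflow steps per supply chain member and summarizes them. The member names, step names, and durations are hypothetical; a production workflow engine would persist such measurements in the EIS database rather than in memory.

```python
# Sketch of workflow monitoring: each completed step is logged with its owner
# and duration, and a summary reports workload (step count) and mean
# processing time per supply chain member. Names and times are hypothetical.
from collections import defaultdict
from statistics import mean

class WorkflowMonitor:
    def __init__(self):
        self.durations = defaultdict(list)   # member -> list of hours

    def log_step(self, member, step, hours):
        self.durations[member].append(hours)
        print(f"completed {step!r} at {member} in {hours:.1f} h")

    def summary(self):
        return {m: {"workload": len(d), "mean_hours": round(mean(d), 2)}
                for m, d in self.durations.items()}

monitor = WorkflowMonitor()
monitor.log_step("supplier", "confirm order", 4.0)
monitor.log_step("supplier", "ship parts", 30.0)
monitor.log_step("manufacturer", "assemble", 16.0)
print(monitor.summary())
```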
4 EIS Research Directions

The historical developments in this field are driven by the market, but in an economy molded to a great extent by vendor marketing. Thus academic research has focused on the basic research tools of case study reports and surveys. Recent case studies include MRP integration within ERP [2], international system implementation [3], and many on the supply chain impact of EIS [4], [5], [6]. There are many case studies [7], including famous reviews of the problems with Hershey's ERP in 1999, when the company rushed its installation project to add Y2K-compliant features, which led to near-catastrophic operational performance, sending truckloads of candy to warehouses that were already full while leaving low-inventory warehouses empty after implementing their ERP. There is also the case of FoxMeyer Drug, which implemented systems in the 1990s and went bankrupt, followed by the success of McKesson Drug, which purchased the bankrupt assets and successfully installed a similar ERP system. As with all case research, each provides an interesting glimpse of what happened in one set of circumstances. But while there are lessons to be learned from each case, it is very difficult to generalize, as each case involves so many variable factors. The next type of research involves surveys of system users. There have been many surveys, but one stream of survey research inaugurated at Indiana University [8], [9], [10] has taken off and has been replicated in Sweden [11] and South Korea [12]. The results of this stream of research are reported here. This same group has more recently examined EIS features [13]. Recent studies have also focused on ERP implementation success [14]. Gartner Group consistently reports that IS/IT projects significantly exceed their time (and cost) estimates. Thus, while almost half of the surveyed firms reported expected implementation expense to be less than $5 million, we consider that figure to still be representative of the minimum scope required. However, recent trends on the part of vendors to reduce implementation time have probably reduced EIS installation cost.
4.1 EIS Risk Management

Managing risk on an EIS project is crucial to its success. Risk is a potential failure point. There are thousands, maybe even millions of potential failure points on an EIS project, in the form of untested technology (and untested staff), political landmines, and even nature's fury (tornados have occurred during an EIS go-live weekend). While various risk management books and methodologies offer variations on a theme, there are generally five steps to managing risk:
1. Find potential failure points or risks.
2. Analyze the potential failure points to determine the damage they might do.
3. Assess the probability of the failure occurring.
4. Based on the first three factors, prioritize the risks.
5. Mitigate the risks through whatever action is necessary.
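Steps 2–4 amount to scoring each identified failure point and sorting the list. The sketch below uses a simple probability-times-impact score and the three priority groups named in Step 4; the risks, probabilities, impacts, and thresholds are hypothetical and would in practice come from the risk management team.

```python
# Sketch of risk prioritization (steps 2-4): score = probability x impact,
# then bucket risks into the three groups of step 4. Entries are hypothetical.

risks = [
    {"risk": "cost estimate overrun",          "probability": 0.6, "impact": 9},
    {"risk": "no executive sponsor",            "probability": 0.2, "impact": 10},
    {"risk": "under-qualified project manager", "probability": 0.3, "impact": 7},
    {"risk": "unclear project objectives",      "probability": 0.4, "impact": 8},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    if r["score"] >= 5:
        group = "eliminate"          # heavy impact on critical processes
    elif r["score"] >= 2:
        group = "monitor regularly"  # needs management attention
    else:
        group = "watch"              # minor, team awareness only
    print(f"{r['risk']:<35} score={r['score']:.1f}  -> {group}")
```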
Project team members must rely on their experience and advice from others to find potential failure points or risks. Closely monitor activities throughout EIS installation and look for areas of ambiguity.
Step 1: One of the easiest and most effective ways to find potential failure points is to talk to other organizations that have done similar projects. Cost estimates are probably the most common potential project failure point. Other potential failure points include lack of an executive sponsor, an under-qualified project manager, and no clear objectives for the project.
Step 2: The next step is to determine the severity of the potential failure on the budget, project timeline, or the users' requirements.
Step 3: Assessing the likely impact and the probability of the failure occurring is more art than science, requiring in-depth knowledge of both the EIS package and the business. A risk management team should be built that brings together those individuals that have the knowledge and experience to know what might happen. This team must have experience in implementing the specific EIS package for an organization of approximately the same size and in the same industry.
Step 4: Prioritize the risks into three groups. Decide which risks should be eliminated completely, because of potential for heavy impact on critical business processes. Set up a monitoring plan for risks that should have regular management attention. Make the entire team aware of those risks that are sufficiently minor to avoid detailed management attention, but which the team should watch for potential problems.
Step 5: Mitigate risks by reducing either the probability or the impact. The probability can be reduced by action up front to ensure that a particular risk is less likely to occur. The project risk plan should include a set of steps to recover from each risk, should failure occur. The team must know the person accountable for recovery from each specific risk, and the action to be taken to resolve it. The team must know the symptoms of the impending failure, and act to prevent it from occurring if possible. An example is to test a particular operating system or hardware component to prove that
it works prior to go live. Doing a pilot implementation or prototyping the first set of EIS interfaces are both examples of risk mitigation. Project management should first analyze the system, then proceed in accordance with the plan developed from this analysis.

Information Technology Security Process. As a means to attain information technology security, Tracy proposed a series of steps [15]:
Establish a Mentality. To be effective, the organization members have to buy in to operating securely. This includes sensible use of passwords. Those dealing with critical information probably need to change their passwords at least every 60 days, which may be burdensome, but provides protection for highly vulnerable information. Passwords themselves should be difficult to decipher, running counter to what most of us are inclined to use. Training is essential in inculcating a security climate within the organization.
Include Security in Business Decision Making. When software systems are developed, especially in-house, an information security manager should certify that organizational policies and procedures have been followed to protect organizational systems and data. When pricing products, the required funding for security measures needs to be included in business cases.
Establish and Continuously Assess the Network. Security audits need to be conducted using testable metrics. These audits should identify lost productivity due to security failures, including subsequent user awareness training. Automation can be applied in many cases to accomplish essential risk compliance and assessment tasks. This can include vulnerability testing, as well as incident management and response. The benefits can include better use of information, lower cost of compliance, and more complete compliance with regulations such as Sarbanes-Oxley and HIPAA.
Within this general framework, Tracy gave a security process cycle, described in Table 1:

Table 1. Tracy's Security Process Cycle
Process   | IT Impact                          | Function
Inventory | Assets available                   | Access assets in hardware and software
Assess    | Vulnerabilities                    | Automatically alert those responsible for patch management, compliance
Notify    | Who needs to know?                 | Automatically alert those responsible for patch management, compliance
Remediate | Action needed                      | Automate security remediation by leveraging help desks, patch databases, configuration management tools
Validate  | Check if corrective actions worked | Automatically confirm that remediation is complete, record compliance and confirm compliance with risk posture policies
Report    | Can you obtain needed information? | Give management views of enterprise IT risk and compliance
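One way to read Table 1 is as a fixed sequence of stages that an EIS can drive automatically for each asset. The sketch below is only an illustration of that cycle; the asset name, the single vulnerability, and the simplified stage actions are hypothetical.

```python
# Sketch of Tracy's cycle as an automated loop: each stage's function from
# Table 1 is reduced to a logged action. Assets and findings are hypothetical.

STAGES = ["inventory", "assess", "notify", "remediate", "validate", "report"]

def run_cycle(asset, vulnerabilities):
    findings = {"asset": asset, "open": list(vulnerabilities)}
    for stage in STAGES:
        if stage == "notify" and findings["open"]:
            print(f"alerting patch-management owner for {asset}: {findings['open']}")
        elif stage == "remediate":
            findings["patched"] = findings.pop("open")   # apply fixes
            findings["open"] = []
        elif stage == "validate":
            assert not findings["open"], "remediation incomplete"
        elif stage == "report":
            print(f"{asset}: {len(findings['patched'])} issue(s) closed this cycle")
    return findings

run_cycle("payroll-server", ["outdated TLS library"])
```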
This cycle emphasizes the ability to automate within an enterprise information system context.

EIS Security Risks. Dhillon and Torkzadeh presented a value-analysis hierarchy to sort objectives related to information systems security [16]. That process involved three steps:
1. Interviews to elicit individual values.
2. Converting individual values and statements into a common format, generally in the form of object and preference. This step included clustering objectives into groups of two levels.
3. Classifying objectives as either fundamental to the decision context or as a means to achieve fundamental objectives.
The Dhillon and Torkzadeh value-focused hierarchy for information systems security risk, and select elements, can be applied to ERP selection as in Table 2:

Table 2. EIS Security Risk Objectives
Fundamental objectives                               | Components
Enhance management development practices             | Develop IT staff capabilities
Provide adequate human resource management practices | Encourage high group morale
Maximize access control                              | Minimize unauthorized access to information
Promote individual work ethic                        | Minimize temptation to use information for personal benefit
Maximize data integrity                              | Minimize unauthorized changes; Ensure data integrity
Maximize privacy                                     | Emphasize importance of personal privacy; Emphasize importance of rules against disclosure
Extracted from [17].
EIS by their nature make information systems more secure. Means objectives reflect how fundamental objectives are attained. With respect to EIS impact on information systems security, there are a number of ways in which the fundamental objectives listed in Table 2 are attained.
• Key components of EIS are business process reengineering and best practices, which lead to better IT staff capabilities.
• Open communication is provided, improving IT department interactiveness and open communication. With respect to overall organizational morale, while the initial installation of an EIS can adversely impact morale significantly, over time the long-run efficiencies of the system can lead to higher morale if the benefits lead to fair sharing of gains and make performing work more efficient.
• EIS systems by their nature control access to information, which leads to better organizational security with respect to its information, and data integrity.
• EIS systems provide mechanisms for an information audit trail, minimizing temptation to abuse information access.
• EIS access controls make accountability for changes transparent, thus minimizing the incentive to abuse the system in this manner. Data integrity is also enhanced by the fundamental feature of centralizing databases.
• These access controls also enable compliance with privacy regulations, both in the literal meaning as well as the spirit of the law.

Supply Chain IT Risks. Information technology makes supply chains work through the communication needed to coordinate activities across organizations, often around the world [18]. These benefits require openness of systems across organizations. While techniques have been devised to provide the required level of security that enables us to do our banking on-line, and for global supply chains to exchange information expeditiously with confidence in the security of doing so, this only happens because of the ability of information systems staff to make data and information exchange secure. IT support to supply chains involves a number of operational forms, including vendor managed inventory (VMI), collaborative planning, forecasting and replenishment (CPFR), and others. These forms include varying levels of information system linkage across supply chain members, which have been heavily studied [19]. Within supply chains, IT security incidents can arise from within the organization, within the supply chain network, or in the overall environment. Within each threat origin, points of vulnerability can be identified and risk mitigation strategies customized. The greatest threat is loss of confidentiality. Smith et al. [20] gave an example of a supplier losing their account when a Wal-Mart invoice was unintentionally sent to Costco with a lower price for items carried by both retailers. Supply chains require data integrity, as systems like MRP and ERP don't function without accurate data. Inventory information is notoriously difficult to maintain accurately.

EIS Sources. Information systems (IS) projects involve relatively higher levels of uncertainty than most other types of projects. EIS implementations tend to be on the large end of the IS project spectrum. There are many options for implementation of an EIS:
1. Adoption of a full EIS package from a single vendor source
2. Single EIS vendor source with internally developed modifications
3. Best-of-breed: adoption of modules from different vendor sources
4. Modules from vendor sources with internal modifications
5. In-house development
6. In-house development supplemented by some vendor products
7. Application service providers (ASP)
Furthermore, open source developments can be adopted, either as vendors of developed systems or as components for in-house development. There is thus a spectrum of options for EIS. Barring ASP, the easiest method is to adopt a system provided by a single vendor, without modifications (number 1 above). But this isn’t necessarily the least expensive option, nor will it necessarily provide the greatest benefits to the firm. The reason to use the best-of-breed approach (number 3 above), using modules from different
vendors, is that the functionality obtained from specific modules may be greater in one area for one vendor, but better in another module area (with respect to the needs of the specific adopting organization) from another vendor. EIS systems could be developed in-house (number 5 above). This is not recommended. If this method were adopted, a great deal of IS/IT project management effort would be necessary. As implied by variants numbered 2, 4, and 6, blends of each of these forms of EIS implementation have been applied as well. Finally, EIS could be outsourced (number 7 above), through application service providers. This can result in the lowest-cost method of installation. That may involve a lot of convenience at the cost of a lot of control.

Outsourcing Risks. A major risk in information systems is outsourcing. Outsourcing provides many opportunities to operate at lower cost, improve quality of product or service, access the best available technology, and access the open market around the world. Outsourcing those functions that are not an organization's core competencies allows it to focus on those core business activities that are. But outsourcing in IT involves many risks. Some are listed below with sources:
• Loss of control [21]
• Hidden costs [22]
• Disagreements, disputes, and litigation [23]
• Vendor opportunism [24]
• Information security exposure [25]
• Degradation of service [26]
• Poaching [27]
4.2 Open Source ERP

Web services provide a convenient way to access existing internal and external information resources. They use a number of technologies to build programming solutions for specific messaging and application integration problems [28]. However, building a new information system is in some ways like building a new house. Web services may be analogous to cement and bricks; blueprint and engineering knowledge are more important. SOA gives the picture of what can be done with Web services. SOA exploits the business potential of Web services, which can lead to a type of convergence by enabling organizations to access better methods at lower cost through technology. SOA is a strategy based on turning applications and information sources which reside in different organizations, different systems and different execution environments into "services" that can be accessed with a common interface regardless of the location or technical makeup of the function or piece of data. The common interface must be agreed upon within the environment of systems that can access or invoke that service. A service within SOA either provides information or facilitates a change to business data from one valid and consistent state to another. Services are invoked through defined communication protocols. The pivotal part of SOA is how communication between different data formats can be accomplished. Web services, which are independent of the operational environment, allow this communication.
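The idea of a common interface that hides the location and technical makeup of a capability can be sketched as follows. The service contract, the endpoint URL, and the stock data are hypothetical, and a production SOA stack would express the contract in WSDL or a comparable service description rather than a Python protocol.

```python
# Sketch of the SOA idea: callers depend only on a service contract; whether
# the capability runs in-process or behind a remote endpoint is hidden.
# The contract, endpoint URL and data are hypothetical.
from typing import Protocol
import json
import urllib.request

class InventoryService(Protocol):
    def stock_level(self, sku: str) -> int: ...

class LocalInventory:
    def __init__(self, levels: dict):
        self._levels = levels
    def stock_level(self, sku: str) -> int:
        return self._levels.get(sku, 0)

class RemoteInventory:
    def __init__(self, base_url: str):
        self._base_url = base_url
    def stock_level(self, sku: str) -> int:
        with urllib.request.urlopen(f"{self._base_url}/stock/{sku}") as resp:
            return json.load(resp)["quantity"]

def reorder_needed(service: InventoryService, sku: str, threshold: int) -> bool:
    # The caller is written against the contract only.
    return service.stock_level(sku) < threshold

print(reorder_needed(LocalInventory({"VALVE-10": 3}), "VALVE-10", threshold=5))
```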
The goal of EIS is to integrate and consolidate all the old departments across an organization into one system that can meet and serve each department's unique needs and tasks. Therefore, every aspect of an organization's business process needs to have a unified application interface, which provides high competitiveness in the market. Enterprises have invested heavily in EIS acquisition, while small businesses or entrepreneurs often could not afford one, mainly due to its high upfront prices and a lack of resources to maintain the system. To attack this niche market of EIS in the small to medium-sized business sector, vendors have developed transformed EISs by adopting the most advanced information technologies available. The most widely available business models of EIS include software as a service (SaaS), open source software (OSS) and service oriented architecture (SOA). SaaS offers EIS as a service that clients can access via the Internet. Smaller companies are spared the expenses associated with software installation, maintenance and upgrades. Mango Network, an Irving, Texas, software and services company, is a channel for providing software and services for small and midsize wholesale and retail distributors. It combines the pure open-source business model and SaaS: Compiere, which is a pure open-source company, provides the products and Mango sells them through SaaS. Mango charges annual fees based on a customer's revenue, rather than monthly fees based on the number of users. The Organization for the Advancement of Structured Information Standards (OASIS) defines SOA as: A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations. SOA-driven EIS is not only beneficial to enterprises, as many believe, but also to SMBs. OSS EISs have allowed small and medium-sized businesses access to EIS. The benefits of applying OSS are as follows:
• Increased adaptability: Since EIS is not plug and play, implementation processes are necessary to match the company's business processes and local regulations. Having full access to the EIS source code is beneficial.
• Decreased reliance on a single supplier: Proprietary EISs depend highly on the services of vendors and distributors. Upgrading and maintenance services can be obtained only from a single source.
• Reduced costs: Proprietary EIS licenses are expensive. OSS EISs' average implementation costs are between one-sixth and one-third of the costs for typical proprietary EISs.
The most common business model of OSS is based on a simple idea – free for use, modification, resale, and a fee for services including implementation. Most EIS-related open-source software uses the Web for delivery of free software. There is at least one product (OpenMFG) allowing users to participate in software development, but with software vendor filtering. Open filtering models have not appeared to date. Among the many open-source EIS, Compiere has appeared most often in research articles and business reports. Compiere recorded more than 1.2 million downloads of its software and has more than 100 partners in 25 countries [29]. They don't sell software but sell services – security and support. They do not allow just anyone to contribute code – the majority of code contributors are trained partners who understand the company's business model.
The EIS software OpenMFG allows community members, including customers and partners, to get the source code and extend and enhance it. The company then brings the enhancements into the product.
5 Conclusions There is a new paradigm in ERP. What used to be integrated systems serving single organizations in a closed, secure way are now much more open, adding functionality in the form of business intelligence and supply chain linkages. A common term for this is enterprise information system (EIS). The new term doesn’t mean that the old value has disappeared – it is just more general. We see two areas that we think are growing in importance with respect to enterprise systems. Risk management has become much more important in the last decade, with motivation coming from corporate scandals leading to Basel II and Sarbanes-Oxley regulations. A risk management framework for EIS project was suggested, a security process for IT in general given, and more specific EIS security issues were reviewed. Supply chain IT risk was also discussed. Enterprise systems are needed to comply with all of the additional reporting requirements. Securing them from intrusion, and maintaining the quality of the information they contain, is growing in importance. The spectrum of EIS options was presented. Another area of opportunity is open source enterprise systems. OSS has become a rapidly growing factor in international enterprise system development. Open systems offer the opportunity to better customize systems to each organization, and offer alternatives for smaller businesses to acquire enterprise system functionality at a reasonable cost. While open source systems are not expected to threaten SAP and Oracle, they do offer the opportunity for growth in the use of integrated systems to more organizations in more parts of the world.
References 1. Olson, D.L., Kesharwani, S.: Enterprise Information Systems: Contemporary Trends and Issues. World Scientific, Singapore (2010) 2. Berchet, C., Habchi, G.: The Implementation and Deployment of an ERP System: An Industrial Case Study. Computers in Industry 56(6), 588–605 (2005) 3. Chen, R.-S., Sun, C.-M., Helms, M.M.: Role Negotiation and Interaction: An Exploratory Case Study of the Impact of Management Consultants on ERP System Implementation in SMEs in Taiwan. Information Systems Management 25(2), 159–173 (2008) 4. Bose, I., Pal, R., Ye, A.: ERP and SCM Systems Integration: The Case of a Valve Manufacturer in China. Information & Management 45(4), 233–241 (2008) 5. Dai, Z.: Supply Chain Transformation by ERP for Enhancing Performance: An Empirical Investigation. Advances in Competitiveness Research 16(1), 87–98 (2008) 6. Tarantilis, C.D., Kiranoudis, C.T., Theodorakopoulos, N.D.: A Web-based ERP System for Business Services and Supply Chain Management: Application to Real-world Process Scheduling. European Journal of Operational Research 187(3), 1310–1326 (2008) 7. Olson, D.L.: Managerial Issues in Enterprise Resource Planning Systems. Irwin/McGrawHill, Englewood Cliffs/NJ (2004)
8. Mabert, V.M., Soni, A., Venkataramanan, M.A.: Enterprise Resource Planning Survey of US Manufacturing Firms. Production and Inventory Management Journal 41(20), 52–58 (2000) 9. Mabert, V.M., Soni, A., Venkataramanan, M.A.: Enterprise Resource Planning: Measuring Value. Production & Inventory Management Journal 42(3/4), 46–51 (2001) 10. Mabert, V.M., Soni, A., Venkataramanan, M.A.: The Impact of Organization Size on Enterprise Resource Planning (ERP) Implementations in the US Manufacturing Sector. Omega 31(3), 235–246 (2003) 11. Olhager, J., Selldin, E.: Enterprise Resource Planning Survey of Swedish Manufacturing Firms. European Journal of Operational Research 146, 365–373 (2003) 12. Katerattanakul, P., Hong, S., Lee, J.: Enterprise Resource Planning Survey of Korean Manufacturing Firms. Management Research News 29(12), 820–837 (2006) 13. Watts, C.A., Mabert, V.A., Hartman, N.: Supply Chain Bolt-ons: Investment and Usage by Manufacturers. International Journal of Operations & Production Management 28(12), 1219–1243 (2008) 14. Li, L., Markowski, C., Xu, L., Markowski, E.: TQM – A Predecessor of ERP Implementation. International Journal of Production Economics 115(2), 569–580 (2008) 15. Tracy, R.P.: IT Security Management and Business Process Automation. Information Systems Security 16, 114–122 (2007) 16. Dhillon, G., Torkzadeh, G.: Value-Focused Assessment Of Information System Security in Organizations. Information Systems Journal 16, 293–314 (2006) 17. Olson, D.L., Wu, D.: Enterprise Risk Management Models. Springer, Heidelberg (2010) 18. Faisal, M.N., Banwet, D.k., Shankar, R.: Information Risks Management in Supply Chains: An Assessment and Mitigation Framework. Journal of Enterprise Information Management 20(6), 677–699 (2007) 19. Cigolini, R., Rossi, T.: A Note on Supply Risk and Inventory Outsourcing. Production Planning and Control 17(4), 424–437 (2006) 20. Smith, G.E., Watson, K.J., Baker, W.H., Pokorski II, J.K.: A Critical Balance: Collaboration and Security in the IT-enabled Supply Chain. International Journal of Production Research 45(11), 2595–2613 (2007) 21. Lacity, M.C., Hirschheim, L.: The Information Systems Outsourcing Bandwagon. Sloan Management Review 35(1), 73–86 (1993) 22. Collins, J., Millen, R.: Information Systems Outsourcing by Large American Industrial Firms: Choices and Impacts. Information Resources Management Journal 8(1), 5–13 (1995) 23. Earl, M.J.: The Risks of Outsourcing IT. Sloan Management Review 37(3), 26–32 (1996) 24. Barthélemy, J.: The Hard and Soft Sides of IT Outsourcing Management. European Management Journal 21(5), 539–548 (2003) 25. Khalfan, A.M.: Information Security Considerations in IS/IT Outsourcing Projects: A Descriptive Case Study of Two Sectors. International Journal of Information Management 24(1), 29–42 (2004) 26. Bahli, B., Rivard, S.: Validating Measures of Information Technology Outsourcing Risk Factors. Omega 33(2), 175–187 (2005) 27. Walden, E.A., Hoffman, J.J.: Organizational Form, Incentives and the Management of Information Technology: Opening the Black Box of Outsourcing. Computers & Operations Research 34, 3575–3591 (2007) 28. Brenner, M.R., Unmehopa, M.R.: Service-oriented Architecture and Web Services Penetration in Next-generation Networks. Bell Labs Technical Journal 12(2), 147–160 (2007) 29. Ferguson, R.B.: Open-source ERP Grows Up. eWeek 23(27), 26–27 (2006)
Non-Euclidean Problems in Pattern Recognition Related to Human Expert Knowledge
Robert P.W. Duin
Pattern Recognition Laboratory, Delft Univ. of Technology, Delft, The Netherlands
[email protected], http://prlab.tudelft.nl
Abstract. Regularities in the world are human defined. Patterns in the observed phenomena are there because we define and recognize them as such. Automatic pattern recognition tries to bridge human judgment with measurements made by artificial sensors. This is done in two steps: representation and generalization. Traditional object representations in pattern recognition, like features and pixels, either neglect possibly significant aspects of the objects or neglect their dependencies. We therefore reconsider human recognition and observe that it is based on our direct experience of dissimilarities between objects. Using these concepts, pattern recognition systems can be defined in a natural way by pairwise object comparisons. This results in the dissimilarity representation for pattern recognition. An analysis of dissimilarity measures optimized for performance shows that they tend to be non-Euclidean. The Euclidean vector spaces traditionally used in pattern recognition and machine learning may thereby be suboptimal. We will show this by some examples. Causes and consequences of non-Euclidean representations will be discussed. It is conjectured that human judgment of object differences results in non-Euclidean representations as object structure is taken into account.
1 Introduction

Pattern recognition is an intrinsic human ability. Even young children are able to recognize patterns in the surrounding objects. Throughout our whole lives we are guided by pattern recognition. It implicitly constitutes a base for our judgements. Scientists make explicit use of this ability in their professional life. Science usually starts with a categorization of the phenomena. The differences between the various pattern classes are, at least initially, defined by the human observer based on his personal interest, e.g. following from the utility. Later they may be explicitly related to observable properties. The research area of automatic pattern recognition studies the design of systems that are able to simulate this human ability. In some applications the aim is to simulate an expert, e.g. a medical doctor, performing a recognition task. The design is based on an analysis of recognized examples and is guided by expert knowledge of the observations and of the procedures that are followed.
This paper is an extended version of [1].
In order to learn from examples it is necessary to represent them such that they can easily be compared. A statistical analysis of a set of examples (the training set) should be possible in order to pave the ground for an appropriate assignment of the pattern class to new examples. We distinguish in this process two steps: Representation. In this first stage real world objects, observed by sensors, are represented such that the comparison with other objects is enabled. All available knowledge about the objects, their properties and the pattern classes to be distinguished should be used here. Generalization. Using the representation, sets of objects (classes) or discriminant functions between them are modeled from their statistics. This is based on statistical estimators and machine learning procedures. The goal is to create the models in such a way that the assignment of class membership of new, incoming objects is facilitated (classification). In the first step the emphasis is on the use of existing knowledge. In the second step ’new’ knowledge is generated from observations (learning from examples). Occasionally it happens as well that the representation is optimized by observations and that additional knowledge is used during statistical modelling. It is the purpose of this paper to discuss conditions and problems in obtaining good representations. It will be shown that proper representations, in agreement with human observations and judgements, may be in conflict with the demands for the next step, the generalization. In some problems (and perhaps in many) such a proper representation is non-Euclidean, but the present set of generalization tools is based on the assumption of an Euclidean space. Examples will be discussed as well as possibilities to solve this problem.
2 The Representation Problem The purpose of the representation step is that it should enable use to compare sets or classes of objects in a numerical way. It should be possible to build models for such a class or to construct decision functions between classes. The dominant, favorite way in pattern recognition is the vector space. Objects are represented as points in such a space and operations on sets of points (classes of objects) result in functions that describe domains for the classes or separation functions between them. The multi-dimensional vector space representation enables the application of many tools as developed in linear algebra, multi-variate statistics and machine learning. A point of concern however is whether it pays respect to the objects and their relations. If we want to learn from the representation of a set of objects that is given to us as examples for the recognition (classification) of new objects then a demand is that a small variation in one of the example objects should result in a small variation in its representation. If this is not the case, if the representation jumps in space, how can we expect that we learn from the set of examples in terms of the construction of class domains or separation boundaries? This demand, the continuity of the variation in the representation as a result of a variation in the object, is directly related to what is called in the early literature on
pattern recognition as compactness: classes of similar objects cover a finite domain in the representation space. We reformulate the demand as: similar real-world objects should have similar representations. Are similar representations thereby also related to similar objects? Not necessarily. If two-dimensional objects like handwritten characters are represented by area and perimeter, then the representation is compact: small changes in the shape of a character will result in small changes in area and perimeter. Objects with entirely different shapes, however, may have the same area and perimeter and thereby the same representation. An additional demand for representations, usually not fulfilled, is that similar representations should refer to similar objects. If this is the case, the representation is a true representation. If the representation is not true, entirely different objects may be close in the representation space. They may even belong to different classes. This is the cause of class overlap: given the representation, classes may not be fully separable anymore. In spite of the fact that an expert observes essential differences and assigns the objects to different classes, they may be mapped to the same place in the representation space. This can only be solved by statistics: in this area objects should be assigned to the most probable pattern class. Consequently, statistics is needed, as probability densities have to be estimated. We like to emphasize that the need for statistics in pattern recognition is caused by class overlap resulting from a non-true representation. If the representation were a true representation, then class differences observed by a human expert would be reflected in the representation and objects of different classes would not be represented at the same place. The intrinsic amount of class overlap, in pattern recognition called the Bayes error, is the result of the representation. A different representation will yield a different Bayes error. A true representation will result in a zero Bayes error.2
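As a small illustration of such a compact but non-true representation, the sketch below computes the (area, perimeter) features of two binary shapes; the shapes and the 4-connectivity definition of the perimeter are choices made for this illustration only, not taken from the paper.

```python
import numpy as np

def area_perimeter(img):
    """(area, perimeter) of a binary image: area = number of foreground
    pixels, perimeter = number of foreground/background edges (4-connectivity)."""
    img = img.astype(bool)
    area = int(img.sum())
    p = np.pad(img, 1, constant_values=False)   # pad so border pixels count too
    perimeter = int((p[1:, :] != p[:-1, :]).sum() +
                    (p[:, 1:] != p[:, :-1]).sum())
    return area, perimeter

# Two clearly different shapes with identical (area, perimeter) features:
plus = np.zeros((5, 5), dtype=bool)
plus[2, 1:4] = True
plus[1:4, 2] = True                  # plus pentomino: area 5, perimeter 12

bar = np.zeros((3, 7), dtype=bool)
bar[1, 1:6] = True                   # 1x5 bar: area 5, perimeter 12

print(area_perimeter(plus), area_perimeter(bar))   # (5, 12) (5, 12)
```

Both shapes yield the feature vector (5, 12), although a human observer would never confuse them; in a larger dataset such coincidences are exactly what causes class overlap.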
3 Feature Representation

For a long time the feature representation was the only vector representation used in pattern recognition. It is still dominant, and the vector spaces resulting from other representations are often even called 'feature spaces', neglecting their different origin. Features are object properties that contribute to the distinction of classes. They are defined or suggested by the experts who are also able to determine the true class membership (class label) of objects. For many problems it appears to be difficult to define exactly what the features are. For instance, doctors cannot always define exactly what should be measured in a lung X-ray or in an ECG signal for the recognition of some disease. Also in daily life it is not easy for humans to describe explicitly how they recognize a particular person. If an expert has good knowledge about the physical background of a pattern class he may well be able to define a small set of powerful features that can be used to construct
2 In this reasoning we neglect the fact that some objects are ambiguous and can belong to more than a single class, e.g. the digit '0' and the letter 'O' in some fonts. We also assume that the class labels are assigned without any noise.
a well-performing recognition system. If he is hesitant, however, he may supply long lists of possible measurements that might be used as features. Obtaining many features is the result of a lack of knowledge. This should be compensated by many well-labeled examples, to be used by the pattern recognition analyst to train a classification system and possibly to determine a small set of good features. In application areas where many features have been proposed, e.g. in OCR (optical character recognition), this is the result of a lack of knowledge: we do not know, in the sense that we cannot make it explicit, how we recognize characters. Such applications only become successful if large amounts of data become available to compensate for this lack of knowledge.
4 Pixel Representation

If good features cannot be found for objects like images, time signals and spectra, an obvious alternative is to take everything: just sample the object. For images these samples are the pixels, and we will use that word for the resulting representation: the pixel representation. It has the advantage that it still contains everything; seemingly no information is lost (see below for a discussion), but it is not specific. Many pixels may be needed to generate a good result. In the above-mentioned OCR application area, a breakthrough was established when pixel representations became possible due to the availability of large datasets and of big, fast computers to handle them. OCR systems are usually based on a combination of many approaches, including pixel-based ones. There is a paradox related to this development. High-resolution images yield high-dimensional vector spaces under the pixel representation. To build classification systems for such spaces many examples (large training sets) are needed. For a given, limited size of the training set, it may be better (yielding a higher performance) to reduce the dimensionality by taking fewer pixels, e.g. by sub-sampling the images. This is entirely different from human recognition: it is certainly not true that human recognition is improved by the use of low-resolution images. This points to a possible defect of this whole approach: the representation and/or the classification schemes used with it are not appropriate. What is definitely wrong with the pixel representation is that the pixel connectivity, the relations between neighboring pixels in the image, is lost. From the representation it cannot be retrieved anymore which axes of the space correspond to neighboring pixels. We have cut the objects into pieces, have put them on a heap, and we now try to use this heap for recognizing the object. In other words, we have lost ourselves in many minor details and the view of the entire object is completely gone. This has already been observed and discussed extensively by Goldfarb [2].
5 Structural Representations

An approach to pattern recognition that definitely respects that objects should be considered in their entirety, and that takes into account that it is dangerous to break them
down into unrelated sets of properties, is structural pattern recognition. Unfortunately, this approach does not produce a vector space, but represents objects by strings or graphs. Generalization from sets of strings or graphs has for a long time been done by template matching: a dissimilarity measure between graphs is defined, and by the resulting graph matching procedure new objects are assigned to the class of the object with the most similar graph. Much work has been done on the improvement of the representation as well as on the matching procedure. Classification itself relied for a long time just on template matching, corresponding to the nearest neighbor rule in statistical pattern recognition.
6 Dissimilarity Representation

Between the above representations a clear gap can be observed. For vector spaces very nice sets of tools are available to describe class domains or to construct classification functions. The feature and pixel representations that use such vector spaces, however, suffer from the fact that they describe the objects only partially, resulting in strong class overlap, or cut them entirely into pieces, by which their structure is lost. The structural representations respect object structure but fail to construct a good representation space for which a broad collection of tools is available. The dissimilarity representation [3] tries to bridge this gap. It is based on the observation that a comparison of objects is basic in the constitution of classes in human recognition [4]. In the constitution of a dissimilarity representation on top of structural pattern recognition, pairwise dissimilarities between objects are found by matching structural object descriptions. They are used to construct a vector space in which every object is represented as a point. Instead of template matching, classifiers in such vector spaces can now be considered. The two main approaches to construct a vector space from a given set of dissimilarities, the dissimilarity matrix, will be briefly treated. There are many references that describe these in mathematical terms, e.g. [3],[5].

6.1 The Dissimilarity Space

In the first approach the dissimilarity matrix is considered as a set of row vectors, one for every object. They represent the objects in a vector space constructed by the dissimilarities to the other objects. Usually, this vector space is treated as a Euclidean space. If there are m objects given to construct the space, then each of them is given by m dissimilarities (including the dissimilarity with itself, usually zero). The initial dissimilarity space is thereby given as an m-dimensional vector space with m objects in it. This is a degenerate situation in which many classifiers yield bad results due to overtraining or the curse of dimensionality [6]. Some classifiers like the SVM can still produce good results in this situation, but for many it may be better either to fill the space with more objects, or to reduce the dimensionality, e.g. by some procedure for prototype selection or feature selection (which coincide here). Even random selection of objects works well, as nearby objects are similar and a random selection produces some sampling of the total set.
The result is a vector space built by a so-called representation set of objects and filled by an appropriate training set. The standard tools of statistical pattern recognition can be used to construct classifiers. New objects are mapped into the space by just measuring their dissimilarities to the representation set. It should be realized that the Euclidean distances between objects in the dissimilarity space are only in very special cases identical to the given dissimilarities. In general they are different. However, it is expected that almost identical objects have very similar dissimilarities to all representation objects, so they will be very close in the dissimilarity space and thereby have a small distance. Consequently the dissimilarity representation is compact. If the dissimilarity measure that is used is appropriate, then the reverse is also true: different objects will have different dissimilarities to the representation objects, under the condition that this set is sufficiently large and well distributed over the domain of objects. The dissimilarity representation thereby has the potential to be a true representation.

6.2 Embedding the Dissimilarity Matrix

In the second approach, an attempt is made to embed the dissimilarity matrix in a Euclidean vector space such that the distances between the objects in this space are equal to the given dissimilarities. This can only be realized error-free, of course, if the original set of dissimilarities is Euclidean itself. If this is not the case, either an approximate procedure has to be followed or the objects should be embedded into a non-Euclidean vector space. This is a space in which the standard inner product definition and the related distance measure are changed (among others, resulting in indefinite kernels). It appears that an exact embedding is possible for every symmetric dissimilarity matrix with zeros on the diagonal. The resulting space is the so-called pseudo-Euclidean space [3]. The pseudo-Euclidean space consists of two orthogonal subspaces, a 'positive' subspace and a 'negative' subspace. Every object has a representation in both subspaces. Both subspaces are normal Euclidean spaces. The squared distance between two objects represented in the pseudo-Euclidean space has to be determined by subtracting the squared distances between their representations in the two subspaces, instead of adding them as in an ordinary Euclidean space. The negative subspace can be considered as a correction of the given dissimilarities with respect to proper Euclidean distances. Many of the dissimilarity measures used in pattern recognition practice appear to be indefinite: they cannot be understood as distances in a Euclidean vector space, they are sometimes not even metric, and they do not satisfy the Mercer conditions that are needed for optimizing the SVM classifier [7]. A small but growing number of classifiers can be trained in the pseudo-Euclidean space [8], but a general toolbox is not yet available. For this reason and others, Euclidean corrections are studied: ways to transform the given dissimilarity matrix or the pseudo-Euclidean embedding in such a way that a Euclidean vector space is constructed that is as close as possible to the original one. This is useful if the cause of the non-Euclidean characteristic of the data is non-informative, i.e. unrelated to the class differences. Measurement noise and approximate optimizations in determining the dissimilarities may result in non-Euclidean relations between objects.
Such noise may be removed by Euclidean corrections. In case the non-Euclidean characteristics are informative, however, Euclidean corrections will deteriorate the performance. In the present state of the art the dissimilarity space is to be preferred over embedding combined with corrections. It is computationally much more feasible and it does not suffer from the non-Euclidean problem. The dissimilarity space, however, treats dissimilarities as properties and neglects their distance character. For that reason research into embedding approaches continues, from the perspective that they may better preserve the information contained in the dissimilarity measurements.
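A minimal numpy sketch of the two constructions discussed in this section is given below; it is not the toolbox referred to by the author, and the eigenvalue threshold is an arbitrary choice.

```python
import numpy as np

def dissimilarity_space(D, repr_idx):
    """Section 6.1: use the dissimilarities to a representation set as
    coordinates; row i becomes the vector of d(x_i, r_j) for r_j in the set."""
    return D[:, repr_idx]

def pseudo_euclidean_embedding(D, tol=1e-9):
    """Section 6.2: embed a symmetric dissimilarity matrix with zero diagonal
    such that pseudo-Euclidean squared distances reproduce D**2 exactly."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # (pseudo-)Gram matrix
    w, V = np.linalg.eigh(B)
    keep = np.abs(w) > tol                     # drop (near-)null directions
    w, V = w[keep], V[:, keep]
    X = V * np.sqrt(np.abs(w))                 # coordinates in both subspaces
    signs = np.sign(w)                         # +1: positive, -1: negative subspace
    return X, signs

# Pseudo-Euclidean squared distance between embedded objects i and j:
#   d2 = np.sum(signs * (X[i] - X[j]) ** 2)
# i.e. squared distances in the negative subspace are subtracted.
```

The dissimilarity-space route returns ordinary Euclidean coordinates to which any standard classifier can be applied, whereas the embedding returns coordinates together with the signs that define the positive and negative subspaces.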
7 Non-Euclidean Human Pattern Recognition

In [1] we extensively studied the causes of the non-Euclidean characteristics of many real-world datasets. We summarize some results here and then discuss this topic from a slightly shifted point of view in order to gather support for our main conjecture.
Fig. 1. Illustration of the difference between Euclidean, metric but non-Euclidean, and non-metric dissimilarities. If the distances between the four points A, B, C and D are given as in the left plot, then an exact 2-dimensional Euclidean embedding is possible. If the distances are given as in the middle plot, the triangle inequality is obeyed, so the given distances are metric, but no isometric Euclidean embedding exists. The distances in the right plot are non-Euclidean as well as non-metric.
In fig. 1 the difference between non-metric and non-Euclidean distances is illustrated. Distances can be metric and still non-Euclidean. Non-metric relations constitute a strong example of non-Euclidean relations, but if the distances are metric it is still possible that the distances between more than three points do not fit in a Euclidean space. In fact this is common. In many applications the analyst defines a distance measure that is metric, as he demands that the direct distance between objects is always smaller than any detour.3 It is not always possible to avoid non-metric relations. Suppose we have to define a dissimilarity measure between real-world objects like (images of) cups. They may
3 For local consistency we use the word 'distance' everywhere in this example instead of 'dissimilarity'. Elsewhere 'dissimilarity' will again be used, to emphasize that we are discussing distance-like relations that are possibly loosely defined.
Fig. 2. Vector space with the invariant trajectories for three objects O1, O2 and O3. If the chosen dissimilarity measure is the minimal distance between these trajectories, the triangle inequality can easily be violated, i.e. d(O1, O2) + d(O2, O3) < d(O1, O3).
be observed from different orientations and have different sizes, which should not contribute to the dissimilarity as they are invariants for the class memberships. So in a pairwise comparison, transformations over all orientations and sizes are considered, and the smallest dissimilarity that is found is defined as the correct one, made insensitive to the invariants. In other pairwise comparisons this process is repeated for other pairs of cups. Observed in some high-dimensional parameter space, a situation as sketched in fig. 2 may exist, showing that if the transformations for removing the invariants are non-linear, the triangle inequality may be violated. Another example is given in fig. 3, which illustrates how an artificial dataset has been generated that we used for studying non-Euclidean data. In a multi-dimensional cube two sets of non-overlapping balls are positioned at random. The balls in the two sets have different radii; their values are assumed to be unknown. For every ball all distances to all other balls are measured from surface to surface. We asked ourselves whether it is possible to distinguish the two sets, e.g. can we determine whether an arbitrary ball belongs to the class of large balls or to the class of small balls if we just measure the distances as defined and if the labels of all other balls are given? This appears to be possible by making use of the negative part of the pseudo-Euclidean space. Without it, it is impossible. The surprising result was that if the positive part is neglected and just the negative part is used, the separation is even much better. This example makes clear how we may interpret the negative subspace of the pseudo-Euclidean space. If all balls had zero radii, then we would just have a collection of proper Euclidean distances. Because the balls have a size, the given distances are somewhat shorter: a small value is missing, and as a result the negative subspace is needed as a compensation. To phrase it somewhat poetically: as the objects have an inner life that cannot be observed directly, but that influences the measured dissimilarities, we end up with non-Euclidean data. Let us now return to recognition problems for which features are difficult to define, like characters and medical images. Euclidean distances can be defined for such objects, e.g. by putting them on top of each other and adding the squared differences pixel
Fig. 3. Illustration of an artificial experiment in which sets of balls with different radii are distinguished by the distances between their surfaces
by pixel. Researchers trying to improve this create different dissimilarity measures, e.g. based on a non-linear deformation of the pixel grid, see [9]. They thereby try to simulate implicitly the human way of observing objects, as they aim to improve the performance of the automatic recognition system such that it approximates human recognition. This is done by deviating from the Euclidean distance measure. There are many examples in the literature of non-Euclidean dissimilarity measures [3]. In particular in relation to shape recognition, the dissimilarity approach using such measures produces good results. This brings us to the following conjecture: the way humans judge differences between real-world objects is non-Euclidean. This is caused by the fact that they include object structure next to object features in their judgement. The above-mentioned 'inner life' of objects is thereby identified as structure.
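The ball experiment of fig. 3 can be made concrete with the following sketch; the dimensionality, the numbers of balls and the radii are arbitrary choices and not the settings of the original experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_per_class = 5, 20
radii = np.r_[np.full(n_per_class, 0.05),      # class 1: small balls
              np.full(n_per_class, 0.15)]      # class 2: large balls
centers = rng.uniform(0, 10, size=(2 * n_per_class, dim))

# surface-to-surface distances: center distance minus both radii
C = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
D = C - radii[:, None] - radii[None, :]
np.fill_diagonal(D, 0.0)

# Euclidean check: -0.5 * J * D^2 * J must be positive semi-definite
# for a Euclidean embedding of D to exist.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
eigvals = np.linalg.eigvalsh(-0.5 * J @ (D ** 2) @ J)
print("most negative eigenvalue:", eigvals.min())   # typically < 0 here
```

The negative eigenvalues confirm that such surface-to-surface distances cannot be realized in any Euclidean space; the 'missing' radii act as the unobservable inner life discussed above.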
8 Examples

The differences between the representations discussed in this paper will be illustrated by two experiments. They are based on real-world datasets. It is not our intention to claim that a particular approach outperforms other ones for these applications; a much more extensive set of experiments and discussions would be needed for that. Here we restrict ourselves to showing the performances of a rather arbitrarily chosen classifier (LIBSVM with a linear kernel, [10]), which is generally recognized as very good, for the various representations. Other classifiers and other parameter settings may show different results.

8.1 Handwritten Digits

We took a part of the classic NIST database of handwritten numbers [11] and selected at random subsets of 500 digits for each of the ten classes 0-9. They were resampled to images of
Fig. 4. Examples of the images used for the digit recognition experiment
32×32 pixels in such a way that the digits fit either horizontally or vertically. In figure 4 some examples are given: black is '1' and white is '0'. The dataset was repeatedly split into sets for training and testing. In every split the ten classes were evenly represented. The following representations are used:

Features. We used 10 moments: the 7 rotation-invariant moments and the moments [0 0], [0 1], [1 0], measuring the total number of black pixels and the centers of gravity in the horizontal and vertical directions.

Pixels. Every digit is represented by a vector in a 32 × 32 = 1024-dimensional vector space.

Dissimilarities to the training set. Every object is represented by the Euclidean distances to all objects in the training set.

Dissimilarities to blurred digits in the training set. As the pixels in the digit images are spatially connected, blurring may emphasize this. In this way the distances between slightly rotated, shifted or locally transformed but otherwise identical digits become small.

The results are shown in figure 5. They show that for large training sets the pixel representation is superior. This is to be expected, as this representation asymptotically stores the universe of possible digits. For small training sets a proper set of features may perform better. The moments we used here are very general features; much better ones may be found for describing digits. As explained, a feature description reduces the objects: it may be insensitive to some object differences. The dissimilarity representation, for sufficiently large representation sets, may see all object differences and may thereby perform better.

8.2 Flow Cytometer Histograms

This dataset is based on 612 FL3-A DNA flow cytometer histograms from breast cancer tissues at a resolution of 256 bins. The initial data were acquired by M. Nap and N. van Rodijnen of the Atrium Medical Center in Heerlen, The Netherlands, during 2000-2004, using
[Fig. 5 plot: averaged error (25 experiments) versus training set size per class, NIST digits classified by LIBSVM, for the Features, Pixels, Euclidean Dis Rep and Blurred Dis Rep representations]
Fig. 5. Learning curves for the digit recognition experiment
the four tubes 3-6 of a DACO Galaxy flow cytometer. Histograms are labeled in 3 classes: aneuploid (335 patients), diploid (131) and tetraploid (146). We averaged the histograms of the four tubes, thereby covering the DNA content of about 80000 cells per patient. We removed the first and the last bin of every histogram, as outliers are collected there, thereby obtaining 254 bins per histogram. Examples of histograms are shown in fig. 6. The following representations are used:

Histograms. Objects (patients) are represented by the normalized values of the histograms (summing to one), described by a 254-dimensional vector. This representation is similar to the pixel representation used for images, as it is based on just a sampling of the measurements.

Euclidean distances. These dissimilarities are computed as the Euclidean distances (L2 norm) in the above-mentioned vector space. Every object is represented by its distances to the objects in the training set.

Calibrated distances. As the histograms may suffer from an incorrect calibration in the horizontal direction (DNA content), we computed for every pairwise dissimilarity between two histograms the multiplicative correction factor for the bin positions that minimizes their dissimilarity. Here we used the L1 norm. This representation makes use of the shape structure of the histograms and removes an invariant (the wrong original calibration), as symbolically illustrated in figure 2.

Again a linear Support Vector Machine was used as a classifier, with a fixed trade-off parameter C. The learning curves for the three representations are shown in figure 7. They clearly illustrate how for this classifier the dissimilarity representation outperforms the 'pixel' representation (the sampling of the histograms), and that using background knowledge in the definition of the dissimilarity measure improves the results further.
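A sketch of such a calibrated distance is given below; the grid of correction factors and the linear interpolation used for resampling are assumptions of this sketch and may differ from the original procedure in detail.

```python
import numpy as np

def calibrated_l1(h1, h2, factors=np.linspace(0.8, 1.25, 46)):
    """L1 distance between two histograms, minimized over a multiplicative
    rescaling of the bin positions of h2 (horizontal re-calibration)."""
    bins = np.arange(len(h2), dtype=float)
    best = np.inf
    for a in factors:
        # resample h2 as if its DNA-content axis were stretched by factor a
        h2a = np.interp(bins, a * bins, h2, left=0.0, right=0.0)
        best = min(best, float(np.abs(h1 - h2a).sum()))
    return best
```

Applying calibrated_l1 to every pair of a test histogram and the training histograms yields the dissimilarity vectors used as the third representation.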
[Fig. 6 plots: FL3-A DNA content histograms (counts versus bin number, bins 1-254) for an aneuploid (top), a diploid (middle) and a tetraploid (bottom) patient]
Fig. 6. Examples of some flow cytometer histograms: aneuploid (top), diploid (middle) and tetraploid (bottom)
[Fig. 7 plot: averaged error (100 experiments) versus training set size for the three representations]
Fig. 7. Learning curves for the flow cytometer histogram recognition experiment
9 Discussion and Conclusions

For the recognition of real-world objects measured by images, time signals and spectra, simple features or samples may not be sufficient: they neglect the internal structure of objects. Structural descriptions like graphs and strings lack the possibility of using an appropriate vector space. The dissimilarity representation bridges this gap, but must thereby be able to deal with non-Euclidean dissimilarities. We conjecture that this deviation from the Euclidean distance measure is caused by the inclusion of structure in the human judgement of object differences, which is lacking in the traditional feature representations.

Acknowledgements. This research is financially supported by the FET programme within the EU FP7, under the SIMBAD project (contract 213250).
References
1. Duin, R.P.W., Pękalska, E.: Non-Euclidean dissimilarities: Causes and informativeness. In: Hancock, E.R., Wilson, R.C., Windeatt, T., Ulusoy, I., Escolano, F. (eds.) SSPR&SPR 2010. LNCS, vol. 6218, pp. 324–333. Springer, Heidelberg (2010)
2. Goldfarb, L., Abela, J., Bhavsar, V., Kamat, V.: Can a vector space based learning model discover inductive class generalization in a symbolic environment? Pattern Recognition Letters 16(7), 719–726 (1995)
3. Pękalska, E., Duin, R.: The Dissimilarity Representation for Pattern Recognition. Foundations and Applications. World Scientific, Singapore (2005)
4. Edelman, S.: Representation and Recognition in Vision. MIT Press, Cambridge (1999)
5. Pękalska, E., Duin, R.: Beyond traditional kernels: Classification in two dissimilarity-based representation spaces. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 38(6), 729–744 (2008)
6. Jain, A.K., Chandrasekaran, B.: Dimensionality and sample size considerations in pattern recognition practice. In: Krishnaiah, P.R., Kanal, L.N. (eds.) Handbook of Statistics, vol. 2, pp. 835–855. North-Holland, Amsterdam (1987)
7. Cristianini, N., Shawe-Taylor, J.: Support Vector Machines and other kernel-based learning methods. Cambridge University Press, UK (2000)
8. Pękalska, E., Haasdonk, B.: Kernel discriminant analysis with positive definite and indefinite kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(6), 1017–1032 (2009)
9. Jain, A., Zongker, D.: Representation and recognition of handwritten digits using deformable templates. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(12), 1386–1391 (1997)
10. Fan, R.E., Chen, P.H., Lin, C.J.: Working set selection using second order information for training support vector machines. Journal of Machine Learning Research 6, 1889–1918 (2005)
11. Wilson, C., Garris, M.: Handprinted character database 3. Technical Report, National Institute of Standards and Technology (February 1992)
Part I
Databases and Information Systems Integration
Multi-flow Optimization via Horizontal Message Queue Partitioning Matthias Boehm, Dirk Habich, and Wolfgang Lehner Dresden University of Technology, Database Technology Group, Dresden, Germany {matthias.boehm,dirk.habich,wolfgang.lehner}@tu-dresden.de
Abstract. Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas such as near real-time ETL and data synchronization between operational systems. Because of an increasing amount of data, highly distributed IT infrastructures, and high requirements for the up-to-dateness of analytical query results and data consistency, many instances of integration flows are executed over time. Due to this high load, the performance of the central integration platform is crucial for an IT infrastructure. With the aim of throughput maximization, we propose the concept of multi-flow optimization (MFO). In this approach, messages are collected during a waiting time and executed in batches to optimize sequences of plan instances of a single integration flow. We introduce a horizontal (value-based) partitioning approach for message batch creation and show how to compute the optimal waiting time. This approach significantly reduces the total execution time of a message sequence and hence maximizes the throughput, while accepting a moderate latency per message. Keywords: Integration flows, Multi-flow Optimization, Horizontal partitioning, Message queues, Throughput improvement.
1 Introduction

The scope of data management is continuously changing from the management of locally stored data towards the management of distributed information across multiple heterogeneous applications and systems. In this context, integration flows are typically used to specify and execute complex procedural integration tasks. These integration flows are executed by message-oriented integration platforms such as EAI servers (Enterprise Application Integration) or MOM systems (Message-Oriented Middleware). For two reasons, many independent instances of an integration flow are executed over time. First, there is the requirement of immediate data synchronization between operational source systems in order to ensure data consistency. Second, data changes of the operational source systems are directly propagated into the data warehouse infrastructure in order to achieve high up-to-dateness of analytical query results (near real-time ETL). Due to this high load of flow instances, the performance of the central integration platform is crucial. Thus, optimization is required. In the context of integration platforms, especially in scenarios with a high load of plan instances, the major optimization objective is throughput maximization [1] rather than
the execution time minimization of single plan instances. Thus, the goal is to maximize the number of messages processed per time period. Here, moderate latency times of single messages are acceptable [2]. When optimizing integration flows, the following problems have to be considered:

Problem 1. Expensive External System Access. The time-expensive access of external systems is caused by network latency, network traffic, and message transformations into internal representations. The fact that external systems are accessed with similar queries over time offers potential for further optimization.

Problem 2. Cache Coherency Problem. One solution to Problem 1 might be the caching of results of external queries. However, this fails because, when integrating highly distributed systems and applications (loosely coupled without any notification mechanisms), the central integration platform cannot ensure that the cached data is consistent with the data in the source systems [1].

Problem 3. Serialized External Behavior. Depending on the involved external systems, we need to ensure the serial order of messages. For example, this can be caused by referential integrity constraints within the target systems. Thus, we need to guarantee monotonic reads and writes for individual data objects.

Given these problems, the optimization objective of throughput maximization has so far only been addressed by leveraging a higher degree of parallelism, such as (1) intra-operator (horizontal) parallelism (data partitioning, see [3]), (2) inter-operator (horizontal) parallelism (explicit parallel subflows, see [4,9]), and (3) inter-operator (vertical) parallelism (pipelining of messages, see [5,6,7]). Although these techniques can increase the resource utilization and thus the throughput, they do not reduce the executed work. In this paper, we introduce the concept of multi-flow optimization (MFO) [8] in order to maximize the message throughput by reducing the executed work. Therefore, we periodically collect incoming messages and execute whole message batches with single plan instances. The novel idea is to use horizontal (value-based) message queue partitioning as a batch creation strategy and to compute the optimal waiting time. There, all messages of one batch (partition) exhibit the same attribute value with regard to a chosen partitioning attribute. This yields throughput improvements due to operation execution on partitions instead of on individual messages. In detail, we make the following contributions:
– First of all, in Section 2, we present an architecture of an integration platform and give a solution overview of MFO via horizontal partitioning.
– In Section 3, we introduce the concept of a partition tree and discuss the derivation of partitioning schemes as well as the related rewriting of plans.
– Then, in Section 4, we define the formal MFO problem. Here, we also explain the cost estimation and the computation of the optimal waiting time.
– Afterwards, we illustrate the results of our evaluation in Section 5. Finally, we analyze related work in Section 6 and conclude the paper in Section 7.
2 System Architecture and Solution Overview

A typical integration platform system architecture consists of a set of inbound adapters, multiple message queues, an internal scheduler, a central process execution engine, and a set of outbound adapters. The inbound adapters passively listen for incoming messages, transform them into a common format (e.g., XML) and append the messages to message queues or directly forward them to the process engine. Within the process engine, compiled plans of deployed integration flows are executed. While executing those plans, the outbound adapters are used as services in order to actively invoke external systems. They transform the internal format back into the proprietary message representations. This architecture is also representative of major products such as SAP Process Integration, IBM Message Broker or MS BizTalk Server. The following example explains the instance-based (step-by-step) plan execution within such an architecture.

Example 1. Instance-Based Orders Processing: Assume a plan P2 (Figure 1(a)). In the instance-based case, a new plan instance pi is created for each incoming message (Figure 1(b)) and message queues are used at the inbound side only.
Standard Message Queue
Assign (o2)
m1 [“CustA“]
[in: msg1 out: msg2]
dequeue m1
p1:
o1
o2
o3
o4
o5
o6
Q1: SELECT * FROM s4.Credit WHERE Customer=“CustA“
p2:
o1
o2
o3
o4
o5
o6
Q2: SELECT * FROM s4.Credit WHERE Customer=“CustB“
p3:
o1
o2
o3
o4
o5
o6
Q3: SELECT * FROM s4.Credit WHERE Customer=“CustA“
m2 [“CustB“]
Invoke (o3) [service: s4, in: msg2, out: msg3]
Qi
INNER
m3 [“CustA“]
Join (o4) [in: msg1, msg3; out: msg4]
Assign (o5)
m4 [“CustC“]
[in: msg4 out: msg5]
m5 [“CustB“]
Invoke (o6)
m6 [“CustC“]
dequeue m2 dequeue m3
[service s3, in: msg5] enqueue
(a) Example Plan P2
(b) Instance-Based Plan Execution of P2 Fig. 1. Example Instance-Based Plan Execution
The Receive operator (o1) gets an orders message from the queue and writes it to a local variable. Then, the Assign operator (o2) prepares a query with the customer name of the received message as a parameter. Subsequently, the Invoke operator (o3) queries the external system s4 in order to load additional information for that customer. Here, one SQL query Qi per plan instance (per message) is used. The Join operator (o4) merges the result message with the received message (with the customer key as join predicate). Finally, the pair of Assign and Invoke operators (o5 and o6) sends the result to system s3. We see that multiple orders from one customer (CustA: m1, m3) cause us to pose the same query (Invoke operator o3) multiple times to the external system s4. Finally, we may end up with work being done multiple times. At this point, multi-flow optimization comes into play, where we consider optimizing the whole sequence of plan instances. Our core idea is to periodically collect incoming messages and to execute whole message batches with single plan instances.
Batch Creation via Horizontal Queue Partitioning. The naïve (time-based) batching approach, as already proposed for distributed queries [1] and web service interactions [9], is to collect messages during a waiting time Δtw, merge those messages into message batches bi, and then execute a plan instance pi for each batch. Due to the time-based model of collecting messages, there might be multiple distinct messages in a batch according to certain operator predicates. Hence, we need to rewrite the queries to external systems, which (1) might not be possible for certain applications, and whose (2) possibly negative performance influence cannot be precisely estimated due to the loose coupling of the involved systems [10]. To tackle these problems, we propose the novel concept of MFO via horizontal message queue partitioning. The basic idea is to horizontally partition the inbound message queues according to specific partitioning attributes ba. With such a value-based partitioning, all messages of a batch exhibit the same attribute value according to the partitioning attribute. Thus, several operators of the plan only need to access this attribute once for the whole partition rather than for each individual message. The core steps are (1) to derive partitioning attributes from the integration flow, (2) to periodically collect messages during a waiting time Δtw, (3) to read the first partition from the queue and (4) to execute the messages of this partition as a batch with a single plan instance. Additionally, (5) we might need to ensure the serial order of messages at the outbound side.

Example 2. Partitioned Batch-Orders Processing: Figure 2 reconsiders our example for partitioned multi-flow execution.
[Fig. 2. Partitioned plan execution: messages m1, m3 ('CustA') and m2 ('CustB') are collected in the partitioned message queue during the waiting time Δtw; the dequeued partition b1 is executed by a single plan instance p'1 (operators o1-o6) with one query Q'1: SELECT * FROM s4.Credit WHERE Customer='CustA', and analogously p'2 with Q'2 for 'CustB']
The incoming messages mi are partitioned according to the partitioning attribute customer name, which was extracted with ba = mi/Customer/Cname at the inbound side. A plan instance of the rewritten plan P2' reads the first partition from the queue and executes the whole partition. Due to the equal values of the partitioning attribute, we do not need to rewrite the query to the external system s4. Every batch contains one distinct attribute value according to ba. We achieve performance benefits for the Assign as well as the Invoke operators. It is important to note that besides external queries (Invoke operator), local operators (e.g., Assign and Switch), but also operators that work on externally loaded data, can directly benefit from horizontal partitioning. This benefit is caused by operation execution on partitions instead of on individual messages.
Clearly, MQO (Multi-Query Optimization) and OOP (Out-of-Order Processing) [11] have already been investigated for other system types. In contrast to existing work, we present the novel MFO approach that maximizes the throughput by computing the optimal waiting time. MFO is also related to caching and the recycling of intermediate results [12]. While caching might lead to using outdated data, the partitioned execution might cause reading more recent data. However, we cannot ensure strong consistency by using asynchronous integration flows (decoupled from clients with message queues) anyway. Further, we guarantee (1) monotonic writes, (2) monotonic reads with regard to individual data objects, (3) that the temporal gap is at most equal to a given latency constraint and (4) that no outdated data is read. In conclusion, caching is advantageous if data of external sources is static and the amount of data is rather small, while MFO is beneficial if data of external sources changes dynamically. The major research challenges of MFO via horizontal partitioning are (1) to enable plan execution of message partitions and (2) to periodically compute the optimal waiting time Δtw. Both are addressed in the following sections.
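A high-level sketch of the resulting execution loop is shown below; the queue and plan interfaces are hypothetical and anticipate the partition tree introduced in the next section, whereas the authors' engine itself is Java-based.

```python
import time

def mfo_execution_loop(queue, plan, waiting_time, rounds):
    """Collect messages for delta-tw, then execute the oldest partition
    as one batch with a single plan instance (sketch of steps (2)-(4))."""
    for _ in range(rounds):
        time.sleep(waiting_time)           # step (2): collect during delta-tw
        batch = queue.dequeue_partition()  # step (3): oldest top-level partition
        if batch:
            plan.execute(batch)            # step (4): one instance per partition
```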
3 Horizontal Queue Partitioning

In order to enable plan execution on message partitions, several preconditions are required. In this section, we describe (1) the horizontally partitioned message queue data structure partition tree, (2) the automatic derivation of the optimal partitioning scheme of such a tree, and (3) the related rewriting of plans.

3.1 Maintaining Partition Trees

As the foundation for multi-flow optimization, we introduce the partition tree as a partitioned message queue data structure. The partition tree is an extended multidimensional B*-Tree (MDB-Tree) [13], where the messages are horizontally (value-based) partitioned according to multiple attributes. Similar to a traditional MDB-Tree, each tree level represents a different partitioning attribute. In contrast, due to the queuing semantics, the major difference to traditional MDB-Trees is that the partitions are sorted according to their timestamps of creation rather than according to their key values. Thus, at each tree level, a list of partitions is stored that is unsorted with regard to the attribute values:

Definition 1. Partition Tree: The partition tree is an index for multi-dimensional attributes. It contains h levels, where each level represents a partitioning attribute bai ∈ {ba1, ba2, . . . , bah}. For each attribute bai, a list of batches (partitions) b is maintained. Those partitions are ordered according to their timestamps of creation tc(bi) with tc(bi−1) ≤ tc(bi) ≤ tc(bi+1). The last index level bah contains the messages. A partition attribute has a type(bai) ∈ {value, value-list, range}.

Such a partition tree is used as our message queue; similar to normal message queues, it decouples the inbound adapters from the process engine to handle overload situations but, additionally, realizes the horizontal partitioning.
Example 3. Partition Tree with h = 2: Assume two partitioning attributes ba1 (customer, value) and ba2 (total price, range) that have been derived from a plan P . Then, the partition tree exhibits a height of h = 2 (Figure 3).
On the first index level, the messages are partitioned according to customer names ba1(mi), and on the second level, each partition is divided according to the range of order total prices ba2(mi). Horizontal partitioning in combination with the temporal order of partitions causes single messages to outrun others, while the messages within a partition are still sorted according to their incoming order. Such a partition tree message queue has two operations. The enqueue operation is invoked by the inbound adapters whenever a message has been received and transformed into the internal representation. During this message transformation, the values of registered partitioning attributes are extracted as well. We use a thread monitor approach for the synchronization of enqueue and dequeue operations. Subsequently, we iterate over the list of partitions from the back to the front and determine whether or not a partition with bal(mi) = ba(bi) already exists. If so, the message is inserted; otherwise, a new partition is created and added at the end of the list. In case of a node partition, we recursively invoke our algorithm, while in case of a leaf partition, the message is added to the end of the list. Due to the linear comparison, the enqueue operation exhibits a worst-case time complexity of O(Σ_{i=1}^{h} 1/sel(bai)), where sel(bai) denotes the average selectivity of a single partitioning attribute. In contrast, the dequeue operation is invoked by the process engine according to the computed waiting time Δtw. It simply removes and returns the first partition b− ← b1 from the list of partitions at index level 1 with a constant worst-case time complexity of O(1). This partition is the oldest partition within the partition tree (min_{i=1}^{|b|} tc(bi)). This property ensures that starvation of messages is impossible. With regard to robustness, we extend the partition tree to the hash partition tree, which is a partition tree with a hash table as a secondary index over the partitioning attribute. This reduces the complexity of enqueue operations to constant time O(1) in case no serialized external behavior (SEB) is required.
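A minimal single-level (h = 1) variant of the hash partition tree can be sketched as follows; thread synchronization, multi-level nesting and the value-list/range attribute types are omitted.

```python
import time
from collections import OrderedDict

class HashPartitionTree:
    """Single-attribute partitioned message queue: partitions are kept in
    creation order; a hash index on the attribute value gives O(1) enqueue."""

    def __init__(self, attribute):
        self.attribute = attribute            # partitioning attribute ba
        self.partitions = OrderedDict()       # value -> (t_create, [messages])

    def enqueue(self, message):
        key = message[self.attribute]
        if key not in self.partitions:        # new partition appended at the end
            self.partitions[key] = (time.time(), [])
        self.partitions[key][1].append(message)

    def dequeue_partition(self):
        """Remove and return the oldest partition (O(1)); messages inside keep
        their arrival order, so starvation is impossible."""
        if not self.partitions:
            return None
        _, (_, batch) = self.partitions.popitem(last=False)
        return batch
```

For h > 1, each leaf list would itself hold nested partitions, and the enqueue cost then grows with the number of partitions per level, as discussed above.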
3.2 Deriving Partitioning Schemes

The partitioning scheme, in terms of a partition tree layout, is derived automatically from a given plan. This includes (1) deriving candidate partitioning attributes and (2) finding the optimal partitioning scheme for the partition tree. The candidate partitioning attributes are derived from the operators oi of plan P. This includes a linear search with O(m) for attributes that are involved in predicates, expressions and dynamic parameter assignments. We distinguish between the three partitioning attribute types value, value-list, and range. After having derived the set of candidate attributes, we select the candidates that are advantageous to use. First, we remove all candidates where a partitioning attribute refers to externally loaded data, because these attribute values are not present at the inbound side. Second, we remove candidates whose benefit does not exceed a user-specified cost reduction threshold τ regarding the plan costs. Based on the set of partitioning attributes, we can create a concrete partitioning scheme for a partition tree with regard to a single plan. For h partitioning attributes, there are h! different partitioning schemes. The intuition of our heuristic for finding the optimal scheme is to minimize the number of partitions in the index. Therefore, we order the index attributes according to their selectivities, with min Σ_{i=1}^{h} |b ∈ bai| iff sel(ba1) ≥ sel(bai) ≥ sel(bah), and thus with a complexity of O(h log h). Having minimized the total number of partitions, we have minimized the overhead of queue maintenance and maximized the number of messages per top-level partition, which results in the highest message throughput.

3.3 Plan Rewriting Algorithm

With regard to executing message partitions, only slight changes on the physical level are required. First, the message meta model is extended such that an abstract message can be implemented by an atomic message or a message partition. Second, all operators that benefit from partitioning are modified accordingly. All other changes are made on the logical level when rewriting a plan P to P' during the initial deployment or during periodical re-optimization. For the purpose of plan rewriting, the flow meta model is extended by two additional operators: PSplit and PMerge. Then, the logical plan rewriting is realized with the so-called split and merge approach. From a macroscopic view, a plan receives the top-level partition, dequeued from the partition tree. Then, we can execute all operators that benefit from the top-level attribute. Just before an operator that benefits from a lower-level partitioning attribute, we need to insert a PSplit operator that splits the top-level partition into the 1/sel(ba2) subpartitions (worst case), as well as an Iteration operator (foreach) that iterates over these subpartitions. The sequence of operators that benefit from this granularity is used as the iteration body. After this iteration, we insert a PMerge operator in order to merge the resulting partitions back into the top-level partition if required. If we have only one partitioning attribute, we do not need to apply split and merge. According to the requirement of serialized external behavior, we might need to serialize messages at the outbound side. Therefore, we extended the message by a counter c. If a message mi outruns another message during enqueue, its counter c(mi)
is increased by one. Serialization is realized by timestamp comparison, and for each reordered message the counter is decreased by one. Thus, at the outbound side, we are not allowed to send message mi until c(mi) = 0. It can be shown that the maximum latency constraints are still guaranteed [8].
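The serialized external behavior can also be sketched as a reordering buffer at the outbound side; note that this simplified sketch uses plain arrival sequence numbers instead of the per-message outrun counters described above, but achieves the same ordering guarantee.

```python
import heapq

class OutboundSerializer:
    """Release processed messages strictly in their original arrival order.
    (The paper realizes this with per-message outrun counters; here a plain
    reordering buffer over arrival sequence numbers is used instead.)"""

    def __init__(self):
        self.next_seq = 0        # sequence number expected next at the outbound side
        self.pending = []        # min-heap of (seq, message)

    def submit(self, seq, message):
        heapq.heappush(self.pending, (seq, message))

    def drain(self):
        """Return all messages that may be sent without violating the order."""
        out = []
        while self.pending and self.pending[0][0] == self.next_seq:
            out.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return out
```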
4 Periodical Re-optimization

Apart from these prerequisites, multi-flow optimization now reduces to computing the optimal waiting time for collecting messages. This is done by periodical cost-based optimization [14], where we estimate the costs and compute this waiting time with regard to minimizing the total latency time.

4.1 Formal Problem Definition

We assume a sequence of incoming messages M = {m1, m2, . . . , mn}, where each message mi is modeled as a (ti, di, ai)-tuple, where ti ∈ Z+ denotes the incoming timestamp of the message, di denotes a semi-structured tree of name-value data elements, and ai denotes a list of additional atomic name-value attributes. Each message mi is processed by an instance pi of a plan P, and tout(mi) ∈ Z+ denotes the timestamp when the message has been executed. The latency of a single message TL(mi) is given by TL(mi) = tout(mi) − ti(mi). Furthermore, the execution characteristics of a finite message subsequence M' with M' ⊆ M are described by two statistics. First, the total execution time W(M') of a subsequence is determined by W(M') = Σ W(p'i), the sum of the execution times of all partitioned plan instances required to execute M'. Second, the total latency time TL(M') of a subsequence is determined by TL(M') = tout(m|M'|) − ti(m1), the time between receiving the first message and the completed execution of the last message. This includes overlapping execution time and waiting time.

Definition 2. Multi-Flow Optimization Problem (P-MFO): Maximize the message throughput with regard to a finite message subsequence M'. The optimization objective φ is to execute M' with minimal latency time:

φ = max |M'| / Δt = min TL(M').    (1)

Two additional restrictions must hold:
1. Let lc denote a soft latency constraint that must not be exceeded significantly. Then, the condition ∀mi ∈ M' : TL(mi) ≤ lc must hold.
2. The external behavior must be serialized according to the incoming message order. Thus, the condition ∀mi ∈ M' : tout(mi) ≤ tout(mi+1) must hold.

Finally, the P-MFO describes the search for the optimal waiting time Δtw with regard to φ and the given constraints. Based on the horizontal partitioning, Figure 4 illustrates the temporal aspects. An instance p'i of a rewritten plan P' is initiated periodically at Ti, where the period is determined by the waiting time Δtw. Then, a message partition bi is executed by p'i with
an execution time of W(P'). Finally, we estimate the total latency time T̂L(M') in order to solve the defined optimization problem. In order to overcome the problems of (1) temporally overlapping plan instances for different partitions, (2) growing message queue sizes (in case of Δtw < W(P')), and (3) an a-priori violated latency constraint, we define the following validity condition: For a given latency constraint lc, there must exist a waiting time Δtw such that (0 ≤ W(P') ≤ Δtw) ∧ (0 ≤ T̂L(|M'|) ≤ lc); otherwise, the constraint is invalid. In other words, (1) we avoid the case Δtw < W(P'), and (2) we check whether the worst-case message latency, in the sense of the total latency of |M'| = 1/sel distinct message partitions, is lower than or equal to the latency constraint lc.

4.2 Extended Cost Model and Cost Estimation

In order to estimate the costs of plan instances for specific batch sizes k' with k' = |bi|, we need to extend our cost model for integration flows with regard to these partitions. The extended costs C(oi, k') of operators that benefit from partitioning (e.g., Invoke, Assign, and Switch) are independent of the number of messages k'. In contrast, the costs of all operators that do not benefit from partitioning depend linearly on the number of messages in the batch k'. For operators that do not benefit from partitioning, the abstract costs are computed by C(oi, k') = C(oi) · k', and the execution time can be computed by W(oi, k') = W(oi) · k' or by W(oi, k') = W(oi) · C(oi, k')/C(oi). Finally, if k' = 1, we get the instance-based costs with C(oi, k') = C(oi) and W(oi, k') = W(oi). Thus, the instance-based execution is a specific case of horizontal partitioning (one message per batch). Using the extended cost model, we can now compute the total execution time W(M', k') and the total latency time TL(M', k') of message subsequences M' by assuming |M'|/k' = 1/sel instances of a partitioned plan. The estimated total execution time Ŵ(M') of a subsequence is computed as the estimated costs per instance times the number of executed plan instances:

Ŵ(M', k') = Ŵ(P', k') · |M'|/k'   with   Ŵ(P', k') = Σ_{i=1}^{m} Ŵ(oi, k').    (2)

In contrast, the estimated total latency time T̂L(M') of a message subsequence is composed of the waiting time Δtw and the execution time Ŵ(P', k'). Thus, we compute it based on the comparison between Δtw and Ŵ(P', k') with
T̂L(M', k') = (|M'|/k') · Δtw + Ŵ(P', k')   if Δtw ≥ W(P', k'),
T̂L(M', k') = Δtw + (|M'|/k') · Ŵ(P', k')   otherwise.    (3)
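A self-contained sketch of this cost estimation (Equations 2 and 3) is given below; the per-operator execution times and the flags marking partition-benefiting operators are placeholder inputs.

```python
def estimate_plan_time(op_times, benefits, k):
    """W_hat(P', k'): operators that benefit from partitioning are paid once
    per batch, all others once per message (Equation 2, execution-time form)."""
    return sum(w if b else w * k for w, b in zip(op_times, benefits))

def estimate_total_latency(op_times, benefits, n_messages, k, delta_tw):
    """T_hat_L(M', k') according to Equation 3."""
    w_plan = estimate_plan_time(op_times, benefits, k)
    n_instances = n_messages / k
    if delta_tw >= w_plan:                 # the only valid case (see below)
        return n_instances * delta_tw + w_plan
    return delta_tw + n_instances * w_plan
```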
Due to our validity condition, Δtw < W(P', k') is invalid and therefore we use only the first case of Equation 3. As a result, it follows that Ŵ(M', k') ≤ T̂L(M', k'), where Ŵ(M', k') = T̂L(M', k') holds at Δtw = W(P', k'). Hence, the total latency time cannot be lower than the total execution time.

4.3 Waiting Time Computation

The intuition behind computing the optimal waiting time Δtw with regard to minimizing the total latency time is that the waiting time, and hence the batch size k', strongly influences the execution time of single plan instances. The latency time, in turn, depends on that execution time. Figure 5 illustrates the resulting two inverse influences that our computation algorithm exploits:
[Fig. 5 plots: (a) influence of Δtw on the relative execution time W(P', k')/k' (instance-based versus partitioned, with a lower bound) and on the total execution time W(M', k'); (b) influence of Δtw on the total latency time TL(M') under the latency constraint lc, for the instance-based case and the partitioned cases (v1) with and (v2) without a local minimum min TL]
Fig. 5. Search Space for Waiting Time Computation
As shown in Figure 5(a), for partitioned plan execution, an increasing waiting time Δtw causes a decreasing relative execution time W(P', k')/k' and total execution time W(M', k'), which are both non-linear functions that asymptotically tend towards a lower bound. In contrast, Δtw has no influence on instance-based execution times. Furthermore, Δtw also influences the latency time (see Figure 5(b)). On the one hand, an increasing waiting time Δtw linearly increases the latency time T̂L, because the waiting time is included in T̂L. On the other hand, an increasing Δtw causes a decreasing execution time and thus indirectly decreases T̂L, because the execution time is included in T̂L as well. The result, in the general case of arbitrary cost functions, is a non-linear total latency time that has a local minimum (v1) or not (v2). In any case, due to the validity condition, the total latency function is defined on the closed interval TL(M', k') ∈ [W(M', k'), lc] and hence both a global minimum and a global maximum exist. In order to compute the optimal waiting time, we monitor the incoming message rate R ∈ ℝ and the value selectivity sel ∈ (0, 1] according to the partitioning attributes. The first partition will contain k' = R · sel · Δtw messages. For the i-th partition
with i ≥ 1/sel, k' is computed by k' = R · Δtw, independently of the selectivity sel. A low selectivity implies many partitions b_i but only few messages in a partition (|b_i| = R · sel). However, the high number of partitions b_i forces us to wait longer (Δtw/sel) until the execution of a partition. Based on the relationship between the waiting time Δtw and the number of messages per batch k', we can compute the waiting time where T̂_L is minimal by

$\Delta tw \;\big|\; \min \hat{T}_L(\Delta tw) \quad \text{and} \quad \hat{W}(M', \Delta tw) \leq \hat{T}_L(M', \Delta tw) \leq lc.$   (4)
Using Equation 3 and assuming a fixed message rate R with 1/sel distinct items according to the partitioning attribute, we substitute k' with R · Δtw and get

$\hat{T}_L(M', R \cdot \Delta tw) = \frac{|M'|}{R \cdot \Delta tw} \cdot \Delta tw + \hat{W}(P', R \cdot \Delta tw).$   (5)

Finally, we set |M'| = k'/sel = (R · Δtw)/sel (execution of all 1/sel distinct partitions, where each partition contains k' messages), use Equation 5 and compute

$\Delta tw \;\big|\; \min \hat{T}_L(M', R \cdot \Delta tw) \quad \text{with} \quad \frac{\partial \hat{T}_L(M', R \cdot \Delta tw)}{\partial \Delta tw} = 0 \;\wedge\; \frac{\partial^2 \hat{T}_L(M', R \cdot \Delta tw)}{\partial \Delta tw^2} > 0.$   (6)
If such a local minimum exists, we check the validity of (0 ≤ W(P', k') ≤ Δtw) ∧ (0 ≤ T̂_L ≤ lc). If Δtw < W(P', k'), we search for Δtw = W(P', k'). Further, if T̂_L > lc, we search for the Δtw with T̂_L(Δtw) = lc. If such a minimum does not exist, we compute Δtw for the lower border of the interval with

$\Delta tw \;\big|\; \min \hat{T}_L(M', R \cdot \Delta tw) \quad \text{with} \quad \hat{T}_L(M', R \cdot \Delta tw) = W(M', \Delta tw \cdot R).$   (7)

This lower border of W(M', k') is given at Δtw = W(P', k'), where W(P', k') itself depends on Δtw. With regard to the load situation, there might not be a valid Δtw = W(P', k') with Δtw ≥ 0. In this overload situation, we compute the maximum number of messages per batch k'' by k'' | W(M', k'') = T_L(M', k'') = lc. In this case, the waiting time is Δtw = W(P', k'') and we have a full utilization W(M', k'') = T_L(M', k''). However, we do not execute the partition with all collected messages k' but only with the k'' messages, while the k' − k'' messages are reassigned to the end of the partition tree (and we modify the outrun counters with regard to serialized external behavior). Thus, we achieve the highest throughput but can still ensure the maximum latency constraint. Our extended cost model includes only two categories of operator costs. Hence, we compute the waiting time using a tailor-made algorithm with complexity O(m). It computes the costs W⁻(P') that are independent of k' and the costs W⁺(P') that depend linearly on k'. Using a simplified Equation 7, we compute Δtw at the lower border of the defined T_L function interval with

$\Delta tw = W(P', \Delta tw \cdot R) = W^{-}(P') + W^{+}(P') \cdot \Delta tw \cdot R \;\;\Longrightarrow\;\; \Delta tw = \frac{W^{-}(P')}{1 - W^{+}(P') \cdot R}.$   (8)
Finally, we check the validity condition in order to react to overload situations. As a result, we get the optimal waiting time that minimizes the total latency time and thus maximizes the message throughput of a single deployed plan.
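The closed-form waiting time of Equation 8, together with the validity check, can be sketched in a few lines. The following code is an illustrative O(m) computation under the assumption of the two-category cost model; the fallback searches for a violated latency constraint and the overload handling via k'' are only indicated in comments, and all names are hypothetical rather than the engine's A-WTC algorithm.

```java
/** Illustrative waiting-time computation (Equation 8 plus validity check); not the engine's A-WTC. */
final class WaitingTimeSketch {

    record Operator(double instanceTime, boolean benefitsFromPartitioning) {}

    /**
     * Computes dtw = W-(P') / (1 - W+(P') * R) and checks the validity condition
     * 0 <= W(P', k') <= dtw and T_L <= lc. Returns -1 when no valid dtw exists; in that case the
     * fallbacks from the text apply (searching dtw with T_L(dtw) = lc, or overload handling via k'').
     */
    static double optimalWaitingTime(java.util.List<Operator> plan, double rate, double sel, double lc) {
        double wMinus = 0.0;   // W-(P'): costs independent of k' (operators that benefit from partitioning)
        double wPlus = 0.0;    // W+(P'): per-message costs that grow linearly with k'
        for (Operator o : plan) {
            if (o.benefitsFromPartitioning()) wMinus += o.instanceTime();
            else wPlus += o.instanceTime();
        }
        if (wPlus * rate >= 1.0) return -1.0;            // no valid dtw >= W(P', k') exists (overload)

        double dtw = wMinus / (1.0 - wPlus * rate);      // Equation 8
        double k = rate * dtw;                           // k' = R * dtw
        double wPlan = wMinus + wPlus * k;               // W(P', k')
        double latency = (1.0 / sel) * dtw + wPlan;      // worst case: 1/sel partitions plus the last execution
        return (latency <= lc) ? dtw : -1.0;             // lc violated: search dtw with T_L(dtw) = lc (omitted)
    }
}
```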
5 Experimental Evaluation

Our evaluation shows that (1) significant throughput improvements are achievable, (2) the maximum latency guarantees hold under experimental investigation, and (3) the runtime overhead is negligible. We implemented the approach of MFO via horizontal partitioning within our Java-based WFPE (workflow process engine) and integrated it into our cost-based optimization framework. This includes the (hash) partition tree, slightly changed operators as well as the algorithms for deriving partitioning attributes (A-DPA), plan rewriting (A-MPR) and waiting time computation (A-WTC). Furthermore, we ran our experiments on an IBM blade (OS Suse Linux, 32 bit) with two processors (each of them a Dual Core AMD Opteron Processor 270 at 2 GHz) and 9 GB RAM. We used synthetically generated datasets in order to simulate arbitrary selectivities and cardinalities. As the integration flow under test, we used our example plan P2.

5.1 Execution Time and Scalability

First, we evaluated the execution time of partitioned plan execution compared to the unoptimized execution. We varied the batch size k' ∈ [1, 20], measured the execution time W(P2, k') (Figure 6(a)) and computed the relative execution time W(P2, k')/k' (Figure 6(d)). For comparison, the unoptimized plan was executed k' times, where we measured the total execution time. This experiment was repeated 100 times. The unoptimized execution shows a linear scalability with increasing batch size k', where the logical y-intercept is zero. In contrast, the optimized plan also shows a linear scalability but with a higher logical y-intercept. As a result, the relative execution time is constant for the unoptimized execution, while for the optimized execution it decreases with increasing batch size and tends towards a lower bound. This lower bound is given by the costs of operators that do not benefit from partitioning. Note (1) that even for one-message partitions the overhead is negligible and (2) that even small numbers of messages within a batch significantly reduce the relative execution time. Second, we investigated the inter-influences between message arrival rates R, waiting times Δtw, partitioning attribute selectivities sel and the resulting batch sizes k' (k' = R · Δtw). We executed |M'| = 100 messages and repeated all subexperiments 100 times. As a first subexperiment, we fixed a waiting time of Δtw = 10 s. Figure 6(b) shows the influence of the message rate R on the average number of messages in the batch. We can observe (1) that the higher the message rate, the higher the number of messages per batch, and (2) that the selectivity determines the reachable upper bound. However, the influence of the message rate is independent of the selectivity. As a second subexperiment, Figure 6(e) illustrates the influence of Δtw on the batch size k', where we fixed sel = 1.0. Both an increasing waiting time and an increasing message rate increase the batch size until the total number of messages is reached.
[Figure 6 panels: (a) Execution Time, (b) Varying R and sel, (c) Scalability Varying k', (d) Rel. Execution Time, (e) Varying Δtw and R, (f) Scalability Varying d]
Fig. 6. Execution Time, Scalability and Influences on Batch Sizes
[Figure 7 panels: (a) Fixed w/o SEB, (b) Poisson w/o SEB, (c) Fixed w/ SEB]
Fig. 7. Latency Time of Single Messages TL (mi )
Third, we investigated the scalability of plan execution. This includes the scalability (1) with increasing input data size and (2) with increasing batch size. We executed 20,000 plan instances and compared the optimized plans with their unoptimized counterparts, where we fixed an optimization interval of Δt = 5 min. In a first subexperiment, we investigated the scalability with increasing batch size k' ∈ {1, 10, 20, 30, 40, 50, 60, 70}. Figure 6(c) shows the results. We observe that the overhead for executing one-message partitions, which is caused by additional abstractions for messages and operators, is marginal. Furthermore, we see the monotonically non-increasing total execution time function (with increasing batch size) and the existence of a lower bound of the total execution time. In a second subexperiment, we varied the input data size d ∈ {1, 2, 3, 4, 5, 6, 7} (in 100 kB), while the size of externally loaded data was unchanged. We fixed a batch size of k' = 10. The results are shown
[Figure 8 panels: (a) w/o Serialization (SEB), (b) w/ Serialization (SEB)]
Fig. 8. Runtime Overhead for Enqueue with Different Message Queues
in Figure 6(f). In general, we observe good scalability with increasing data size. However, the relative improvement decreases because the size of loaded data was not changed. We can conclude that the scalability depends on the workload and on the concrete plan.

5.2 Latency Time

Furthermore, we executed |M'| = 1,000 messages using a maximum latency constraint of lc = 10 s and measured the message latency times T_L(m_i). We fixed a selectivity of sel = 0.1 and a message arrival rate of R = 5 msg/s, used different arrival rate distributions (fixed, Poisson), and analyzed the influence of serialized external behavior (SEB). As a worst-case consideration, we computed the waiting time Δtw | T_L(|M'| = k'/sel) = lc, which resulted in Δtw = 981.26 ms because, in the worst case, there are 1/sel = 10 different partitions plus the execution time of the last partition. For both message arrival rate distribution functions (Figure 7(a) and Figure 7(b)), the constraint is not significantly exceeded. The latency of messages varies from almost zero to the latency constraint, where the missed constraints are caused by variations of the execution time. The constraint also holds for SEB, where all messages show more similar latency times (Figure 7(c)) due to serialization at the outbound side.

5.3 Runtime Overhead

We analyzed the overhead of the (hash) partition tree compared to the transient message queue. We enqueued 20,000 messages (see scalability experiments) with varying selectivities sel ∈ {0.001, 0.01, 0.1, 1.0} and measured the total execution time. This experiment was repeated 100 times. Figure 8 illustrates the results using log-scaled x- and y-axes. We see that without SEB, both the transient queue and the hash partition tree show constant execution time and the overhead of the hash partition tree is negligible. In contrast, the execution time of the partition tree increases linearly with decreasing selectivity due to the linear probing over all partitions. We observe a similar behavior with SEB, except that the execution time of the hash partition tree also increases linearly with decreasing selectivity. This is caused by the required counting of outrun messages.
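To illustrate the measured enqueue behavior, the following sketch shows one plausible shape of a hash partition tree enqueue with optional outrun counting for serialized external behavior (SEB). All class and method names are hypothetical; the actual WFPE data structure is not reproduced in this excerpt.

```java
/** Hedged sketch of a hash partition tree enqueue with optional SEB outrun bookkeeping; names are hypothetical. */
final class HashPartitionTreeSketch {

    static final class Partition {
        final Object key;                                  // partitioning attribute value
        final java.util.ArrayDeque<Object> messages = new java.util.ArrayDeque<>();
        long outrun;                                       // illustrative counter of messages that overtook this partition
        Partition(Object key) { this.key = key; }
    }

    private final java.util.HashMap<Object, Partition> partitions = new java.util.HashMap<>();
    private final boolean serializedExternalBehavior;

    HashPartitionTreeSketch(boolean seb) { this.serializedExternalBehavior = seb; }

    /** Without SEB, enqueue is O(1) via hashing; with SEB, updating outrun counters touches all other partitions, so the cost grows with the number of partitions (1/sel). */
    void enqueue(Object partitionKey, Object message) {
        Partition target = partitions.computeIfAbsent(partitionKey, Partition::new);
        target.messages.addLast(message);
        if (serializedExternalBehavior) {
            for (Partition p : partitions.values()) {
                if (p != target) p.outrun++;
            }
        }
    }
}
```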
In conclusion, MFO leads to throughput improvements by accepting a moderate additional latency time. How much we benefit depends on the plans used and on the workload. The benefit results from (1) a moderate runtime overhead, and (2) the need for only a few messages in a partition to yield a significant speedup.
6 Related Work

Multi-Query Optimization. The basic concepts of Multi-Query Optimization (MQO) [15] are pipelined execution and data sharing, where a huge body of work exists for local environments [16,17] and for distributed query processing [10,18,1,17]. For example, Lee et al. employed the waiting opportunities within a blocking query execution plan [1]. Further, Qiao et al. investigated a batch-sharing partitioning scheme [19] in order to allow similar queries to share cache contents. The main difference is that MQO benefits from reusing results across queries, while for MFO this is impossible due to incoming streams of disjoint messages. In addition, MFO computes the optimal waiting time.

Data Partitioning. Horizontal data partitioning [20] is widely applied in DBMSs and distributed systems. Typically, this is an issue of physical design [21]. However, there are more recent approaches such as table partitioning along foreign-key constraints [22]. In the area of data streams, data partitioning has been used for plan partitioning across server nodes [23] or single filter evaluation on tuple granularity [24]. Finally, there are also similarities to partitioning in parallel DBMSs. The major difference is that MFO handles infinite streams of messages.

Workflow Optimization. In addition, there are data-centric but rule-based approaches for optimizing BPEL processes [25] and ETL flows [26]. In contrast, we proposed the cost-based optimization of integration flows [14]. All these approaches focus on execution time minimization rather than on throughput maximization. Furthermore, there are existing approaches [5,6,4,9] that also address throughput optimization. However, those approaches increase the degree of parallelism, while our approach reduces the executed work.
7 Conclusions

To summarize, we proposed a novel approach for throughput maximization of integration flows that reduces work by employing horizontal data partitioning. Our evaluation showed that significant performance improvements are possible and that the theoretical latency guarantees also hold under experimental investigation. In conclusion, the MFO approach can be seamlessly applied in a variety of integration platforms that asynchronously execute data-driven integration flows. The general MFO approach opens many opportunities for further optimizations. Future work might consider (1) the execution of partitions independent of their temporal order, (2) plan partitioning in the sense of compiling different plans for different partitions, (3) global MFO for multiple plans, and (4) cost-based plan rewriting. Finally, we might (5) combine MFO with pipelining and load balancing because both also address throughput maximization.
References

1. Lee, R., Zhou, M., Liao, H.: Request window: an approach to improve throughput of RDBMS-based data integration system by utilizing data sharing across concurrent distributed queries. In: VLDB (2007)
2. Cecchet, E., Candea, G., Ailamaki, A.: Middleware-based database replication: the gaps between theory and practice. In: SIGMOD (2008)
3. Bhide, M., Agarwal, M., Bar-Or, A., Padmanabhan, S., Mittapalli, S., Venkatachaliah, G.: XPEDIA: XML processing for data integration. PVLDB 2(2) (2009)
4. Li, H., Zhan, D.: Workflow timed critical path optimization. Nature and Science 3(2) (2005)
5. Biornstad, B., Pautasso, C., Alonso, G.: Control the flow: How to safely compose streaming services into business processes. In: SCC (2006)
6. Boehm, M., Habich, D., Preissler, S., Lehner, W., Wloka, U.: Cost-based vectorization of instance-based integration processes. In: Grundspenkis, J., Morzy, T., Vossen, G. (eds.) ADBIS 2009. LNCS, vol. 5739, pp. 253–269. Springer, Heidelberg (2009)
7. Preissler, S., Habich, D., Lehner, W.: Process-based data streaming in service-oriented environments. In: Filipe, J., Cordeiro, J. (eds.) ICEIS 2010. LNBIP, vol. 73, pp. 60–75. Springer, Heidelberg (2010)
8. Boehm, M., Habich, D., Lehner, W.: Multi-process optimization via horizontal message queue partitioning. In: Filipe, J., Cordeiro, J. (eds.) ICEIS 2010. LNBIP, vol. 73, pp. 31–47. Springer, Heidelberg (2010)
9. Srivastava, U., Munagala, K., Widom, J., Motwani, R.: Query optimization over web services. In: VLDB (2006)
10. Ives, Z.G., Halevy, A.Y., Weld, D.S.: Adapting to source properties in processing data integration queries. In: SIGMOD (2004)
11. Li, J., Tufte, K., Shkapenyuk, V., Papadimos, V., Johnson, T., Maier, D.: Out-of-order processing: a new architecture for high-performance stream systems. PVLDB 1(1) (2008)
12. Ivanova, M., Kersten, M.L., Nes, N.J., Goncalves, R.: An architecture for recycling intermediates in a column-store. In: SIGMOD (2009)
13. Scheuermann, P., Ouksel, A.M.: Multidimensional B-trees for associative searching in database systems. Inf. Syst. 7(2) (1982)
14. Boehm, M., Wloka, U., Habich, D., Lehner, W.: Workload-based optimization of integration processes. In: CIKM (2008)
15. Roy, P., Seshadri, S., Sudarshan, S., Bhobe, S.: Efficient and extensible algorithms for multi query optimization. In: SIGMOD (2000)
16. Harizopoulos, S., Shkapenyuk, V., Ailamaki, A.: QPipe: A simultaneously pipelined relational query engine. In: SIGMOD (2005)
17. Unterbrunner, P., Giannikis, G., Alonso, G., Fauser, D., Kossmann, D.: Predictable performance for unpredictable workloads. PVLDB 2(1) (2009)
18. Kementsietsidis, A., Neven, F., de Craen, D.V., Vansummeren, S.: Scalable multi-query optimization for exploratory queries over federated scientific databases. In: VLDB (2008)
19. Qiao, L., Raman, V., Reiss, F., Haas, P.J., Lohman, G.M.: Main-memory scan sharing for multi-core CPUs. PVLDB 1(1) (2008)
20. Ceri, S., Negri, M., Pelagatti, G.: Horizontal data partitioning in database design. In: SIGMOD (1982)
21. Agrawal, S., Narasayya, V.R., Yang, B.: Integrating vertical and horizontal partitioning into automated physical database design. In: SIGMOD (2004)
22. Eadon, G., Chong, E.I., Shankar, S., Raghavan, A., Srinivasan, J., Das, S.: Supporting table partitioning by reference in Oracle. In: SIGMOD (2008)
23. Johnson, T., Muthukrishnan, S.M., Shkapenyuk, V., Spatscheck, O.: Query-aware partitioning for monitoring massive network data streams. In: SIGMOD (2008)
24. Avnur, R., Hellerstein, J.M.: Eddies: Continuously adaptive query processing. In: SIGMOD (2000)
25. Vrhovnik, M., Schwarz, H., Suhre, O., Mitschang, B., Markl, V., Maier, A., Kraft, T.: An approach to optimize data processing in business processes. In: VLDB (2007)
26. Simitsis, A., Vassiliadis, P., Sellis, T.K.: Optimizing ETL processes in data warehouses. In: ICDE (2005)
Workflow Management Issues in Virtual Enterprise Networks André Kolell and Jeewani Anupama Ginige School of Computing & Mathematics, University of Western Sydney, Australia [email protected], [email protected]
Abstract. Increasing competitive pressure and the availability of the Internet and related technologies have stimulated the collaboration of independent businesses. Such collaborations, aimed at achieving common business goals, are referred to as virtual enterprise networks (VENs). Though the web is an excellent platform for collaboration, the requirements of VENs regarding workflow management systems exceed those of autonomous organizations. This paper provides a comprehensive overview of numerous issues related to workflow management in VENs. These issues are discussed in the three phases of the virtual enterprise lifecycle: configuration, operation and dissolution; and corroborated by two real case studies of VENs in Australia.

Keywords: Business processes, Business process management, Virtual organizations, Virtual enterprise networks, Workflow management systems.
Organizations within a VEN face multiple challenges in addition to those that autonomous organizations need to deal with. Each VEN member has its own aims, core competencies and resources, which might be very different from those of other members. In addition, large spatial distances between the members' sites and the VEN being a temporary arrangement might further increase the complexity. According to Ricci, Omicini and Denti [3], a VEN's complexity can be reduced by improving coordination. Borchardt [2] suggests four different types of coordination instruments to reduce complexity: individual-oriented coordination instruments (selecting an overall coordination agent), structural coordination instruments (clearly defining the VEN's organizational structure), technocratic coordination instruments (setting up clear production plans, agreeing on internal transfer prices, rules, contracts, etc.) and IT coordination instruments. Although the latter include technologies such as telephones, fax, e-mail and video-/online-conferences, the most important IT coordination instruments are groupware and workflow management systems. In the context of VENs, workflow management systems are a technology that enables virtual enterprises to manage, coordinate and control their business activities [4]. Workflow management in a VEN is much more complicated than in autonomous organizations because it is confronted with all the complexities that VENs need to face; bringing together all the heterogeneous software services to achieve the conglomerate's original business goal is only one of them. So far, there are a number of scientific contributions about how workflow management systems for a VEN should be designed. Examples of research projects in this area are the CrossFlow project [5], the CA-PLAN project [6] and the DynaFlow project [7]. All teams behind these projects identified and tackled the special challenges a VEN has to face when defining and implementing a workflow management solution, but neither they nor anyone else has really concentrated solely on the issues that occur in VENs and concern the implementation and usage of workflow management technology. Although they mention a selection of issues, they immediately focus on possible solutions to these issues, which they present in the form of the various projects they are working on. It is therefore worthwhile to focus entirely on the workflow management issues that arise in VENs. This paper discusses the different issues related to workflow management in VENs based on the virtual enterprise lifecycle (Fig. 1). These issues are subdivided into business, people and technological issues for discussion purposes. Furthermore, a brief exploration is carried out on how current workflow management systems and prototypes of research projects try to overcome these issues. The organization of this paper is as follows. In section 2 the research methodology used is explained, followed by an introduction to the two case studies used in this research in section 3. In section 4 the different workflow management issues in VENs are discussed under each phase of the virtual enterprise lifecycle. How current workflow management systems and prototypes of related research projects try to handle these issues is described in section 5. Section 6 positions the findings of this paper among the similar work carried out by other researchers. Section 7 concludes the paper with a brief view of possible areas for further research.
2 Research Methodology

The findings of this work result from two case studies and a comprehensive literature review. The two case studies selected for this paper are based on one of the authors' involvement in them. In one case, the author was involved in the capacity of an employee of the VEN, and as a technology provider in the other. Informal interviews were used as the main technique for collecting case study related information. In addition to these case studies, a thorough literature review was carried out. This literature review enabled us to establish our own findings from the case studies. During the course of the research, the identified issues of VENs (either from the case studies or through the literature review) were allocated to the different phases of the virtual enterprise lifecycle, based on their occurrence. This representation gives a well-structured overview of all the issues in their respective phases of the virtual enterprise lifecycle.
3 Case Studies

In Australia, the Western Sydney Region is considered to be the homeland of micro businesses and small to medium enterprises (SMEs). Over 72,000 micro businesses and SMEs in the region contribute 10% of the Australian economy [8]. Due to this high number of relatively small companies, there are definite advantages in terms of employment and economic development. However, being smaller organizations, they face other limitations too. One such limitation is their inability to compete in the global market for reasonably large contracts. Therefore, many SMEs in this region attempt to join forces by forming VENs. To disclose the various issues that can occur in VENs, two such VENs from the Western Sydney region are selected. The first case study is about a group of tool making companies who attempted to benefit by forming a VEN for the purpose of quoting for bigger jobs. The second is in relation to a virtual consulting company that brought together medical practitioners and information and communication technology experts to join their expertise in providing software solutions for the healthcare sector. An overview of the case studies is presented below. These case studies will be utilized in the later discussions of this paper.

3.1 Tool Makers Case Study

Tool makers are companies that develop instruments (or tools) for other mass-scale manufacturing organizations, such as plastic molding companies, or for individuals such as inventors. Four such tool making companies in the Western Sydney region decided to join forces in 2004. Tool makers were thriving micro businesses in the 1990s, prior to Australian jobs being shipped offshore to places like China. The workshops of these companies are equipped with machinery of varying capacities. In the new millennium, due to a lack of work that suited the capacity of these workshops, most machinery sat idle. On the other hand, they could not bid for bigger global jobs, such as
manufacturing parts for aircraft independently, due to a lack of capacity in their individual workshops. Hence, the four companies decided to form one VEN to be able to quote for bigger global contracts by showcasing their joint workshop capabilities. One particular tender that they aimed to bid for was for manufacturing a certain part for Boeing aircraft. The joining of forces was expected to be achieved as a VEN, using a web-based collaborative quoting system. The idea was to have a web-based workflow management system that would allow these companies to enter job details, jointly design the jobs, share design details, divide the activities among the four companies based on capacity, and jointly offer quotes to the customer. A software development team in the AeIMS (Advanced enterprise Information Management System) research group of the University of Western Sydney was given the task of developing a web-based workflow management system for the purpose of online collaborative quoting. Developers of AeIMS started the initial developments in late 2004 in consultation with the companies. Even though these four companies were very enthusiastic about the idea of becoming a VEN, even by late 2005 they had not been able to lift off as one entity. Eventually, by 2006, they decided to let go of the effort of becoming a single VEN. This case study provides a classic case of the various issues that VENs face in the configuration stage of the lifecycle and that prevent them from eventuating as a single entity. These issues will be further discussed later in this paper.

3.2 Collaborative Consultation Case Study

In contrast to our first case study, the second case study was a success, as it managed to cover the full lifecycle of the VEN from configuration, through operation, to dissolution. This collaboration was between medical practitioners and experts in information and communication technology. A few micro consulting companies based in the Western Sydney region of Australia decided to join as a VEN in 2005 to provide consulting services to the healthcare sector. This virtual consulting enterprise had a different approach and format from the prior one. At the configuration stage, these individual companies had identified their strengths and weaknesses in terms of their skill sets, expertise, associations in professional networks, available technologies and other operational needs such as the specific insurance policies they held and their location. While they operated as individual companies, they came together as one entity based on the consulting work available. Based on the project, they would select a champion among themselves to lead the way and the other members who would participate in a particular tender. In making these decisions, they would consider the tender specifications and the strengths and weaknesses of the individual consulting companies. The selected champion company would coordinate the activities and would see the project through to its end. After the completion of the project, the VEN would temporarily dissolve until a new project came along. This project-based formation of the VEN has worked extremely well for these consulting companies. Even to date, they continue to form a VEN quickly for consulting projects and dissolve it at the end of the project.
The success of being able to operate as a VEN does not, however, guarantee that they operate without any problems. The issues that this VEN faces will be discussed later in this paper.
4 Workflow Management Issues in Virtual Enterprise Networks

VENs have to face multiple kinds of challenges during their existence. While some of them are general (for example, the selection of partners, establishing trust, agreeing on the sharing of revenues/losses, the duration of the network and how to dissolve the network), other challenges are unambiguously related to the VEN's information technology infrastructure. Camarinha-Matos, Tschammer and Afsarmanesh [9] and Camarinha-Matos [10] point out that the very short lifecycles of modern technologies and the lack of technology-independent reference architectures are the main obstacles to quickly and easily setting up a VEN from an information technology point of view. They explain how service-oriented architectures can help to reduce the problems that arise from heterogeneous software systems. Thus, inter-organizational workflow management systems need to be implemented upon (and to some extent independently from) the underlying information technology infrastructures to automate, manage, coordinate and control the business activities of the network [4]. Figure 1 depicts the occurrence of the various workflow management related issues during the virtual enterprise lifecycle. These issues are further categorised as people, business and technological issues. The subsequent sections of this paper describe these issues in detail.

4.1 Issues in the Configuration Phase

The configuration phase of the virtual enterprise lifecycle is where most issues occur. This is mainly caused by the requirement to agree on and set up the network's information technology infrastructure.

Select a VEN Coordinator: As soon as the initial partner selection is done, a VEN coordinator needs to be chosen. This can be either a member of the VEN or an independent entity that is entirely newly established. The coordinator is responsible for ensuring successful coordination of the VEN's business activities. In our first case study, one of the reasons for the failure was the inability to select a suitable coordinator. There were trust issues involved relating to certain business matters, such as the division of jobs and the handling of funds. Some members owned workshop capabilities that were superior to others'; hence, they expected to be the coordinator of jobs. However, the others argued that whoever brings in the project work should be the coordinator of the activities. Due to these differences in opinion, the VEN did not manage to select a suitable coordinator. In the second case study, the selection of the coordinator was project based. They had also devised successful assessment criteria for selecting a coordinator based on the strengths and weaknesses of the members and on the project specifications. Members had the understanding that they may or may not be involved in certain projects, because the rules were laid out very clearly at the beginning.
[Figure 1: issues mapped onto the virtual enterprise lifecycle phases configuration, operation and dissolution. The issue boxes comprise: select a VEN coordinator; define business processes (based on long-term goals); technological setup considerations, i.e., agree on WfMS (architecture), define data access rights, define interfaces, implement the designed solution; implement activity tracking and tracing; decrease transaction costs (by using trust to increase transparency and agility); managing member changes (onboarding of new VEN members, compensation of leaving VEN members); build up the ability to cope with new (and leaving) VEN members; process monitoring and progress reporting; corrective actions; establish, keep and improve trust; continue with support processes; ongoing access to relevant data. Issues are classified as people, business or technological issues; member changes appear as an external trigger.]
Fig. 1. Occurrences of Issues of VEN in Various Phases of their Life Cycle
Define Business Processes: Once the coordinator is identified, the business processes of the VEN need to be defined. For the definition of the processes, it is important to understand the role of each member of the VEN and what they bring into the collaboration. In our first case study, defining the business processes was chaotic, as the tool makers did not have a clear understanding of the contributions of the different members. In contrast, the definition of business processes in the second case study was relatively easy, as they had accurately identified the strengths and weaknesses of each member and also their contributions to each project.

Technological Setup Considerations: The members of the VEN must agree on the information technology they want to use and on how the interoperability of heterogeneous software systems can be guaranteed. Furthermore, a workflow management system must be chosen that is able to integrate multiple distributed heterogeneous software systems. The translation of the virtual enterprise's strategy – meaning its long-term goals – into business processes and the evaluation of their automation is an indispensable prerequisite for this step of the VEN's configuration. Generally, all members of the VEN are autonomous companies that existed before the collaboration was founded and that aim to exist long after the VEN has dissolved. They have their own interests, which sometimes compete with those of others. It is therefore – in
contrast to an autonomous enterprise – necessary that both the workflow management system and the individual enterprises' software systems limit the access to software services and data to those partners who really require access to them. Of course, the openness of the VEN's members is a trade-off between offering useful information that helps to increase the network's overall success and disclosing corporate secrets, widening the member's window of vulnerability [11]. In our first case study, the decision of selecting an appropriate workflow management system was outsourced to an external entity that all parties trusted. Hence, the decision to develop a web-based collaborative quoting system was fully supported by all the tool maker companies. However, issues arose when deciding about data access rights and archiving. While everyone understood the importance of sharing information, they had issues with sharing certain design details via engineering drawings. Sometimes, their clients did not want information to be disclosed to other parties. Again, it came down to the trust issues that prevented these tool maker companies from resolving the data access issues. The consulting company of our second case study had different needs in terms of information technology. The nature of their work did not require special information technologies, except for some project management tools, which were available to everyone. Hence, the coordinator of a particular project would keep track of project participation, timelines, and other project-related documents. At the end of the project, the coordinating company would archive the information and provide a copy of this archived information to the other parties that were involved in the project. Also, since these experts came from different domains, their individual client bases were more or less mutually exclusive and they had no issues regarding intellectual property. In addition, over a period of time they had developed a certain amount of trust among themselves that helped in the operations.

Implement Activity Tracking and Tracing: Another issue workflow management systems need to deal with is the proper tracking and tracing of the activities that each member contributes to the final product or service. While this is required by law in some countries, it is also useful for identifying errors and for continuous quality improvement. In both of our case studies, the members identified the need for keeping accurate records of tracking and monitoring activities. The tool makers' web-based quoting system had the capabilities for each member to report their progress via the system and also to re-distribute work if one member was unable to meet targets for any reason. The companies involved in the second case study took a rather loose approach to keeping track of progress. Each member would email the coordinator their progress and the coordinator would keep a record of the progress of the project locally. The other members trusted the project management capabilities of the coordinating member to resolve any timeline or resource problems. Camarinha-Matos et al. [1] point out that configuration efforts are always largest for an organization when it participates in a VEN for the first time. This claim is further confirmed by our first case study. Based on the experience of the two case studies, it is conclusive that people issues – mainly trust – are the main factor that needs to be resolved at the configuration stage. It is also evident from our case studies
that it is much easier for organizations to resolve trust issues when they come from different domains where they do not claim a common client base.

4.2 Issues in the Operation Phase

The underlying objective that spans the operation (and dissolution) phases is looking into ways in which transaction costs can be decreased by further establishing trust to increase transparency and agility. Therefore, the workflow management issues that occur in the operation phase of a VEN concern improvements in flexibility and collaboration. Our second case study will be used extensively to demonstrate the issues involved at this stage.

Managing Member Changes: Flexibility is one of the key factors needed at the operational stage of the VEN. In particular, external triggers may lead to the introduction of new members to the network or the departure of existing members. External triggers, such as the need for new skills that do not exist within the network, would result in getting new members. Also, due to various business and people reasons, members may decide to leave entirely or to withdraw their contributions to the VEN. Without being highly flexible in its business processes and the technologies for their execution, the VEN is unable to compensate for the exit of one of its members in a reasonable amount of time. The requirements of VENs regarding workflow management systems therefore exceed those of autonomous organizations. The immediate effect of member changes is the business impact on the running projects. In particular, when members leave, the remaining workload needs to be shared among the existing members and project plans need to be redrawn. Sometimes, new members need to be added to the network. Following the business issues, technological implications are faced by the VEN. For example, the systems must allow easily changing the setup of the organizational structure of the network, including the workflows and the visibility of and access to data related to it. In addition, the possibility of losing data and single activities within workflows must be considered: it must be prevented that the exit of a member results in the breakdown of operations, meaning that started workflows – for example, accepted orders – can no longer be processed due to essential information that left with the exiting member. In our second case study, agility was one of the key foundations that the VEN was built upon. From the accounting and financial point of view, the coordinating company sub-contracted the work of the other members. Hence, the operational aspects of members leaving or joining were handled in a smooth manner. However, issues arose when data was lost as members left, since the VEN did not have a centralized mechanism to collect information. It was up to the project coordinating organization to make sure the leaving member provided all the information they possessed to the other members. This relied entirely on the trust established, and a certain amount of risk was taken by all members in relation to this.

Process Monitoring and Progress Reporting: Furthermore, workflow management systems need to support the continuous optimization of the VEN's workflows. In this context, the coordinating instance of the VEN needs to monitor the performance of workflow execution and report relevant findings to individual members.
The consulting enterprise of our second case study improved its processes based on the experience that it gained from past projects. For example, they initially used to hire administrative staff to carry out some clerical tasks. They then realized the issues they faced, as the VEN itself was not a legal entity that could pay salaries, taxes and superannuation. Hence, they encouraged the clerical staff to establish themselves as individual companies, and the VEN sub-contracted these micro companies to provide clerical services. Our investigation shows that flexibility and an operational work structure, supported by sound information systems, are the key ingredients needed in managing issues in the operational stage. The ability of the VEN to learn from past experiences and improve its business processes is equally important.

4.3 Issues in the Dissolution Phase

The dissolution of a VEN does not automatically brush aside any workflow management issues. Instead, the opposite is the case.

Continuing with Support Processes: Some business processes – such as customer support or product maintenance – must be kept alive although the organization itself has dissolved. Even if it does not make sense to provide such services (processes) for the whole lifecycle of the product or service generated by the virtual enterprise, contracts or law might require this for at least a given period of time. Workflow management systems for VENs must therefore be able to deal with this issue – which is again one that emphasizes the requirement of high flexibility – as well. The workflow management system must be flexible enough to overcome the possible loss of data that is provided by only a single member. Furthermore, it must allow regulating access to all the different information according to the regulations the VEN's members had agreed on when they created their network.

Ongoing Access to Relevant Data: The VEN of our second case study handles dissolution issues by providing the collective knowledge to all the members who were involved in a particular project. Thus, no single company had the responsibility of managing such information. As highlighted by the VEN members of the second case study, the most important aspect of a smooth dissolution is firstly identifying the dissolution point at the initial configuration of the VEN. In their case, once every consultation project is completed and the final payments are made, the VEN dissolves. If there is no such clear identification of the dissolution point, it can lead to issues. These dissolution points need to be quantifiable and practical. When reflecting back, it is clear that the tool makers of our first case study had issues in deciding these dissolution points. At the early stages of the lifecycle, our second case study's VEN prepares a set of guidelines that cover the activities at the dissolution stage. These guidelines provide details of the responsibilities of each member for the activities at dissolution, such as providing continued support, finalizing financial matters, and archiving and distributing information. Our study revealed that a smooth dissolution mainly depends on proper planning at the early stages and on the support of information systems that continue to provide the necessary information to the required parties beyond the dissolution.
5 Current Workflow Management Solutions

The increasing number of workflow management systems makes it nearly impossible to keep an overview of their functionalities. Therefore, it is not the aim of this paper to provide a comprehensive overview of how different workflow management systems deal with the issues of VENs. Instead, it will only be shown how some systems, especially those that result from research projects related to workflow management in VENs, have improved their architectures and functionalities in order to fulfill the special requirements that emerge from the organizational form of VENs. Some of these scientific workflow management systems or approaches to workflow management in virtual enterprises are PRODNET I and II, SSPS, CrossFlow, DynaFlow and CA-PLAN. A real-life example of a workflow management system that is specialized in inter-organizational collaboration – with a focus on order processing and project execution – is myOpenFactory. The approaches of all those systems to improving inter-organizational collaboration are quite different. Systems like CrossFlow and CA-PLAN create interoperability between the members' individual workflow systems by treating them as services and integrating them through the usage of newly proposed workflow models. Thus, inter-organizational service-oriented architectures can be set up, with a coordinating instance at the top [5, 6]. Ricci et al. [3] describe that, in an optimal case, the role of the coordinating instance can be filled by a software solution. Other systems, such as DynaFlow, expand the scope of the workflow process description language (WPDL) to respond to virtual enterprises' requirements without trying to introduce entirely new standards, instead building on existing ones [7]. This is reasonable because it does not constrain the development of a standardized base information architecture for the collaboration of networked organizations. Camarinha-Matos, Tschammer and Afsarmanesh [9] point out that the currently ongoing standardization efforts in the area of Internet-related technologies would pave the way for the development of such a base infrastructure. In this context, the work of the Workflow Management Coalition (WfMC) is worth mentioning. Amongst other working groups, they have one working group that focuses especially on improving the interoperability of workflows. Further improvements of the WfMC's standards are desirable not only because they make inter-organizational collaboration theoretically easier but also because of the popularity of their standards in general, thus ensuring a quick spread and avoiding the emergence of other standards and technologies at the same time.
6 Similar Work

As there are already many publications about VENs and workflow management, this paper can be placed into a broad context of related work. However, as mentioned previously, there are no scientific contributions that focus entirely on the issues related to workflow management in VENs. To begin with, there are publications that investigate the phenomenon of VENs in general and present their general issues, most often including possible solutions. First of all, it is worth mentioning Camarinha-Matos et al. [1], who describe general challenges which occur during the different phases of the virtual enterprise lifecycle and
which prevent the network from achieving its optimal agility. Further examples in this area of research are Ricci et al. [3], Institut der Wirtschaft Thüringens [12] and Borchardt [2]. Then there are publications that concentrate on the information technology infrastructure of VENs. Camarinha-Matos, Afsarmanesh et al. published multiple research findings in this area [1, 9, 10, 11]. Most of the scientific workflow management systems and approaches to workflow management in VENs mentioned in the previous section can be classified in this category as well. Again, all those publications have in common that they do not concentrate solely on the issues but instead more or less skip this step by only mentioning some of them and then trying to develop and present possible solutions. To facilitate future work about workflow management in VENs, it is therefore useful to focus entirely on the workflow management issues that arise in VENs and to create a single place that discusses them and shows how current workflow management systems and prototypes of research projects try to overcome them. This paper has been written for this purpose.
7 Conclusions and Future Work

In this paper the plurality of workflow management issues in VENs has been described. It has been shown that the requirements for workflow management systems are not limited to an initial configuration but rather occur during all phases of the virtual enterprise lifecycle, even after the VEN has been dissolved. Although multiple efforts have been made, there is no standardized base information architecture for the collaboration of networked organizations so far. Organizations such as the WfMC need to provide a suitable framework fostering the formalization and normalization of information semantics and business processes. VENs must then make use of this framework to increase their agility, which is a prerequisite for broadening their possibilities to better benefit from the latest market developments. Further research, for example in the form of focus group surveys, could help to identify additional issues regarding workflow management in VENs. Rating their individual severities would allow organizations such as the WfMC and manufacturers of workflow management systems to focus their work on what is considered most important. Furthermore, in-depth research could be done reviewing all the workflow management systems and scientific notions and assessing how well they solve the different issues.
References

1. Camarinha-Matos, L.M., Afsarmanesh, H., Rabelo, R.J.: Infrastructure Developments for Agile Virtual Enterprises. Int. Journal of Computer Integrated Manufacturing 16, 235–254 (2003)
2. Borchardt, A.: Koordinationsinstrumente in virtuellen Unternehmen. Deutscher Universitäts-Verlag, Wiesbaden (2006)
3. Ricci, A., Omicini, A., Denti, E.: Virtual Enterprises and Workflow Management as Agent Coordination Issues. International Journal of Cooperative Information Systems 11, 355–379 (2002)
4. Meng, J., Stanley, Y.W.S., Lam, H., Helal, A.: Achieving Dynamic Inter-Organizational Workflow Management by Integrating Business Processes, Events and Rules. Paper presented at the 35th Hawaii International Conference on System Sciences, Hawaii (2002)
5. Grefen, P., Aberer, K., Hoffner, Y., Ludwig, H.: CrossFlow – Cross-Organizational Workflow Support for Virtual Organizations. Paper presented at the 9th International Workshop on Research Issues on Data Engineering: Information Technology for Virtual Enterprises (1999)
6. Yan, S.-B., Wang, F.-J.: CA-PLAN – An Inter-Organizational Workflow Model. Paper presented at the 10th IEEE International Workshop on Future Trends of Distributed Computing Systems (2004)
7. Meng, J., Stanley, Y.W.S., Lam, H., Helal, A., Xian, J., Liu, X., Yang, S.: DynaFlow: A Dynamic Inter-Organizational Workflow Management System. Int. Journal of Business Process Integration and Management 1, 101–115 (2006)
8. Ginige, A., Murugesan, S., Kazanis, P.: A Road Map for Successfully Transforming SMEs into E-Business. Cutter IT Journal, The Journal of Information Technology Management 14(5), 39–51 (2001)
9. Camarinha-Matos, L.M., Tschammer, V., Afsarmanesh, H.: On Emerging Technologies for Virtual Organizations. In: Camarinha-Matos, L.M., Afsarmanesh, H. (eds.) Collaborative Networked Organizations – Research Agenda for Emerging Business Models, pp. 207–224. Springer, Berlin (2004)
10. Camarinha-Matos, L.M.: ICT Infrastructures for Virtual Organizations. In: Camarinha-Matos, L.M., Afsarmanesh, H., Ollus, M. (eds.) Virtual Organizations: Systems and Practices, pp. 83–102. Springer, Berlin (2005)
11. Frenkel, A., Afsarmanesh, H., Herzberger, L.O.: Information Access Rights in Virtual Enterprises. Paper presented at the 2nd IFIP/MASSYVE Working Conference on Infrastructures for Virtual Enterprises, Pro-VE 2000, Florianopolis, Brazil (2000)
12. Institut der Wirtschaft Thüringens: Management von Produktionsnetzwerken in kleinen und mittleren Unternehmen. Verlag des Instituts der Wirtschaft Thüringens, Erfurt (2005)
An XML-Based Streaming Concept for Business Process Execution Steffen Preissler, Dirk Habich, and Wolfgang Lehner Dresden University of Technology, Dresden 01187, Germany {steffen.preissler,dirk.habich,wolfgang.lehner}@tu-dresden.de http://wwwdb.inf.tu-dresden.de/˜research
Abstract. Service-oriented environments are the central backbone of today's enterprise workflows. These workflows include traditional process types like travel booking or order processing as well as data-intensive integration processes like operational business intelligence and data analytics. For the latter process types, current execution semantics and concepts do not scale very well in terms of performance and resource consumption. In this paper, we present a concept for data streaming in business processes that is inspired by the typical execution semantics in data management environments. Therefore, we present a conceptual process and execution model that leverages the idea of stream-based service invocation for a scalable and efficient process execution. In selected results of the evaluation we show that it outperforms the execution model of current process engines.

Keywords: Stream, Service, Business process, SOA.
1 Introduction

In order to support managerial decisions in enterprise workflows, business people describe the structures and processes of their environment using business process management (BPM) tools [1]. The area of business processes is well investigated, and existing tools today support the life-cycle of business processes from their design, through their execution, to their monitoring. Business process modeling enables business people to focus on business semantics and to define process flows with graphical support. Prominent business process languages are WSBPEL [2] and BPMN [3]. The control flow semantics, on which BPM languages and their respective execution engines are based, has proven to fit very well for traditional business processes with small-sized data flows. Typical example processes are "order processing" or "travel booking". However, the characteristics of business processes are continuously changing and their complexity grows. One observable trend is the adoption of more application scenarios with more data-intensive processes like business analytics or data integration. Thereby, the volume of data that is processed within a single business process increases significantly [4]. Figure 1 depicts an example process in the area of business analytics that illustrates the trend towards an increased data volume. The process extracts data from different sources and analyzes it in succeeding tasks. First, the process receives a set of customer information as input (getCustInfos), which may include customer id, customer name
[Figure 1: integration flow consisting of getCustInfos [receive] → transform [assign] → getInvoices [invoke] and getOrders [invoke] (calling the external services getInvoices and getOrders) → filterOrders [assign] → joinIDs [assign] → analyze [invoke] (calling the service analyze) → further activities (…); the dotted rectangle marks the part that resembles a typical data management process.]
Fig. 1. Customer Data Integration Scenario
and customer address. Second, the customer ids are extracted and transformed to fit the input structure of both succeeding activities (transform). In a third step, the customer ids are enriched concurrently with invoice information (getInvoices) and current open orders (getOrders) from external services. All open orders are filtered (filterOrders) to get only approved orders. Afterwards, all information for invoices and orders is joined for every customer id (joinIDs) and analyzed (analyze). Further activities are executed for different purposes. Since they are not essential for the remainder of the paper, they are denoted by ellipses. The activity type for every task is stated in square brackets ([]) beneath the activity name. These types are derived from BPEL as the standard process execution language for Web services. As highlighted in Figure 1 by the dotted shaded rectangle, this part of the business process is very similar to typical integration processes within the data management domain, with data extraction, data transformation and data storage. In this domain, the available modeling and execution concepts for data management tasks are aligned for massive data processing [5]. Considering the execution concept, data stream management systems [6] or Extract-Transform-Load (ETL) tools [7], as prominent examples, incorporate a completely different execution paradigm. Instead of using a control flow semantics, they utilize data flow concepts with a stream-based semantics that is typically based on pipeline parallelism. Furthermore, large data sets are split into smaller subsets. This execution model has proven very successful for processing large data sets. Furthermore, much existing work, e.g., [8,5,9], has demonstrated and evaluated that the SOA execution model is not appropriate for processes with large data sets. Therefore, the changeover from the control flow-based execution model to a stream-based execution model seems essential to react to the changing data characteristics of current business processes. Nevertheless, the key concepts of SOA, like flexible orchestration and loosely coupled services, have to be preserved. This paper contributes to this restructuring by providing a first integrated approach of a stream-based extension to the service and process level.

Contribution and Outline. In this paper we contribute as follows: First, we give a brief introduction into the concept of stream-based service invocation in SOA (Section 2). Second, the data flow-based process execution model is presented that allows stream-based data processing (Section 3). Furthermore, our approach for stream-based service invocation in [10] is extended to enable orchestration and usage of web services as streaming data operators (Section 4). In Section 5, we discuss optimizations
for our execution concept. Finally, we evaluate our approach in terms of performance (Section 6), present related work (Section 7) and conclude the paper (Section 8).
2 Stream-Based Service Invocation Revisited In [11], we describe the concepts of control flow-based process execution that are used in today's SOA environments and highlight its shortcomings in terms of data-intensive service applications. In a nutshell, two major drawbacks for data-intensive business processes have been identified. Both are related to control flow-based process execution: (1) on the process level, the step-by-step execution model in conjunction with an implicit data flow, and (2) on the service level, the inefficient, resource-consuming communication overhead for data exchange with external services based on the request–response paradigm and XML as data format. In [10], we already tackled the service level aspect by introducing the concept of stream-based Web service invocation to overcome the resource restrictions with large data sizes. We recall the core concept briefly and point out the limitations of this work. The fundamental idea of the stream-based Web service invocation is to describe the payload of a message as a finite stream of equally structured data items. Figure 2(a) depicts the concept in more detail. The message remains the basic container that wraps header and payload information for requests and responses. However, the payload forms a stream that consists of an arbitrary number n of stream buckets bi with 1 ≤ i ≤ n. Every bucket bi is an equally structured subset of the application data, usually an array of sibling elements. Inherently, one common context is defined for all data items that are transferred within the stream. The client controls the insertion of data buckets into the stream and closes the stream on its own behalf. Figure 2(b) depicts the interaction between client and service. Since the concept can be applied bidirectionally, a request is defined as input stream SI,j whereas a response is defined as output stream SO,j, with j denoting the corresponding service instance. By adding bucket queues to the communication partners, sending and receiving of stream buckets are decoupled from each other (in contrast to the traditional request–response paradigm) and intermediate responses become possible. It has been evaluated that this concept reduces communication overhead in comparison to message chunking, since no individual messages have to be created. In addition, it provides a native common context for all stream items, so that context-sensitive data operations like aggregation can be implemented straightforwardly. The main drawback of the proposed concept is that it assumes all stream buckets to be application data and equal in structure. This does not take dynamic service parameterization into account. Hence, it is not applicable for a more sophisticated stream environment with generalized service operators. To conclude this section, the stream-based service invocation approach represents only one step in the direction of streaming semantics in service-oriented environments. While the proposed step considers the service level aspect to overcome resource limitations, the process level aspect is obviously an open issue. Therefore, we present a data flow-based process approach for message processing in the following section. In Section 4, we extend the service level streaming technique to cover the new requirements from the proposed process perspective.
Fig. 2. Stream-based Service Invocation — (a) Stream-Based Message Container: a message with header and payload, the payload forming a stream S of buckets b1 … bn; (b) Stream-Based Message Interaction: a client instance and a Web service instance Wj exchange an input stream SI,j and an output stream SO,j of buckets via bucket queues
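To make the recalled invocation concept more tangible, the following Java sketch illustrates the idea of bucket queues that decouple sending and receiving; all class and member names are our own illustration and do not correspond to the Axis2-based implementation used later in the evaluation.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** One equally structured subset of the application data (illustrative). */
class StreamBucket {
    final String xmlPayload;  // e.g. a handful of sibling <customerInfo> elements
    final boolean last;       // marks the end of the finite stream
    StreamBucket(String xmlPayload, boolean last) { this.xmlPayload = xmlPayload; this.last = last; }
}

/** Client side of a stream-based invocation: sending and receiving are decoupled via bucket queues. */
class StreamingServiceClient {
    private final BlockingQueue<StreamBucket> requestQueue  = new LinkedBlockingQueue<>(16);
    private final BlockingQueue<StreamBucket> responseQueue = new LinkedBlockingQueue<>(16);

    /** The client inserts buckets into the input stream S_I,j at its own pace. */
    void send(StreamBucket b) throws InterruptedException { requestQueue.put(b); }

    /** Intermediate responses from the output stream S_O,j can be consumed before the request stream is closed. */
    StreamBucket receive() throws InterruptedException { return responseQueue.take(); }

    // In a real system a transport thread would drain requestQueue into one open
    // service connection and fill responseQueue from it; this is omitted here.
}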
3 Stream-Based Process Execution Basically, our concept for stream-based process execution advances the process level with data flow semantics and introduces a corresponding data and process model for stream-based data processing. 3.1 Data Model When processing large XML messages, the available main memory becomes the bottleneck in most cases. One solution is to split the message payload into smaller subsets and to process them consecutively. This reduces memory peaks because the whole message payload does not have to be built in memory. To allow native subset processing, we introduce the notion of processing buckets, which enclose single message subsets and which are used transparently in the processing framework. Let B be a process bucket with B = (d, t, pt), where d denotes a bucket id, t denotes the bucket type and pt denotes the XML payload in dependence on the bucket type t. The basic bucket type is data, which identifies buckets that contain actual data from message subsets. Of course, a processing bucket can also enclose the complete message payload pm with pt = pm, as is the case when the message payload initially enters the process or if pm is small in size. Nevertheless, for large message payloads it is beneficial to split them into a set of process buckets bi with pt,i ⊆ pm. Since process buckets carry XML data, XPath and XQuery expressions can be used to query, modify and create the payload structure. As entry point for such expressions, we define two different variables, $_bucket and $_system, that define different access paths. Variable $_bucket is used to access the bucket payload, while variable $_system allows access to process-specific variables like the process id or the runtime state.
Fig. 3. Splitting a message payload into process buckets using the XQuery expression for $cust in $_bucket/custInfos/customerInfo return $cust, creating one bucket bi per customerInfo element
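As a minimal illustration of this data model, the following Java sketch shows how a process bucket B = (d, t, pt) could be represented; the class and field names are our own and only mirror the definition above.

/** Bucket types; data is the basic type, param and punctuation buckets are introduced in Sections 4 and 5. */
enum BucketType { DATA, PARAM, PUNCTUATION }

/** A process bucket B = (d, t, p_t): id, type and an XML payload that depends on the type. */
class ProcessBucket {
    final long id;            // d
    final BucketType type;    // t
    final String xmlPayload;  // p_t, e.g. one <customerInfo> subset of the message payload p_m

    ProcessBucket(long id, BucketType type, String xmlPayload) {
        this.id = id;
        this.type = type;
        this.xmlPayload = xmlPayload;
    }
}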
Example 1. Payload Splitting: Consider the activity getCustInfos from our application scenario in Figure 1. It receives the message with the payload containing a set of customer information. Figure 3 depicts the splitting of this payload into several smaller process buckets. The split is described by a very simple XQuery expression that sets the repeating element to $_bucket/custInfos/customerInfo. It creates one process bucket for every customer information item in the resulting sequence, which can then be processed consecutively by succeeding activities. 3.2 Process Model Instead of using a control flow-based process execution, our process model uses a data flow-based process execution that is based on the pipes-and-filters execution model found in various systems like ETL tools, database management systems or data stream management systems. Using the pipes-and-filters execution model, all activities ai ∈ A of a control flow-based process plan P are executed concurrently as independent operators oi of a pipeline-based process plan PS. The operators are connected by data queues qi that buffer incoming and outgoing data. Hence, a pipeline-based process plan PS can be described as a directed, acyclic flow graph, where the vertices are operators and the edges between operators are data queues. Figure 4 depicts the execution model of the pipeline-based version of our scenario process. Since the data flow is modeled explicitly, the implicit, variable-based data flow of the traditional instance-based execution as described in [11] has been removed. This requires the additional operator copy, which copies the incoming bucket for every outgoing data flow. We define our stream-based process plan PS as PS = (C, O, Q, S), with C denoting the process context, O = (o1, . . . , oi, . . . , ol) denoting the set of operators oi, Q = (q1, . . . , qj, . . . , qm) denoting the set of data queues between the operators and S denoting the set of services the process interacts with. An operator o is defined as o = (i, o, f, p), with i denoting the set of incoming data queues, o denoting the set of outgoing data queues, f denoting the function (or activity type, in reference to traditional workflow languages) that is applied to all incoming data and p denoting the set of parameters that is used to configure f and the operator, respectively. Figure 5 depicts two succeeding operators oj and oj+1 that are connected by a data queue and that are configured by their parameters pj and pj+1. Since the operator oj+1 processes the data of its predecessor oj, the payload structure of buckets in queue qi must match the structure that is expected by operator oj+1. Queues are not conceptually bound to any specific XML structure. This increases the flexibility of the data that flows between the operators and can simplify data flow graphs by allowing operators with multiple output structures. Nevertheless, for modeling purposes, a set of different XML schemas can be registered for every operator's output and used for input validation by the succeeding operators. Example 2. Schema-Free Bucket Queues: Consider an XML file containing books and authors as sibling element types. The receive operator produces buckets with either the schema of books or that of authors. Since the bucket queues between operators are not conceptually schema-bound, both types can be forwarded directly and, e.g., processed by a routing operator that distributes the buckets to different processing flows.
Fig. 4. Pipeline-Based Execution of Process Plan PS — the process engine runs the pipes-and-filters-based process plan PS,i with operators receive, assign, copy, invoke, invoke, assign, invoke and assign connected by data queues
Fig. 5. Operators and Processing Bucket Queue — two succeeding operators oj and oj+1 with parameters pj and pj+1, connected by a queue qi transporting buckets bk
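A minimal Java sketch of this pipes-and-filters execution, reusing the illustrative ProcessBucket class from above; the structure and names are ours and do not reflect the implementation evaluated in Section 6.

import java.util.List;
import java.util.concurrent.BlockingQueue;

/** An operator o = (i, o, f, p): it consumes buckets from its input queues,
 *  applies its function f (configured by the parameter set p) and emits result buckets. */
abstract class Operator implements Runnable {
    protected final List<BlockingQueue<ProcessBucket>> in;   // incoming data queues
    protected final List<BlockingQueue<ProcessBucket>> out;  // outgoing data queues

    Operator(List<BlockingQueue<ProcessBucket>> in, List<BlockingQueue<ProcessBucket>> out) {
        this.in = in;
        this.out = out;
    }

    /** The function f applied to one incoming bucket; may emit zero or more buckets. */
    protected abstract void process(ProcessBucket bucket) throws InterruptedException;

    @Override public void run() {
        try {
            // All operators of a plan run concurrently, one thread per operator;
            // blocking queues realize pipeline parallelism. Binary operators override run().
            while (true) {
                process(in.get(0).take());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // shutdown signal
        }
    }

    protected void emit(int port, ProcessBucket b) throws InterruptedException {
        out.get(port).put(b);
    }
}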
For the parameter set p, we use the respective query languages defined with our data model to configure the operator and to retrieve and modify the payload. Clearly, the concrete parameter set of an operator o is solely defined by its function f. For f, we define a set of predefined algorithms that are needed for sophisticated data processing. Inspired by [12], we define three classes of basic functions that semantically provide a foundation for data processing: interaction-oriented functions, including invoke, receive and reply; control-flow-oriented functions, including route, copy and signal; and data-flow-oriented functions, including assign, split, join, union, orderby, groupby, sort and filter. All functions work on the granularity of a process bucket B and are implemented as operators. We now discuss the semantics of one example operator per function class in more detail: receive for the class of interaction-oriented functions, copy for the class of control-flow-oriented functions, and join for the class of data-flow-oriented functions. Receive Operator. The most important operator for preparing incoming messages is the receive operator. This operator gets one bucket with the payload of the incoming message to process. As parameter p, a split expression in XPath or XQuery must be specified, which creates new buckets for every item in the resulting sequence. It is closely related to the split operator: while split is used within the data flow to subdivide buckets, receive is linked to incoming messages and usually starts the process. Please refer to Example 1 for the usage of the receive operator. Copy Operator. The copy operator, as used in our application scenario, has one input queue and multiple output queues. It is used to execute concurrent data flows over the same data. For l output queues it creates l − 1 copies of every input bucket and feeds all output queues. Join Operator. The join operator can have different semantics and usually joins two incoming streams of buckets. For this paper, we describe an equi-join that is implemented as a sort-merge join. This requires the join keys to be ordered. In our application scenario, this ordering is given inherently by the data set. Alternatively, the receive operator can be parameterized to order all items by a certain key. If this requirement cannot be fulfilled, another join algorithm has to be chosen. The set of parameters for a join operator includes (1) the paths to both input bucket stream elements, (2) the paths to both key values that have to be equal and (3) the paths to the target destinations in the output structure of the join operator.
Fig. 6. Join operator joinIDs with parameters $left_source := $_bucket/invoice, $left_key := $_bucket/invoice/@custId, $right_source := $_bucket/order, $right_key := $_bucket/orders/@custId, joining invoice buckets (e.g. <invoice custId="3" ...>) from the getInvoices stream with order buckets
Example 3. Merge Join Operator: Figure 6 depicts the join operator joinIDs, which joins the payload of order buckets and invoice buckets into one bucket per customer id. Thus, the customer id attribute is the join key in both input streams. The join key paths are denoted by $left_key and $right_key. Although the invocation of stream-based services will be discussed in the next section, assume that the getInvoices operator produces one bucket for every invoice per customer id. Thus, buckets with equal customer ids arrive in a grouped fashion due to the preceding invoke operators. The join algorithm takes every incoming bucket from both input streams and compares it with the id that is currently joined. In our example, the current id is 3. If a bucket's id equals the current id, the payloads addressed by $left_source and $right_source are extracted from that bucket and inserted into the new output bucket according to the target paths $left_target and $right_target. If the ids of both streams become unequal, the created bucket is passed to the succeeding operator, as has already been done for id 2.
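The following hedged Java sketch outlines the core loop of such a sort-merge equi-join over two bucket sequences that are grouped by customer id; key extraction is abstracted behind a function, both sides are assumed to contain every id, and plain lists stand in for the blocking bucket queues of the engine. All names are illustrative rather than the authors' code.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

/** Simplified sort-merge equi-join over two bucket lists that are grouped/ordered by the join key. */
class MergeJoin {
    static List<String> join(List<ProcessBucket> left, List<ProcessBucket> right,
                             Function<ProcessBucket, String> leftKey,
                             Function<ProcessBucket, String> rightKey) {
        List<String> output = new ArrayList<>();
        int l = 0, r = 0;
        while (l < left.size() && r < right.size()) {
            String key = leftKey.apply(left.get(l));           // the id that is currently joined
            StringBuilder joined = new StringBuilder("<joined custId=\"" + key + "\">");
            while (l < left.size() && leftKey.apply(left.get(l)).equals(key)) {
                joined.append(left.get(l++).xmlPayload);       // all invoices for this id
            }
            while (r < right.size() && rightKey.apply(right.get(r)).equals(key)) {
                joined.append(right.get(r++).xmlPayload);      // all orders for this id
            }
            output.add(joined.append("</joined>").toString()); // emit one output payload per id
        }
        return output;
    }
}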
4 Generalized Stream-Based Services Taking the presented process execution as our foundation, we address the communication between process and services in this section. The general idea is to develop stream-based services that operate (1) as efficient, stream-based services for traditional service operations like data extraction and data storage and (2) as stream operators for data-oriented functionalities and for data analysis. This enables our stream-based process model to integrate and orchestrate such services natively as remote operators. Thus, the process can decide whether to execute an operator locally or in a distributed fashion on a different network node. 4.1 Service Invocation Extension The main drawback of the stream-based service invocation approach presented in Section 2 is the missing support for service parameterization.
Fig. 7. Extended Bucket Concept — (a) parameters embedded in the data buckets exchanged between client and service; (b) separate parameter buckets and data buckets exchanged between client and service
Only raw application data, and thus only one data structure without metadata, is supported. Parameters are not considered specifically, and the only way to pass parameters to the service is to incorporate them into the application data structure (see Figure 7a). This blurs the semantics of the two distinct structure types and creates overhead if the parameter only initializes the service instance. Furthermore, a mapping between stream buckets with this blurred structure on the service level and our processing buckets on the process level has to be applied. Example 4. Drawback of the Single Bucket Structure: In our application scenario, the parameter of the service getInvoices would be a time frame definition within which all returned invoices had to be created. Although this one time frame is valid for all customer ids processed by the service, it has to be transmitted with every stream bucket. We extend the stream item definition by deploying the proposed process bucket definition B from our data model directly in the invocation stream. Remember that a process bucket is described by its type t and the payload pt that depends on t. Hence, we denote buckets that carry application data with t = data. We introduce parameter buckets for service initialization or reconfiguration by adding a new type t = param. Thus, parameters are separate buckets that have their own payload and that are processed by the service differently. Figure 7b depicts the concept of parameter separation. Besides a clearer separation, this concept generalizes stream-based service implementations by allowing parameterizable functions to be deployed as Web services that are executed on stream buckets. Furthermore, it enables us to incorporate these services as remote operators into our process model. A parameter set p that is currently used to configure a local operator oi can be transferred to the service via dedicated parameter buckets. Hence, this service can act as a remote operator if it implements the same function f. The conceptual distinction between remote service and local operator becomes almost negligible. Of course, a central execution will certainly outperform a distributed execution because of the communication overhead involved, but further research should investigate this in more detail. Example 5. Generalized Filter Service: Consider the filterOrders operator in our application scenario. It filters incoming order buckets according to the order's status. The filter expression is described as an XPath statement in its parameter set p. If the filter algorithm f is deployed as a Web service, it is configured with p using the parameter bucket structure. Thus, a service instance can filter arbitrary XML content according to the currently configured filter expression.
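As a sketch of how a generalized stream-based service could distinguish parameter buckets from data buckets (reusing the illustrative ProcessBucket and BucketType classes from Section 3), consider the following Java fragment; the <filter> wrapper element and the string-based extraction are hypothetical simplifications, not the authors' implementation.

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;
import java.io.StringReader;

/** A generalized filter service instance: parameter buckets (re)configure the filter
 *  expression, data buckets are evaluated against it and forwarded only if they match. */
class StreamFilterService {
    private final XPath xpath = XPathFactory.newInstance().newXPath();
    private String filterExpression = "true()";   // default: let everything pass

    /** Processes one incoming stream bucket; returns it if it passes the filter, null otherwise. */
    ProcessBucket onBucket(ProcessBucket b) throws Exception {
        if (b.type == BucketType.PARAM) {
            // e.g. payload <filter>/order[@status='approved']</filter>; extraction simplified
            filterExpression = b.xmlPayload.replaceAll("</?filter>", "");
            return null;                           // parameter buckets produce no output
        }
        Boolean matches = (Boolean) xpath.evaluate(filterExpression,
                new InputSource(new StringReader(b.xmlPayload)), XPathConstants.BOOLEAN);
        return matches ? b : null;
    }
}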
4.2 Classification and Applicability In order to integrate stream-based services as data sources and as remote operators into our process execution, we first classify our process operators according to their incoming and outgoing data flows. In a second step, we investigate how to map these operator classes to stream-based services. Following [7], we can classify most operators into unary operators (one input edge, one output edge, e.g. invoke, signal and groupby) and binary operators (two input edges, one output edge, e.g. join and union). Furthermore, unary operators can have an input–output relationship of 1:1, 1:N or N:1. Applicability as unary operator: Naturally, a stream-based service has one input stream and one output stream. Therefore, it can be directly mapped to a unary operator. Since the receiving and sending of process buckets within a service instance are decoupled, the input–output relationships 1:1, 1:N and N:1 are supported in a straightforward fashion. Example 6. 1:N Relationship: Consider our data source getInvoices. The corresponding invoke operator is depicted in Figure 8. First, the service is configured using the parameter set with $valid_year=2009 as predicate, so that only invoices created in 2009 will be returned. Second, since the service directly accepts process buckets, input buckets containing single customer ids are streamed to the service. These customer id buckets are the result of the split in the getCustInfos operator and the succeeding transform operator. The service retrieves all invoices and returns every invoice for every customer id in a separate response bucket. Hence, the presented service realizes a 1:N input–output relationship. Since the service returns process buckets, they can be directly forwarded to the joinIDs operator. Applicability as binary operator: Since a stream-based service typically provides only one input stream, it cannot be mapped directly to binary operators. A simple approach to map a stream-based service to a binary operator is to place all buckets from both input queues into the single request stream to the service and to let the service determine which bucket belongs to which operator input. As a first step, we focus on this approach (a sketch of this bucket-tagging idea follows Figure 8) and have also implemented it for our evaluation in Section 6. Further research should investigate whether a more sophisticated approach, e.g. one that implements two concurrent streams for one service instance, would be more suitable.
Fig. 8. Invoke Operator — the operator sends customer id buckets bi (e.g. <customer>3</customer>) to the stream-based service configured with the parameter $valid_year=2009 and receives N invoice buckets bj (e.g. <invoice custId="3" ...>) per customer id
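For the binary case described above, one simple and purely illustrative way to multiplex two operator inputs into the single request stream is to tag each bucket with its logical input side before it is sent, so that the service can demultiplex the buckets again:

import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

/** Wraps a process bucket with the logical input side it came from, so that a binary
 *  operator (e.g. a remote join) fed through one request stream can demultiplex it. */
class TaggedBucket {
    enum Side { LEFT, RIGHT }
    final Side side;
    final ProcessBucket bucket;

    TaggedBucket(Side side, ProcessBucket bucket) { this.side = side; this.bucket = bucket; }

    /** Client side: forwards one bucket from each input queue into the single request stream.
     *  A real implementation would interleave both queues fairly instead of strictly alternating. */
    static void forwardOnce(BlockingQueue<ProcessBucket> left, BlockingQueue<ProcessBucket> right,
                            Consumer<TaggedBucket> requestStream) throws InterruptedException {
        requestStream.accept(new TaggedBucket(Side.LEFT, left.take()));
        requestStream.accept(new TaggedBucket(Side.RIGHT, right.take()));
    }
}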
5 Process Model Optimizations and Extensions In the following section we discuss implications and challenges of our process-based data streaming approach, covering different aspects of it. 5.1 Process Execution Optimization Our optimization considerations can be classified into two main areas: (1) intra-process optimizations, which analyze optimization possibilities within one process instance, and (2) inter-process optimizations, which analyze performance improvements across consecutive process executions. Intra-process Consideration and Optimization. Since our approach allows for data splitting, it scales for arbitrary data sizes. However, one implication is that the scalability depends on the bucket payload size and the number of buckets within the process. Since all data queues block producers when they are full and consumers when they are empty, the maximum number of buckets within the process is implicitly defined by the sum of the slots in all data queues. Furthermore, once a customer id and its invoices/orders are processed completely, their buckets are consumed by the analyze operator and discarded afterwards. Therefore, we consider different queue sizes between the operators as a possible optimization parameter: processes with small queue sizes require less main memory but are more prone to communication latencies for single buckets. In contrast, processes with larger operator queues consume more main memory but are able to terminate fast preceding operators early and free their resources, while slow succeeding operators in the chain can process their items more slowly without slowing down a fast preceding one. Another implication in terms of bucket payload sizes is that the receive operator does not build the incoming message payload in memory completely. Instead, it reads the message payload from an internal storage and parses the XML file step by step, according to the split path in the receive operator. Of course, this may restrict the expressiveness of XQuery statements, since the processing is forward-only. Furthermore, the payload size depends on the XPath expression and the input data structure. An optimization technique in this area is the packaging of several single bucket payloads into one physical bucket. While the logical separation of each payload is preserved by the operators via modifications of the operators' functionality, there are far fewer physical buckets in the system. We will analyze whether, and if so, how many payloads have to be packed into one physical bucket to improve process execution time. Inter-process Optimization. Currently, the pipeline-based execution is only deployed on an intra-process level. Thereby, the payload of every incoming message is processed in a pipelined fashion, but different incoming messages are executed in separate instances in consecutive executions. One optimization is to allow new messages to be processed in the same instance as the previous messages. This leads to the processing of a new message while the previous message is still being processed. However, the process has to distinguish between single messages to maintain separate contexts. To mark the start of a new message, and thus the end of the old one, we deploy punctuations as described in [13] by extending the data model and introducing a new bucket type t = SEP_CTX.
Fig. 9. Inter-process Optimization with Punctuation Buckets — punctuation buckets separate the buckets of consecutive messages (Msg 1, Msg 2, Msg 3) flowing through the operators oj, oj+1, oj+2
Such a punctuation bucket is injected into the stream when a new message starts (see Figure 9). It has a predefined payload structure in pt which includes metadata of the next message, like the request id or the response endpoint, that is needed to communicate results appropriately. Furthermore, the process model is extended to be aware of punctuations and to ensure the separation of message contexts. In general, if an operator consumes a punctuation bucket from its input queue, it resets its internal state and forwards the punctuation bucket to each outgoing queue. For the copy operator, this implies that it copies the punctuation bucket for every outgoing queue. Special considerations have to be made for binary operators like join or union and for the invoke operator. Since binary operators merge concurrent data flows, the operator function has to stop consuming from one queue if it encounters the punctuation in that queue. The operator then has to consume from the second queue until it also encounters the punctuation bucket there. Then, both punctuations are removed from the queues and one of them is placed in the outgoing queue (see the sketch below). For the invoke operator, two possibilities exist for punctuation handling. If the remote service does not support punctuations, the operator has to shut down the invocation stream, as would be the case when shutting down the whole process. Afterwards, the punctuation is placed in the outgoing queue and a new invocation stream is established for the next message context. If the service supports punctuations, the operator does not have to be aware of any punctuation semantics: the invocation stream is kept open and the punctuation is placed in the stream with all other buckets, whereas the service resets its internal state and forwards the punctuation back to the operator. Both variants have been implemented and are evaluated in Section 6. If punctuations are not used at all and different message payloads are processed in one shared context, this allows new application scenarios in the area of message stream analysis, which is described next.
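A hedged sketch of how a binary operator could drain both inputs up to their punctuations before resetting, again reusing the illustrative ProcessBucket and BucketType classes; which side to read next would normally be decided by the operator's function (e.g. by the current join key), here the left input is simply finished first for brevity.

import java.util.concurrent.BlockingQueue;

/** Punctuation handling in a binary operator (e.g. join): once a punctuation is seen on one
 *  input, the operator consumes only the other input until its punctuation arrives, then
 *  forwards a single punctuation downstream and resets its internal per-message state. */
abstract class BinaryOperator {
    protected abstract void consumeLeft(ProcessBucket b);
    protected abstract void consumeRight(ProcessBucket b);
    protected abstract void resetState();   // drop all per-message state (e.g. current join id)

    void drainUntilPunctuation(BlockingQueue<ProcessBucket> left,
                               BlockingQueue<ProcessBucket> right,
                               BlockingQueue<ProcessBucket> out) throws InterruptedException {
        boolean leftDone = false, rightDone = false;
        ProcessBucket punctuation = null;
        while (!leftDone || !rightDone) {
            boolean fromLeft = !leftDone;
            ProcessBucket b = fromLeft ? left.take() : right.take();
            if (b.type == BucketType.PUNCTUATION) {
                punctuation = b;
                if (fromLeft) leftDone = true; else rightDone = true;
            } else if (fromLeft) {
                consumeLeft(b);
            } else {
                consumeRight(b);
            }
        }
        resetState();
        out.put(punctuation);                // forward exactly one punctuation downstream
    }
}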
5.2 Applicability and Operator Extension In our presented application scenario, we considered equally structured items. This is a typical data characteristic of data-intensive processes. However, if items are not equally structured, our process model supports this through schema-less data queues (see Example 2) and multiple definitions in the route operator. Another reasonable application domain is inter-message processing in the area of message stream analysis. Consider services that are monitored via sensors. These sensors forward metrics like response time and availability at a predefined time interval. Decision rules and action chains can be defined as a process-based data streaming application and would reduce the need for orthogonal, non-XML-based systems like traditional data stream management systems. To support such scenarios, we can leverage the semantics of punctuations described earlier to provide one process instance, and thus one common context, for all arriving messages. Additionally, only the process model has to be extended with more operator types to support event processing operations like time windows and sequences. Furthermore, it has to be investigated how decision rules can be mapped to our process graph and how Complex Event Processing (CEP) can be embedded in this context.
6 Evaluation In this section, we provide performance measurements for our stream-based process execution. In general, it can be stated that the stream-based message processing leads to significant performance improvements and scales for different data sizes. 6.1 Experimental Setup We implemented our concept using Java 1.6 and the Web service framework Axis2¹, and we ran our process instances on a standard blade with 3 GB RAM and four cores at 2 GHz. The data sources were hosted on a dual-core workstation with 2 GB RAM, connected in a LAN environment. Both nodes were assigned 1.5 GB RAM as Java heap size. All experiments were executed on synthetically generated XML data and were repeated 30 times for statistical soundness. We used our running process example. For the traditional process execution, the process graph consists of seven nodes: one receive activity, three assign activities (transform, filterOrders and joinIDs), and three invoke activities (getInvoices, getOrders and analyze). For our stream-based process execution, the process graph consists of eight nodes: we additionally have the copy operator that distributes the customer ids to both invoke operators, and we replace the assign operators filterOrders and joinIDs with the respective filter and join operators. We use n as the number of customer information items that enter the process. In addition, we fix the number of invoices and orders returned for each customer id by the services getInvoices and getOrders to 10 for all conducted experiments. This leads to 20 invoices/orders for every processed customer id. The textual representation of one customer information item that enters the process is about 1 KB in size. It gets transformed, enriched and joined to about 64 KB throughout the process. Although real-world scenarios for data integration often exhibit larger message sizes, these sizes are sufficient for comparing the presented approaches. 6.2 Performance Measurements For scalability over n, we measured the processing time for the traditional control-flow-based process execution (CPE) and our stream-based process execution (SPE), shown in Figure 10(a) with a logarithmic scale. Thereby, CPE denotes the control-flow-based execution which processes all n customer information items in one process instance.
http://ws.apache.org/axis2/
[Figure 10: processing time [in s] for varying n (1000–4000); series: CPE, CPE chunk10, CPE chunk100, SPE local, SPE distributed]
CPE chunk10 and CPE chunk100 use the CPE but distribute the n items over n/chunkSize service calls with chunkSize = {10, 100}. SPE local represents our stream-based process execution with only getInvoices, getOrders and analyze being stream-based invoke operations to external Web services. In contrast, SPE distributed replaces the join operator joinIDs with a binary invoke operator as described in Section 4.2 and implements the join as a stream-based service instance. We can observe that CPE does not scale beyond 1,000 customer ids with their 20,000 invoices/orders, due to the main memory limit of 1.5 GB and its variable-based data flow which stores all data. In contrast, CPE chunk10 and CPE chunk100 scale for arbitrary data sizes, whereby a chunk size of 10 customer information items per process call offers the shortest processing time. Nevertheless, this data chunking leads to multiple process calls for a specific n and alters the processing semantics by executing each process call in an isolated context, thus assuming independence between all n items. Furthermore, it also exhibits a worse runtime behavior than SPE local and SPE distributed. Since chunking scales for arbitrary data sizes, we will focus on these approaches in the following experiments. Figure 10(b) depicts the standard deviation of the runtimes for CPE chunk10, CPE chunk100 and SPE local. While the standard deviation of both chunk-based control-flow execution concepts increases for larger n, the standard deviation of our stream-based execution is significantly lower. This is due to the fact that with higher n the number of service calls increases for the CPE approaches, which also involves service instance creation and routing each message anew to the service endpoint. In contrast, our stream-based service invocation creates only one service instance per invoke operator, and the established streams are kept open for all n items that flow through it. In Figure 10(c) we measured the runtime performance with different numbers of CPU cores dedicated to the process instances. We can observe that the number of cores does not affect the CPE approaches significantly. This is due to the fact that at most two threads are executed concurrently (the two concurrent invokes for getInvoices and getOrders in the process graph). Furthermore, the waiting time for the return of the service calls (processing, creation and transmission of invoices and orders) does not even fully utilize one core, which makes the presence of the remaining three cores obsolete. For the SPE approach, we have 12 threads (8 operator nodes with one thread per operator, plus an additional thread for every invoke (+3) and join (+1) operator). The execution
time for different n is significantly higher when only one CPU core is used. Nevertheless, even with a single CPU core the SPE outperforms the CPE. Again, this points to the dominating waiting times for service calls in CPE-based processes. The assignment of two CPU cores speeds up the SPE significantly, whereas four CPU cores do not increase performance enough to justify their dedicated usage. As a fourth experiment, Figure 11(a) depicts the execution times for different operators of CPE, CPE chunk10 and SPE local on a logarithmic scale. Here we fix n = 500 and measure the time until the operators have finished processing all items. In general, the execution of every SPE operator takes more time than the corresponding activity in the CPE environment (except the CPE invoke). Furthermore, all SPE operators run for roughly the same amount of time. This is due to the pipelined execution of all operators with its blocking queue semantics: when the transformID operator starts running, all succeeding operators are also started, and while the transformID operator processes further buckets, the joinIDs operator, for example, already processes buckets from its input queues. The blocking nature of the operators implies that the pipeline is only as fast as the slowest operator in the chain. The joinIDs activity in the CPE is the most time- and resource-consuming step. We used a standard Java XPath library to implement the join for the CPE. Thereby, all invoices and orders for every customer id are retrieved from internal variables and the join result is stored in internal variables again (see [11] for more details). For our SPE implementation, we used the same library and algorithm but process only small message subsets within each join step. This speeds up the whole data processing significantly. In Figure 11(b), we focus solely on our SPE implementation and compare the plain execution with the optimized execution considering the punctuation semantics from Section 5. Therefore, we fixed the message size to n = 10 and varied the message rate r from 1 to 20. We measured the cumulative time to finish all messages. Thereby, SPE denotes the plain execution without optimization. Furthermore, SPE punct process denotes the optimization where punctuations are not supported by the services and thus terminate the invocation stream. Finally, SPE punct service denotes the optimization where punctuations are supported by the stream-based services and thus the invocation streams can be kept open. As expected, the plain SPE implementation is the most inefficient for consecutive process executions. While it scales very well with increasing data volume within one process instance, it becomes a bottleneck if many consecutive
instances have to be created. This includes the instantiation of every operator before any processing can be done and the expensive invocation stream establishment for every invoke operator. In contrast, SPE punct process keeps all operators and the current process instance alive for consecutive requests. Thus, it eliminates the operator creation overhead and scales much better with an increasing message rate. Finally, SPE punct service additionally eliminates the expensive network stream establishment for consecutive requests, and the whole process pipeline remains filled. To conclude, the evaluation of our concept has shown that the pipeline-based execution in conjunction with the stream-based Web service invocation yields significant performance and scalability improvements for our application scenario.
7 Related Work In general, there exist several papers addressing the optimization of business processes. The work closest to ours is [14]. The authors investigate runtime states of activities and their pipelined execution semantics. However, their proposal is more of a theoretical consideration with regard to single activities. They describe how activities have to be adjusted to enable pipelined processing, but they consider neither (1) message splitting for efficient message processing nor (2) communication with external systems. Furthermore, [12] addresses the transparent rewriting of instance-based processes into pipeline-based processes by considering cost-based rewriting rules. Similar to [14], they do not address the optimization of data-intensive processes. The optimization of data-intensive business processes is investigated in [15,16] and [5]. While [15] proposes to extend WSBPEL with explicit database activities (SQL statements), [16] describes optimization techniques for such SQL-aware business processes. In contrast to our work, their focus is on database operations in tight combination with business processes. [5] presents an overall service-oriented solution for data-intensive applications that handles the data flow separately from the process execution and uses database systems and specialized data propagation tools for data exchange. However, the execution semantics of the business processes is not touched and only the data flow is optimized with special concepts, restricting the general applicability of this approach.
8 Conclusions In this paper we presented a concept for stream-based XML data processing in SOA using common service-oriented concepts and techniques. We used pipeline parallelism to process data in smaller pieces. In addition, we addressed the communication between process and services and introduced the concept of generalized stream-based services, which allows the process to execute services as distributed operators with arbitrary functionality. Furthermore, we presented optimizations to increase scalability for inter-process message processing. In experiments we showed the applicability of these concepts in terms of performance. Future work should address the modeling aspects of such processes in more detail. More specifically, it should be investigated how notations like BPMN, in conjunction with annotated business rules, can be mapped to
an operator graph of our process model. Additionally, a cost model that considers communication costs, the complexity of data structures and the complexity of the implemented operator functions would be reasonable to decide on the remote or local execution of process operators.
References
1. Graml, T., Bracht, R., Spies, M.: Patterns of business rules to enable agile business processes. Enterp. Inf. Syst. 2(4), 385–402 (2008)
2. OASIS: Web Services Business Process Execution Language 2.0 (WS-BPEL) (2007), http://www.oasis-open.org/committees/wsbpel/
3. OMG: Business Process Modeling Notation 1.2 (2009), http://www.omg.org/spec/BPMN/1.2/PDF/
4. Kouzes, R.T., Anderson, G.A., Elbert, S.T., Gorton, I., Gracio, D.K.: The changing paradigm of data-intensive computing. IEEE Computer (2009)
5. Habich, D., Richly, S., Preissler, S., Grasselt, M., Lehner, W., Maier, A.: BPEL-DT – data-aware extension of BPEL to support data-intensive service applications. In: WEWST (2007)
6. Abadi, D.J., Ahmad, Y., Balazinska, M., Cetintemel, U., Cherniack, M., Hwang, J.-H., Lindner, W., Maskey, A.S., Rasin, A., Ryvkina, E., Tatbul, N., Xing, Y., Zdonik, S.: The Design of the Borealis Stream Processing Engine. In: CIDR (2005)
7. Vassiliadis, P., Simitsis, A., Baikousi, E.: A taxonomy of ETL activities. In: DOLAP (2009)
8. Machado, A.C.C., Ferraz, C.A.G.: Guidelines for performance evaluation of web services. In: Proceedings of the 11th Brazilian Symposium on Multimedia and the Web, WebMedia 2005, pp. 1–10. ACM Press, New York (2005)
9. Suzumura, T., Takase, T., Tatsubori, M.: Optimizing web services performance by differential deserialization. In: Proceedings of the 2005 IEEE International Conference on Web Services, ICWS 2005, Orlando, FL, USA, July 11-15, pp. 185–192 (2005)
10. Preissler, S., Voigt, H., Habich, D., Lehner, W.: Stream-based web service invocation. In: BTW (2009)
11. Preissler, S., Habich, D., Lehner, W.: Process-based data streaming in service-oriented environments – application and technique. In: Filipe, J., Cordeiro, J. (eds.) ICEIS 2010. LNBIP, vol. 73, pp. 56–71. Springer, Heidelberg (2011)
12. Boehm, M., Habich, D., Preissler, S., Lehner, W., Wloka, U.: Cost-based vectorization of instance-based integration processes. In: Grundspenkis, J., Morzy, T., Vossen, G. (eds.) ADBIS 2009. LNCS, vol. 5739, pp. 253–269. Springer, Heidelberg (2009)
13. Tucker, P.A., Maier, D., Sheard, T., Fegaras, L.: Exploiting Punctuation Semantics in Continuous Data Streams. IEEE Trans. on Knowl. and Data Eng. 15(3), 555–568 (2003)
14. Bioernstad, B., Pautasso, C., Alonso, G.: Control the flow: How to safely compose streaming services into business processes. In: IEEE SCC, pp. 206–213 (2006)
15. Maier, A., Mitschang, B., Leymann, F., Wolfson, D.: On combining business process integration and ETL technologies. In: BTW (2005)
16. Vrhovnik, M., Schwarz, H., Suhre, O., Mitschang, B., Markl, V., Maier, A., Kraft, T.: An approach to optimize data processing in business processes. In: VLDB, pp. 615–626 (2007)
A Framework to Assist Environmental Information Processing
Yuan Lin¹, Christelle Pierkot², Isabelle Mougenot¹, Jean-Christophe Desconnets², and Thérèse Libourel¹
¹ Université Montpellier 2, LIRMM, 161 rue Ada, 34095 Montpellier Cedex 5, France, [email protected]
² IRD US ESPACE, Maison de la télédétection, 500 avenue J.F. Breton, 34093 Montpellier Cedex 5, France, [email protected]
Abstract. Scientists in the environmental domains (biology, geographical information, etc.) need to capitalize, distribute and validate their scientific experiments. A multi-function platform is a suitable candidate for meeting these challenges. We have designed and implemented the MDweb platform [3], which is nowadays used in various environmental projects. Our main objective is to integrate a workflow environment into it. In this paper, we introduce a three-level workflow environment architecture (static, intermediate, dynamic). We focus on the "static" level, which concerns the first phase of constructing a business process chain, and discuss the "intermediate" level, which covers both the instantiation of a business process chain and the validation, in terms of conformity, of the generated chain. Keywords: Platform MDweb, Meta description, Scientific workflow, Infrastructure, Conformity checking.
1 Introduction 1.1 General Environmental applications (biodiversity, ecology, agronomy, etc.) are undergoing considerable growth, requiring the establishment of hardware and software infrastructures. Indeed, communities want to benefit from efficient mutualization frameworks because data and processes already exist in quantity. The data associated with scientific experiments is often voluminous and complex to acquire, and the processes that deal with it change over time. The experiments themselves are complex and most often correspond to a more or less sophisticated organization of processes. In this context, a framework to assist environmental information processing [7,9] raises several challenges: – Syntactic interoperability relative to the various data and processes requires good knowledge of the domain. Metadata standards have been proposed and developed to improve this situation.
Fig. 1. Overview of the sharing and mutualization platform
– Semantic interoperability relative to the domain knowledge is much more complicated. In general, communities co-construct a shared vocabulary (a thesaurus or a domain ontology) to fill this gap. – Compatibility and substitutability of processes in an experimental process chain. We need a formal language for defining process chains and, then, a detailed analysis of their realization. 1.2 Objective Our objective is to propose a framework to extend the MDweb platform. MDweb has been developed according to standards to ensure semantic and syntactic interoperability. It provides a graphical interface and several functional components, as shown in Figure 1: – a metadata manager for referencing data and processes, – a search engine for locating and accessing resources (data/processes) using the above-mentioned descriptions, both for local and remote resources, – a workflow experimentation environment. The first two components have been implemented in the MDweb platform. The workflow environment component for the definition, instantiation and execution of experimental process chains is now our research subject. In this article, we present in Section 2 an overview of MDweb, a platform for sharing and mutualizing environmental resources, and its main components. Section 3 presents our main contributions concerning a simple workflow definition language and our thoughts about the conformity model in a workflow. Finally, the last section concludes the paper and outlines further areas of our research.
2 MDweb MDweb is an open source platform based on metadata. It was designed as a generic, multi-lingual, multi-standard tool for cataloging and locating environmental resources over the web¹.
¹ Various other tools with the same aim exist [1,2].
Fig. 2. Overview of the MDWeb Tool
Currently, MDweb is used by several national and international institutions as a key component of spatial data infrastructures, such as the European project for nature conservation NatureSDI+ (http://www.nature-sdi.eu/). Implemented in the framework of the European INSPIRE directive, the latest version of MDweb is compliant with the implementing rules for metadata and the associated discovery services of the INSPIRE directive [12,13]. MDweb allows users to describe spatial datasets, collections or web services in a metadata database, to publish them and to make them reachable. On the other hand, users can also search for and access resources via a multi-criteria web search application. Figure 2 shows an overview of this web application. To structure metadata and ensure syntactic interoperability, MDweb mainly implements the international standard for geographic information metadata, ISO 19115 [14]. To ensure interoperability with other spatial cataloguing applications, MDweb implements the Catalog Service for the Web (CS-W2) of the OpenGIS Consortium [17,19], which provides the web service interfaces to search and edit metadata over the web. 2.1 Metadata The essential components consist of a generic database of metadata (metabase) and two thematic and spatial reference bases used for the semantic control of metadata during the metadata edition phase and during searches. The genericity of the metadata database resides in the originality of its relational schema, which describes a structure at a meta level coupled with a standard structure. It allows the storage of any kind of metadata standard and the definition of metadata profiles to use them. A metadata profile is defined by ISO 19115:2003 and ISO 19106 as a subset of elements of the standard, in which some element properties can be adapted for use by a particular community to facilitate implementation and interoperability between the different systems in use by that community.
MDweb proposes various metadata profiles based on ISO 19115 or ISO 19119 [15]. For instance, they correspond to the description of a kind of data type (a data collection or a data set such as a digital map, a vector layer, a raster layer, etc.) or of web services such as OGC services: WMS, WFS, CSW, etc. Other profiles are currently under construction to describe the more general experimental processes run by the environmental community. To support the edition of metadata, MDweb proposes a metadata editor. Basically, users select the profile corresponding to the data type they want to describe, load the input form and edit the metadata of this profile. Finally, the metadata sheet created is validated and published through the MDweb search engine to be reachable. 2.2 Thesaurus Conscious of the fact that a cataloguing tool's success depends to a great extent on the final users obtaining relevant results to their searches, we have emphasized the contribution of semantics as an extension of the related sections present in the metadata structure. This has taken the form of extending the Keywords and DataIdentification sections of the ISO 19115 standard. The standard also recommends an extension for the use of a dedicated vocabulary (thesaurus). On the strength of this recommendation, we have introduced the use of thesauri within MDweb. A thesaurus is a set of terms arranged by semantic relationships as defined by the ISO 2788 standard [16]. Several reference thesauri are included with the MDweb tool: the multi-lingual agricultural-terms thesaurus of the FAO (Food and Agriculture Organization) called AGROVOC [5] and the multi-lingual environmental-terms thesaurus called GEMET [4] from EIONET (European Environment Information and Observation Network). Other thesauri can be associated by importing them in the SKOS or RDF formats into the relational schema of MDweb's thematic reference base. A similar approach to the one used for terminology was applied to the spatial dimension of information. We designate as spatial reference base a set of relevant geographic objects which can act as referents for the concerned community. Such vocabularies or spatial objects are used to support the edition phase, in particular the filling in of keywords. They are also used to compose the search queries during the search phase. 2.3 Search Engine The metadata search engine in MDweb provides users with various search modes; one of the most useful is the multi-criteria search, which allows composing a query based on four criteria (a sketch of how such a query could be assembled follows the list): – What: allows the user to specify one or more keywords originating from a controlled vocabulary (cf. the subsection above), – When: allows the user to specify the period in which the reference was created, updated or revised, – Where: allows the user to restrict the search to a specific geographical location of the data, based on a bounding box definition, – Who: allows the user to specify a remote data catalog provided by an organizational entity (agency, institution, etc.).
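As an illustration only (the class and field names below are hypothetical and not MDweb's actual API), a multi-criteria query combining the four criteria could be represented as follows:

import java.time.LocalDate;
import java.util.List;

/** Hypothetical representation of a multi-criteria search query (What/When/Where/Who). */
class MultiCriteriaQuery {
    List<String> keywords;                 // What: terms taken from a controlled vocabulary
    LocalDate from, to;                    // When: creation/update/revision period
    double minLon, minLat, maxLon, maxLat; // Where: bounding box of the data
    String catalog;                        // Who: remote catalog of a providing organization

    MultiCriteriaQuery(List<String> keywords, LocalDate from, LocalDate to,
                       double minLon, double minLat, double maxLon, double maxLat,
                       String catalog) {
        this.keywords = keywords;
        this.from = from; this.to = to;
        this.minLon = minLon; this.minLat = minLat;
        this.maxLon = maxLon; this.maxLat = maxLat;
        this.catalog = catalog;
    }
}

// Hypothetical example: vegetation-cover datasets around Montpellier, revised in 2009,
// searched in one remote catalog:
// new MultiCriteriaQuery(List.of("land cover", "vegetation"),
//         LocalDate.of(2009, 1, 1), LocalDate.of(2009, 12, 31),
//         3.6, 43.4, 4.1, 43.8, "example remote catalog");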
The MDweb platform has focused on sharing and mutualizing existing data and services. We extend the same functionality to general processes (geoprocessing, business geoprocessing, R functions, etc.). The next important step for us is to focus on the workflow component, as introduced in Section 1.2.
3 Our Vision of a Workflow Environment 3.1 Overview Our objective is to integrate a workflow environment into the sharing and mutualization platform. From the business point of view (that of the experimenters), workflow usage is based on the three stages shown in Figure 3:
Fig. 3. Business point of view — the user plans a workflow model, instantiates it and executes it
1. Definition: abstract definition of a process chain corresponding to an experimentation (planning of experiments), 2. Instantiation: more specific definition after identifying the various elements (data/processes) of the chain, 3. Execution: customized execution (according to strategies corresponding to the requirements). According to the architectural styles introduced by the OMG [23], we propose a three-level architecture (cf. Figure 4) to describe the experimental life cycle: 1. The static level concerns the design phase and consists in building abstract business process models using a simple language defined at the meta level. Several standards exist for defining a process model. In [18], we analysed some of them, such as UML (activity diagrams) [20], SPEM [22] and BPMN [21]. A common point is that they are all very comprehensive but require substantial effort to understand and to use; this makes them not so easy for scientists who are not experts in this field. 2. The intermediate level represents the instantiation and pre-control phase. Users refine and customize their experimentations (based on the models designed during the previous step) by determining and localizing the most suitable resources (data, programs and services). In order to ensure that the generated process chain can be performed correctly, we also propose to include a pre-control step. It verifies the validity of the instantiated chain based on formal conformity rules. Among the different scientific workflow environments that we have analysed, such as Kepler [6], Taverna-myGrid [11], BioSide [8], etc., each provides a graphical interface for defining workflows using an abstract syntax that is not necessarily
Fig. 4. A workflow environment — the static level (the workflow meta-model defining the language for business models, and the business models conforming to it), the intermediate level (models instantiated from the business models) and the dynamic level (choice of the execution strategy, centralized / decentralized execution)
accessible for users. Nevertheless, during the execution they handle the conformity problem by using specific adaptations, either manually or semi-automatically. 3. The dynamic level concerns the execution phase. This takes place according to various strategies defined by both the experimenters and the operational configurations. In this article, we only cover the advances relating to the first two levels, "static" and "intermediate". Our contributions concerning these two levels are presented in the two following subsections: subsection 3.2 presents a meta-model corresponding to the abstract syntax of a simplified but complete workflow description language; subsection 3.3 analyzes different situations of compatibility in a process chain and gives an original proposal for conformity checking. 3.2 Static Level: Language for Defining Process Chains As analysed in the previous section, for the static level we have to propose a language to define process chains. It should be as simple as possible, since the user community does not consist of computer experts. Nevertheless, the language should be comprehensive enough to represent the wide range of possible experiments. Our language is defined by a meta-model (cf. Figure 5) which is based on the existing meta-models analysed above (cf. Section 3.1). The goal is to define the minimum number of essential elements needed to represent the maximum number of possible situations [10]. The meta-model was designed from the point of view of the workflow software environment. At the most abstract level, a workflow consists of elements and links between them. The connection between elements and links is provided by the concept of port.
The elements can be divided into: – Tasks: predefined tasks, to use or reuse², – Roles: existing roles (which will intervene during execution), – Data: available resources, to be mobilized. The concept of task corresponds to the concepts of activity, process, etc., as generally used in other workflow meta-models.³
Fig. 5. The workflow meta-model — a Workflow aggregates Elements (Task, which may be Atomic or Complex, Role and Data), Ports (with Data, AND, OR and XOR variants, serving as input and output ports) and Links (DataLink, ControlLink and MixedLink) connecting output ports to input ports
The elements are connected by unidirectional links⁴ via ports. We distinguish between: – data links (DataLink), which are used to transfer data between tasks and to ensure the correct sequencing of a process chain, – control links (ControlLink), which are included to control the execution between role–task or task–task pairs, – mixed links (MixedLink), which are special data links with conditions for filtering the transferred data. Links connect elements by way of ports (normal ports, by default). Each element can have input/output ports (the I/O type is determined by the direction of the corresponding link). In addition, for handling more complex situations such as data merging, synchronization, etc., some specific ports (AND/OR/XOR) are introduced. To facilitate the design of a workflow, a corresponding graphical language is also proposed, as shown in Figure 6.
² In the current context, a Web service, for example, can be considered as a task.
³ The concept of a complex task matches a sub-workflow, which is an aggregate of elements, ports and links.
⁴ There is no direct link between role and resource. In most cases, links between role and resource can be deduced from role–task and task–resource links.
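To give a concrete flavour of the meta-model, the following Java sketch mirrors its core concepts (elements, ports and link kinds); the names are ours and simplify the model, e.g. the port variants are reduced to an enum.

import java.util.ArrayList;
import java.util.List;

/** Core concepts of the workflow meta-model, simplified for illustration. */
enum PortKind { NORMAL, AND, OR, XOR }
enum LinkKind { DATA, CONTROL, MIXED }

abstract class Element {                       // Task, Role or Data
    final String name;
    final List<Port> inputPorts = new ArrayList<>();
    final List<Port> outputPorts = new ArrayList<>();
    Element(String name) { this.name = name; }
}
class Task extends Element { Task(String name) { super(name); } }   // may be atomic or complex
class Role extends Element { Role(String name) { super(name); } }
class Data extends Element { Data(String name) { super(name); } }

class Port {
    final PortKind kind;
    final Element owner;
    Port(PortKind kind, Element owner) { this.kind = kind; this.owner = owner; }
}

class Link {                                    // unidirectional: from an output port to an input port
    final LinkKind kind;
    final Port from, to;
    final String condition;                     // used by mixed links to filter transferred data
    Link(LinkKind kind, Port from, Port to, String condition) {
        this.kind = kind; this.from = from; this.to = to; this.condition = condition;
    }
}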
Fig. 6. An associated graphical language
Fig. 7. A business model of the biological domain (the sequences to analyze → searching for similarities → results of the similarities → aligning the results → results of the alignment)
3.3 Intermediate Level: Concept of Context and Conformity
At the intermediate level, the user transforms the abstract business model into an instantiated concrete model using appropriate data and processes. To do this, we use the platform's search engine component to locate such resources. After that, we propose to verify and validate the generated concrete model before its execution. For this purpose, we present the sub-stages planned at this level, the difficulties that arise, and the corresponding solutions.
Analysis of the "Intermediate" Level's Sub-stages. To illustrate the different stages, we use an example from the biological domain. Figure 7 shows the abstract business model: it starts with an analysis of similarities based on a set of sequences, and then the results are transferred to the next process in order to align them. Based on such an abstract process model, the two essential sub-stages during the "intermediate" phase are:
1. Instantiation Stage. It consists in instantiating the abstract elements defined in the process model using real-world resources, which can be found by the search engine. In our context, this search is based on the meta-information of those resources. Considering again the example of Figure 7, suppose that the search leads to two concrete instances T1 and T2 for the two abstract processes, whose signatures are:
– T1: SimilarityAnalysis (A): B. T1 takes data of format A (we use "data type" and "data format" interchangeably) as input and returns a result of format B.
– T2: Alignment (C): D. T2 takes data of format C as input and returns a result of format D.
After instantiation of the business model, we obtain the concrete model shown in Figure 8.
Fig. 8. Instantiation from the business model
Fig. 9. Conformity problem
2. Chaining and Validation Stage. Concrete elements should be linked to one another using the various predefined links of the meta-model. In the example under consideration, data links will be used. A further difficulty arises (cf. Figure 9), which we call the "conformity" problem: does the data exchanged between the two concrete processes conform to their signatures?
Handling Conformity. Based on the previous analysis, we can say that the objectives of the two sub-stages are:
– searching for and locating the resources necessary for instantiating the business process model,
– verifying and correcting the incompatibilities in the instantiated model to obtain a valid model.
To achieve these objectives, several efforts have been conducted in parallel: the first concerns the modeling of the different resource categories that make up the environment, or context, of the information technology platform; the second concerns the choice of the formalism most suitable, depending on the problems posed, for representing the knowledge associated with these various resources.
Concept of Context or Environment of the Platform. The concept of the context, or environment, of the platform can be represented by the following three sub-organizations (cf. Figure 10):
– Organization of human resources: it manages the user accounts on the platform as well as their different roles and associated access rights,
– Organization of data, and
– Organization of processes.
Since we started with the hypothesis of distributed resources (data and processes), we propose to store locally only their references, in the form of appropriate descriptions. These descriptions (similar to metadata) are organized within generalization/specialization hierarchies. The latter serve as a basis for the localization and search of the real resources.
Fig. 10. Concept of work context
Fig. 11. Matching of process signatures and their input/output data formats
As shown in the class diagram of the global architecture (cf. Figure 10), in the data organization hierarchy a description relating to some concrete data is linked to the corresponding data format, while in the process organization hierarchy the description relating to a concrete process comprises at minimum the process's signature, which itself includes the input and output data formats. For building these hierarchies, we are currently developing a formalism that conforms to the metadata profile defined in the MDweb platform.
Proposed Solution for Verifying Conformity. For the purpose of verifying conformity, we propose to use the context defined earlier.
Let us take an example with a data hierarchy and a process hierarchy represented in the left and right parts of Figure 11, respectively. The manner of establishing relationships between these two hierarchies is essentially based on the matching between the different predefined data formats and the process signatures. In general, we intend to construct the global resource graph after analyzing the stored descriptions.
Definition of a Resource Graph. A resource graph in our work context is an oriented graph G = (N, A), with:
– N a non-empty finite set of nodes, N = N_P ∪ N_D ∪ N_F, where:
1. N_P: the set of nodes representing concrete processes;
2. N_D: the set of nodes representing real data;
3. N_F: the set of nodes representing data formats.
For any node n ∈ N, we thus have n ∈ N_P ∨ n ∈ N_D ∨ n ∈ N_F.
– A a set of arcs between the nodes. If an arc a = (n_1, n_2) ∈ A, then n_1 ∈ N ∧ n_2 ∈ N ∧ n_1 ≠ n_2. Two types of arcs are present in a resource graph:
1. A_R: a set of reference arcs. If an arc a_r = (n_1, n_2) ∈ A_R, then (n_1 ∈ N_D ∧ n_2 ∈ N_F) ∨ (n_1 ∈ N_P ∧ n_2 ∈ N_F) ∨ (n_1 ∈ N_F ∧ n_2 ∈ N_P).
2. A_S: a set of specialization arcs. If an arc a_s = (n_1, n_2) ∈ A_S, then n_1 ∈ N_F ∧ n_2 ∈ N_F.
An example of a resource graph is shown in Figure 12 (to keep the graph readable and at the level of concrete resources, we have not added the resource categories shown in Figure 11). It has been obtained by using a set of graphical symbols meant to represent the data descriptions (overlaid rectangles), the various data formats (the ovals), and the process descriptions (the rectangles with handles, the latter representing signatures). The reference links (ref) constructed between the resources are in fact the relationships established from the matching between the data formats stored in each of the descriptions (of data/process). For example, Data 1 is in Format 1; Process 6 takes data in Format 2 and Format 4 as input and generates results in Format 5. In order to simplify the diagram, only one specialization link (subType), between Format 6 and Format 8, is added.
Based on the previous hypothesis, we consider that the problem of checking and validating the conformity of an instantiated model can be treated as a path-finding problem between two fixed nodes in the resource graph. The verification of conformity thus comes down to determining whether there exists a match between two process nodes. To further elucidate our proposal, we provide the following definitions:
– The function subType(f_x) returns the set of formats that are sub-formats of f_x.
– The function NumberInputs(p_x) returns the number of input parameters of process p_x.
– A path c_n = (n_1, n_2, ..., n_t), with n_i ∈ N for i = 1..t, if it exists, is represented by a sequence of nodes that starts with node n_1 and ends with node n_t; all other nodes included in this sequence are covered by the path. A path between two nodes denotes in fact a possible matching solution between them.
Fig. 12. The concept of a resource graph in our context
– A set of paths path(n_x, n_y) = {c_1, c_2, ..., c_t} is a collection of paths such that, for i = 1..t, each path c_i in the collection is of the form c_i = (n_x, ..., n_y). The set of possible paths between two nodes represents the set of matching solutions found between those two nodes.
– A path c_n = (n_1, n_2, ..., n_t) is simple if and only if, for i = 2..t-1, if n_i ∈ N_P then NumberInputs(n_i) = 1. A simple path corresponds to a match between two data formats that does not require any additional input parameter. Thus, the path (F2, P2, F3, P5, F4) is a simple path: the two process nodes P2 and P5 do indeed satisfy the condition, i.e. NumberInputs(P2) = 1 and NumberInputs(P5) = 1.
– A path c_n = (n_1, n_2, ..., n_t) is complex if and only if, for i = 2..t-1, ∃ n_i ∈ N_P with NumberInputs(n_i) > 1. A complex path corresponds to a match between two data formats that requires additional input parameters. Thus the path (F4, P6, F5, P7, F6) is complex because P6 requires two input parameters (it also needs data in Format 2, in addition to the Format 4 data supplied along the path).
The analysis of compatibility between two chained processes can be summarized as an analysis of the compatibility between the data formats of the chained output and input ports of the processes. Let us suppose that the data formats f_o and f_i are linked and that the direction of the data flow is from f_o towards f_i. We can then summarize the different conformity cases into the following four situations:
1. Perfect compatibility, with condition (f_o = f_i) ∨ (f_o ∈ subType(f_i)). The data format of the output of the first process is identical to, or is a sub-type of, the data format of the input of the second process. From a syntactic point of view, no adaptation is necessary.
2. Compatibility after adaptation, with condition (f_o ≠ f_i) ∧ (f_o ∉ subType(f_i)) ∧ (path(f_o, f_i) ≠ ∅) ∧ (∃ c_n ∈ path(f_o, f_i) such that c_n is a simple path).
The two data formats are not compatible at first glance, but a path between the two has been found using an adaptation solution. This adaptation takes place automatically, without recourse to additional input parameters.
3. Compatibility with adaptation, with condition (f_o ≠ f_i) ∧ (f_o ∉ subType(f_i)) ∧ (path(f_o, f_i) ≠ ∅) ∧ (∀ c_n ∈ path(f_o, f_i), c_n is a complex path). The two data formats are not compatible at first glance, but a path that links them has been found. However, to apply the adaptation solution, additional input parameters have to be provided.
4. Incompatible, with condition (f_o ≠ f_i) ∧ (f_o ∉ subType(f_i)) ∧ (path(f_o, f_i) = ∅). This situation is quite clear: the two linked data formats are not compatible at all, and no path has been found in the resource graph. Human intervention will be required in such a case. If the problem of incompatibility is resolved by implementing a specific adapter, the system will be enriched by saving the adaptation solution used.
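The four conformity situations above can be operationalized as a search over the resource graph. The sketch below is only an illustration of that idea, not the platform's implementation: the process signatures are invented, the graph is reduced to reference arcs, and the path enumeration is deliberately simplified.

```python
from collections import deque
from typing import Dict, List, Set, Tuple

# A process is described by its signature: input formats -> one output format.
Process = Tuple[Tuple[str, ...], str]          # (inputs, output)


def build_graph(processes: Dict[str, Process]) -> Dict[str, List[str]]:
    """Reference arcs: input format -> process -> output format."""
    graph: Dict[str, List[str]] = {}
    for name, (inputs, output) in processes.items():
        for fmt in inputs:
            graph.setdefault(fmt, []).append(name)
        graph.setdefault(name, []).append(output)
    return graph


def find_paths(graph, start: str, goal: str, limit: int = 6) -> List[List[str]]:
    """Enumerate acyclic paths from format `start` to format `goal` (bounded depth)."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal and len(path) > 1:
            paths.append(path)
            continue
        if len(path) > limit:
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])
    return paths


def classify(fo: str, fi: str, processes: Dict[str, Process],
             subtypes: Dict[str, Set[str]]) -> str:
    """Decide which of the four conformity situations applies to (fo, fi)."""
    if fo == fi or fo in subtypes.get(fi, set()):
        return "perfect compatibility"
    paths = find_paths(build_graph(processes), fo, fi)
    if not paths:
        return "incompatible (human intervention needed)"

    def simple(p):  # every intermediate process takes a single input
        return all(len(processes[n][0]) == 1 for n in p[1:-1] if n in processes)

    return ("compatible after automatic adaptation" if any(simple(p) for p in paths)
            else "compatible only with additional input parameters")


# Example loosely inspired by Figure 12 (signatures are invented here).
procs = {"P2": (("F2",), "F3"), "P5": (("F3",), "F4"), "P6": (("F2", "F4"), "F5")}
print(classify("F2", "F4", procs, {}))   # -> compatible after automatic adaptation
print(classify("F4", "F5", procs, {}))   # -> compatible only with additional input parameters
```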
4 Perspectives and Conclusions
The MDweb platform offers the possibility to describe resources and then to use these saved descriptions to locate the corresponding resources. The workflow component, as presented in this article, is currently our main research focus and is under construction. The meta-model and the graphical language for designing workflows currently exist only as prototypes. We now have to construct the resource graph based on our proposals; meanwhile, we are aware that the global resource graph may become substantial and thus its construction and maintenance may prove cumbersome. The possibility of constructing a local resource graph using templates of business process chains will be taken into account, because that may alleviate this difficulty. Then the conformity cases need to be formalized and different path-finding algorithms have to be constructed to allow the validation of the intermediate-level process chains. Our next step thus consists in the verification of the instantiated process chains; these validated chains can then be shared and reused by other users. The final part of our work will be devoted to the dynamic phase, i.e., the execution strategy of valid chains.
References
1. Catalog M3 Cat, http://www.intelec.ca/html/fr/technologies/m3cat.html
2. Geonetwork Open Source, http://geonetwork-opensource.org
3. MDweb Project, http://www.mdweb-project.org
4. GEMET, http://www.eionet.europa.eu/gemet/about?langcode=en
5. AGROVOC, http://aims.fao.org/website/AGROVOC-Thesaurus
6. Altintas, I., Ludäscher, B., Klasky, S., Vouk, M.A.: S04 - Introduction to Scientific Workflow Management and the Kepler System. In: SC, p. 205 (2006)
7. Barde, J., Libourel, T., Maurel, P.: A metadata service for integrated management of knowledges related to coastal areas. Multimedia Tools Appl. 25(3) (2005)
8. BioSide. BioSide Community site, BioSide user guide, v1.0.beta (2008)
9. Desconnets, J., Libourel, T., Clerc, S., Granouillac, B.: Cataloguing for distribution of environmental resources. In: AGILE 2007: 10th International Conference on Geographic Information Science (2007)
10. Fürst, F.: L'ingénierie ontologique (2002)
11. Hull, D., Wolstencroft, K., Stevens, R., Goble, C.A., Pocock, M.R., Li, P., Oinn, T.: Taverna: A Tool for Building and Running Workflows of Services. Nucleic Acids Research 34(Web Server issue) (2006)
12. INSPIRE Metadata Implementing Rules: Technical Guidelines based on EN ISO 19115 and EN ISO 19119 (2007)
13. Draft Implementing Rules for Discovery Services (IR3). Drafting Team Network Services (2008)
14. Geographic Information Metadata, ISO 19115. International Organization for Standardization, Geneva, Switzerland (2003)
15. Geographic Information Services, ISO 19119. International Organization for Standardization, Geneva, Switzerland (2005)
16. Guidelines for the establishment and development of monolingual thesauri, ISO 2788. International Organization for Standardization (ISO), Geneva, Switzerland (1986)
17. OpenGIS Catalogue Services Specification 2.0.2 - ISO Metadata Application Profile (2007)
18. Lin, Y., Libourel, T., Mougenot, I.: A workflow language for the experimental sciences. In: ICEIS 2009: 11th International Conference on Enterprise Information Systems (2009)
19. OGC: Catalog Service for Web. Open Geospatial Consortium (2007)
20. OMG: UML 2.0 Superstructure Specification (2001)
21. OMG: Business Process Definition Metamodel, Beta 1 (2008)
22. OMG: Software Process Engineering Meta-Model Specification, Version 1.1 (2005)
23. OMG: Meta Object Facility (MOF) Core Specification, Version 2.0, formal/06-01-01 (2006)
Using Visualization and a Collaborative Glossary to Support Ontology Conceptualization Elis C. Montoro Hernandes, Deysiane Sande, and Sandra Fabbri Computing Department, Universidade Federal de São Carlos 13565-905, São Carlos, SP, Brazil {elis_hernandes,deysiane_sande,sfabbri}@dc.ufscar.br
Abstract. Although there are tools that support ontology construction, such tools do not necessarily give the conceptualization phase the execution support it needs. The objective of this paper is to present the use of visualization and of a collaborative glossary as an effective means of enhancing the conceptualization phase of ontology construction. These resources are applied through the process named ONTOP (ONTOlogy conceptualization Process), which is supported by the ONTOP-Tool; the tool provides an iterative way to define the collaborative glossary and uses a visual metaphor to facilitate the identification of the ontology components. Once the components are defined, it is possible to generate an OWL file that can be used as input to other ontology editors. The paper also presents an application of both the process and the tool, which emphasizes the contributions of this proposal. Keywords: Ontology engineering, Collaborative glossary, Conceptualization phase, Information visualization.
In the literature there are several approaches to ontology development [10][11][13][15][22]. In [5] the authors discuss ontology methods that have been used since the 1990s and comment that none of them has reached maturity. Nevertheless, Methontology [10] has been considered one of the most complete methods [5]. Among the development activities, Methontology is composed of the phases shown in Figure 1, which are present in the majority of ontology development processes. Aiming at supporting the execution of these phases, some tools and languages have been proposed in the literature. Examples of these tools are: OntoEdit [26], WebONTO [7], WebODE [1] and Protégé-2000 [21][24]. Examples of these languages are: RDF (Resource Description Framework) and RDF-S (Resource Description Framework Schema) [13], OIL (Ontology Interchange Language) [16], DAML+OIL [4] and OWL (Ontology Web Language) [23]. In this research we use, in particular, the ontology editor Protégé-2000 [21][24] and the language OWL [23], due to the features they provide for our work and to their acceptance in the ontology area.
Fig. 1. Ontology Development Activities of Methontology. Adapted from [5].
In spite of such proposals, none of the tools supports the conceptualization phase highlighted in Figure 1. The objective of this phase is to organize the unstructured knowledge acquired in the previous specification phase. The conceptualization phase converts domain information into a semi-formal specification using a set of intermediate representations. This phase is considered the most important one for the ontology identification, and its initial activity is the construction of a glossary [5]. Apart from the glossary, many initial definitions and ontology components, like classes and their relationships, are established in this phase. Even in the other methods cited before, for example the Ontology Development 101 guide [22], On-To-Knowledge [11] and Grüninger and Fox's method [15], there is an equivalent phase. Considering this context, the objective of this paper is to present, through the ONTOP process (ONTOlogy conceptualization Process) and the ONTOP-Tool, the use of a collaborative glossary and of visualization to support the conceptualization phase, which helps a great deal in the identification of classes. In this proposal we use the glossary available in the free Moodle environment (Modular Object-Oriented Dynamic Learning Environment) [19]. The activities proposed in this process are supported by the ONTOP-Tool, which is responsible for the interaction between the glossary and the visualization, allowing an easier identification of the ontology components. The intention of the example presented in this paper is to explain the process, the interaction between the glossary and the visualization, and the functionalities of the tool, showing the contribution of the proposal rather than exploring the ontology itself.
The remainder of the paper is organized as follows: Section 2 comments on the importance of a glossary for ontology definition and mentions the main characteristics a glossary should have; Section 3 provides a brief view on visualization; Section 4 presents the ONTOP process and explains the activities that compose it; Section 5 provides an example of the process application supported by the tool; and Section 6 presents the conclusions and future work.
2 The Importance of a Glossary for Ontology Definition
As mentioned before, [14] defines ontology as a formal and explicit specification of the description of concepts in a domain. These concepts must represent the semantics of the domain, since the ontology can be used by many applications. Hence, the domain knowledge should be acquired and documented as precisely as possible and, to achieve this objective, some authors indicate the glossary as the artifact able to support and facilitate this task [10][9]. A common problem associated with ontology construction is that those who need the ontology are specialists in the domain and are often geographically spread. If that is the case, tool support becomes essential in order to allow a collaborative development and documentation of the ontology construction. The glossary becomes a richer source of information when a greater number of specialists participate in its production, since in that case it synthesizes concepts from different contributors. Another important reason for using glossaries is that their use is much friendlier to the specialists than the notation applied to describe ontologies. Concerning the main characteristics, a glossary should satisfy the auto-reference principle, which says that terms used to describe other terms should also be entries of the glossary. In addition, a glossary should satisfy the principle of minimum vocabulary, i.e., the vocabulary should be as small as possible and free of ambiguities. The authors of [9] emphasize that glossaries should use the concept of hypertext, aiming at facilitating the navigation in the document. This characteristic is native to the Moodle environment and was a reason for choosing this environment in this proposal. In addition, Moodle provides other resources, like a glossary administrator, different permissions, discussion forums, etc., that facilitate the collaborative work and its management.
3 Information Visualization
Visualization is a process that transforms data, information and knowledge into a visual form that explores the natural visual capacity of human beings, providing an interface between two powerful systems of information treatment: the human brain and the computer [12]. Effective visual interfaces provide quick interaction with large volumes of data, making it easier to identify characteristics, patterns and tendencies that would otherwise remain masked.
In the literature there are several visualization techniques and tools that implement them [12]. They present advantages and limitations according to the format and type of the data to be visualized and to the exploration needs. In this research we use the Tree-Map technique [18], which is illustrated in Figure 2. This technique represents the data as nested rectangles in accordance with their hierarchy. The size of the rectangles is proportional to the number of items that compose the next level of the hierarchy. The size and color variation of the Tree-Map makes evident the characteristics of each set of data. This enhances the visualization of large sets of data such as, for example, a glossary that contains many terms. In addition, the Tree-Map uses all the screen space, which allows the representation of a great amount of data. Figure 2 shows an example of a Tree-Map. In this case the technique is used on the NewsMap site to show news about several topics from several countries [20]. In this example each box represents a news item, and the news items are grouped by subject. Each subject has a color, and the color tone of the box represents the time at which the news was posted on the site. In Figure 2 each subject is highlighted with a white dashed line.
Fig. 2. Newsmap site [20]
Visualization allows a broad view of the data as well as the abstraction of new information more quickly than if the analysis were done manually. Even when the set of data is small, an appropriate visualization allows an immediate identification of tenuous differences in the data. Many advantages of using visualization can be seen in the research of several authors [2][3][17]. In this research the Tree-Map technique is used to represent the terms of the glossary such that each box represents a term and the size and color of the boxes represent the frequency with which the term is used in the glossary. In this case, the visualization allows a quick identification of the most cited terms, which are candidates for classes of the ontology.
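As a simple illustration of this idea (the exact counting rule used by the ONTOP-Tool is not described here, so the sketch below is an assumption), the frequency of each term across the other definitions can be computed as follows and used to drive box size and color:

```python
import re
from typing import Dict


def term_frequencies(glossary: Dict[str, str]) -> Dict[str, int]:
    """Count how often each glossary term occurs in the other terms' definitions.

    High counts flag terms that are candidates for ontology classes.
    """
    counts = {}
    for term in glossary:
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        counts[term] = sum(len(pattern.findall(definition))
                           for other, definition in glossary.items() if other != term)
    return counts


# Toy glossary (invented entries, for illustration only).
glossary = {
    "Experiment": "A controlled study ...",
    "Controlled Experiment": "An experiment in which variables are manipulated ...",
    "Replication": "Repetition of an experiment under similar conditions ...",
}
for term, freq in sorted(term_frequencies(glossary).items(), key=lambda kv: -kv[1]):
    print(f"{term}: {freq}")   # the frequency could drive box size/colour in the Tree-Map
```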
Although there are tools that implement this technique, for example TreeMap [27], they do not provide the resources essential to our problem. Aiming to refine the glossary, we needed the visualization tool to provide two basic operations: string search and editing. This led us to implement the ONTOP-Tool, which will be presented in Section 5.
4 The ONTOP Process
ONTOP is a process, supported by the ONTOP-Tool, which enhances the ontology conceptualization by making use of a glossary and of visualization (Figure 3). The glossary can be constructed in a collaborative way among the ontology stakeholders, including the domain specialists, through an iterative process of refinement. As the collaborative work is fundamental, we decided to use the glossary of the Moodle environment, since it provides several management facilities. In addition, it is possible to export the terms to an XML file so that they can be loaded into the ONTOP-Tool and visualized through the Tree-Map [27], allowing interaction between the tool and Moodle [19]. After that, by means of the visualized information, it is possible to define the initial ontology components. These components can be exported to an OWL file [23] and used by an ontology editor like Protégé-2000 [21][24] to go ahead with the ontology formalization.
Fig. 3. ONTOP process
The steps that compose the ONTOP process are the following:
Step 1 – Refine Glossary: the objective of this step is to create, refine and validate the glossary iteratively, counting on the domain specialists' involvement. This is an iterative step in which the glossary is imported from and exported to the Moodle environment, as well as to and from the ONTOP-Tool, until the glossary is finally able to represent the domain. During this iteration the ontology stakeholders can insert, remove or define terms. Step 1 is composed of the following activities:
1) Create a glossary in the Moodle environment;
2) Share the glossary with the ontology stakeholders so that the specialists can participate in the glossary definition;
3) Refine the glossary with the following actions: (i) export the glossary from Moodle to an XML file; (ii) import the XML file into the ONTOP-Tool so it can be visualized by means of the Tree-Map; (iii) export the glossary from the ONTOP-Tool to an XML file; (iv) import the XML file into Moodle; (v) go back to (i) until the ontology stakeholders come to an agreement (a sketch of this XML round trip is given after the step list).
Step 2 – Define ontology components: the objective of this step is to identify, among the glossary terms, the possible ontology components and then classify them as class, class instance, relationship or synonym. At this point the contribution of the ONTOP-Tool is to make evident the most used terms, pinpointing them as possible candidates for ontology components.
Step 3 – Define class hierarchy: the objective of this step is to define the hierarchy of the components identified in Step 2. The hierarchy is easily established in the ONTOP-Tool through a drag-and-drop action.
Step 4 – Define class relationships: the objective of this step is to attribute the relationships among the classes. Some of these relationships are predefined and obtained from the information generated in Step 2, and others can be inserted by the user when necessary.
Step 5 – Generate OWL file: the objective of this step is to generate the OWL file, which is composed of all the information defined by the user in the previous four steps. This file can be imported by an ontology editor such as Protégé-2000, which is used in this research.
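The following is a rough illustration of the XML round trip used in Step 1. It is our sketch only: the element names GLOSSARY, INFO, ENTRIES, ENTRY, CONCEPT and DEFINITION are assumptions about the export format, not taken from the paper or from the Moodle documentation, and should be adjusted to the actual file structure.

```python
import xml.etree.ElementTree as ET
from typing import Dict


def load_glossary(xml_path: str) -> Dict[str, str]:
    """Read term/definition pairs from a glossary export (assumed element names)."""
    tree = ET.parse(xml_path)
    glossary = {}
    for entry in tree.getroot().iter("ENTRY"):
        concept = entry.findtext("CONCEPT", default="").strip()
        definition = entry.findtext("DEFINITION", default="").strip()
        if concept:
            glossary[concept] = definition
    return glossary


def save_glossary(glossary: Dict[str, str], xml_path: str) -> None:
    """Write the (possibly refined) glossary back to an XML file."""
    root = ET.Element("GLOSSARY")
    entries = ET.SubElement(ET.SubElement(root, "INFO"), "ENTRIES")
    for concept, definition in glossary.items():
        entry = ET.SubElement(entries, "ENTRY")
        ET.SubElement(entry, "CONCEPT").text = concept
        ET.SubElement(entry, "DEFINITION").text = definition
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
```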
5 An Example of Using ONTOP and the ONTOP-Tool
In this section we present an example of the process application, detailing the functionalities provided by the ONTOP-Tool. The process is illustrated based on the Experimental Software Engineering (ESE) domain. The glossary constructed for this domain counted on the collaboration of domain specialists who composed the program committee of the 2006 Experimental Software Engineering Latin American Workshop [8]. Experimental Software Engineering is a growing area in software engineering and deals with different types of experimental studies, for example surveys, case studies and controlled experiments [28]. Due to limited space we cannot give a deep overview of the domain. In spite of this limitation, our main objective is to explain the process steps, showing how they work and how they help the ontology conceptualization phase. The ESE glossary was constructed in the Moodle environment, aiming to facilitate the communication among the program committee members, who were geographically distributed. Based on this ESE glossary version, ONTOP was applied as illustrated below.
Figure 4 shows the initial screen of the ONTOP-Tool, which has buttons for the functionalities needed to execute the process steps. To execute Step 1, for refining the glossary, the user should use the first three buttons. By clicking on the button “Import Moodle Glossary”, the tool loads the XML file that contains the ESE glossary. After that, the user should click on the button “Analyse the Glossary” to visualize it as in Figure 5, where:
• each box corresponds to a term;
• each box is colored according to the term frequency;
• by clicking on a box it is possible to insert or edit the term definition;
• boxes that have a faded color represent terms that were edited in the current visualization;
• terms can be inserted or excluded as in the Moodle glossary.
Fig. 4. ONTOP-Tool initial screen
In this example, as the ESE glossary had already been constructed by the domain specialists, not much iteration was necessary to execute the refinement. However, just to exemplify the contribution of visualization, note that at the top left corner of Figure 5 there is a set of boxes that are grouped because they correspond to terms that do not have a definition. This situation is easily identified in the visual metaphor. To obtain this information in the Moodle environment, the user would have to check the terms one by one. Missing or identical definitions are quickly identified by means of the ONTOP-Tool. Considering the previous situation, if the user decides to insert a definition for these terms, the color of their boxes is faded (see Figure 6). This is an interesting artifice of the ONTOP-Tool, since every time a color is faded in the visual metaphor it means
that the corresponding term was edited. The color will remain faded while the user stays in the same functionality. If the user wishes to share the edits among the ontology stakeholders, he should export this version of the glossary by clicking on the button “Export Moodle Glossary” and import it again into the Moodle environment by means of the Moodle functionality.
Fig. 5. Initial glossary visualization
Fig. 6. Fade color represents edited terms
All these activities should be repeated until the ontology stakeholders reach an agreement. Once the glossary is finished, the user can execute Step 2 by clicking on the button “Identify components” to classify the glossary terms as ontology components. Figure 7 shows the screen of this functionality, where the region for inserting the definitions is highlighted. The visualization is the same as in the previous functionality and, at this moment, one of the contributions of visualization is related to the size and color of the boxes, since they represent the frequency associated with each term. Terms that have a high frequency are candidates to become classes of the ontology. For example, in Figure 7 the terms Experiment, Simulation and Survey, which are highlighted in the figure, correspond to the most referenced terms in the ESE glossary; they have the largest boxes and colors that correspond to high levels of frequency. In fact, for the ESE domain, the term “Experiment” is used for defining or expressing many other terms, like Controlled Experiment, Experiment Design, Replicated Experiment, etc. The same happens with the term “Simulation”, which is used to compose Continuous Simulation and Dynamic Simulation, in addition to defining other terms.
Fig. 7. Screen for defining components
Another contribution of visualization for defining the ontology components is related to the search resource. In this case, when the user classifies a term, the ONTOP-Tool uses this term as a keyword for searching other terms that use the defined term in some way. As shown in Figure 8, the terms that satisfy the search are highlighted on the screen. This allows all these terms to be classified at the same time, making the classification activity easier. In Figure 8 all the terms that use “Validity” were highlighted when the user classified that term.
Fig. 8. Terms highlighted after a searching
As happens during the refinement activity, the color of the boxes becomes faded as the terms are classified. In Figure 9 all the boxes have a faded color. This visual effect allows the user to quickly identify the terms that were defined and the ones that were not. After all the terms have been classified, the next step provided by the ONTOP-Tool – Step 3 – corresponds to the button “Define hierarchy”, which should be used to organize, in
Fig. 9. Visualization after the definition of all the terms
a hierarchical way, the ontology classes defined in the previous step. This functionality uses a drag-and-drop interface, which facilitates the operation. Again considering the ESE domain, the initial organization of the classes is presented in Figure 10. By means of the drag-and-drop resource the user can reach the organization shown in Figure 11 in a friendly way. Another resource provided by the ONTOP-Tool is available through Step 4, which corresponds to the button “Define relationships” of Figure 4. The screen related to this functionality is presented in Figure 12. In this interface, the classes defined in Step 2 are presented on the left and on the right side of the screen. Between them, a list of properties is presented. These properties can be provided by the ONTOP-Tool or can be defined by the user on this occasion. The properties that are provided by the tool correspond to the ones that are frequently used in ontologies or the ones that were defined in Step 2.
Fig. 10. Initial hierarchy of the ontology classes
Fig. 11. Final hierarchy of the ontology classes
The establishment of the relationships requires the following actions:
(i) select a class from the right-hand list, for example, the Lab Package class;
(ii) select a property, for example, is_basis_for;
(iii) select a class from the left-hand list, for example, the Replication class;
(iv) confirm the relationship;
(v) repeat actions (i) to (iv) until all the relationships are established.
After these actions, the relationship “Lab Package is_basis_for Replication” was created.
Fig. 12. Screen for defining relationships
We observe that the ONTOP-Tool creates relationships of the Domain-Range type. This kind of relationship indicates that the property links the individuals of the Domain class to the individuals of the Range class. The other kinds of relationship used in the context of ontologies must be created with the tools that support ontology development, like Protégé-2000. Finally, the last functionality provided by the ONTOP-Tool corresponds to Step 5 and to the button “Create OWL file” of Figure 4. This functionality allows the creation of the file that contains all the information defined so far. The OWL file can be imported into several ontology editors. In our research we use Protégé-2000, versions 3.4 and 4.0.
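To make the Domain-Range idea and the OWL generation of Step 5 concrete, the sketch below builds a minimal OWL model for the Lab Package example using the rdflib library. This is only an illustration, not the ONTOP-Tool implementation; the namespace and the subclass arrangement are invented for the example.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/ese-ontology#")   # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Classes and hierarchy (Step 3); the subclass choices are illustrative only.
for cls in (EX.Experiment, EX.LabPackage, EX.Replication):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.LabPackage, RDFS.subClassOf, EX.Experiment))
g.add((EX.Replication, RDFS.subClassOf, EX.Experiment))

# Domain-Range relationship (Step 4): "Lab Package is_basis_for Replication".
g.add((EX.is_basis_for, RDF.type, OWL.ObjectProperty))
g.add((EX.is_basis_for, RDFS.domain, EX.LabPackage))
g.add((EX.is_basis_for, RDFS.range, EX.Replication))

# Step 5: write an OWL (RDF/XML) file that an editor such as Protégé can import.
g.serialize(destination="ese.owl", format="pretty-xml")
```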
6 Conclusions and Further Work
To sum up, this paper presented the ONTOP process, which supports the conceptualization phase of ontology development. The tools for ontology development identified in the literature do not deal with the conceptualization of the domain, since they focus on the implementation phase. ONTOP deals with this phase and is supported by the ONTOP-Tool, which facilitates the construction of a collaborative glossary as well as the identification of the ontology components. Concerning the glossary construction, the glossary itself should be as representative of the target domain as possible. To reach this objective it is essential that different views and suggestions are considered. This implies the involvement of different stakeholders, especially the domain specialists, who are often geographically distributed. For this reason we decided to use the Moodle glossary, since the Moodle environment is free software that provides a good set of glossary management functionalities. By means of the Moodle environment the glossary is easily shared and validated by many stakeholders. Also, as it is possible to export the glossary as an XML file, the ONTOP-Tool provides an iteration activity that enhances its refinement.
Another support provided by ONTOP and the ONTOP-Tool for the conceptualization phase (which is essential for every method that supports ontology development, including Methontology) is visualization. This resource was adopted for two different purposes: to facilitate the glossary refinement (for example, making it easier to identify the definitions of terms) and to facilitate the preliminary identification of the ontology components (for instance, using the size of the boxes to select possible components of the ontology). An additional advantage of our proposal is the fact that, since the ONTOP-Tool can automatically generate an OWL file at the end of the process, the next phases of the ontology construction may be carried out from a point where many definitions have already been made. To continue the ontology construction, the OWL file can be imported into an ontology editor like Protégé-2000, among others. All things considered, it is important to finally point out that although we used the Experimental Software Engineering domain to exemplify the process and the tool, it was not our intention to present a deeper analysis of the ontology itself, but rather to explore the ONTOP process and the ONTOP-Tool. In further studies we intend to improve the ONTOP-Tool by adding linguistic processing, so that semantic tagging can be used to enhance the identification of the ontology components. Another functionality that we intend to add to the ONTOP-Tool is the generation of an XMI (XML Metadata Interchange) file. This file would allow classes, properties and relationships to be used by UML tools.
Acknowledgements. The authors would like to thank the Brazilian funding agencies CAPES, CNPq and FAPESP, the institute INEP and the Project Observatório da Educação for their support.
References
1. Arpirez, J.C., Corcho, O., Fernández-López, M., Gómez-Pérez, A.: WebODE: a scalable workbench for ontological engineering. In: International Conference on Knowledge Capture, pp. 6–13. ACM Publisher, New York (2001)
2. Auvil, L., Llorà, X., Searsmith, D., Searsmith, K.: VAST to Knowledge: Combining tools for exploration and mining. In: IEEE Symposium on Visual Analytics Science and Technology, pp. 197–198. IEEE Press, New York (2007)
3. Chen, C., Kuljis, J., Paul, R.J.: Visualizing Latent Domain Knowledge. IEEE Transactions on Systems, Man and Cybernetics: Applications and Reviews 31(4), 518–529 (2001)
4. Connolly, D., Harmelen, F., Horrocks, I., McGuinness, D.L., Patel-Schneider, P.F., Stein, L.A.: DAML+OIL Reference Description. W3C, Cambridge (2001), http://www.w3.org/TR/daml+oil-refeence
5. Corcho, O., Fernández-López, M., Gómez-Pérez, A.: Methodologies, tools and languages for building ontologies. Where is their meeting point? Data & Knowledge Engineering 46(1), 41–64 (2003)
6. Daconta, M.C., Obrst, L.J., Smith, K.T.: The Semantic Web: A guide to the future of XML, Web Services and Knowledge Management. Wiley Publishing, Indianapolis (2003)
7. Domingue, J.: Tadzebao and WebOnto: discussing, browsing and editing ontologies on the web. In: Banff Knowledge Acquisition, Modeling and Management Workshop. KAW Press, Alberta (1998)
8. Fabbri, S., Travassos, G.H., Maldonado, J.C., Mendonça Neto, M.G.M., Oliveira, M.C.F.: ESE Glossary. In: Proceedings of the Experimental Software Engineering Latin American Workshop, pp. 1–33. UNIVEM, Rio de Janeiro (2006)
9. Falbo, R.A., Menezes, C.S., Rocha, A.R.: A systematic approach for building ontologies. In: Coelho, H. (ed.) IBERAMIA 1998. LNCS (LNAI), vol. 1484, pp. 349–360. Springer, Heidelberg (1998)
10. Fernández-López, M., Gómez-Pérez, A., Pazos-Sierra, A., Pazos-Sierra, J.: Building a chemical ontology using Methontology and the ontology design environment. IEEE Intelligent Systems & Their Applications 14(1), 37–46 (1999)
11. Fensel, D., Harmelen, F.: Project Presentation On-To-Knowledge: Content-driven Knowledge-Management Tools through Evolving Ontologies. Technical report, Vrije Universiteit Amsterdam (1999)
12. Gershon, N., Eick, S.G., Card, S.: Information Visualization. ACM Interactions 5(2), 9–15 (1998)
13. Gómez-Pérez, A., Fernández-López, M., Corcho, O.: Ontological Engineering. Springer, London (2004)
14. Gruber, T.R.: Toward principles for the design of ontologies used for knowledge sharing? Knowledge Acquisition 5(2), 199–220 (1993)
15. Grüninger, M., Fox, M.S.: Methodology for the Design and Evaluation of Ontologies. In: Workshop on Basic Ontological Issues in Knowledge Sharing, pp. 1–10. AAAI Press, Montreal (1995)
16. Horrocks, I., Fensel, D., Harmelen, F., Decker, S., Erdmann, M., Klein, M.: OIL in a Nutshell. In: Workshop on Application of Ontologies and PSMs, pp. 4.1–4.12. ECAI Press, Berlin (2000)
17. Ichise, R., Satoh, K., Numao, M.: Elucidating Relationships among Research Subjects from Grant Application Data. In: 12th International Conference on Information Visualization, pp. 427–432. IEEE Press, New York (2008)
18. Johnson, B., Shneiderman, B.: Tree-maps: a space-filling approach to the visualization of hierarchical information structures. In: 2nd Conference on Visualization, pp. 284–291. IEEE Press, California (1991)
19. Moodle – Modular Object-Oriented Dynamic Learning Environment, http://moodle.org/
20. NewsMap – Application for Google News, http://newsmap.jp
21. Noy, N.F., Fergerson, R.W., Musen, M.A.: The Knowledge Model of Protégé-2000: Combining Interoperability and Flexibility. In: Dieng, R., Corby, O. (eds.) EKAW 2000. LNCS (LNAI), vol. 1937, pp. 17–32. Springer, Heidelberg (2000)
22. Noy, N.F., McGuinness, D.: Ontology development 101: a guide to creating your first ontology. Technical Report, Stanford Knowledge Systems Laboratory KSL-01-05 and Stanford Medical Informatics SMI-2001-0880 (2001)
23. OWL – Ontology Web Language, http://www.w3.org/2004/OWL
24. Protégé-2000 – Ontology Editor and Knowledge Acquisition System, http://protege.stanford.edu
25. Staab, S., Schnurr, P., Studer, R., Sure, Y.: Knowledge Processes and Ontologies. IEEE Intelligent Systems – Special Issue on Knowledge Management 16(1) (2001)
26. Sure, Y., Erdmann, M., Angele, J., Staab, S., Studer, R., Wenke, D.: OntoEdit: Collaborative Ontology Development for the Semantic Web. In: Horrocks, I., Hendler, J. (eds.) ISWC 2002. LNCS, vol. 2342, pp. 221–235. Springer, Heidelberg (2002)
27. TreeMap Tool, http://www.cs.umd.edu/hcil/treemap
28. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation in software engineering – an introduction. Springer, Sweden (2000)
A Strategy to Support Software Planning Based on Piece of Work and Agile Paradigm Deysiane Sande, Arnaldo Sanchez, Renan Montebelo, Sandra Fabbri, and Elis Montoro Hernandes Computing Department, Federal University of São Carlos, São Carlos, Brazil {deysiane_sande,renan_montebelo,sfabbri, elis_hernandes}@dc.ufscar.br, [email protected]
Abstract. Background: The estimation of iterations must be as precise as possible, especially for agile methods, since the success of this kind of development is intrinsically related to it. Aim: In order to establish a systematic planning of iterations, this paper presents the PW-Plan strategy, which works with different planning techniques and a generic unit of work to plan the iteration, under the focus of the agile development paradigm. Method: The PW-Plan strategy was extracted from a real software development process and evolved from another strategy that combines the application of Use Case Points and the Personal Software Process. Results: PW-Plan was applied in two case studies in two small companies, which showed the feasibility of its application. Conclusion: The case studies provided insights into the contribution of PW-Plan to both the developer's and the manager's processes. Also, its application provides more precise estimations for each iteration. Keywords: Planning, Planning tracking, Process improvement, Agile method, Small companies.
processes. This concern led the company to adopt the PSP; hence the development team uses the main practices of PSP 1.1 – those related to project estimation – together with Process Dashboard [5], a PSP-supporting tool. After defining the UCP|PSP strategy, it was observed that its steps were quite naturally related to the agile practices proposed in the Scrum framework [6]. As agile methods are a feasible methodology for small teams [7], they became appropriate in the context of small businesses. Thus, the UCP|PSP strategy evolved in order to become more generic regarding the size estimation method. Planning and describing iterations is possible not only through use cases, but also with any other unit of work, such as user stories. Moreover, being based on iterations, the strategy is adaptable, especially to agile methods. This evolution of the UCP|PSP strategy was given the name PW-Plan, and it is presented in this paper. Aiming to support this evolution, other tools were investigated. Two major points guided this evaluation: the chosen support tool should ensure that all stakeholders have access to the project data, and the tool should be free, since the target was small companies with limited budgets. Hence, FireScrum [8] was selected to be used in conjunction with Process Dashboard [5] to enhance the support for PW-Plan. This article is organized as follows: Section 2 presents the concepts related to agile methods and agile planning. Section 3 details the PW-Plan strategy proposed in this paper. Section 4 comments on some free tools that support agile methods, with more details given for the FireScrum and Process Dashboard tools, which were used to apply PW-Plan. Section 5 presents two case studies in two small companies, Linkway and NBS, showing the use of the strategy in different situations. Section 6 presents the lessons learned and, finally, Section 7 presents the conclusions and further work.
2 Agile Methods and Agile Planning
The agile approach applied to software development came into the spotlight in 2001, with the publication of the Agile Software Development Manifesto [9]. This manifesto highlighted the differences of agile methods compared to traditional methods, especially by being incremental, cooperative, direct and adaptive. The most prominent agile methodologies are: Extreme Programming – XP [7], the Dynamic Systems Development Method [10], Feature Driven Development – FDD [11], Adaptive Software Development – ASD [12], OpenUP [13], the Crystal Clear and Crystal Orange methods of the Crystal methodology [14], and Scrum [6]. Scrum stands out among these methods due to its emphasis on project management. It is an agile framework that provides a set of best practices to achieve the success of a project, supporting the construction of a software product in iterative steps. It does not define 'what should be done' in all circumstances. Hence, it may be used in complex projects where it is not possible to predict everything that will occur [6]. Scrum projects are carried out in a series of iterations called Sprints. Each Sprint has a certain time in calendar days to be completed; Schwaber proposes a 30-day Sprint [6]. Scrum defines three main roles: the Product Owner, responsible for the requirements; the Team, represented by the developers; and the Scrum Master, represented by the manager. The two main artifacts of Scrum are the Product Backlog – the list of
106
D. Sande et al.
requirements that must be implemented in the system – and the Sprint Backlog – the list of tasks to be performed in a Sprint. The process begins when the Product Owner has a general description of the product to be developed. From this description, called the Vision, the Product Backlog is created. At the beginning of each Sprint, the Sprint Planning Meeting takes place, where the Product Owner prioritizes the Product Backlog items. Based on this prioritization, the Team selects the tasks that will be in the Sprint Backlog. During the Sprint, the Team carries out the Daily Scrum Meeting – which is 15 minutes long – where the work is synchronized and possible issues are discussed. At the end of each Sprint, the Team presents the completed functionality at the Sprint Review Meeting, and the Scrum Master encourages the Team to review the development process to make it more efficient for the next Sprint.
A key point for the proper practice of agile methods is the planning of iterations. This planning must be based on the size estimation of the items that will be developed, and also on the productivity of the Team members. Although these items may have different representations, the most common one is the User Story, which is a brief description of the functionality being developed according to the client's project vision [15]. Among the existing methods to estimate the size of the work to be done, Cohn highlights [15]:
Story Points (SP): a unit of measure which expresses the size of a user story, a system characteristic or any piece of work to be developed.
Ideal days: a unit of measure which corresponds to an ideal day of work, that is, a day when every resource needed is available and the work is done without interruptions.
Two scales of magnitude are suggested by Cohn to characterize the complexity (or size) of the work to be done: the Fibonacci sequence, in which the next number is the sum of the two previous numbers (1, 2, 3, 5, 8, ...); and a second sequence, in which each number is twice the preceding number (1, 2, 4, 8, 16, ...). These scales can be used in conjunction with what Cohn calls Planning Poker. In order to estimate the complexity of a task, Team members receive cards with these sequence numbers and, for each Piece of Work, the values are put together until a consensus is reached. Haugen [16] presents results indicating good accuracy of the Team's estimations when this type of technique is used.
In addition to the methods presented to estimate the size of the work to be done, there are other, more traditional techniques that can be used to assist in planning. They are:
Function Point Analysis (FP): proposed by Albrecht for measuring software project size [17]. This measure is calculated based on the complexity of the technique's five logical components. The points are calculated in two steps, generating respectively the unadjusted and the adjusted points. In the latter, technical and environmental factors that are considered to interfere with the complexity of the development are taken into account.
Use Case Points (UCP): proposed by Karner to estimate software projects based on use cases [3]. This technique was inspired by FP and also calculates unadjusted and adjusted points. Unadjusted Use Case Points are based on the complexity of
actors and use cases. The Adjusted Use Case Points consider environmental and technical complexity factors, much like FP. Based on the complexity, Karner estimated the development time by multiplying the UCP by 20 hours; this value should be adjusted to the company size and to the complexity of the software being developed.
The agile estimation techniques previously mentioned are feasible alternatives to obtain the estimations necessary in the strategy proposed in this research, even if agile methods are not being used. In addition, regardless of the technique used to estimate, it is important to continually control and monitor the estimation and the software planning. As will be presented, the strategy proposed herein takes this activity into account.
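For readers unfamiliar with UCP, the following sketch computes adjusted Use Case Points and a first effort estimate with the 20 hours/UCP figure mentioned above. The actor and use-case weights and the TCF/ECF formulas are the values commonly attributed to Karner; they are not listed in this paper, so they should be taken as an illustrative assumption.

```python
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}


def use_case_points(actors, use_cases, tech_factor_sum, env_factor_sum,
                    hours_per_ucp=20.0):
    """Adjusted Use Case Points and an effort estimate.

    `actors` / `use_cases` map complexity labels to counts. `hours_per_ucp`
    is the 20 h/UCP figure cited in the text and should be calibrated to the
    company's own productivity.
    """
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    uucp = uaw + uucw                      # unadjusted points
    tcf = 0.6 + 0.01 * tech_factor_sum     # technical complexity factor
    ecf = 1.4 - 0.03 * env_factor_sum      # environmental factor
    ucp = uucp * tcf * ecf                 # adjusted points
    return ucp, ucp * hours_per_ucp


ucp, hours = use_case_points({"simple": 1, "complex": 2},
                             {"average": 4, "complex": 1},
                             tech_factor_sum=30, env_factor_sum=20)
print(f"UCP = {ucp:.1f}, estimated effort = {hours:.0f} h")
```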
3 The PW-Plan Strategy
The PW-Plan strategy is an evolution of UCP|PSP [2], which was established by systematically using the Use Case Points and PSP methods together. This evolution resulted from the observation that the strategy's steps could easily be adjusted to agile methods, especially Scrum. The strategy's main goal is to support the planning and monitoring of each iteration, increasing the quality of the software development process. While Scrum only determines 'what should be done' [18], PW-Plan defines 'how it should be done'. In the context of this paper, we consider the use of PSP 1.1 supported by the Process Dashboard tool, which records the total time spent in every tracked activity. However, if PSP and Process Dashboard are not used, the strategy can still be applied, as long as alternative ways of time tracking are adopted. The strategy consists of two large blocks which are constantly executed: planning and control. These blocks feed each other with information gathered by the PSP method, which provides constant feedback. The Control block's aim is to assess whether the planning elaborated in the Planning block is being correctly followed and, if not, the reasons for this situation. With this Control feedback, the planning activities are constantly adjusted by the EL (Effort Level), which represents the relationship between time spent and work done. The work itself is characterized by the complexity measure of the estimation method being used, such as UCP, SP, etc. The Scrum roles are represented in the strategy as follows: the Scrum Master is represented by the manager; the Team is represented by the developers; the Product Owner is the customer representative who is responsible for the return on investment. The Scrum activities are identified in the strategy as follows: Sprint Planning Meeting 1 corresponds to Step 1; Sprint Planning Meeting 2 is related to Steps 1, 2 and 3; the Sprint Review Meeting is related to Step 9. Regarding the work to be done, the correlation between Scrum and the proposed strategy is as follows: the Product Backlog is represented by a system specification that can be described through use cases, stories, etc.; the Sprint Backlog corresponds to the Piece of Work (PW), which may be composed of Items of Work (IW), which in turn can be decomposed into Tasks. For example, a PW may be a set of use cases selected for an iteration; an IW, in this case, is a use case, which can be decomposed into tasks.
Figure 1 presents the whole strategy, which is composed of the following steps:
Step 1 – Planning Meeting: based on the system specification - which can be represented by use cases, user stories, etc. - the manager and the developers discuss the complexity of the work to be done. This complexity is characterized by a technique compatible with the representation being used. For example, use cases call for the UCP technique; user stories call for Story Points, and so on. In the case of Story Points, the complexity is determined using some technique - such as Planning Poker - in conjunction with the Fibonacci sequence.
Step 2 – PW Detailed Planning: based on the specification of Step 1, the PW to be developed in the iteration is defined. The IWs that compose this PW are defined based on the development workload of the iteration. In the first iteration, historical data or the manager's and developers' experience should be used to determine the EL. Subsequent iterations must use the EL calculated in Step 8.
Step 3 – Detailed IW Planning: each developer is responsible for dividing the IWs assigned to him or her into Tasks, using the method of his or her preference. At the end of each Task, the developer must evaluate his or her EL according to the time recorded by the Process Dashboard tool, aiming at a more precise planning of the next Task. If PSP is being used, this self-evaluation is equivalent to the ‘postmortem’ phase. In addition, the Process Improvement Proposal report is generated, in which errors are reported and estimation improvement activities are proposed.
Step 4 – Development: based on the detailed planning of the previous step, the developer effectively carries out coding, testing and defect-fixing activities according to his or her personal process, which can be improved using the PSP guidelines.
Step 5 – Concluded Task: this event is characterized by the conclusion of a Task, triggering Step 8, which calculates a new EL value.
Step 6 – Concluded Item of Work: this event is related to the conclusion of an IW, also triggering Step 8.
Step 7 – Concluded Piece of Work: end of the iteration. This event is characterized by the conclusion of a PW, triggering the execution of Step 9.
Step 8 – Current EL Calculation: when a Task or an IW is concluded, the developer's EL must be adjusted to reflect the actual relationship between work done and effort in hours. The adjusted EL corresponds to the time accumulated so far (provided by PSP through the Process Dashboard tool) divided by the number of points representing the complexity of the work already done.
Step 9 – Control Meeting: at the end of an iteration, a meeting between the manager and the developers is carried out to discuss lessons learned and the project scope. Eventually, the scope may change due to new or eliminated requirements, which may impact the initial planning. Besides, if the EL varies widely within a single iteration, it must be evaluated whether any external factor (technical or environmental) may be interfering with the developer's performance.
In summary, the PW-Plan strategy provides a systematic approach for planning and controlling iteration-based software development. It is based on the PSP guidelines for individual software process improvement, particularly on the planning activities.
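For illustration only (this is not part of the authors' tooling), the bookkeeping behind Steps 5–8 can be sketched as follows; the names are hypothetical, and the usage example anticipates the Linkway figures reported in Section 5.1.

```python
# Sketch of the Step 8 bookkeeping: EL = accumulated time / accumulated complexity
# points, and the remaining time re-estimated from the current EL.

from dataclasses import dataclass

@dataclass
class IterationTracker:
    total_points: float        # adjusted UCP or total Story Points planned for the system
    done_points: float = 0.0   # complexity points of the work already concluded
    spent_hours: float = 0.0   # time recorded, e.g. by the Process Dashboard tool

    def conclude(self, points: float, hours: float) -> None:
        """Register a concluded Task/IW (Steps 5 and 6) and update the accumulators."""
        self.done_points += points
        self.spent_hours += hours

    @property
    def effort_level(self) -> float:
        """Step 8: current EL, i.e. hours spent per complexity point of work done."""
        return self.spent_hours / self.done_points

    def remaining_hours(self) -> float:
        """Re-planned time to finish: EL * (total points - points already done)."""
        return self.effort_level * (self.total_points - self.done_points)

# Linkway figures from Section 5.1: 93.45 adjusted UCP in total;
# first use case: 14.50 UCP concluded in 24.60 hours.
tracker = IterationTracker(total_points=93.45)
tracker.conclude(points=14.50, hours=24.60)
print(round(tracker.effort_level, 2))       # 1.7 (reported as 1.70 in Table 2)
print(round(tracker.remaining_hours(), 2))  # 133.94; the paper rounds EL to 1.70 first, giving 134.22
```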
Fig. 1. PW-Plan strategy
Iterative work is the basis of the strategy; hence, it is easily adapted to Scrum. Moreover, the strategy's work unit, named Piece of Work, is a general unit and can also be adapted at the enterprise level.
4 Investigated Tools to Support PW-Plan
Project management and project planning tools should support the planning, monitoring and controlling of a project [1]. This requires that project-related information be kept in a common database, easily accessible in order to obtain project metrics. In the agile context, it is important that the tool chosen to support the process keeps the development process as unbureaucratic as possible. For small businesses, an important factor when choosing a tool is its acquisition cost. At present, there are some free tools that have been developed to support the monitoring of agile projects, such as Agilofant, XPlanner, TargetProcess, Agilo for Scrum, IceScrum and FireScrum, among others. These tools intend to provide the ability to manage a project following the agile model, where documentation tends to be reduced and deliveries are made incrementally and in shorter periods.
Among the tools mentioned, FireScrum was selected to support the application of the PW-Plan strategy. It was developed using Web 2.0 concepts and gathers a set of integrated applications to support teams that use Scrum as a basis for developing their projects. It is also particularly useful for distributed teams [8].
The main purpose of the FireScrum tool is to support teams that use Scrum. However, it also has some support modules designed to address other project management needs. The modules of FireScrum are: Core, Taskboard, Planning Poker, Bug Tracking and Desktop Agent.
Although FireScrum meets some needs of the PW-Plan strategy, it does not provide functionality for tracking the time spent on activities. Thus, to meet this specific need, the Process Dashboard tool may be used together with FireScrum. In this case, Process Dashboard is used to perform the time estimation and control. Hence, the IWs discussed at the Planning Meeting must be recorded in FireScrum, which allows the creation of Work Items (Backlog Items) and the customization of the measure used. When a product is created in FireScrum, it is possible to associate a measure with it, which can be hours, use case points, story points, days, etc.
On the main screen of FireScrum - in the Core module - two columns are displayed: the Product Backlog and the Committed Backlog. In the Product Backlog column, all system items are recorded together with the complexity obtained for each IW during Step 1 of PW-Plan. When an IW is created, its description must be registered; it can be refined during the next planning meetings, when the IW is selected to compose an iteration. In the Committed Backlog column, the iterations (Sprints) are created, and the IWs selected to compose the PW should be dragged from the Uncommitted Backlog to the created Sprint. After the meeting, each team member must select the items assigned to him or her, break them into tasks, and estimate each task in hours. To control the time spent in development, the IWs and tasks created in FireScrum should be replicated in Process Dashboard, in which the time is measured, hence allowing the EL calculation.
Through the Core and Task Board modules of FireScrum, the work is tracked by the whole team. In the Task Board module, tasks not yet started are placed in the ‘To Do’ column, while tasks under development are in the ‘In Progress’ column. Tasks that cannot be started at the moment – for whatever reason – are placed in the ‘Impeded’ column, and concluded tasks in the ‘Done’ column. In the next section, two case studies show in detail the application of the strategy in two different contexts.
5 Case Studies
This section presents two practical applications of the proposed strategy: the first is the development, from the ground up, of a website by the Linkway company; the second is an update to a traditional desktop system by the NBS company. Each project used a different estimation procedure: the first used Use Case Points (UCP), while the second used Story Points (SP) together with the Fibonacci sequence [14]. This shows that the strategy is generic and can be adapted to different habits and needs. In both cases the companies used Process Dashboard [5] as the PSP support tool.
5.1 Linkway Case Study – Use Case Points
This case study was conducted at the Linkway company during the development of a web portal for a carpet industry. This portal had the following features:
a catalogue of manufactured products, a list of representatives, product news and institutional data. These data were stored in a database and manipulated by the Web application. The whole portal was built in Java [19] by a single developer, who spent approximately 216 hours distributed over the nine use cases that composed the system.
To plan the activities, the strategy presented in this paper was applied using Use Case Points to calculate the estimation. First, at the Planning Meeting (Step 1), the manager and the developer defined the complexity values of the Actors and Use Cases, hence calculating the Unadjusted Use Case Points. Also in this step, the technical and environmental complexity factors were evaluated. Then, in the PW Detailed Planning (Step 2), the Adjusted Use Case Points and the total estimated time to develop the system were calculated. This time corresponds to the multiplication of the Adjusted Use Case Points by the EL, which was determined from the company's historical data. These values are presented in Table 1.
Still in the PW Detailed Planning stage, the use cases (IWs) composing each iteration were defined. To define the PW, it is necessary to know the duration of one iteration. At Linkway, an iteration corresponds to a two-week period (around 60 hours). The workload of the iteration is considered this way because a developer works 8 hours per day, but, for planning purposes, only 6 hours per day are considered. Thus, considering the historical EL (3), each iteration should have a maximum of 20 UCP (hours / EL = 60 / 3 = 20).

Table 1. Initial planning values of the Web system developed by Linkway

Description                               Value
Unadjusted Use Case Points                101
Technical Complexity Factor               1.10
Environment Factor                        0.85
Adjusted Use Case Points                  93.45
Initial EL (company's historical data)    3 hours
Initial Total Time Estimated              280.40 hours
Based on this value, the use cases were distributed considering 20 UCP per iteration. From this point on, the system development was started and both the EL and the total development time were continuously adjusted. This adjustment allowed the re-planning of the current iteration at the end of each use case, and also allowed the next iteration to be planned based on current data instead of historical information. To perform these adjustments, at the conclusion of each Use Case (Step 7), the following values were updated:
Accumulated Use Case Points: the current Use Case points plus the points of the previously finished Use Cases;
Accumulated time spent in Use Case development: the time spent in the current Use Case plus the previously accumulated value;
Current EL value: the Accumulated Time divided by the Accumulated UCP (this new EL varies as the developer's performance varies);
Remaining time to system finalization: the new EL multiplied by the difference between the Adjusted UCP and the UCP accumulated so far.
Table 2 depicts the application of the strategy; the table must be updated as each Use Case is finished. This systematic monitoring of the development process provides effective control of the iterations, allowing their constant adjustment and making the overall planning more feasible and less error-prone.

Table 2. Results of the strategy application at Linkway

Use    Complexity   Adjusted Use Case Points    Time effectively used (h)   EL         Time left in hours
Case   (Step 1)     (Steps 1 and 2)             (Step 4)                    (Step 8)   (Step 2)
                    Individual   Accumulated    Individual   Accumulated
Sprint 1
1      Complex      14.50        14.50          24.60        24.60          1.70       134.22
2      Simple       5.24         19.74          8.36         32.96          1.67       123.10
3      Medium       9.87         29.61          38.20        71.16          2.40       153.22
Sprint 2
4      Complex      14.50        44.10          53.50        124.66         2.83       139.66
5      Medium       9.87         53.97          10.20        134.86         2.50       98.70
Sprint 3
6      Simple       5.24         59.22          6.50         141.36         2.39       81.81
7      Medium       9.87         69.09          29.50        170.86         2.47       60.17
8      Medium       9.87         78.96          28.50        199.36         2.52       36.51
Sprint 4
9      Complex      14.50        93.45          16.85        216.21         2.31       0
Thus, after finishing use case 1, the corresponding row of the table was updated. The accumulated values were exactly the same as the individual values, because only this use case had been developed so far. The EL is then 1.70 (24.60 / 14.50), and not 3.0 as initially assigned according to historical data (see Table 1). Due to the decreased EL value, the planning was recalculated and it was possible to predict that the initial planning of 20 points per iteration could be increased to 30. It is likely that the experience gained by the developer made him more productive than in previously developed applications. If the developer continued with this newly calculated productivity, the application - which initially should have consumed 280.4 hours - would, under the current conditions, consume 158.87 hours of work (93.45 UCP * 1.7 EL). Hence, the time estimated to completely finish the application would be 134.22 hours ((93.45 - 14.50) * 1.70).
When use case 2 was completed, the same calculations described earlier were applied. The new EL was then 1.67, indicating that 123.10 more hours would be needed to achieve full implementation. This corresponds to a difference of 157.30 hours with respect to the initial estimation. Observing the data for use case 9, it is possible to note that the EL increased to 2.31 and that the actual number of hours spent to develop the system was 216.21, less than the original estimation of 280.40 hours.
It should be highlighted that this constant change in the EL was the result of the application and recording of the PSP planning activities. This procedure gives the developer a greater personal planning capacity, as well as a more precise work estimation capacity.
5.2 NBS Case Study – Story Points
The second case study, carried out at the NBS company, was an update to a desktop public accounting system developed in Delphi [20]. The existing accounting system was restructured to meet the requirements of electronic auditing by governmental agencies. Because this was a maintenance activity on a previously existing system, the application of Use Case Points was not appropriate, since only some parts of the use cases would be modified. Thus, the modifications to the system were described as user stories. As in the previous case study, the work was performed by a single developer, who spent approximately 380 hours distributed over the 40 stories that made up the system. The Fibonacci sequence - one of the methods proposed by studies in the area to characterize the complexity of a user story [14] - was used to calculate these Story Points.
At the Planning Meeting between the manager and the developer (Step 1), each user story was given a score using the Fibonacci sequence. The total number of Story Points to complete the development of the system was calculated as 308. Then, in the PW Detailed Planning activity (Step 2), the total time estimated to update the system was calculated by multiplying the Story Points by the EL, whose value was taken from historical data. From these calculations, a certain number of stories was allocated to each two-week iteration. These values are shown in Table 3.

Table 3. Initial planning values of the desktop system updated by NBS

Description                      Value
Total Story Points               308
Initial EL (historical data)     1.3 hours
Initial total time estimated     400.4 hours
The distribution of the stories per iteration was done, as in the previous case study, considering that each iteration lasts approximately 60 hours. Thus, considering the historical EL (1.3), each sprint should have approximately 47 story points (hours / EL = 60 / 1.3 = 47.2). The development was then initiated, and the adjustments of the EL and of the remaining time for conclusion were made at the end of each story. When a story was concluded (Step 7), the following values were updated:
Accumulated Story Points: the current story points plus the previously accumulated value;
Total accumulated time spent coding stories: the time spent coding the current story plus the previously accumulated value;
Current EL value: the total accumulated time spent coding stories divided by the accumulated story points;
Total time to development completion: the current EL multiplied by the difference between the total story points and the points accumulated so far.
Table 4 shows the stories developed during the iterations and the values calculated when the strategy was applied. As an example, the EL calculated for story 3 was the total time (0.48 + 2.20 + 12.42 = 15.10) divided by the total story points (1 + 2 + 8 = 11), which is 1.37 (15.10 / 11). The data are only partially shown in Table 4 because there were too many stories to be represented in this paper. When story 1 was finished, the corresponding row was updated. The accumulated values were exactly the same as the individual values, because only this story had been developed so far. The EL was then 0.48 (0.48 / 1), and not 1.30 as initially indicated by the historical values. Hence, if the developer continued with this productivity, only 147.36 hours would remain to complete the whole development.
Table 4. Results of the strategy application in NBS company
However, this EL value produced a very low time estimation. Thus, it was decided to wait for the completion of a more complex story to verify whether productivity would remain so high. When story 3 was concluded, it was possible to note that the EL was then much nearer the initial value taken from the developer's historical data. Thus, the manager's decision was to keep the iteration with the same quantity of stories distributed in the PW Detailed Planning. At the end of iteration 1, the EL was 1.2, near the 1.37 obtained for story 3. Based on this new EL, a new number of Story Points was calculated as the appropriate amount of work for each iteration. This value is 50 points (hours / EL = 60 / 1.20 = 50). As this value was similar to the initial one, no modifications were made to the organization of the iterations.
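For concreteness, a brief sketch with hypothetical helper names reproduces the NBS figures discussed above: the EL of 1.37 computed after story 3 and the 147.36 remaining hours projected after story 1.

```python
# Story-point variant of the same bookkeeping (illustrative names, restated so the
# example stands on its own).

def effort_level(spent_hours: float, done_points: float) -> float:
    return spent_hours / done_points

def remaining_hours(el: float, total_points: float, done_points: float) -> float:
    return el * (total_points - done_points)

TOTAL_STORY_POINTS = 308                      # Table 3
stories = [(1, 0.48), (2, 2.20), (8, 12.42)]  # (points, hours) of the first three stories

done = sum(p for p, _ in stories)             # 11 points
spent = sum(h for _, h in stories)            # 15.10 hours
print(round(effort_level(spent, done), 2))    # 1.37, the EL computed for story 3

# After story 1 alone the EL was 0.48, projecting:
print(round(remaining_hours(0.48, TOTAL_STORY_POINTS, 1), 2))  # 147.36 hours
```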
Fig. 2. FireScrum planning screen
For this 308-story-point system, with an average EL of 1.24, the actual time spent developing the whole system was 380 hours, less than the originally estimated 400 hours. This difference between the estimated and the actual time, although small, is strong evidence of the developer's improvement in his personal planning and work estimation capacity. This is, again, a result of the constant application of the PSP methods, which require the developer to plan his work and thereby become more precise in his estimations. To make all development data available to the people involved and to register the finished project for further analysis, the project execution data were recorded in the FireScrum tool. Figure 2 shows the iterations executed throughout this project and the IWs developed in the first iteration.
6 Lessons Learned
The main lessons learned are related to the definition of a personal process whose aim is to constantly plan and monitor the development of a software project. The definition of a personal process improves the productivity and the decision-making capacity of the developer. In the case studies presented, both companies formally adopted the PSP, which provided a disciplined development environment in which productivity could be constantly measured and estimated. Even if developers do not adopt the PSP, they should always produce estimations of the work to be developed and then track the time spent to effectively develop it.
Iteration-based development facilitates planning, which must be elaborated according to the productivity of each developer. Hence, the chances of success are high, which makes the strategy very feasible in the context of small businesses. Planning must be constantly monitored, because this allows estimations to be adjusted at any time during development. This monitoring should be done by the manager, who acts as a coach of the team, re-estimating the work to be developed and encouraging developers to improve their personal software development processes.
7 Conclusions
In this paper, the PW-Plan strategy was presented. This strategy supports the planning stage of iteration-based software development. It can be used with other development methodologies, because the planning phase is essential for every development cycle. Each iteration develops a PW, which can be composed of IWs that are distributed among the developers. These, in turn, can decompose an IW into Tasks. Throughout development, the developers must use the planning guidelines of PSP 1.1, so that each developer commits to planning and tracking the time associated with each development effort. This approach allows the Effort Level (EL), which reflects the relationship between work and the time spent doing it, to be constantly updated. This always up-to-date value allows the monitoring of the project as a whole, as well as of each iteration.
The case studies provided evidence that the PW-Plan strategy supports project planning and control, providing an improvement in the activities of both the manager and the developer. The manager gains greater control over the project and is able to take decisions at the appropriate time if something deviates from what was expected. The developer discovers his or her productivity and, therefore, makes more precise estimations of his or her personal work. Overall, the estimations for the software project become more accurate.
It is noteworthy that in the context of large companies the strategy may have to be adapted because, depending on the size of the development team, individual control may be unfeasible. For larger teams, the manager must define an approach to "coach" several developers at the same time, even though the strategy must still be applied by each developer and the manager should control their productivity individually. The two case studies presented had the participation of a single developer, a fact that did not allow a preliminary assessment of this issue.
The EL value varied by almost 100% from one company to the other. This strongly suggests that the EL should be adjusted to the context of each company in order to represent, preferably, the productivity profile of each developer. There was a relative stability of the EL within each project, allowing the constant monitoring of development through the analysis of this variable. Because PW-Plan is based on iterations and on the performance monitoring of the developer, it is a generic strategy. It can be adapted to agile methods and to any estimation technique used by the company, be it a small business or a large enterprise.
As future work, it is intended to include other levels of the PSP in the strategy, to apply it to projects with more than one developer and also to explore other types of metrics that could further improve planning. At present, quality practices - especially Verification, Validation and Testing (VV&T) activities - are already being incorporated into the strategy in such a way that the same systematic control remains functional. Also, it is intended to analyze the strategy as a support to the implementation of process models such as the Capability Maturity Model Integration [21] and the Brazilian Software Process Improvement model [22].
Acknowledgements. We would like to thank CNPq and FAPESP for their financial support. Special thanks to the Linkway and NBS companies for their cooperation in this work.
References
1. Pressman, R.S.: Software Engineering: A Practitioner's Approach. McGraw-Hill Inc., New York (2007)
2. Sanchez, A., Montebelo, R., Fabbri, S.: PCU|PSP: Uma Estratégia para ajustar Pontos por Casos de Uso por meio do PSP em Empresas de Pequeno Porte. In: Proceedings of VI Simpósio Brasileiro de Qualidade de Software, pp. 187–202 (2007)
3. Karner, G.: Resource Estimation for Objectory Projects. Objective Systems SF AB (copyright owned by Rational Software) (1993)
4. Humphrey, W.S.: A discipline for software engineering. Addison-Wesley, Pittsburgh (1995)
5. Dashboard - The Software Process Dashboard Initiative, http://processdash.sourceforge.net
6. Schwaber, K.: Agile project management with Scrum. Microsoft Press, Redmond (2004)
7. Beck, K., Andres, C.: Extreme Programming Explained: Embrace Change. Addison-Wesley Professional, USA (2004)
8. Cavalcanti, E., Maciel, T.M.M., Albuquerque, J.: Ferramenta Open-Source para Apoio ao Uso do Scrum por Equipes Distribuídas. In: Workshop de Desenvolvimento Distribuído de Software (2009)
9. Manifesto - Manifesto for Agile Software Development, http://agilemanifesto.org/
10. DSDM - DSDM Public Version 4.2 Manual, http://www.dsdm.org/version4/2/public/
11. Palmer, S.R., Felsing, J.M.: A Practical Guide to Feature-Driven Development. Prentice-Hall, New Jersey (2002)
12. Highsmith, J.: Agile software development ecosystems. Addison-Wesley, Reading (2002)
13. Openup, http://epf.eclipse.org/wikis/openup/
14. Cockburn, A.: Agile software development. Addison-Wesley Longman Publishing Co. Inc., Boston (2002)
15. Cohn, M.: Agile estimating and planning. Prentice-Hall, New Jersey (2005)
16. Haugen, N.C.: An empirical study of using planning poker for user story estimation. In: Proceedings of Agile 2006 Conference, p. 34 (2006)
17. Albrecht, A.J.: Measuring application development productivity. In: Proceedings of SHARE/GUIDE IBM Application Development Symposium, pp. 83–92 (1979)
18. Kniberg, H.: Scrum and XP from the Trenches - How we do Scrum (2007), retrieved from http://www.crisp.se/henrik.kniberg/ScrumAndXpFromTheTrenches.pdf
19. Sun - Developer Resources for Java Technology, http://java.sun.com
20. Embarcadero - Delphi, http://www.embarcadero.com/products/delphi
21. CMMI - Capability Maturity Model Integration Version 1.2. CMMI-SE/SW, V1.2 – Continuous Representation (SEI Technical Report CMU/SEI-2006-TR-001) (2006)
22. MPSBR - Melhoria de Processo do Software Brasileiro – Guia Geral (Versão 1.2) (2007), http://www.softex.br/mpsbr (retrieved January 4, 2010)
Evaluating the Quality of Free/Open Source Systems: A Case Study
Lerina Aversano and Maria Tortorella
Department of Engineering, University of Sannio, via Traiano 82100, Benevento, Italy
{aversano,tortorella}@unisannio.it
Abstract. Selecting and adopting open source projects significantly impacts the competitiveness of organizations. Small and Medium Enterprises, especially, have to deal with major difficulties when they have to adopt existing solutions for managing their business activities. This problem is particularly felt with regard to ERP Open Source systems. This paper proposes EFFORT (Evaluation Framework for Free/Open souRce projecTs), a framework for evaluating the quality of Open Source systems. The framework is then specialized for evaluating the quality of ERP Open Source systems. The usefulness of the specialized framework is investigated through a case study.
Keywords: Software evaluation, Software metrics, Open Source, Enterprise resource planning.
needs to be specialized to the evaluation of FlOSS systems related to a specific context. In particular, this paper specializes EFFORT to the context of ERP software systems and shows the applicability of the specialized version through a case study. The paper is structured as follows: related work is discussed in Section 2; the quality model and the proposed measurement framework are described in Section 3; Section 4 discusses the specialization of EFFORT and its application to a case study, consisting of the evaluation of an ERP FlOSS project; finally, conclusions and future work are discussed in Section 5.
2 Related Work
Open source projects are quite different from standard ones in terms of production process, distribution methods and support. Many organizations and researchers consider the evaluation of these aspects necessary to assess the quality of an open source project.
In [3], Kamseu and Habra analyzed the different factors that potentially influence the adoption of open source software. They identified a three-dimensional model based on the following dimensions of quality related to open source projects: quality of the development process; quality of the community that made and maintains the product; and quality of the product. The authors also defined a set of hypotheses regarding the correlation between quality and adoption of an open source project. In particular, they supposed that the quality of the community and of the development process has a positive effect on the global project quality, which in turn has a positive influence on the adoption of the related product.
In [4], Sung, Kim and Rhew focused on the quality of the product and identified some problems in evaluating an OSS product, such as the difficulty of using descriptions and/or specifications and of collecting information that the developers do not make public. They also noticed that the testing of OSS is often insufficient, because of the programmers' lack of responsibility for testing the performance and/or defects of their product, since commercial purposes are usually not the main reason for the development. For defining an OSS product quality model, the authors of [4] acted in a collaborative way, collecting priorities for a set of product characteristics defined by different open source development companies. The resulting model consists of a set of 4 main quality characteristics (functionality, usability, implantation, reusability) and 10 sub-characteristics.
IRCA [5] – Identify, Read reviews, Compare, and Analyze – is an OSS selection process based on the side-by-side comparison of different software. The process consists of four steps: identify candidates, read existing reviews, compare the leading programs' basic attributes to one's needs, and analyze the top candidates in more depth. The attributes it considers are: functionality, cost, market share, support, maintenance, reliability, performance, scalability, usability, security, flexibility/customizability, interoperability, and legal/license issues. No metric is considered for performing the evaluation.
The QSOS – Qualification and Selection of Open Source software – methodology [6] consists of a set of steps aiming at evaluating a list of OSS projects that meet a set of requirements, on the basis of a list of evaluation criteria,
in order to select the OSS project that best meets the evaluation criteria. QSOS is designed for reuse and uses a repository of past evaluations. Criticisms of this methodology were raised with reference to its small scoring range [9]. Moreover, ambiguity was detected due to the lack of definitions.
The OpenBRR project – Business Readiness Rating for Open Source – was born with the same purpose as QSOS [8]. The approach is based on a set of high-level steps regarding: pre-screening; tailoring of the evaluation template, by reviewing and selecting appropriate evaluation criteria from a given hierarchy; data collection and processing; and data translation. Criticisms of OpenBRR mainly regard the ambiguity of the terms used for the criteria [7].
QualiPSo – Quality Platform for Open Source Software – is one of the biggest initiatives related to open source software realized by the European Union. The project was born with the purpose of supporting companies and governments in the adoption of trusted OSS. QualiPSo intends to define and implement technologies, processes and policies for facilitating the development and use of OSS components with the same level of trustworthiness given by proprietary software. The QualiPSo products include an evaluation framework for the trustworthiness of Open Source projects [9], consisting of: a conceptual model that defines trustworthiness in terms of product quality, much like the ISO/IEC 9126 standard, and that contains as-is utility, exploitability in development, functionality, interoperability, reliability, performance, security, cost effectiveness, customer satisfaction and developer quality; a trustworthiness model, which links the qualities influencing trustworthiness with the main characteristics of OSS products; and a set of metrics.

Table 1. Comparison among models: coverage of the ISO/IEC 9126 standard
Table 1 shows a comparison among the models described above with reference to the quality parameters proposed in the ISO/IEC 9126 standard. In several cases, an ISO attribute is considered covered by a quality model if the model considers at least one of its sub-attributes. The table highlights that none of the quality models considers all the quality attributes introduced by the ISO standard. The quality model that appears to be the most complete is IRCA, which considers 6 of the 7 listed parameters, even if it does not propose metrics for performing the evaluation.
Analyzing the common features consists of identifying the intersections among the models. The diagram in Figure 1 shows the intersections of all the listed models. This comparison does not include the quality characteristics of ISO/IEC 9126. As can be seen, some models mostly emphasize the intrinsic characteristics of the product and only marginally the other OSS dimensions. Such models provide the largest coverage of the ISO standard. Vice versa, models that try to consider OSS aspects in depth offer a reduced coverage. IRCA is the only exception, as it appears to be complete enough. Unfortunately, it is just a conceptual model: it does not provide an evaluation framework and is not formalized.
Fig. 1. Comparison among FlOSS projects quality models
This paper proposes the EFFORT evaluation framework, aiming at overcoming the limitations of the previous approaches through the definition of a quality model that considers all the characteristic aspects of an open source software project.
3 The Proposed Framework
This section presents the proposed evaluation framework, called EFFORT – Evaluation Framework for Free/Open souRce projecTs. Its main purpose is to define a quality model and a measurement tool for supporting the evaluation of FlOSS projects, avoiding the limitations of the approaches analyzed in the previous section.
The quality model is synthesized in Figure 2. It defines the quality of a FlOSS project as the synergy of three major components: the quality of the product developed within the project, the trustworthiness of the community of developers and contributors, and the attractiveness of the product to its specified catchment area. The evaluation framework covers the aspects reported in the model above and is defined by applying the Goal Question Metric (GQM) paradigm [10]. This paradigm guides the definition of a metric program on the basis of three abstraction levels:
the Conceptual level, referring to the definition of the Goals to be achieved by the measurement activity; the Operational level, consisting of a set of Questions addressing the way the assessment/achievement of a specific goal is faced; and the Quantitative level, identifying a set of Metrics to be associated with each question.
Figure 2 shows the hierarchy of attributes that composes the quality model. In correspondence to each first-level characteristic, one Goal is defined; hence, EFFORT includes three goals. The Questions, consequently, map the second-level characteristics. Considering the amount of aspects to be taken into account, Goal 1 was broken up into sub-goals because of its high complexity. For reasons of space, the figure does not present the third level, related to the metrics used for answering the questions. The following subsections summarily describe each goal, providing a formalization of the goal itself, incidental definitions of specific terms and the list of questions. For reasons of space, the metrics considered for answering all the questions are not reported in this paper; a complete portion of the framework, including questions and metrics, is shown only for Goal 2.

Fig. 2. The Free/Open Source Project Quality model
3.1 Product Quality
One of the main aspects denoting the quality of a project is product quality. It is unlikely that a product of high and durable quality is developed within a poor-quality project. So, it is necessary to consider all the aspects of software product quality, as defined by the ISO/IEC 9126 standard [1]. Goal 1 is defined as follows: Analyze the software product with the aim of evaluating its quality, from a software engineering point of view.
Given the vastness of the aspects considered by the ISO standard, Goal 1 is decomposed into sub-goals, each of which is focused on a single issue corresponding to one of the six main characteristics of the reference model:
portability, maintainability, reliability, functionality, usability, and efficiency. The in-use quality characteristic is not considered in this context. Several sub-characteristics are defined for each attribute and examined by specific questions. Table 2 shows the sub-goals and questions related to Goal 1. A precise definition of each characteristic can be found in the ISO/IEC 9126 standard [1].

Table 2. Sub-goals and questions about Product Quality

Sub-goal 1a: Analyze the software product with the aim of evaluating its portability, from a software engineering point of view
Q 1a.1 What degree of adaptability does the product offer?
Q 1a.2 What degree of installability does the product offer?
Q 1a.3 What degree of replaceability does the product offer?
Q 1a.4 What degree of coexistence does the product offer?
Sub-goal 1b: Analyze the software product with the aim of evaluating its maintainability, from a software engineering point of view
Q 1b.1 What degree of analyzability does the product offer?
Q 1b.2 What degree of changeability does the product offer?
Q 1b.3 What degree of testability does the product offer?
Q 1b.4 What degree of technology concentration does the product offer?
Q 1b.5 What degree of stability does the product offer?
Sub-goal 1c: Analyze the software product with the aim of evaluating its reliability, from a software engineering point of view
Q 1c.1 What degree of robustness does the product offer?
Q 1c.2 What degree of recoverability does the product offer?
Sub-goal 1d: Analyze the software product with the aim of evaluating its functionality, from a software engineering point of view
Q 1d.1 What degree of functional adequacy does the product offer?
Q 1d.2 What degree of interoperability does the product offer?
Q 1d.3 What degree of functional accuracy does the product offer?
Sub-goal 1e: Analyze the software product with the aim of evaluating its usability, from a user's point of view
Q 1e.1 What degree of pleasantness does the product offer?
Q 1e.2 What degree of operability does the product offer?
Q 1e.3 What degree of understandability does the product offer?
Q 1e.4 What degree of learnability does the product offer?
Sub-goal 1f: Analyze the software product with the aim of evaluating its efficiency, from a software engineering point of view
Q 1f.1 How is the product characterized in terms of time behavior?
Q 1f.2 How is the product characterized in terms of resource utilization?
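As an illustration of the GQM structure just listed (not an artifact of the EFFORT tooling), the following sketch lays out Goal 1, its sub-goals and the sub-characteristics addressed by the questions of Table 2; the metric level is omitted, as in the paper.

```python
# Goal 1 and its sub-goals as plain data; each sub-characteristic corresponds to
# one question of Table 2 (e.g. the first entry of 1a is question Q 1a.1).
goal_1 = {
    "goal": "Analyze the software product to evaluate its quality",
    "sub_goals": {
        "1a Portability":     ["adaptability", "installability", "replaceability", "coexistence"],
        "1b Maintainability": ["analyzability", "changeability", "testability",
                               "technology concentration", "stability"],
        "1c Reliability":     ["robustness", "recoverability"],
        "1d Functionality":   ["functional adequacy", "interoperability", "functional accuracy"],
        "1e Usability":       ["pleasantness", "operability", "understandability", "learnability"],
        "1f Efficiency":      ["time behavior", "resource utilization"],
    },
}
```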
3.2 Community Trustworthiness
When adopting a FlOSS product, users are generally worried about the support offered in case of trouble. The possibility of getting a product for free awakens the often well-founded fear of having no warranties. In fact, the community is not duty-bound to support a user who adopts its software product. Anyway, a certain degree of support is generally given, in a quantity and modality that differ from one community to another. It is therefore considered valuable to include community trustworthiness in the definition of the global quality of a FlOSS project.
By community trustworthiness, we intend the degree of trust that a user can place in a community regarding support. Goal 2 is defined as follows: Analyze the offered support with the aim of evaluating the community with reference to its trustworthiness, from a (user/organization) adopter's point of view.
Generally, a community provides a set of tools that support users in using its products, such as forums, mailing lists, bug trackers, documentation, wikis and frequently asked questions. In other cases, it is possible for the user to acquire a commercial edition of the software product, which usually differs from the free edition in terms of the support and warranties provided. The availability of support tools, as well as of support services, gives users confidence in using the product and increases the degree of trust toward the community and, consequently, the project. Another important factor influencing trust in a project is the availability of documentation for installing, using and modifying the software product. Moreover, the larger and more active a community is in developing new releases, updates, patches and so on, the more it is trusted from the user's point of view. All these aspects are included in the community trustworthiness concept. Table 3 shows the set of questions related to Goal 2, and Table 4 lists the metrics related to question Q 2.3.

Table 3. Questions about Community Trustworthiness
Q 2.1 How many developers does the community involve?
Q 2.2 What degree of activity does the community have?
Q 2.3 Are support tools available and effective?
Q 2.4 Are support services provided?
Q 2.5 Is the documentation exhaustive and easily consultable?
Table 4. Metrics related to question Q 2.3
M 2.3.1 Number of threads per year
M 2.3.2 Index of unreplied threads
M 2.3.3 Number of forums
M 2.3.4 Average number of threads per forum
M 2.3.5 Average number of posts per year
M 2.3.6 Degree of internationalization of the forum
M 2.3.7 Number of trackers
M 2.3.8 Wiki volume
M 2.3.9 Number of frequently asked questions
3.3 Product Attractiveness
This goal has the purpose of evaluating the attractiveness of the product toward its catchment area. The term attractiveness indicates all the factors that influence the adoption of a product by a potential user, who perceives its convenience and usefulness for achieving his or her goals.
Goal 3, related to product attractiveness, is formalized as follows: Analyze the software product with the aim of evaluating it as regards attractiveness, from a (user/organization) adopter's point of view.
This goal is more dependent on the application context than the other ones. The difference among application contexts is the reason why different kinds of software products are developed to satisfy different needs. For instance, an Enterprise Resource Planning (ERP) system refers to an organizational context, and the more customizable and configurable it is, the more attractive it is. Such a thing is not necessarily true for word processing software, of which the user mainly requires ease of use and compliance with de facto standards. It is possible, anyway, to identify some attractiveness factors shared among all FlOSS projects.

Table 5. Questions about Product Attractiveness
Q 3.1 What degree of functional adequacy does the product offer?
Q 3.2 What degree of diffusion has the product achieved?
Q 3.3 What level of cost effectiveness is estimated?
Q 3.4 What degree of reusability and redistribution is allowed by the license?
Two elements that have to be considered during the selection of a FlOSS product are functional adequacy and diffusion. The latter, in fact, can be considered a marker of how much the product is appreciated and recognized as useful and effective. Other factors that can be considered are cost effectiveness, estimated through the TCO (Total Cost of Ownership) [11], and the type of license. These aspects are considered in formulating the questions of Goal 3, which are listed in Table 5.
Concerning cost effectiveness, considered in Question Q 3.3, it is opportune to collect all the information regarding the cost of services. The amount of available information can vary a lot among projects, also because services are sometimes offered by third parties. To make the evaluation framework more complete with reference to a specific project, it is possible to add metrics whenever the available information allows it.
The license, referred to in Question Q 3.4, can have a varying degree of relevance, according to the purposes and needs of the users. In particular, the kind of license influences reuse and imposes more or less severe restrictions regarding the possibility of including the code in other projects. The principal characteristics of licenses are: persistency, which imposes constraints on redistribution; and propagation, which states that the source code is linkable only with code released under the same license. A license can have both of the characteristics above, just one of them, or none. This leaves different degrees of freedom in using the code within other projects. It is not rare that software producers leave the possibility of choosing among different types of licenses, so as not to prevent the diffusion and adoption of their software.
3.4 Data Analysis
Once data have been collected by means of the metrics, it is necessary to aggregate them according to the interpretation of the metrics,
so that useful information can be obtained for answering the questions. The aggregation of the answers gives an indication regarding the achievement of the goals. In doing the aggregation, some issues need to be considered. These are listed below:
• Metrics have different types of scale, depending on their nature. Thus, it is not possible to directly aggregate the measures. To overcome this, after the measurement is done, each metric is mapped to a discrete score in the [1-5] interval, where: 1 = inadequate; 2 = poor; 3 = sufficient; 4 = good; and 5 = excellent.
• A high value for a metric can be interpreted in a positive or a negative way, according to the context of the related question; the same metric could even contribute in two opposite ways in the context of two different questions. So, the appropriate interpretation is provided for each metric.
• Questions do not have the same relevance in the evaluation of a goal. A relevance marker is associated to each question in the form of a numeric value in the [1,5] interval. Value 1 is associated to questions with minimum relevance, while value 5 means maximum relevance.
The aggregation function for goal g is defined as follows:

score(g) = ( Σ_{q ∈ Qg} r_q · m(q) ) / ( Σ_{q ∈ Qg} r_q )

where r_q is the relevance associated to question q (sub-goal, for Goal 1) and Qg is the set of questions (sub-goals for Goal 1) related to goal g. m(q) is the aggregation function of the metrics of question q:

m(q) = ( Σ_{id ∈ Mq} [ (1 − i(id)) · v(id) + i(id) · (6 − v(id)) ] ) / |Mq|

where v(id) is the score obtained for metric id and i(id) is its interpretation. In particular, i(id) = 0 when a high value of metric id is interpreted positively, and i(id) = 1 when it is interpreted negatively, in which case the score is reversed to 6 − v(id). Mq is the set of metrics related to question q.
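As an illustration, the sketch below (illustrative names, assuming the relevance-weighted mean reading of the formulas above) computes a goal score from question scores and relevance values; run on the community-trustworthiness figures later reported in Table 8, it reproduces the baseline value of 2.36.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Metric:
    score: int               # v(id), a discrete value in [1, 5]
    negative: bool = False   # i(id) = 1: a high value is interpreted negatively

    def effective(self) -> float:
        return 6 - self.score if self.negative else self.score

@dataclass
class Question:
    relevance: int                          # r_q in [1, 5]
    metrics: Optional[List[Metric]] = None
    score: Optional[float] = None           # used when the question score is given directly

    def value(self) -> float:
        if self.metrics:
            return sum(m.effective() for m in self.metrics) / len(self.metrics)
        return self.score

def goal_score(questions: List[Question]) -> float:
    """Relevance-weighted mean of the question scores: score(g)."""
    return (sum(q.relevance * q.value() for q in questions)
            / sum(q.relevance for q in questions))

# Goal 2 for Compiere, using the question scores and FlOSS relevance values of Table 8:
goal2 = [Question(2, score=2.0), Question(4, score=2.60), Question(5, score=2.44),
         Question(2, score=3.44), Question(4, score=1.67)]
print(round(goal_score(goal2), 2))  # 2.36, the EFFORT baseline value reported in Table 8
```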
4 Case Study
This section discusses the case study that was performed to verify the applicability of EFFORT. With this in mind, an ERP open source project was evaluated. The chosen project was Compiere [13]. This required the customization of EFFORT to the ERP context and led to the definition of a framework specialized for evaluating FlOSS ERP systems. The evaluation of FlOSS ERPs has already been investigated in the literature, and some frameworks have been proposed.
Based on a set of aspects to be investigated in a software system, Birdogan and Kemal [14] propose an approach that identifies and groups the main criteria for selecting an ERP system. Evaluation-Matrix (http://evaluation-matrix.com) is a platform for comparing management software systems. The approach pursues two main goals: constructing a list of characteristics representing the most common needs of the user; and having at one's disposal a tool for evaluating the available software systems. Open Source ERP Guru [15] is a web site offering support to users in the identification of an ERP open source solution to be adopted in their organization. It aims at providing an exhaustive comparison among open source ERP software systems. In [16], Reuther and Chattopadhyay performed a study to identify the main critical factors for selecting and implementing an ERP system to be adopted within an SME. The identified factors were grouped into the following categories: technical/functional requirements, business drivers, cost drivers, flexibility, scalability, and other factors specific to the application domain. This research was extended by Zirawani, Salihin and Habibollah [17], who reanalyzed it by considering the context of FlOSS projects. Wei, Chien and Wang [18] defined a framework for selecting ERP systems based on the AHP – Analytic Hierarchy Process – technique. This is a technique for supporting multiple-criteria decision problems, and it suggests how to determine the priority of a set of alternatives and the importance of the related attributes.
The listed models are quite heterogeneous, but they have the common goal of identifying critical factors for the selection of ERP systems. The Birdogan and Kemal model is the most complete. The most largely considered criteria regard functionality, usability, costs, support services, system reliability and customizability. The following two subsections discuss the specialization of EFFORT and the evaluation of an ERP system, respectively.
4.1 Specializing EFFORT
Goal 3 depends on the application context more than the other goals do. This is because every kind of software product comes to life to satisfy different needs. For instance, with reference to real-time software, the more efficient it is, the more attractive it is. Such a thing is not necessarily true for word processing software, of which the user requires ease of use and compliance with de facto standards. For this reason, Goal 3, which mainly regards the characteristics a software system should offer in order to be attractive, strongly depends on the application domain of the analysed software system and needs to be customized to the specific context. Therefore, EFFORT was extended and customized to the context of ERP systems to take into account additional attraction factors that are specific to this context.
The customization of EFFORT required the insertion of additional questions referred to Goal 3. In particular, the following aspects were considered:
Migration between different versions of the software, in terms of the support provided for switching from one release to another. In the context of ERP systems, this cannot be handled like a new installation, because it would be too costly,
taking into account that such systems are generally profoundly customized and host a lot of data;
System population, in terms of the support offered for importing large volumes of data into the system;
System configuration, intended as the support provided, in terms of functionality and documentation, for adapting the system to the specific needs of the company, such as localization and internationalization. The higher the system configurability, the lower the start-up time;
System customization, intended as the support provided, without direct access to the source code, for altering the system, such as defining new modules, installing extensions, personalizing reports and creating new workflows. This characteristic is very desirable in ERP systems.
Table 6 shows the questions extending Goal 3. As can be noticed, the new questions refer to the listed characteristics.

Table 6. Specialization of EFFORT for evaluating ERP systems
Q 3.5 What degree of support for migration between different releases is offered?
Q 3.6 What degree of support for population of the system is offered?
Q 3.7 What degree of support for configuration of the system is offered?
Q 3.8 What degree of support for customization of the system is offered?
The customization of Goal 3 also required the addition of entries concerning costs. In fact, the EFFORT baseline just considered the possibility of obtaining the product free of charge and the amount to be spent for an annual subscription. As this is not sufficient for ERP systems, the customization also includes the costs of customizing, configuring and populating the system and of migrating between releases.
In addition, knowing the application domain, all three goals required a better characterization of some aspects. In fact, additional metrics were considered beyond those of the general version of EFFORT. For instance, regarding Goal 1, the specialized version assigned a relevance value different from that of the baseline version to adaptability and replaceability (and, consequently, to portability). In fact, the number of supported DBMSs and the availability of a web client interface were considered for the adaptability characteristic, whereas aspects such as the availability of functionality for backing up and restoring data, the availability of backup services and the number of reporting formats were taken into account for the replaceability characteristic. Those aspects are not significant for other kinds of software products.
4.2 Evaluating an ERP FlOSS Project
The ERP-specialized version of EFFORT was applied to the evaluation of Compiere [13], a FlOSS ERP project. Compiere is one of the most diffused ERP Open Source systems and was therefore considered a relevant case study for validating the applicability of the framework. In addition, a comparison was carried out between the results obtained by evaluating Compiere with the baseline version of the EFFORT framework
and those obtained by applying the customized version of the framework. In the following, a summary of the evaluation results is given in table form for each goal described in the previous section. The data necessary for the application of the framework were mainly collected by analyzing different sources: documentation, software trackers, source code repositories and the official web sites of the project. Moreover, some other data were obtained by analyzing the source code and using the product itself. Finally, further data sources were some very useful web sites, such as sourceforge.net, freshmeat.net and ohloh.net.

Table 7. Results regarding Compiere Product Quality

                              Relevance          Score
Quality characteristic     FlOSS    ERP      Baseline   Specialized
Portability                  3       2        4.1        3.57
  Adaptability                                5          3.33
  Installability                              2.64       2.64
  Replaceability                              4.67       4.75
Maintainability              3       4        2.83       2.83
  Analyzability                               3          3
  Changeability                               2.8        2.8
  Testability                                 2.5        2.5
  Technology cohesion                         3          3
Reliability                  3       5        4.42       4.46
  Robustness                                  4.16       4.16
  Maturity                                    -          -
  Recoverability                              4.67       4.75
Functionality                5       5        4.13       3.96
  Functional adequacy                         3.25       3.25
  Interoperability                            5          4.67
Usability                    4       4        3.28       3.28
  Pleasantness                                2          2
  Operability                                 4          4
  Understandability                           3.89       3.89
  Learnability                                3.25       3.25
PRODUCT QUALITY (EFFORT baseline version)                3.77
PRODUCT QUALITY (EFFORT specialized version)             3.66
The “in vitro” nature of the experiment did not allow a realistic evaluation of the efficiency, so it was left out. Tables 7, 8 and 9 synthesize the obtained results. They list all the quality characteristics and the score obtained for each of them, together with the relevance indexes (also listed) of both the baseline EFFORT framework and its ERP customization. Table 7 shows the results regarding the quality of the Compiere software product. It presents two columns of scores: the “Baseline” column is obtained by considering only the metrics of the general version of EFFORT, while the “Specialized” column contains the results obtained by means of all the metrics of the customized version of EFFORT. The global scores obtained with the two different relevance criteria are substantially the same for Compiere product quality. It can be observed that the Compiere product is characterized by more than sufficient quality. With a detailed analysis of the sub-characteristics, it can be noticed that
product offers a good degree of portability and functionality, excellent reliability and sufficient usability. Concerning product quality, the lowest value obtained by Compiere is related to the maintainability. With reference to the reliability, the characteristic with higher score, a very satisfying value was achieved by the robustness, in terms of age, small amount of post discovered release bugs, low defect density, defect per module and index of unsolved bugs. An even higher value was obtained for the recoverability, measured in terms of availability of backup and restore functions and services. Concerning maintainability, the lower score, it was evaluated by mainly using CK metrics [2], associated to the related sub-characteristics. For instance, the mediumlow value for testability of Compiere depends on high average number of children (NOC) of classes, number of attributes (NOA) and overridden methods (NOM), as well as little availability of built in test functions. Values of cyclomatic complexity (VG) and dept of inheritance tree (DIT) are on the average. Table 8 report data regarding community trustworthiness. In Goals 2 and 3, the hierarchy of the considered characteristics has one less level. Moreover, aspects of this goal are completely generalizable for all FlOSS projects so anything of this part of the EFFORT framework changes, but the relevance. The score obtained by Compiere for community trustworthiness is definitely lower with respect to product quality. In particular, community behind Compiere is not particularly active; in fact, average number of major release per year, average number of commits per year and closed bugs percentage assume low values. Support tools are poorly used. In particular, low activity in official forums was registered. Documentation available free of charge was of small dimension; while the support by services results was more than sufficient, even if it was available just for commercial editions of the product. This reflects the business model of Compiere Inc., slightly distant from traditional open source model: product for free, support with fee. This time the evaluation by means of the specialized version of EFFORT gives better results for the Compiere community. That is the main reason for which the availability of support services was considered more important than community activity, in the ERP context. Table 8. Results regarding Community Trustworthiness QUALITY CHARACTERISTICS
QUALITY CHARACTERISTIC        RELEVANCE (FLOSS)   RELEVANCE (ERP)   SCORE
Developers Number                     2                  1           2
Community Activity                    4                  2           2,60
Support Tools                         5                  4           2,44
Support Services                      2                  4           3,44
Documentation                         4                  4           1,67
COMMUNITY TRUSTWORTHINESS (EFFORT baseline version)                  2,36
COMMUNITY TRUSTWORTHINESS (EFFORT specialized version)               2,43
As mentioned before, product attractiveness is the quality aspect that depends most on the operative context of the product. In this case, the related goal was extended with four additional questions, as explained in the previous section. The aim was to investigate how this extension influences the evaluation.
Results are shown in Table 9. Compiere offers good attractiveness, especially if the score obtained from the analysis performed with the EFFORT customized version is considered. In particular, a sufficient functional adequacy and an excellent legal reusability are obtained, the latter because users are left the possibility of choosing the license, even a commercial one. The Compiere product also turns out to be quite widespread. Diffusion was evaluated by measuring the following attributes: number of downloads, Freshmeat popularity index, number of SourceForge user ratings, positive SourceForge rating index, number of success stories, visibility on Google and number of official partners, as well as the number of published books, expert reviews and academic papers. The higher relevance of the cost aspects in ERP systems explains the different scores obtained with the baseline and specialized versions of EFFORT with reference to cost effectiveness in Table 9. As an ERP system, Compiere provides excellent customizability and data importability, as well as good configurability and migrability. The high values of these characteristics contribute to increasing the attractiveness, which goes from 3,42 to 3,96.

Table 9. Results about Product Attractiveness

QUALITY CHARACTERISTIC    RELEVANCE (FLOSS)   RELEVANCE (ERP)   SCORE (BASELINE)   SCORE (SPECIALIZED)
Functional Adequacy               5                  5               3,25                3,25
Diffusion                         4                  3               4                   4
Cost Effectiveness                3                  5               2,40                3,22
Legal Reusability                 1                  5               5                   5
Migrability                       -                  5               -                   3,67
Data Importability                -                  5               -                   5
Configurability                   -                  2               -                   3,89
Customizability                   -                  4               -                   4,67
PRODUCT ATTRACTIVENESS                                               3,42                3,96
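Although the paper does not spell out the aggregation formula, the global scores are consistent with a relevance-weighted mean of the characteristic scores: weighting the Table 8 scores by the FlOSS relevances reproduces the baseline total of 2,36, and weighting them by the ERP relevances approximates the specialized total. The sketch below is a minimal illustration under that assumption; the class and method names are ours and not part of EFFORT.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: aggregate characteristic scores with a relevance-weighted mean.
// The scores and relevances are taken from Table 8; the aggregation formula itself
// is an assumption, since the paper does not state it explicitly.
public class EffortAggregationSketch {

    static double weightedMean(Map<String, Double> scores, Map<String, Integer> relevance) {
        double weightedSum = 0.0;
        double weightTotal = 0.0;
        for (Map.Entry<String, Double> entry : scores.entrySet()) {
            int weight = relevance.getOrDefault(entry.getKey(), 0);
            weightedSum += entry.getValue() * weight;
            weightTotal += weight;
        }
        return weightTotal == 0.0 ? 0.0 : weightedSum / weightTotal;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = new LinkedHashMap<>();
        scores.put("Developers Number", 2.0);
        scores.put("Community Activity", 2.60);
        scores.put("Support Tools", 2.44);
        scores.put("Support Services", 3.44);
        scores.put("Documentation", 1.67);

        // Relevance indexes of the baseline (FlOSS) and specialized (ERP) versions.
        Map<String, Integer> floss = new LinkedHashMap<>();
        floss.put("Developers Number", 2);
        floss.put("Community Activity", 4);
        floss.put("Support Tools", 5);
        floss.put("Support Services", 2);
        floss.put("Documentation", 4);

        Map<String, Integer> erp = new LinkedHashMap<>();
        erp.put("Developers Number", 1);
        erp.put("Community Activity", 2);
        erp.put("Support Tools", 4);
        erp.put("Support Services", 4);
        erp.put("Documentation", 4);

        System.out.printf("Community trustworthiness, baseline relevance:    %.2f%n", weightedMean(scores, floss));
        System.out.printf("Community trustworthiness, specialized relevance: %.2f%n", weightedMean(scores, erp));
    }
}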
5 Conclusions

This work has been motivated by the need for tools and models to characterize and evaluate the quality of FlOSS projects, covering both the quality characteristics of the product and the peculiar aspects of such projects. In particular, when an organization has to take decisions regarding the introduction of an ERP system into its operative processes, the availability of methodological and technical tools supporting the adoption process is desirable. This paper presents EFFORT, a framework for the evaluation of FlOSS projects, and its customization to explicitly fit the ERP software system domain. The specialization mainly regarded the characterization of product attractiveness. The proposed framework is compliant with the ISO/IEC 9126 standard for product quality. In fact, it considers all the characteristics defined by the standard model, except quality in use. Moreover, it considers the major aspects of FlOSS projects and has been specialized for ERP systems. The applicability of the framework is described through a case study. Indeed, EFFORT was used to evaluate Compiere, one of the most widespread FlOSS ERP systems. The
obtained results are quite good for product quality and product attractiveness. They are less positive with reference to community trustworthiness. Future investigation will regard the integration into the framework of a questionnaire for evaluating customer satisfaction. This obviously entails more complex analyses. In particular, methods and techniques specialized for addressing this aspect will be explored and defined. In addition, the authors will continue to search for additional evidence of the usefulness and applicability of EFFORT and its customizations, by conducting additional studies also involving subjects working in operative settings.
References

1. International Organization for Standardization: ISO standard 9126: Software Engineering – Product Quality, part 1-4. ISO/IEC (2001-2004)
2. Chidamber, S.R., Kemerer, C.F.: A metrics suite for object oriented design. IEEE Transactions on Software Engineering 20(6), 476–493 (1994)
3. Kamseu, F., Habra, N.: Adoption of open source software: Is it the matter of quality? PReCISE, Computer Science Faculty, University of Namur, rue Grandgagnage, Belgium (2009)
4. Sung, W.J., Kim, J.H., Rhew, S.Y.: A quality model for open source selection. In: Proceedings of the IEEE Sixth International Conference on Advanced Language Processing and Web Information Technology, China, pp. 515–519 (2007)
5. Wheeler, D.A.: How to evaluate open source software/free software (OSS/FS) programs (2009), http://www.dwheeler.com/oss_fs_why.html
6. QSOS: Method for Qualification and Selection of Open Source software. Atos Origin (2006)
7. Deprez, J.C., Alexandre, S.: Comparing Assessment Methodologies for Free/Open Source Software: OpenBRR & QSOS. In: PROFES 2008, pp. 189–203 (2008)
8. OpenBRR: Business Readiness for Open Source. Intel (2005)
9. Del Bianco, V., Lavazza, L., Morasca, S., Taibi, D.: The observed characteristics and relevant factors used for assessing the trustworthiness of OSS products and artefacts. QualiPSo (2008)
10. Basili, V.R., Caldiera, G., Rombach, H.D.: The goal question metric approach. In: Encyclopedia of Software Engineering. Wiley Publishers, Chichester (1994)
11. Kan, S.H., Basili, V.R., Shapiro, L.N.: Software quality: an overview from the perspective of total quality management. IBM Systems Journal (1994)
12. Aversano, L., Pennino, I., Tortorella, M.: Evaluating the Quality of Free/Open Source Projects. In: INSTICC Proceedings of the ENASE (Evaluation of Novel Approaches to Software Engineering) Conference, Athens, Greece (2010)
13. Compiere: Open Source ERP Solution, http://www.compiere.com/
14. Birdogan, B., Kemal, C.: Determining the ERP package-selecting criteria: The case of Turkish manufacturing companies. Business Process Management Journal 11(1), 75–86 (2005)
15. Open Source ERP Guru: Evaluation Criteria for Open Source ERP selection (2008), http://opensourceerpguru.com/2008/01/08/10-evaluationcriteria-for-open-source-erp/
16. Reuther, D., Chattopadhyay, G.: Critical Factors for Enterprise Resources Planning System Selection and Implementation Projects within Small to Medium Enterprises. In: Proc. of IEEE International Engineering Management Conference 2004, pp. 851–855 (2004)
17. Zirawani, B., Salihin, M.N., Habibollah, H.: Critical Factors to Ensure the Successful of OS-ERP Implementation Based on Technical Requirement Point of View. In: IEEE Third Asia International Conference on Modelling & Simulation, AMS 2009, pp. 419–424 (2009)
18. Wei, C.C., Chien, C.F., Wang, M.J.J.: An AHP-based approach to ERP system selection. International Journal of Production Economics 96(1), 47–62 (2004)
Business Object Query Language as Data Access API in ERP Systems

Vadym Borovskiy, Wolfgang Koch, and Alexander Zeier

Hasso Plattner Institute for Software Systems Engineering, Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany
Abstract. An efficient data manipulation API is a necessary prerequisite for satisfying a number of acute needs of ERP system developers and, eventually, end-users. The current work defines efficiency as the ability to access and manipulate ERP data at any granularity level while maintaining the integrity of the data. This paper contributes the concept of query-like service invocation implemented in the form of a business object query language (BOQL). Essentially, BOQL provides on-the-fly orchestration of the CRUD operations of business objects in an ERP system and allows both the flexibility of SQL and the encapsulation of SOA to be achieved. A special design effort has been dedicated to making BOQL scalable and resistant to changes in the data schema of the system. To demonstrate the power of the suggested concept, navigation, configuration and composite application development scenarios are presented in the paper. All suggestions have been prototyped with the Microsoft .NET platform.

Keywords: ERP data access, Business object query language, ERP system architecture, Navigation in ERP systems, Enterprise composite applications, UI customization.
1 Introduction

Flexible data access is essential for Enterprise Resource Planning (ERP) systems. Efficient retrieval and manipulation of data are required to address a number of acute needs of ERP system users and application developers. By interviewing users of ERP systems we found out that the two most common requirements of everyday system users are: (i) efficient navigation among ERP data and (ii) user-specific configuration. The first one means dynamic and fully automatic discovery of information semantically relevant to a given user in a given context. For example, when a sales manager views customer details, the system provides a list of links to invoices not paid by the customer or a list of links to products ordered over the last six months by this customer. By clicking the provided links (basically shortcuts), the manager can navigate to the relevant data with minimum effort. The second requirement means that a pre-packaged system must support user-specific views on top of standard data structures. For example, a sales order in SAP R/3 has hundreds of fields, most of which are never used. Despite this, all fields are displayed on the sales order entry form. This complicates the form and slows down order processing. To satisfy these requirements an ERP system must support data retrieval and manipulation at any granularity level. Indeed, if an ERP system had an application programming
interface (API) allowing any piece of data to be queried and dispersed data pieces to be assembled into a single result set, the two requirements would be feasible to fulfill. In fact, a solution to the second one would become trivial: a personal configuration would be nothing else but a set of queries returning/processing only the relevant attributes. The navigation challenge could also be resolved in the same way: the list of links could be constructed based on the result sets of queries. More details on this can be found in Section 3. In addition to the user needs, flexible data access is relevant for ERP application developers in a number of use cases. A series of interviews conducted by the authors revealed the need for an API as discussed above for integration and extension scenarios. In the first case, the lack of a convenient API prevents application developers from exchanging data between an ERP system and other software used by an enterprise. In the second case, the current poor APIs significantly complicate the development of enterprise composite applications (ECAs) on top of ERP systems. Enterprise composite applications have become a primary tool for the extension and adaptation of enterprise software. They allow application developers to add features to ERP systems and, thus, to view an ERP system not as a product but as a platform exposing data and functionality, which can be reused and recombined in new ways. Turning an ERP system into such a platform inevitably requires a quite high degree of system openness and, thus, an appropriate API. The current work contributes the concept of query-like service invocation implemented in the form of a business object query language (BOQL). BOQL is the cornerstone of the API, offering both the flexibility of SQL and the encapsulation of SOA. Section 2 of the paper presents this concept along with a prototype demonstrating the idea in practice. Section 3 demonstrates the application of BOQL to the challenges and needs discussed above. Section 4 concludes the paper.
2 Flexible Data Access

The data access API of an ERP system highly depends on the internal details of the system. Not all architectures can efficiently support data access from outside of the system. In fact, most systems do not allow such access at all. Data accessibility means the ability to access the data model (or metadata) and the actual data storage. In other words, users need to know how their business data is structured and how they can retrieve the data.

2.1 State of the Art

A straightforward approach to data access could be to use SQL. Since ERP systems rely on relational databases, SQL statements could be issued directly against the databases to retrieve the required data. Although SQL is natively supported by the underlying databases, this approach is unlikely to deliver the expected results. SQL statements need to be written against the actual schema of a database. The schema of an ERP system is very complex and is not intended to be used directly by customers. In fact, it is considered to be private and therefore is hidden from users. ERP vendors can use the concept of views on top of the internal database. On one hand, views can hide data organization and internals. On the other hand, views can reduce the essential complexity of the schema from a customer's perspective by exposing only a subset of the schema
relevant to a given customer or a group of customers. So why not use SQL against views as the data access API? The problem with this approach is that it violates the data encapsulation principle. Basically, SQL against views exposes too much control over the underlying database and greatly increases the risk of corrupting data in the system. An ERP system is not only a collection of structured data, but also a set of business rules that apply to the data. Generally these rules are not part of the system's database. Direct access to the database circumvents the rules and implies the violation of data integrity. Therefore, to enforce the rules, direct access to the data by any means is strictly prohibited. To enforce business rules and further increase encapsulation, semantically related pieces of data are grouped together with business logic (expressed in a programming language) to form a monolithic construct called a business object. This allows the actual data storage to be hidden behind an object-oriented layer. Grouping data and business logic in business objects simplifies the consumption of data from a programming language. For these reasons ERP systems can be seen as object-oriented databases. In this case SQL against views is inappropriate as a data access API. An alternative to SQL against views can be the data-as-a-service approach. In this case a system exposes a number of Web services with strictly defined semantics. This approach has the advantage of hiding the internal organization of data. Instead of a data schema, a set of operations that return data is exposed by the system. By choosing operations and calling them in an appropriate sequence, the required data can be retrieved. Because it uses Web services, this approach is platform independent. In fact, the data-as-a-service approach has been very popular. SAP, for instance, has defined hundreds of Web service operations that access data in the Business Suite. The Amazon Electronic Commerce service is another example of this approach. However, this method has two serious disadvantages: lack of flexibility and a high cost of change. Although an ERP system vendor can define many data access operations, they will never cover all possible combinations of data pieces of an ERP system. Often these operations are limited to one business object. ECAs, on the other hand, address very specific or fine-granular needs and deliver value by assembling information coming from different locations of a system. Therefore, the granularity of data services does not match the granularity of ECAs' operations and the services cannot provide adequate support for the ECAs. For this reason ECAs need to issue multiple service calls and combine a result set on their own. This greatly complicates the development of ECAs and undermines their performance. This situation clearly demonstrates the advantage of the SQL against views approach. The ability to construct fine-granular queries that fully match the information needs of ECAs makes SQL a much more flexible API than data as a service. The high cost of change has to do with evolution. Over its lifecycle an ERP system will go through a number of changes. If these changes affect the internal data organization, most probably all data services that work with the affected data structures will need revision. In the worst case a subset of data service operations may become irrelevant and require full substitution. Revising these operations is costly.
The situation is exacerbated if changes to service operations make their new versions incompatible with previous ones. This implies failures in ECAs that already consume the previous versions. The SQL against views approach has a lower cost of change. A set of views that map the old schema to the
new one localizes changes at the database level and does not require the revision of all outstanding ECAs. As one can see, both approaches have advantages and disadvantages. SQL as a data access API gives great flexibility by allowing the construction of queries that match the granularity of a user's information needs. However, SQL exposes too much control over the database, circumvents business logic rules and binds ECAs to a specific implementation platform. The data-as-a-service approach, on the other hand, enforces business rules by exposing a set of Web operations which encapsulate data access and hide the data organization. However, the granularity of the exposed operations does not match that of a user's needs, which creates inflexibility and hurts performance. Furthermore, the data-as-a-service approach has a high cost of change.

2.2 Business Object Query Language

In this subsection we contribute an idea of how to combine the advantages of the discussed approaches while eliminating their disadvantages, and propose a concept called business object query language (BOQL). It is clear that accessing raw data directly and circumventing business logic contradicts data encapsulation. For this reason business objects appeared. They fully control the access to the data and protect the integrity of the data. From an external perspective, business objects are simply a collection of semantically related data, e.g. invoice, bill of material, purchase order, and a number of business operations that can be performed on the data, e.g. read, create, etc. A business object can be represented as a set of data fields or attributes, e.g. id, count, name, and associations or links with other business objects, e.g. a SalesOrder is associated with Customer and Product business objects. Despite the diverse semantics of business objects, they all have the same structure (an array of attributes and associations) and behavior (a set of operations). The most basic set of operations a business object supports is known as CRUD - Create, Retrieve, Update, Delete. Although very generic, this set of operations has the advantage that any business object can support it. Therefore, all business objects can be derived from the same base class featuring the mentioned arrays (of attributes and associations) and the CRUD operations. Such uniform behavior and structure allow the introduction of a query language for business objects, very much like SQL for relational entities. We propose the following scenario:

1. A programmer composes a query, the description of what to retrieve from the system, according to some SQL-like grammar and sends the query as a string to the system via a generic service operation, for example ExecuteQuery.
2. The system parses the string to detect the present clauses (from, select, where, etc.) and builds a query tree - an internal representation of the query. The tree is then passed for further processing to a query execution runtime, very much like in a DBMS.
3. Using the from clause, the runtime obtains references to the business objects from which the retrieval must be performed: the source business objects. Then the runtime traverses the query tree in a specific order and converts the recognized query tokens to appropriate operation calls on the source business objects. For example, tokens from the select clause are converted to Retrieve or RetrieveByAssociationChain operations, while tokens from the update clause are converted to Update operations.
4. Having constructed the call sequence, the runtime binds the corresponding string tokens to the input parameters of the CRUD operations. For example, the token Customer.Name of the select clause is interpreted as a call to the Retrieve operation with the input parameter value "Name" on the business object Customer. Now everything is ready to perform the calls of the CRUD operations in the sequence constructed on the fly. The last step the runtime performs is the composition of the result set. After that, the result is formatted in XML and sent back to the calling program.

In its essence, the query language performs an orchestration of calls to the objects' operations based on user-defined queries. These queries are transformed into a sequence of operation calls that yield the required data. The business object query language has the advantage of supporting fine-grained queries, as in the case of SQL, without circumventing business rules, as in the case of the data-as-a-service approach. Such an approach is made possible by the uniform representation of business objects (in terms of structure and behavior). Although the suggested approach offers a great extent of data access flexibility, it creates a number of challenges. The first one is the development of the query parser. If an ERP provider decides to support most of the SQL features/syntax, the implementation of the parser (and runtime) will require high effort. Therefore, there is no sense in pursuing the suggested approach for systems with a simple data schema. For large systems like SAP Business ByDesign, where there are hundreds of business objects with dozens of associations and attributes each, this approach is preferable. The parser and the runtime are independent of the business objects. Once both have been developed, a provider can use them without changes to expose new objects. Our second concern regarding the business object query language is performance. Users may create very complex queries and thus put a high workload on the system. Given the fact that the system is planned to be shared by many customers, a high workload generated by one customer application can potentially affect the performance of other customers' applications. What is more disturbing is the fact that arbitrary queries may destroy the performance of the underlying database. Every database management system has an internal query optimizer and a set of system tables accumulating operational statistics. Given the statistics and the database metadata (primarily the information on indices and the size of tables), the optimizer computes the query execution plan that ensures the highest possible performance for a given query in a given situation. The problem here is that the optimizer adjusts itself to the most frequent types of queries. But from time to time it will encounter a query for which it generates a plan much worse than its default plan, meaning that optimization will only worsen the performance. Such situations happen periodically with many databases. The reason for this is that the internal statistics have been computed for completely different types of queries and the system has tuned itself for those queries. In such situations the optimizer can generate a weird plan (e.g. use a wrong index) and the query will block the system for some time. In the worst case such rare queries may trigger the recalculation of the statistical information and a readjustment of the optimizer, which blocks the whole system for a much longer time.
In our implementation we did not experience such performance problems, because we cached the data in in-memory tables and did not issue any queries on the fly against the underlying database. In fact, using in-memory data storage solves the performance problem mentioned above. Nevertheless, we believe that this problem is worth additional research and investigation.
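To make the orchestration idea concrete, the following minimal sketch (ours, not the authors' .NET prototype; the class and method names are illustrative) shows how a select token such as Customer.Name can be translated into a call to the generic Retrieve operation of a uniformly structured business object taken from the engine's pool.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the orchestration idea behind BOQL (illustrative only, not the
// authors' .NET prototype): every business object exposes the same generic
// Retrieve operation, so select tokens can be translated into operation calls.
class BusinessObject {
    private final Map<String, Object> attributes = new HashMap<>();

    BusinessObject set(String name, Object value) {
        attributes.put(name, value);
        return this;
    }

    // Generic CRUD-style accessor: "Retrieve" in the paper's terminology.
    Object retrieve(String attributeName) {
        return attributes.get(attributeName);
    }
}

public class BoqlOrchestrationSketch {
    // Pool of business objects managed by a (here trivial) business object engine.
    private static final Map<String, BusinessObject> pool = new HashMap<>();

    // Interprets a select token such as "Customer.Name" as a Retrieve call
    // on the source business object named in the from clause.
    static Object executeSelectToken(String token) {
        String[] parts = token.split("\\.");
        BusinessObject source = pool.get(parts[0]);
        return source.retrieve(parts[1]);
    }

    public static void main(String[] args) {
        pool.put("Customer", new BusinessObject().set("Name", "ACME Corp").set("Id", 42));
        for (String token : List.of("Customer.Name", "Customer.Id")) {
            System.out.println(token + " -> " + executeSelectToken(token));
        }
    }
}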
Fig. 1. The architecture of the test system
2.3 Suggested Architecture

The current subsection demonstrates how an ERP system can support BOQL. Figure 1 sketches the architecture of the prototyped system. BOQL is implemented by two elements: a business object engine and a query engine. The former manages business objects in the way BOQL assumes (i.e., it guarantees compliance with the CRUD interface), and the latter provides access to them from outside of the owning process via a query-like interface. These two elements are instances of the BoEngine and QueryEngine classes, respectively. Both are created at the system's startup time. The business object engine is instantiated first, to assemble the business objects and store references to them in a pool. Then the instance of the query engine is created. It has access to the pool and thus can manipulate the objects. Every business object encapsulates an in-memory table to cache data. The in-memory table is populated with data taken from a private database. Every object also encapsulates logic to synchronize its in-memory table with the database. From the perspective of the query execution runtime, an object is seen through its interface: a collection of attributes and associations to other objects and a set of operations. How these are implemented is completely hidden inside the object. Typically, attributes and associations are bound to data fields and relations of the underlying physical storage. In this prototype we concentrate on two operations (accessors) from the interface: Retrieve, to get attributes of a given object, and RetrieveByAssociationChain, to navigate from one object to another via specified associations. These operations retrieve data from the underlying physical storage or the
local cache according to the internal business logic. By default, the accessors assume a one-to-one correspondence between a business object's logical and physical schemas. For example, if an attribute Attr1 of a business object Bo1 is queried, the query runtime looks for a data field named Attr1 in the table corresponding to the given business object; if an association Assoc1 of Bo1 is queried, the runtime looks for a foreign key relationship corresponding to the association and constructs a join. Neither the business object nor its in-memory and database tables can be accessed directly from outside of the owning process. Direct access to the data is prohibited in order to enforce the integrity rules and the internal business logic implemented by the business objects. To access the business data, an external application must use the standardized query-like interface exposed by the query engine. When the latter receives a query, it parses it and transforms the recognized tokens into corresponding operation invocations. The result of these invocations is put in XML format and sent back to the client application. As an implementation platform for the prototype we chose .NET. The system is implemented as a Windows service and the query interface is published as a Web service hosted by Microsoft Internet Information Services (IIS). The Web service is meant to dispatch a query to the system and serves as a request entry point. There is no other way to invoke or access the system except for issuing a call to the Web service. The physical data storage is implemented as a Microsoft SQL Server 2005 database.

2.4 Scalability of BOQL

One of the current trends in ERP systems development is the switch to the software-as-a-service (SaaS) model. No maintenance costs and a better pricing model based on actual usage drive organizations to favor SaaS ERP solutions over traditional on-premise ones. Since the service provider has to bear the cost of operating the system (rather than the consumer), the provider is strongly motivated to reduce the system's total cost of ownership. A well-designed hosted service reduces the total cost of ownership by leveraging economies of scale, sharing all aspects of the IT infrastructure among a number of consumers. Consolidation of multiple consumers onto a single operational system, however, comes at the price of increased system complexity due to multi-tenancy at the database layer. The inevitable consequence of multi-tenancy is an increased workload: the system has to handle requests from many more consumers in comparison to a single-tenant system. Therefore, scalability in handling data requests becomes a crucial characteristic of an ERP system. In order to make BOQL scalable, the concurrent execution of independent queries was ensured in the prototype. The concurrency was employed in all components of the system: the query engine and the business object engine. This was achieved by (i) making the parser and runtime multi-threaded and (ii) executing the code of each business object in a dedicated thread. Because the available hardware capacity (namely the number of available CPUs/cores) was limited, we used a centralized pool of threads in order to avoid unproductive thread preemption and context switching. In addition to running multiple queries in parallel, BOQL supports intra-query parallelism, meaning that, whenever possible, the service calls within a query are executed in parallel.
To figure out which calls to execute at the same time, we analyze the abstract syntax tree built by the parser for a given query and issue every call that does not rely on not-yet-retrieved data in a separate (dedicated) thread.
Fig. 2. Business Object Graph
The result of these invocations is assembled in a single XML document and sent back to the client application.

2.5 Exposing Business Object Model

Using BOQL requires knowledge of the business object model, that is, which business objects a system has and which attributes and associations every business object has. To communicate this information we use oriented graphs. The vertices of a graph denote business objects and the oriented edges denote associations. A set of attributes is attached to every vertex (see Figure 2). For the sake of compactness we will not list the attributes on further diagrams. The graph plays the same role for business objects as the schema for a database. It depicts the structure of the business data, and knowing it is essential for composing queries. We have developed a tool called Schema Explorer that automatically retrieves metadata from the test system and builds a business object graph. Such a tool greatly simplifies the creation of BOQL queries. It provides plenty of useful functionality: business object search, association and attribute search, finding connections/paths between any two business objects, displaying a business object graph or a part of it, IntelliSense support for the query editor, test execution of a query, to name just a few. The implementation of the metadata retrieval depends on the implementation of the backend system. For the implementation presented in Subsection 2.3, the metadata about the instances of business objects is obtained using the reflection mechanism of the .NET Framework. The query engine exposes a number of operations which internally use reflection in order to query the business object metadata. For example, if a developer wants to know which business objects a system has, the tool calls an operation which scans the pool and obtains the types of the business objects instantiated by the system. To look up the list of associations of a given business object, say Customer, the tool issues a call to another operation that gets the names of the elements in the Associations array of the corresponding business object. Because the business object model does not change (or at least should not), we cache the metadata in order to avoid the reflection overhead.

2.6 Sustaining Changes

Over their life cycle, the data structures in an ERP system may change. In this case, old client applications may suffer. Ideally, a system must never reject BOQL queries
from applications that successfully interacted with previous versions of the system's business object graph, and it should compensate for the differences in data structures in order to preserve the integrity of the applications. The ability of BOQL to sustain changes of the business object graph is not straightforward to see. In fact, if the business object graph changes, previously written queries, and thus all applications issuing these queries, may become invalid. The situation is the same as in the case of changing the schema of a database: existing SQL queries are not guaranteed successful execution. Therefore, the suggested architecture must be augmented with a compatibility assurance tool that minimizes incompatibility in the case of changes to the business object graph. A literature review showed that in the case of relational and object-oriented databases the view mechanism can be applied to cope with schema changes [4], [1], [3]. The prime tool for view support in relational and object-oriented databases is mapping [2], [7], [5], [6]. The architecture we propose natively supports mappings. That is, mappings can be seamlessly embedded into the system. The mappings can be supported in two ways: by means of data access plug-ins and by query rewriting. The first approach is based on the substitution of the default association and attribute accessors (Retrieve and RetrieveByAssociationChain) with custom ones. This is achieved by encapsulating a new accessor inside a call-back operation and dynamically selecting this operation when handling a data request. Associations and attributes that have or require custom accessors are called virtual. The set of all virtual elements of a business object is called the view of the business object. Every time a virtual element is called, a custom accessor implementing the mapping is invoked. This accessor is implemented as a callback operation of a feature pack that is dynamically loaded by the system when it encounters a query addressed to the older version of the schema for the first time. Having loaded the feature pack, the system plugs the custom accessor into the runtime of the query engine. In the prototyped system we used the reflection mechanism to redirect calls from the default to the custom accessors. The second approach is based on altering a query while it is being parsed. Before converting a query token into an appropriate operation call, the parser looks up a correspondence dictionary to find the actual path to the requested attribute or association, rewrites the token and re-parses it to obtain a valid operation call.
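The second approach can be illustrated with the small sketch below (ours; the dictionary entries are hypothetical examples): before a token is converted into an operation call, it is looked up in a correspondence dictionary that maps paths of an older business object graph version to their current counterparts.

import java.util.Map;

// Minimal sketch of token rewriting against a correspondence dictionary
// (illustrative only; the old/new paths below are hypothetical examples).
public class QueryRewritingSketch {

    // Maps attribute/association paths of an older business object graph version
    // to the paths valid in the current version.
    private static final Map<String, String> correspondence = Map.of(
            "Customer.Address~zip", "Customer.Contact.Address~zip",
            "SalesOrder.Buyer", "SalesOrder.Contact.Customer");

    static String rewriteToken(String token) {
        // Tokens without an entry are assumed to be valid as-is.
        return correspondence.getOrDefault(token, token);
    }

    public static void main(String[] args) {
        System.out.println(rewriteToken("Customer.Address~zip"));   // rewritten
        System.out.println(rewriteToken("Customer~name"));          // unchanged
    }
}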
3 Applying BOQL

3.1 Profile-Based Configuration

The idea of profile-based user configuration aims at enabling the end-users of a system to customize the system's presentation layer according to their own preferences. This is achieved as follows. With the help of Schema Explorer, end-users discover the information in an ERP system that they are interested in. They simply select the business objects of interest and the attributes they want to see for every object. Then, with the help of the same tool, they generate the BOQL queries that retrieve or change these data and store the queries in a structured way in a personal profile. Next, when a user logs in to the system, the latter picks up the user's profile, executes the necessary queries and presents the results to the user.
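As an illustration of how such a profile can drive query generation, the sketch below (ours; the profile layout and the generated query syntax are assumptions based on the examples in this section) turns a list of business objects and their selected fields into one BOQL select statement per object.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: generate one BOQL query per profile entry.
// The profile layout is an assumption modelled on the example profile of Figure 3.
public class ProfileQuerySketch {

    static String toQuery(String businessObject, List<String> fields) {
        StringBuilder select = new StringBuilder("SELECT ");
        for (int i = 0; i < fields.size(); i++) {
            if (i > 0) select.append(", ");
            select.append(businessObject).append("~").append(fields.get(i));
        }
        return select.append(" FROM ").append(businessObject).toString();
    }

    public static void main(String[] args) {
        // A user-specific profile: business objects of interest and their visible fields.
        Map<String, List<String>> profile = new LinkedHashMap<>();
        profile.put("Customer", List.of("name", "status", "id"));
        profile.put("SalesOrder", List.of("id", "status"));

        profile.forEach((bo, fields) -> System.out.println(toQuery(bo, fields)));
    }
}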
In our prototype we have implemented the user interface layer with Microsoft Silverlight. The UI is capable of automatically generating three different types of frontend forms: (i) business object summary form: lists all instances of a certain type with a short description for each instance; (ii) drilldown form: lists detailed information on a particular business object instance; (iii) related items form: lists instances of other business objects that are connected with a given object instance. Figure 3 illustrates two forms: the upper one is a summary of all customers in the system, and the lower one is a drilldown form for a particular instance of a sales order business object. The related items form is essentially the same as the summary form; there is no difference in rendering them. The only difference is the actual BOQL query, the result of which populates the form. On the right side of the figure there is a profile from which the forms were automatically generated. One can see that a user called "Purchase01" is interested in the business objects Customer, Opportunity, Material, Sales order, etc. (all second-level elements of the tree). Within each object there is a list of fields that must be displayed for the object. For "Customer" the fields are name, status and id. An object instance can also have related items associated with it. For example, when the user looks up an instance of a sales order, they might be willing to navigate to the services, quotes and opportunities associated with this particular sales order. Note that the profile is fully configurable and no query is executed before the user has explicitly clicked a corresponding link. To save time and effort of end users, an ERP system vendor can create a number of role-specific configuration profiles, which users may adjust according to their needs.

3.2 Navigation in ERP Systems

An ERP system stores all facts and figures about a company's business activities in a structured way. Conventionally the data are stored in a normalized way. This means that data generated by the same business process are very likely to be split into pieces, which will be stored separately. Therefore, it is almost certain that semantically related data pieces will be kept disjoint by the system. Even though business objects aim at creating semantically complete entities, they do not reassemble the business data completely after the data are normalized at the storage level of the system. The bigger the company, the more diverse and complex the data structures get and the more complicated the reassembly becomes. This results in a partial loss of semantic links between data in the ERP system. The reason is that many semantic relationships are not modeled at the database level. For example, in the SAP R/3 system there is no direct physical relationship between customer and invoice entities. They are connected via a sales order. Therefore, if a user wants to know whether a customer paid their invoices, the user first needs to find all sales orders of the customer and then check all associated invoices. This is not practical. The field research we conducted showed that users of ERP systems struggle a lot with this problem. They often find that even though their system stores all the information they need, they cannot get it quickly because of the missing links. We have observed that a user opens on average 3 to 6 windows, performs 10 to 20 mouse clicks and types 15 to 30 characters to get semantically related data.
As a result, an employee spends a considerable amount of their working time searching for and looking up data in an ERP system.
Fig. 3. User interface forms and profile configuration
The ability to handle corporate data efficiently greatly improves the productivity of the company's employees. The opposite is also true: if an employee needs to spend a considerable amount of time to look up transactional or master data in an ERP system, the employee's productivity significantly drops. Unfortunately, this is often the case. BOQL in combination with the Schema Explorer and user profiles can solve this problem completely. To reconstruct the semantic links between business objects we run a path search algorithm on the business object graph. In the example above, the user would select in Schema Explorer the Customer business object as the starting point and CustomerInvoice as the target business object. The tool would then use a modification of a graph traversal algorithm to find a path from the source to the target and convert this path into a BOQL query. After this, the user just needs to update their profile (the Related Items section of the Customer business object). The next time the user opens the customer details screen, a corresponding link will appear (see Figure 3).
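A minimal sketch of the path search is given below (ours; the prototype uses a modification of a graph traversal algorithm whose details are not given here, and the association names in the example graph are hypothetical). It finds a chain of associations from a source to a target business object with a breadth-first search and turns the chain into a BOQL navigation path.

import java.util.*;

// Minimal sketch: breadth-first path search on a business object graph and
// conversion of the found path into an association chain for a BOQL query.
// The graph below is a small hypothetical excerpt (Customer -> SalesOrder -> CustomerInvoice).
public class PathSearchSketch {

    // Adjacency list: business object -> outgoing associations (edge label -> target object).
    private static final Map<String, Map<String, String>> graph = new HashMap<>();

    static List<String> findAssociationPath(String source, String target) {
        Map<String, List<String>> pathTo = new HashMap<>();
        pathTo.put(source, new ArrayList<>());
        Deque<String> queue = new ArrayDeque<>(List.of(source));
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (current.equals(target)) return pathTo.get(current);
            for (Map.Entry<String, String> assoc : graph.getOrDefault(current, Map.of()).entrySet()) {
                if (!pathTo.containsKey(assoc.getValue())) {
                    List<String> path = new ArrayList<>(pathTo.get(current));
                    path.add(assoc.getKey());
                    pathTo.put(assoc.getValue(), path);
                    queue.add(assoc.getValue());
                }
            }
        }
        return null; // no path found
    }

    public static void main(String[] args) {
        graph.put("Customer", Map.of("SalesOrders", "SalesOrder"));
        graph.put("SalesOrder", Map.of("Invoice", "CustomerInvoice", "Contact", "Contact"));

        List<String> path = findAssociationPath("Customer", "CustomerInvoice");
        // e.g. [SalesOrders, Invoice] -> "Customer.SalesOrders.Invoice~id"
        System.out.println("SELECT Customer." + String.join(".", path) + "~id FROM Customer");
    }
}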
3.3 Enterprise Composite Applications

A composite application (CA) is an application generated by combining content, presentation, or application functionality from a number of sources. CAs aim at combining these sources to create new useful applications or services [8]. An enterprise composite application is a CA which has an ERP system as one of its sources (from now on we consider only enterprise composite applications; for the sake of brevity, the terms "CA" and "ECA" are used interchangeably in the rest of the paper). CAs access their sources via a thoroughly specified application programming interface (API). The key characteristics of a CA are its limited/narrowed scope and its straightforward result set. CAs often address situational needs and provide replies to fine-grained information requests. They are not intended to provide complex solutions to general problems; rather, they offer compact answers to clear-cut questions. CAs create value by pulling all the data and services a user needs to perform a task onto a single screen. These data and services can potentially come from many sources, including an ERP system. Very often users are confronted with the problem of having the necessary information and functionality distributed across many forms. By creating a CA that assembles them on the same screen, users can substantially increase the productivity of their work. An additional benefit is that a CA can present information in a way that meets the personal preferences of a user. The architecture we suggest natively supports CAs. The main enabler of composite applications is BOQL, which basically provides a mechanism for the query-like invocation of business objects' services. BOQL allows CAs to manipulate ERP data from outside of a system without violating the internal business logic. Because the query engine supports the SOAP protocol, CAs can be developed and executed on any platform that is suitable for the user and has support for XML. We see the process of developing CAs as follows.

1. A user (a power user or an application programmer) figures out on which business objects they want to perform custom operations. This depends on the actual task and application domain. Then the user composes the BOQL queries that will return the data from the business objects. To compose the queries, the user can use the Object Explorer and Schema Explorer tools described earlier.
2. Using the SOAP interface of the query engine, the user executes the queries and retrieves the ERP data.
3. Using a programming language of their choice, the user develops code that operates on the selected data and performs the required operations.
4. In case the CA needs to change data in the source ERP system, it composes BOQL queries that do so and executes the queries using the same SOAP interface of the query engine.

In the following use case the flexible data access of the suggested approach creates business value. Consider a Web retailer that sells items on-line and subcontracts a logistics provider to ship the sold products to customers. The retailer operates in a geographically large market, e.g. the US or Europe (the greater the territory and the higher the sales volume, the more relevant the case). In this situation the consolidated shipment of
items can generate considerable savings in delivery and thus increase the profit of the retailer. Consolidation means that a number of sold items are grouped into a single bulk and sent as one shipment. The bigger the shipment size, the higher the bargaining power of the retailer when negotiating the shipment with a logistics provider. The savings come from price discounts gained from higher transportation volume, the so-called economies of scale in transportation. In this way the retailer can lower the transportation cost per sold product. The consolidation of shipments is done anyway by all logistics providers in order to minimize their operating costs (in fact, the economies of scale in transportation resulted in the hub-and-spoke topology of transportation networks). By controlling the delivery of sold items explicitly, the retailer captures the savings that would otherwise go to the logistics provider. Let the retailer use a system with a business object graph as presented in Figure 4 to manage their sales. We assume that the system exposes a query-like Web service interface as described in Section 2.3. The query returning the shipping address for all sales order items that are to be delivered looks as follows:

SELECT SO.Items~id, SO~id, SO.Contact.Customer~name,
       SO.Contact.Address~street, SO.Contact.Address~number,
       SO.Contact.Address~zip, SO.Contact.Address~city,
       SO.Contact.Address~state, SO.Contact.Address~country
FROM SalesOrder As SO
WHERE SO.Status = "ToDeliver"
GROUP BY SO.Contact.Address~city, SO.Contact.Address~state
By invoking the query-like Web service and passing the above query to it, a third-party application consolidates the items by their destination. The next step for the application is to submit a request for quote to a logistics provider and get the price of transporting each group of items. Many logistics providers have a dedicated service interface for this, so the application can complete this step automatically. Once the quote has been obtained and the price is appropriate, the products can be packaged and picked up by the logistics provider. To enable applications like the one just described, the system must expose a flexible data access API. That is, the system must be able to return any piece of data it stores and construct the result set in a user-defined way. As mentioned in Section 2, traditional APIs cannot completely fulfill this requirement: SQL against views circumvents the business rules enforced outside the database; Web services limit the retrievable data to a fixed, predefined set. The architecture suggested in the current work overcomes the existing limitations and offers the necessary degree of data access flexibility.
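As an illustration of the consolidation step, the sketch below (ours; the XML layout of the result set is an assumption, since the paper only states that results are returned in XML) groups the sales order items returned by the query by their destination city and state.

import java.io.StringReader;
import java.util.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import org.xml.sax.InputSource;

// Minimal sketch: group the XML result of the shipping-address query by destination.
// The <row> element layout is a hypothetical result format for ExecuteQuery.
public class ConsolidationSketch {

    public static void main(String[] args) throws Exception {
        String xml = "<result>"
                + "<row itemId=\"11\" city=\"Berlin\" state=\"BE\"/>"
                + "<row itemId=\"12\" city=\"Berlin\" state=\"BE\"/>"
                + "<row itemId=\"13\" city=\"Munich\" state=\"BY\"/>"
                + "</result>";

        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // One shipment group per (city, state) destination.
        Map<String, List<String>> shipments = new LinkedHashMap<>();
        NodeList rows = doc.getElementsByTagName("row");
        for (int i = 0; i < rows.getLength(); i++) {
            Element row = (Element) rows.item(i);
            String destination = row.getAttribute("city") + ", " + row.getAttribute("state");
            shipments.computeIfAbsent(destination, d -> new ArrayList<>()).add(row.getAttribute("itemId"));
        }
        shipments.forEach((dest, items) -> System.out.println(dest + " -> items " + items));
    }
}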
Fig. 4. Schema of the Web retailer’s CRM
4 Conclusions

A flexible data access API is essential for ERP systems. It helps to resolve a number of challenges. Unfortunately, existing approaches and APIs do not offer an appropriate level of flexibility and simplicity while guaranteeing the integrity and consistency of data when accessing and manipulating ERP data. The current work contributes the concept of query-like service invocation implemented in the form of a business object query language (BOQL). BOQL is the cornerstone of the API, offering both the flexibility of SQL and the encapsulation of SOA. In its essence, BOQL is an on-the-fly orchestration of the CRUD operations exposed by the business objects of ERP systems. In addition, the paper showed how BOQL enables navigation among ERP data and the configuration of the UI layer, as well as the development of enterprise composite applications. Furthermore, we outlined the major components of the architecture and prototyped, with the Microsoft .NET platform, an ERP system that supports BOQL as its prime data access API.
References

1. Banerjee, J., Kim, W., Kim, H.-J., Korth, H.F.: Semantics and implementation of schema evolution in object-oriented databases. In: Proceedings of the 1987 ACM SIGMOD International Conference on Management of Data, pp. 311–322 (1987)
2. Bertino, E.: A view mechanism for object-oriented databases. In: Pirotte, A., Delobel, C., Gottlob, G. (eds.) EDBT 1992. LNCS, vol. 580, pp. 136–151. Springer, Heidelberg (1992)
3. Bratsberg, S.E.: Unified class evolution by object-oriented views. In: Pernul, G., Tjoa, A.M. (eds.) ER 1992. LNCS, vol. 645, pp. 423–439. Springer, Heidelberg (1992)
4. Curino, C.A., Moon, H.J., Zaniolo, C.: Graceful database schema evolution: the PRISM workbench. In: Proceedings of the VLDB Endowment, pp. 761–772 (2008)
5. Liu, C.-T., Chrysanthis, P.K., Chang, S.-K.: Database schema evolution through the specification and maintenance of changes on entities and relationships. In: Loucopoulos, P. (ed.) ER 1994. LNCS, vol. 881, pp. 132–151. Springer, Heidelberg (1994)
6. Monk, S., Sommerville, I.: Schema evolution in OODBs using class versioning. In: ACM SIGMOD, pp. 16–22 (1993)
7. Shiling, J.J., Sweeney, P.F.: Three steps to views: extending the object-oriented paradigm. In: Conference Proceedings on Object-Oriented Programming Systems, Languages and Applications, pp. 353–361 (1989)
8. Yu, J., Benatallah, B., Casati, F., Daniel, F.: Understanding mashup development. In: IEEE Internet Computing, pp. 44–52 (2008)
Part II
Artificial Intelligence and Decision Support Systems
Knowledge-Based Engineering Template Instances Update Support

Olivier Kuhn1,2,3,4, Thomas Dusch1, Parisa Ghodous2,3, and Pierre Collet4

1 PROSTEP AG, Darmstadt, Germany
2 Université de Lyon, CNRS, Lyon, France
3 Université Lyon 1, LIRIS, UMR5205, F-69622, Lyon, France
4 Université de Strasbourg, CNRS, LSIIT, UMR7005, 67412, Strasbourg, France
[email protected]
http://www.prostep.com
Abstract. This paper presents an approach to support the update of Knowledge-Based Engineering template instances. As a result of this approach, engineers are provided with a sequence of documents, giving the order in which they have to be updated. The generation of the sequence is based on the dependencies between the documents. In order to be able to compute a sequence, information about templates, Computer-Aided Design models and their relations is required. Thus an ontology has been designed in order to provide a comprehensive knowledge representation of templates and assemblies, and to allow inference on this knowledge. This knowledge is then used by a ranking algorithm, which generates the sequence starting from the modified templates, without involving a manual analysis of the models and their dependencies. This prevents mistakes and saves time, as the analysis and the choices are computed automatically.

Keywords: Ontology, Update strategy, Knowledge-based Engineering, KBE templates, Ranking.
1 Introduction

Nowadays, high-end industries such as the automotive or aerospace industries are designing products that are more and more complex and that integrate various disciplines. Product diversification and the increase of the model range have motivated new IT tools and have impacted the product development process [1]. One change during the last years is the democratisation of Knowledge-Based Engineering (KBE), which has become a standard in product development. KBE is a large field at the crossroads of Computer-Aided Design (CAD), artificial intelligence and programming. It facilitates the reuse of knowledge from previous design choices and thus reduces design time and costs. The storage of knowledge can be realised in knowledge-based 3D CAD models by using the KBE representation tools present in CAD systems [2]. Standardization is also a way to reuse knowledge. Furthermore, Dudenhöffer [3] stated that standardization and the use of common parts and platforms is a key factor for efficiency in the automotive industry. To this end, one solution to reuse knowledge is the use of KBE templates. KBE templates are intelligent documents or features that aim at storing know-how and facilitating its reuse. They are designed to adapt themselves to various contexts, which
can lead to some maintenance problems. Maintaining a huge number of templates is quite a challenging task because of the many relations created with other documents. Modifications can be made to templates to add new functionalities or to fix some bugs. This is why there is a need to propagate these modifications to the existing copies of a template, called instances, that are used in a specific context, for instance an engine assembly. The work presented in this paper targets the problem of template update propagation. An ontological representation of templates and CAD models is used in order to infer new knowledge. This knowledge is then used to compute an update sequence, which can then be used by the engineers in charge of propagating the changes. With this tool, the task of analysing the dependencies between documents and evaluating the impact of the relations on the update propagation is removed. This paper is structured as follows. In Section 2 the problem is presented. Section 3 presents several research works related to KBE templates. Section 4 describes the global approach. The developed ontology is presented in Section 5. Section 6 presents how an update sequence is computed for the update of template instances. In Section 7 an application of the work is presented. Finally, in Section 8 some conclusions are given.
2 Template Update Problem

2.1 KBE Template Definition

The aim of Knowledge-Based Engineering is to capture and reuse the design intent and product design knowledge through parameters, rules, formulas, automation, and also knowledge templates. The reuse of knowledge allows the design process to be sped up by reducing design recreation, and it saves costs. The ultimate goal is to capture information related to the best practices and design know-how of a company. Knowledge-Based Engineering is nowadays used by many companies and has proven its advantages. Examples of enhancements resulting from the use of KBE are presented in [4,5]. Templates are knowledge-based applications that allow the storage and reuse of know-how and company best practices. Knowledge-based applications include a wide set of elements comprising documents, parametric CAD models, software, KBE, CAE analyses, etc. They provide defined functionalities, such as geometry, calculus, etc. They are designed in order to adapt themselves to a given context with respect to some defined inputs given by the context. The process of putting a template into a context and setting the inputs of the template is called “instantiation.” For instance, in CATIA V5, a Dassault Systèmes CAD system, the instantiation process creates a copy of the template, which is called a “template instance,” then puts it into the context and finally links the elements from the context to the template instance's inputs. At the end of the process, two entities are present, which have separate life cycles: the template definition and its template instance. In addition to knowledge storage, templates are also used to provide standardized parts and assemblies for design activities, and to integrate proven design solutions into future product design processes [1]. Katzenbach et al. also showed that the mandatory use of template-based design processes enhances the design maturity during the complete design phase [1].
Fig. 1. Generic structure of a CAD template with link flow [7] (boxes in the figure: the template context with external specifications and design specifications; the template instance with concept model, adapter model, construction, components and output; all cross-document links go through publications)
Moreover, Kamrani and Vijayan showed an integrated design approach based on template technologies that allows the development time needed for new products to be drastically reduced [6]. Figure 1 presents the generic structure of an assembly template instance in a context. The context is composed of several entities called the “external specifications,” which are other elements present in the context that provide parameter values or geometry to the template. The inputs of the template are gathered in the “adapter model,” which is composed of basic geometry that guides the “construction.” The “output” is used to present some specific elements of the template to the context. References to other documents are based on publications. The aim of using publications is to provide a named reference to an element within a document that can be easily recognised and referred to. Thus, if the content of a document changes, the links between documents will not be broken, as the elements inside the document are not referred to directly. The figure also presents the link flow (represented by arrows), which reflects the hierarchy of the model.

2.2 Addressed Problem

In large and complex assemblies like those present in the automotive or aerospace industries, the number of templates and template instances can reach several thousand and even more. This implies a huge effort to maintain them, as they become more complex by incorporating new potential variants for future designs [1]. There is a second challenge regarding template updates, which concerns the propagation of the modifications made to templates. Lukibanov raised this problem of template management because Product Data Management systems and CAD software did not address it to a full extent [8]. Once a template has been modified and validated in order to suit new requirements
or to fix some bugs, the changes should be propagated to other templates and to their instances so that the same version is used everywhere.

The complexity of the problem comes from the heterogeneity of the data. Several types of documents can be linked together for several reasons, such as a parameter dependency or a parent-child relation. All these relations may have an impact on the propagation of updates. The relations have to be analysed and represented in a suitable format in order to allow software to take advantage of the knowledge, to analyse the current state and to create a sequence of updates.

In this paper, the propagation of template updates to instances is addressed. The objective is to provide engineers with a sequence of the necessary updates, in order to help them carry out the template instance updates faster and with fewer difficulties and errors.
3 Related Work

KBE templates are a recent technology that has become the subject of many research works and applications. The ability of templates to adapt themselves to a given context has been used by Siddique and Boddu to integrate the customer into the design process [9]. They proposed a mass customisation CAD framework that takes user parameters into account to automatically generate a CAD model from predefined templates.

The automotive industry has also integrated templates into its development processes. Haasis et al. [10] and Katzenbach et al. [1] presented the template-based process at Daimler AG, an automotive company. There will be a need in the future to standardize component concepts between product families to face the complexity of products and processes. The solution they have adopted is to resort to KBE templates in the engineering process. Mbang proposed the use of KBE templates to integrate Product, Process and Resource aspects, in order to make this integration seamless for the designers [11].

Some research has been done concerning template maintenance. Lukibanov addressed the problem of template management and of distributing the latest versions of templates [8]. The proposed solution involves ontologies that are used as a knowledge representation layer about templates and their interconnections. An ontology makes it possible to represent concepts and the relationships between these concepts [12]. Ontologies also provide a way to find dependencies and to check consistency thanks to an inference process. One ontology is created for each template by mapping it to the knowledge model. Each ontology describes the inputs, outputs and links to CAD models of the corresponding template, and the visualisation of this information is used to propagate changes to other templates. However, this approach does not handle template instances and focuses on the CATIA V5 CAD system.
4 Approach

To propagate modifications to template instances, the propagation to other templates has to be taken into account because templates can be linked together, but they can also be composed of instances of other templates. This is why the approach presented
Fig. 2. Developed process for template update propagation
by Lukibanov has been adapted and extended in order to take template instances into account and to abstract the methodology so that it can be applied to various CAD systems [8].

Figure 2 presents the developed process for propagating template modifications. The process is decomposed into two main parts. In the first part (figure 2.a), an information database about templates and CAD models is maintained. It has been considered that having one ontology per template does not scale well, as data on template instances and other assemblies also have to be handled. The proposed solution is to define one domain ontology based on the analysis of template concepts, CAD systems and existing ontologies. Concepts in the ontology are instantiated by analysing CAD models and templates to gather the information relevant for the template update propagation. Data retrieved from CAD models are raw data: some information is not visible or is incomplete with respect to the needs. For this reason, an inference engine is used on the instantiated ontology to enhance the classification and discover information that is not directly accessible in the CAD models. This ontology is presented in section 5.

The second part is dedicated to the propagation of changes (figure 2.b). Locating all relevant template instances incorporated in huge assemblies and estimating the consequences of the necessary changes could easily be a full-time job. Furthermore, some relations are not explicitly available and their impact on the update has to be estimated. This is why an algorithm is proposed that is in charge of computing an update sequence. This algorithm takes advantage of the domain knowledge gathered in the ontology and of the enriched information on the templates and models available in it. More details are provided in section 6.
5 Ontology Design

5.1 Aim of the Ontology

To generate an update sequence, the algorithm requires information about the types of the documents and the existing relations between documents. To provide this information in a computer-understandable and processable format, an ontology represented with the Web Ontology Language (OWL, http://www.w3.org/TR/owl-guide/) has been developed. The OWL representation language has been chosen because it is based on open standards and has been a W3C recommendation since 2004. OWL allows a formal representation of a domain and makes it possible to resort to inference engines. Currently, OWL DL is used, which is a sub-language
of OWL named in correspondence with the Description Logic on which it is based; it is the largest subset of OWL that provides decidable reasoning procedures.

Katzenbach et al. pointed out in their study that the relations between documents need an efficient visualisation tool to give an overview of all interdependencies [1]. With this ontology, a classification and an efficient overview of all explicit and implicit dependencies in templates and assemblies are provided.

5.2 Followed Methodology

To develop the ontology, it has been decided to use the Ontology Development 101 methodology [13] for its simplicity and light weight. This methodology is composed of seven steps. The first step is to define the domain of the ontology. Here, the domain focuses on the problem, i.e., concepts and relations are related to KBE, templates and CAD models. Such specific ontologies are called "application ontologies." To design the ontology, a mixed top-down and bottom-up approach has been used: it starts from the definition of the main concepts and, at the same time, from an analysis of a CAD system. The idea is to make the two meet so that the ontology includes details linked with generic concepts.

The second step of the methodology is to reuse existing ontologies. No existing ontology that could be reused for the problem has been found. However, there are standards in the product design field from which useful information and concepts can be extracted and reused. One of the best-known standards is the "STandard for the Exchange of Product model data" (STEP), also referred to as ISO 10303 [14]. STEP provides standards for product data representation and covers a broad range of applications, from geometric and topological representation to kinematics and the product life cycle. STEP can thus provide some elements for the abstraction level needed for a generic document representation in the ontology, which eases the integration of other CAD systems. The ontology can then be enriched with a detailed analysis of the problem and of concrete systems.

5.3 System Analysis

Step three of the methodology is to enumerate the important terms that will appear in the ontology in order to define the concepts and the object properties. For this purpose, the CATIA V5 CAD system from Dassault Systèmes, which is used in the automotive, aerospace and shipbuilding industries, has been analysed. The analysis focused on knowledge elements and on relations between documents (Multi-Model links in CATIA V5). CATIA V5 integrates KBE workbenches that provide KBE template mechanisms to create and instantiate templates. Three main types of templates are available: feature templates, document templates and process templates. Process templates are out of the scope of this work because they address CAx processes, whereas the focus here is on CAD. It is also possible to use standard CAD models as templates without resorting to the specific CATIA KBE workbench.
With this method, however, there are no explicit template definitions and no support tool for template instantiation. A closer look shows that the concept of a template instance does not exist within CATIA V5: template instances are not handled and are considered standard documents, with no possibility of recognising them. Regarding the relations between documents, 19 different types of links have been identified. Each link involves two documents, one being the source and the other the target of the link. The links do not all have the same impact on the update propagation; for this reason, a classification of link types depending on their impact is needed.

From this analysis, the main terms and some of their relationships have been defined. Concerning templates, the specific CATIA V5 templates as well as models used as templates have to be taken into account. Terms related to template instances must also be covered, and a solution to track and classify template instances has to be integrated in the ontology. The relations have been classified, and the new term "dependency link" has been created to gather all links that influence the update propagation.

5.4 Ontology Description

Figure 3 presents a part of the developed ontology. The first step in the mixed approach used to design the ontology is to define its upper level (blue rectangles) by creating the document definition concepts and relations inspired by the STEP standard. Then the CAD system specific concepts of the ontology (green rounded rectangles) have been defined on the basis of the CATIA terminology, as well as the newly identified terms that are not defined within the CAD system (section 5.3). These new concepts are defined according to existing concepts and relations so that they can be deduced with an inference engine. For instance, a new concept called "PartAsTemplate" has been defined, which describes a CATPart document that contains no template definition from CATIA V5 but is used as a template. Its definition in Description Logic notation is:

$$PartAsTemplate \equiv CATPart \sqcap \lnot(\exists hasDefinition.DocumentTemplate) \sqcap \exists hasID.TemplateID$$

Finally, mid-level concepts have been integrated, such as the system-independent template concepts "template" and "template instance" (orange ellipses). All these concepts are linked together with "is-a," equivalence or aggregation relationships.

Concerning the relations between documents, they have been represented as object properties, and the 19 link types present in CATIA V5 have been added. To be able to track instances, a relation between template definitions and template instances has been established by introducing an identifier that is shared between a template and its instances. Inverse links have also been defined, with the inverse property axiom, to allow easier navigation between documents, because in CATIA V5 links are unidirectional and a document is not aware of a link targeting it. All these data constitute the foundation for computing an update sequence for the update propagation.
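The authors implemented the ontology with Protégé 4 and accessed it through the OWL-API in Java (section 7.1). Purely as an illustration of the axiom above, the following sketch expresses the same "PartAsTemplate" definition with the Python owlready2 library; the ontology IRI and the class layout are illustrative assumptions, not the authors' actual model.

```python
from owlready2 import Thing, ObjectProperty, Not, get_ontology

onto = get_ontology("http://example.org/kbe-update.owl")  # illustrative IRI

with onto:
    class Document(Thing): pass
    class CATPart(Document): pass
    class DocumentTemplate(Thing): pass
    class TemplateID(Thing): pass

    class hasDefinition(ObjectProperty):
        domain = [Document]
        range = [DocumentTemplate]

    class hasID(ObjectProperty):
        domain = [Document]
        range = [TemplateID]

    # PartAsTemplate == CATPart and not (hasDefinition some DocumentTemplate)
    #                            and (hasID some TemplateID)
    class PartAsTemplate(Thing):
        equivalent_to = [CATPart
                         & Not(hasDefinition.some(DocumentTemplate))
                         & hasID.some(TemplateID)]

# Running a DL reasoner (e.g. owlready2's sync_reasoner) would then classify any
# CATPart individual that carries a template identifier but no template
# definition as an instance of PartAsTemplate.
```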
Fig. 3. Extract of the ontology with the abstraction level and the CAD system concepts (here CATIA V5)
6 Update Sequence Computation

The goal is to provide the engineers in charge of propagating changes in a template definition to its instances and to other templates with a comprehensive sequence they can follow. This sequence gives them an ordered list of documents (with a corresponding rank) that have to be updated or replaced. Following this sequence rank after rank saves time, as the engineers do not have to analyse the complex situation with all its interdependencies; it also prevents redundant or useless updates.

6.1 Graph Representation

The data representation created with the ontology can be seen as a directed graph with document instances as nodes and their relationships as edges. The specificity of the obtained graph is that nodes and edges are typed. Their types depend on the concepts and object properties they represent, so one node can have several types. The algorithm works on this graph to extract relevant nodes and to assign them a rank.

6.2 Approach

The problem has been tackled with a ranking approach based on the relations between documents defined in the ontology. The objective is to build an ordered sequence by assigning a rank r_k (k ∈ N) to each document. The rank represents the order in which documents have to be processed; several documents can have the same rank, meaning that they can be processed at the same time.

This approach was inspired by research on hierarchical structure visualisation and directed graph drawing [15,16]. The result of this work is an efficient algorithm for drawing hierarchical graphs; an implementation is available in graphviz (http://www.graphviz.org/), an open-source graph visualisation tool. The initial version of the algorithm was proposed in [17]. It is composed of four phases:
1. Place the graph nodes in discrete ranks.
2. Order nodes within a rank to avoid crossing edges.
3. Compute the coordinates of nodes.
4. Compute the edges' splines.

Fig. 4. Acyclic directed graph (a) and its result (b) after the first phase of the [17] algorithm
The interest here is focused on the first phase, where nodes are ranked. This method builds a hierarchy composed of $n$ levels from a directed acyclic graph. The hierarchy is denoted $G = (V, E, n, \sigma)$, where:

– $V$ is a set of vertices such that $V = V_1 \cup V_2 \cup \ldots \cup V_n$ with $V_i \cap V_j = \emptyset$ for $i \neq j$, where $V_i$ is the set of vertices of rank $i$ and $n$ is the height of the hierarchy.
– $E$ is a set of edges, where each edge is unique.
– $\sigma$ is a set of sequences $\sigma_i$, one for each $V_i$; $\sigma_i$ is the sequence of vertices within $V_i$, i.e., $\sigma_i = v_1, v_2, \ldots, v_{|V_i|}$, with $|V_i|$ the number of vertices of $V_i$.

To create the hierarchy, each directed edge $e = (source, target)$ has to obey the following condition:

$$\forall\, e = (v_i, v_j) \in E,\ v_i \in V_i \text{ and } v_j \in V_j:\quad i < j \qquad (1)$$
The result of this phase of the algorithm can be seen in figure 4, where it has been applied to a small example composed of six vertices and six edges. The result (b) shows three ranks (n = 3) and satisfies the condition presented in equation 1. This ranking algorithm has been adapted so that it produces an update sequence for the template update propagation.

6.3 Adaptation of the Algorithm

The data from templates and models generate a more complex graph, as the relations and links between the documents can have various effects on the update propagation. This is why the classification made in the ontology is used. In this approach, the sequences σ are not taken into account, because only the placement of documents in the correct rank is relevant to the addressed problem.
The original modified documents are the inputs of the algorithm and are placed at the first rank r = 0. Starting from these documents, the algorithm builds the hierarchy. The ontology is queried for the types of the documents and for the links that propagate the update from these documents. Depending on the document types, several actions can be undertaken. The documents linked via an "inverse dependency link" are added at rank r + 1, as they have to be processed after the dependency is satisfied. The algorithm then continues with rank r + 1, where these documents were just added.

If the current document is a template, the behaviour is different. As templates may be containers, the re-instantiation of a template has to be done after all the included documents have been updated. To be able to perform a re-instantiation, the document that contains the template instance has to be loaded before the action is carried out. So if $r_{ti}$ is the rank of the template instance, its containing parent assembly should be located at a previous rank, i.e., $r_{parent} < r_{ti}$. The estimated complexity of the algorithm is linear ($O(h)$, with $h$ the total number of nodes).
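The ranking rules just described can be written out as a small Python sketch. It assumes the ontology has already been reduced to two illustrative structures, `depends_on_me` for the inverse dependency links and `parent_assembly` for instance containment, and that the resulting graph is acyclic (as in section 6.2); names and data structures are illustrative, not the authors' Java/OWL-API implementation.

```python
from collections import deque

def compute_update_sequence(modified_docs, depends_on_me, parent_assembly):
    """
    modified_docs   : documents whose template definition was modified (rank 0)
    depends_on_me   : dict doc -> documents reached via an "inverse dependency link"
                      (they must be processed after doc)
    parent_assembly : dict template_instance -> containing assembly document
    Returns {doc: rank}; documents sharing a rank can be processed together.
    """
    rank = {doc: 0 for doc in modified_docs}
    queue = deque(modified_docs)
    while queue:                                   # longest-path style relaxation (DAG assumed)
        doc = queue.popleft()
        for dependent in depends_on_me.get(doc, []):
            if rank.get(dependent, -1) < rank[doc] + 1:
                rank[dependent] = rank[doc] + 1    # processed after its dependency is satisfied
                queue.append(dependent)
    # a template instance can only be re-instantiated once its containing assembly
    # is loaded, hence r_parent < r_instance (instances are assumed to appear at rank >= 1)
    for doc in list(rank):
        parent = parent_assembly.get(doc)
        if parent is not None and rank.get(parent, rank[doc]) >= rank[doc]:
            rank[parent] = rank[doc] - 1
    return rank

# toy usage: template T was modified; instance I of T is contained in assembly A
ranks = compute_update_sequence(["T"],
                                depends_on_me={"T": ["I"]},
                                parent_assembly={"I": "A"})
print(ranks)  # {'T': 0, 'I': 1, 'A': 0}
```

Read rank by rank, such a result corresponds to the kind of sequence shown in figure 5: all rank-0 documents are handled first, then rank 1, and so on. A real implementation would iterate until all precedence constraints are satisfied simultaneously.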
7 Application

7.1 Developments

The presented approach has been implemented in a system composed of two parts. The first part is CAD system dependent; in this case, CATIA V5 and the C++ CAA API have been used to analyse CAD models and templates. The extracted data are then transferred in an XML format to the second part of the system. The second part is in charge of maintaining the ontology instances and of computing the update sequence. A Java application has been developed using the OWL-API to manipulate the OWL ontology and the inference engine. The presented ontology was created using Protégé 4 [18], an open-source ontology editor. The implementation of the update sequence computation algorithm was also done in Java, as it uses the OWL-API to access the ontology data. Concerning the inference engine, FaCT++ [19] has been chosen, an efficient OWL-DL reasoner directly usable through the OWL-API.

7.2 Scenario

This scenario uses the CATIA V5 CAD system and is composed of 92 CAD models of different types (CATParts, CATProducts, CATIA V4 models). Within this set of documents are 9 templates: 8 document templates, some of them containing instances of other templates, and 1 "User Defined Feature," which is a feature template such as a predefined hole. The case study starts with the modification of geometry elements in one document template. It is considered that the template has been validated and is ready to be used. Now the related documents and all instances have to be updated in order to have up-to-date models.
Fig. 5. Example of update sequence
Without any support tool, the persons in charge of propagating the modifications have to locate all the related template instances and related documents through links and references. Once they have all this information, they can make the necessary changes. Finally, the new modifications may in turn have consequences on other templates or their instances. All these steps are time consuming and can lead to mistakes or to leaving out some documents.

7.3 Application and Results

First of all, up-to-date information about templates and CAD models has to be retrieved. Currently, all models are analysed and the instances are recreated in the initially empty ontology (incremental updates are planned for the future, as the full analysis is rather time consuming). After this step, a comprehensive overview of the models and their relationships is available. The user then selects the modified templates and launches the update sequence computation. An example of the result for one modified template is presented in figure 5. The 92-model problem was computed in approximately 300 ms on a 1.8 GHz Pentium M. The figure shows the documents (boxes) that have to be updated and the order in which they have to be processed; the first documents to be handled are located at rank 0. The dotted arrows represent the "instance location link," which is the link from a template to one of its instances. The other arrows target a document contained in the link's source document. Other types of relations present in the ontology can also be shown.

Engineers are thus provided with the means to simply follow the sequence rank after rank, load the given documents and apply the changes. This eliminates the unproductive task of searching for relations across documents and their documentation, and the focus can be put on the updates.
8 Conclusion and Perspectives

In this paper, a solution to propagate changes made in KBE templates to their instances and related documents has been presented. Update propagation is a complex and time-consuming task; the complexity comes from the size and the heterogeneity of the network representing the documents and their relations. The proposed solution aims at supporting engineers in the task of updating related templates and instances after template definitions have been modified. The main benefits of this approach are the speed-up of the overall task of propagating template updates, as well as the avoidance of incomplete updates. In a set of several thousand models, it is hard to get a good overview of all dependencies in order to find the needed information. This is even more complex due to non-explicit relations and links that are not represented within the CAD system, such as, for example, the location of template instances.

The approach is based on an OWL ontology that has been defined from the analysis of the problem, with CATIA V5 as an example. This ontology also includes an abstract level composed of concepts inspired by the STEP standard, to facilitate the integration of other CAD systems. The aim of the ontology is to represent knowledge from KBE templates and CAD models. A reasoner is also used on the ontology in order to infer knowledge that is not provided by the data extracted from KBE templates and CAD models. This information is then used by a ranking algorithm that provides an update sequence to support engineers.

Further improvements can be made to enhance the overall process of change propagation. The first would be to automate the model update or the template instance replacement. It would also be interesting to investigate OWL 2, which became a W3C recommendation in October 2009, to evaluate its benefits compared to OWL DL for data representation. At the moment, only the technical aspects of the templates are taken into account; it would be interesting to also store the functionality of templates within the ontology, for example in order to query it. Further investigations will include testing this approach on large industrial cases in order to evaluate its performance on large assemblies.
References
1. Katzenbach, A., Bergholz, W., Rohlinger, A.: Knowledge-based design - an integrated approach. In: The Future of Product Development, pp. 13–22. Springer, Heidelberg (2007)
2. Liese, H.: Wissensbasierte 3D-CAD Repräsentation. PhD thesis, Technische Universität Darmstadt (2003)
3. Dudenhöffer, F.: Plattform-Effekte in der Fahrzeugindustrie. In: Controlling, vol. 3, pp. 145–151 (2000)
4. Gay, P.: Achieving competitive advantage through knowledge-based engineering: A best practise guide. Technical Report, British Department of Trade and Industry (2000)
5. Chapman, C.B., Pinfold, M.: The application of a knowledge based engineering approach to the rapid design and analysis of an automotive structure. Advances in Engineering Software 32, 903–912 (2001)
6. Kamrani, A., Vijayan, A.: A methodology for integrated product development using design and manufacturing templates. Journal of Manufacturing Technology Management 17(5), 656–672 (2006)
7. Arndt, H., Haasis, S., Rehner, H.P.: CATIA V5 Template zur Umsetzung von Standardkonzepten. In: Vieweg Technology Forum Verlag (ed.) Karosseriebautage Hamburg, Internationale Tagung (2006)
8. Lukibanov, O.: Use of ontologies to support design activities at DaimlerChrysler. In: 8th International Protégé Conference (2005)
9. Siddique, Z., Boddu, K.: A CAD template approach to support web-based customer centric product design. Journal of Computing and Information Science in Engineering 5(4), 381–386 (2005)
10. Haasis, S., Arndt, H., Winterstein, R.: Roll out template-based engineering process. In: DaimlerChrysler EDM/CAE Forum (2007)
11. Mbang, S.: Durchgängige Integration von Produktmodellierung, Prozessplanung und Produktion am Beispiel Karosserie. In: CAD - Produktdaten "Top Secret"?! (2008)
12. Mizoguchi, R.: Tutorial on ontological engineering - part 1: Introduction to ontological engineering. New Generation Computing 21, 365–384. Ohmsha & Springer (2003)
13. Noy, N., McGuinness, D.: Ontology development 101: A guide to creating your first ontology. Technical Report, Stanford University (2001)
14. STEP: ISO 10303 - Industrial automation systems and integration - Product data representation and exchange (1994)
15. Gansner, E.R., Koutsofios, E., North, S.C., Vo, K.P.: A technique for drawing directed graphs. IEEE Trans. Softw. Eng. 19(3), 214–230 (1993)
16. North, S.C., Woodhull, G.: On-line hierarchical graph drawing. In: Mutzel, P., Jünger, M., Leipert, S. (eds.) GD 2001. LNCS, vol. 2265, pp. 232–246. Springer, Heidelberg (2002)
17. Sugiyama, K., Tagawa, S., Toda, M.: Methods for visual understanding of hierarchical system structures. IEEE Transactions on Systems, Man, and Cybernetics 11(2), 109–125 (1981)
18. Noy, N.F., Sintek, M., Decker, S., Crubezy, M., Fergerson, R.W., Musen, M.A.: Creating semantic web contents with Protégé-2000. IEEE Intelligent Systems 16(2), 60–71 (2001)
19. Tsarkov, D., Horrocks, I.: FaCT++ description logic reasoner: System description. In: International Joint Conference on Automated Reasoning, pp. 292–297 (2006)
Coordinating Evolution: An Open, Peer-to-Peer Architecture for a Self-adapting Genetic Algorithm Nikolaos Chatzinikolaou School of Informatics, University of Edinburgh, Informatics Forum Crichton Street, Edinburgh, Scotland, U.K.
Abstract. In this paper we present an agent-based, peer-to-peer genetic algorithm capable of self-adaptation. We describe a preliminary architecture to that end, in which each agent executes a local copy of a GA, using initially random parameters (currently restricted to the mutation rate for the purposes of experimentation). These GA agents are themselves optimised through an evolutionary process of selection and recombination. Agents are selected according to the fitness of their respective populations, and during the recombination phase they exchange individuals from their populations as well as their optimisation parameters, which is what lends the system its self-adaptive properties. This allows the execution of "optimal optimisations" without the burden of tuning the evolutionary process by hand. Thanks to its parameter-less operation, our platform becomes more accessible and appealing to people outside the evolutionary computation community, and therefore a valuable tool in the field of enterprise information systems. Initial empirical evaluation of the peer-to-peer architecture demonstrates better harnessing of the available resources, as well as added robustness and improved scalability.

Keywords: Genetic algorithms, Distributed computation, Multi-agent learning, Agent coordination.
1 Introduction

1.1 Genetic Algorithms

Since their inception by John Holland in the early 1970s [13] and their popularisation over the last few decades by works such as [10], Genetic Algorithms (GAs) have been used extensively to solve computationally hard problems, such as combinatorial optimisations involving multiple variables and complex search landscapes.

In its simplest form, a GA is a stochastic search method that operates on a population of potential solutions to a problem, applying the Darwinian principle of survival of the fittest in order to generate increasingly better solutions. Each generation of candidate solutions is succeeded by a better one, through the process of selecting individual solutions from the current generation according to their relative fitness, and applying the genetic operations of crossover and mutation to them. The result of this process is that later generations consist of solution approximations that are better than their predecessors, just as in natural evolution.
GAs have proved to be flexible and powerful tools, and have been successfully applied to solve problems in domains too numerous and diverse to list here (examples are provided in surveys such as [23]). Despite their widespread success, however, there is still a number of issues that make their deployment by the uninitiated a non-trivial task. Two themes that keep recurring in the literature are parameter control, which involves determining the optimal set of parameters for a GA, and parallelisation, which involves distributing the computational load of a GA between multiple computational units. It is on these two themes that this research concentrates.

1.2 Scope of This Paper

The principal objective of this research is the design and implementation of a scalable architecture that will enable large numbers of agents (in the form of computers participating in an open network) to cooperate in order to solve complex problems that would require a prohibitively long time to solve with a standard, non-parallel GA.

The architecture we propose aims to address both parameter control and parallelisation at the same time, by using a novel approach: implementing an open, peer-to-peer network of interconnected GAs in which no a priori assumptions need to be made regarding their configuration. Instead, the capacity of each GA for solving the problem at hand is itself evolved during runtime. In addition, the self-organising nature of the system ensures that any and all available resources (participating computers) are efficiently exploited for the benefit of the system.

The general concept underlying this research involves exploiting self-organisation and coordination among agents through the use of mobile protocols described using a process calculus, in particular the Lightweight Coordination Calculus (LCC) as specified in [22,21].
2 Related Work

2.1 Self Adaptation in GAs

In every application of a GA, the designer is faced with a significant problem: tuning a GA involves configuring a variety of parameters, including population sizes, the operators used for selection and mutation, the type and size of elitism, etc. In general, before a GA can be deployed successfully in any problem domain, a significant amount of time and/or expertise has to be devoted to tuning it. As a result, numerous methods for parameter optimisation have appeared over the years [8]. These generally fall into one of two categories, illustrated by the short sketch at the end of this subsection:

– Parameter Tuning, in which the set of GA parameters is determined a priori and then applied to the GA before it is executed.
– Parameter Control, in which the parameters change (adapt) while the GA is running.

It was discovered early on [12,27] that simple a priori parameter tuning is generally insufficient to produce adequate results, as different stages in the evolutionary process are likely to require different parameter values. Therefore, in our research we concentrate on dynamic parameter adaptation, along the lines of the work presented in [4,8,16].
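Purely to illustrate the distinction between the two categories (the numeric values below are arbitrary examples, not settings used in this paper), a tuned parameter is fixed before the run, whereas a controlled parameter is a function of the run itself:

```python
def tuned_mutation_rate():
    # parameter tuning: a single value chosen a priori and kept for the whole run
    return 0.01

def controlled_mutation_rate(generation, start=0.5, decay=0.97):
    # parameter control: the rate changes while the GA is running
    return start * (decay ** generation)

print(tuned_mutation_rate(), controlled_mutation_rate(0), controlled_mutation_rate(100))
```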
2.2 Parallel GAs

Even after a set of optimal parameters has been established, traditional (canonical) GAs suffer from further difficulties as problems increase in scale and complexity. [19] has identified the following:

– Problems with big populations and/or many dimensions may require more memory than is available in a single, conventional machine.
– The computational (CPU) power required by the GA, particularly for the evaluation of complex fitness functions, may be too high.
– As the number of dimensions in a problem increases and its fitness landscape becomes more complex, the likelihood of the GA converging prematurely to a local optimum instead of a global one increases.

To some extent, these limitations can be alleviated by converting GAs from serial processes into parallel ones. This involves distributing the computational effort of the optimisation between multiple CPUs, such as those in a computer cluster. [15] identify three broad categories of parallel genetic algorithm (PGA):

– Master-slave PGA. This scheme is similar to a standard, or canonical, genetic algorithm in that there is a single population. The parallelisation of the process lies in the evaluation of the individuals, which is allocated by the master node to a number of slave processing elements. The main advantage of master-slave PGAs is ease of implementation (a minimal sketch of this scheme is given at the end of this subsection).
– Fine-grained or Cellular PGA. Here we have again a single population, spatially distributed among a number of computational nodes. Each such node represents a single individual (or a small number of them), and the genetic operations of selection and crossover are restricted to small, usually adjacent groups. The main advantage of this scheme is that it is particularly suitable for execution on massively parallel processing systems, such as a computer system with multiple processing elements.
– Multi-population, Multi-deme or Island PGA. In an island PGA there are multiple populations, each residing on a separate processing node. These populations remain relatively isolated, with "migrations" taking place occasionally. The advantage of this model is that it allows more sophisticated techniques to be developed.

The approach of parallelising genetic algorithms becomes even more appropriate in the light of recent developments in the field of multi-processor computer systems [18], as well as the emergence of distributed computing, and particularly the new trend towards cloud computing [9]. The work presented in this paper was originally influenced by [3], which follows the island paradigm [26,5]. This scheme, however, is just one of many; among others, [6,19,2] each provide excellent coverage of the work done on this theme.
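The master-slave category mentioned above can be made concrete with a few lines of Python, in which the master farms fitness evaluations out to worker processes; the quadratic placeholder objective and the population size are illustrative choices, not taken from this paper.

```python
from multiprocessing import Pool
import random

def fitness(genome):
    # placeholder objective: in a real master-slave PGA this is the expensive evaluation
    return sum(g * g for g in genome)

if __name__ == "__main__":
    # the master holds the single population ...
    population = [[random.uniform(-1, 1) for _ in range(20)] for _ in range(200)]
    # ... and distributes the evaluations to "slave" worker processes
    with Pool() as workers:
        scores = workers.map(fitness, population)
    print("best (lowest) fitness:", min(scores))
```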
2.3 Multi-agent Coordination

This architecture is based on the concept of coordinating the interactions between individual GA agents using shared, mobile protocols specified in the Lightweight Coordination Calculus (LCC) [21,22]. It is the coordination between the agents that guides the GA agents as a system, by providing an open and robust medium for information exchange between them, as well as the necessary evolutionary pressure. Apart from being used to specify the interactions between agents declaratively, LCC is also an executable language, and as such it is used to dictate the interactions that "glue" the GA agents together. LCC is designed to be a flexible, multi-agent language, and has proved successful in the implementation of open, peer-to-peer systems, as demonstrated by the OpenKnowledge framework [20].
3 Architecture

3.1 "Intra-agent" Genetic Algorithm

The system we have developed consists of a network of an arbitrary number of identical agents. Each agent contains an implementation of a canonical GA that acts on a local population of genomes, performing standard crossover and mutation operators on them. We call this GA the "intra-agent GA", and its steps are those of a typical GA. For a population of size n:

1. Evaluate each member of the population.
2. Select n pairs of parents using roulette wheel selection.
3. For each pair of parents, recombine them and mutate the resulting offspring.
4. Repeat from step 1 for the newly created population.
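The four steps above can be sketched in a few lines of Python for bit-string genomes. Since the benchmark in section 4 is a minimisation problem, the sketch maps fitness to roulette-wheel weights by a simple inversion; the paper itself does not prescribe this mapping, so it is an illustrative assumption.

```python
import random

def intra_agent_ga(fitness, pop, generations, p_mut, p_cross=0.0):
    """One agent's canonical GA loop (steps 1-4 above) for bit-string genomes.
    fitness: lower is better; the 1/(1+f) roulette weighting is our own choice."""
    n = len(pop)
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]                 # step 1: evaluate
        weights = [1.0 / (1.0 + s) for s in scores]            # roulette-wheel weights
        new_pop = []
        for _ in range(n):                                     # step 2: select pairs
            mum, dad = random.choices(pop, weights=weights, k=2)
            child = list(mum)
            if random.random() < p_cross:                      # step 3: recombine ...
                cut = random.randrange(1, len(child))
                child[cut:] = dad[cut:]
            for i in range(len(child)):                        # ... and mutate
                if random.random() < p_mut:
                    child[i] ^= 1                              # bit flip
            new_pop.append(child)
        pop = new_pop                                          # step 4: repeat
    return pop

# toy usage: minimise the number of 1-bits in a 16-bit genome
random.seed(0)
pop0 = [[random.randint(0, 1) for _ in range(16)] for _ in range(100)]
final = intra_agent_ga(sum, pop0, generations=50, p_mut=0.02)
print(min(sum(ind) for ind in final))
```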
3.2 "Inter-agent" Genetic Algorithm

At the same time, every agent has (and executes locally) a copy of a shared, common LCC protocol that dictates how the agent coordinates and shares information with its peers. The result of this coordination is a secondary evolutionary algorithm, which evolves not the genomes in each agent but the agents themselves, and in particular the population and parameters that each of them uses for its respective "intra-agent" GA. We call this secondary GA the "inter-agent" GA:

1. Perform a number of "intra-agent" GA iterations. This step is equivalent to step 1 of the "intra-agent" GA, as it essentially establishes a measure of that agent's overall fitness. This fitness is based on the average fitness of all the individual genomes in the agent's population, as established by the "intra-agent" GA.
2. Announce the agent's fitness to neighbouring peers, wait for them to announce their own fitness, and select a fit mate using roulette wheel selection. Again, this is similar to step 2 above. The only difference is that every agent gets to select a mate and reproduce, as opposed to the "intra-agent" GA, where both (genome) parents are selected using roulette wheel selection.
3. Perform crossover between self and the selected agent (population AND parameters). Here we have the recombination stage between the two peers, during which they exchange genomes from their respective populations (migration) as well as parameters. The new parameters are obtained by averaging those of the two peers and adding a random mutation amount to them (a code sketch of this recombination step is given at the end of this section).
4. Repeat from step 1.

By recombining the GA parameters of the agents in addition to the genomes during the migration/recombination stage (step 3), we ensure that these parameters evolve in tandem with the solution genomes and thus remain more-or-less optimal throughout the evolutionary process. It is this characteristic that lends our system its self-adaptive properties. The idea behind this approach is that we use the principle that forms the basis of evolutionary computation to optimise the optimiser itself.

3.3 Agent Autonomy and Motivation

As is the norm in most multi-agent systems, each agent in our implementation is fully autonomous. This autonomy is evident in the fact that the agents are able to function even without any peers present, or when peer-to-peer communication is compromised. This characteristic has the obvious advantage of improved robustness. However, an agent operating in isolation will not be able to evolve its own GA parameters, and hence its performance will remain at a steady, arbitrary level dictated by the current set of GA parameters it uses. This is where the motivation of the agents to interact with their peers stems from: by having agents collaborate/breed with their peers, the system as a whole evolves, adapts, and improves its performance.

3.4 Comparison with Existing Systems

Our architecture shares some characteristics with other approaches in the field. For example, in a typical "island"-based parallel GA [3] there is usually migration of genomes, but no evolution of GA parameters. On the other hand, in typical meta-GA implementations [11,7], there is adaptation of the genetic operators but no parallelisation of the evolutionary process. Key to our approach is the fact that the optimisation of the GA agents does not happen before they are used to solve the actual problem at hand; instead, their optimisation happens as a continuous, dynamic process during their operational lifetime. This is of particular relevance in enterprise environments, where requirements fluctuate significantly between different applications.
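The recombination of step 3 can be sketched as follows (one direction of the genome exchange is shown). The migration size and the Gaussian perturbation of the averaged parameters are our own illustrative choices; the paper only states that parameters are averaged and then randomly mutated.

```python
import random

def inter_agent_crossover(my_pop, my_params, mate_pop, mate_params,
                          n_migrants=10, param_sigma=0.05):
    """Exchange genomes (migration) and recombine GA parameters (average + mutation)."""
    # migration: replace a few randomly chosen individuals with genomes from the mate
    slots = random.sample(range(len(my_pop)), n_migrants)
    newcomers = random.sample(mate_pop, n_migrants)
    for slot, newcomer in zip(slots, newcomers):
        my_pop[slot] = list(newcomer)
    # parameter recombination: average the peers' parameters and mutate the result
    new_params = {}
    for key in my_params:
        avg = 0.5 * (my_params[key] + mate_params[key])
        new_params[key] = max(0.0, avg + random.gauss(0.0, param_sigma))
    return my_pop, new_params

# toy usage with the single adapted parameter used in the experiments (mutation rate)
pop_a = [[random.randint(0, 1) for _ in range(16)] for _ in range(100)]
pop_b = [[random.randint(0, 1) for _ in range(16)] for _ in range(100)]
pop_a, params_a = inter_agent_crossover(pop_a, {"p_mut": 0.5}, pop_b, {"p_mut": 0.125})
print(params_a)
```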
4 Preliminary Experiments

4.1 Objectives

Following the implementation of our architecture, we performed a number of experiments in order to evaluate its performance compared to (a) traditional (canonical) genetic algorithms, and (b) an island-based parallel genetic algorithm with simple population migration.
4.2 Test Case

As our optimisation test case, we used the Rastrigin function, which is widely adopted as a test function in the field. Its general form is given in equation 1:

$$F(\bar{x}) = kA + \sum_{i=1}^{k} \left( x_i^2 - A\cos(2\pi x_i) \right) \qquad (1)$$
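The following Python sketch spells out this objective for the setup described in the next paragraph (A = 10, k = 30, each variable encoded as 16-bit Gray code over [-0.5, +0.5]); the chunk-wise decoding and the linear mapping onto the interval are our own reading of that setup, since the paper does not detail the decoding step.

```python
import math, random

A, K, LO, HI, BITS = 10.0, 30, -0.5, 0.5, 16   # settings reported in Sect. 4.2

def gray_to_int(bits):
    """Decode a Gray-coded bit list (MSB first) into an integer."""
    b = bits[0]
    n = b
    for g in bits[1:]:
        b ^= g                     # binary bit = previous binary bit XOR Gray bit
        n = (n << 1) | b
    return n

def decode(genome):
    """Split the genome into K chunks of BITS bits and map each onto [LO, HI]."""
    xs = []
    for i in range(K):
        chunk = genome[i * BITS:(i + 1) * BITS]
        frac = gray_to_int(chunk) / (2 ** BITS - 1)
        xs.append(LO + frac * (HI - LO))
    return xs

def rastrigin_fitness(genome):
    xs = decode(genome)
    return K * A + sum(x * x - A * math.cos(2.0 * math.pi * x) for x in xs)

# evaluate one random 480-bit genome; a fitness of 0 would be the global optimum
print(rastrigin_fitness([random.randint(0, 1) for _ in range(K * BITS)]))
```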
This is a minimisation problem, which means that the aim of the GA is to make the fitness measure as small as possible, the optimal value being zero. In all our experiments, the steepness A was set to 10, and the number of Rastrigin variables k was set to 30. The range of x was -0.5 to +0.5, encoded in 16-bit Gray code. The choice of these parameters was influenced by similar experiments in the literature (e.g. [28]).

4.3 Intra-agent GA Configuration

Since we are mainly interested in observing the adaptation that occurs in the individual agents' GAs themselves during the evolutionary process, we tried to keep things simple and controllable by allowing only a single parameter to adapt: the mutation rate. All the other parameters were kept constant: the population size was fixed at 100 individuals, and no elitism was used. Also, despite being at odds with established GA practice, the intra-agent crossover rate was set to 0, effectively disabling it. Again, this was done so as to better observe the impact of the mutation rate adaptation on the overall fitness progression. Finally, the roulette wheel selection scheme was used.

It must be noted at this point that these parameters were deliberately selected with simplicity in mind rather than performance. In fact, it can be argued that the parameter selection described above is rather inefficient, yet in being so, it allows us to better observe the impact of the design of our architecture on the overall performance of the GA.

4.4 Inter-agent GA Configuration

Regarding the set-up of the multi-agent system, we conducted experiments using 3, 6, 12, 24, 48 and 96 agents at a time. We also had to specify how many iterations each agent would perform before crossover with the other agents occurred. This parameter depends on the bandwidth available on the computational platform on which the system is deployed; in our case, this value was set to 10, which was empirically found to perform best for our platform.

We performed runs using three different schemes for the inter-agent crossover:

1. No crossover: essentially, each agent's GA ran in isolation from the others. This experiment was implemented using MATLAB's Genetic Algorithm toolbox, which provided us with an independent and solid performance benchmark.
2. Population crossover: during the inter-agent crossover phase, only individuals were exchanged between the different sub-populations, while GA parameters were not recombined.
3. Full crossover: This is the scheme that we propose in our architecture. In this case, the parameters of the agents' GA were also recombined in addition to the population exchange.

The first two schemes were implemented not as an integral part of our architecture, but instead as a benchmark for evaluating its performance. Essentially, scheme 1 emulates a set of traditional, serial GAs using static parameters covering the full available spectrum, while scheme 2 emulates a typical "island-based" GA, with migration taking place among individual GAs that, again, use static parameters.

For the first and second configuration, each agent was given a mutation rate equal to half that of the previous agent, starting at 1.0. This means that, as more agents were introduced in the system, their mutation rate was reduced exponentially. The reason for this decision is the fact that, in almost all GAs, later generations benefit from increasingly smaller mutation rates [8]. For the second and third case, agents were selected by their peers for crossover using roulette wheel selection, where each agent's fitness was dictated by the average fitness of its current population. Finally, in order to compensate for the stochastic nature of the experiments and produce more meaningful results, all runs were executed 10 times and their output was averaged.

4.5 Evaluation of Performance

In general, the performance of a GA is measured by the time it takes to achieve the required result (usually convergence to a stable or pre-set fitness). In our experiments, however, we deemed it more appropriate to use the number of generations as a measure of performance. The reason for this is that the actual execution time depends on variables which are unrelated to the algorithm itself, such as the capabilities and load of the processing elements or the bandwidth of the network on which these reside. In all of our experiments, the population size was the same, and thus the execution time required for the evaluation of every population (typically the most computationally intensive task in a GA) was also the same. Therefore, we can assume that the number of generations taken by the algorithm to converge is proportional to the actual time it would require on a benchmark computational system. This, of course, does not take into account the overhead incurred by network communication; however, as this overhead is again more or less equivalent in all experiments, we can safely factor it out.
5 Results

5.1 Speed of Convergence

For our first experiment, we executed runs using different numbers of agents and all three inter-agent crossover schemes. Each run was stopped as soon as a fitness of 50.0 was reached by any of the agents in that run. Figure 1 shows the relative performance of the three schemes (note that the x-axis is shown in logarithmic scale).
Fig. 1. Relative speed performance of the three inter-agent crossover schemes (x-axis: number of agents; y-axis: number of generations required to reach fitness 50.0)
As can be seen, the worst performer was the first scheme, which emulates a number of isolated sequential GAs. In addition to being the slowest, it also failed to reach the target fitness when using too few agents (n = 3 and n = 6), the reason being that the agents' mutation rates were too high to allow them to converge to the target fitness. The population exchange scheme performed significantly better in terms of speed, although it also failed to converge when few agents were used (again, the target fitness was not reached for n = 3 and n = 6). The full crossover scheme performed even better in terms of speed, but its most significant advantage is that it managed to reach the target fitness even when using few agents, although at the cost of more generations. Finally, the downward slope of this scheme's curve as the number of agents increases provides a first indication of its superior scaling properties.

5.2 Quality of Solution

The next experiment involved executing runs for 1000 generations each, again using all three inter-agent crossover schemes for different numbers of agents. This allowed us to see how close to the optimal fitness of 0.0 each configuration converged. Figures 2, 3 and 4 show the resulting graphs from these runs, with the actual fitness results provided in table 1. The y-axis of the graphs has been made logarithmic in order to improve the legibility of the plots.

From these results, it becomes obvious that the full crossover scheme achieves the best solution in terms of quality, in addition to being the fastest of the three. Furthermore, it is becoming more obvious at this stage that the full crossover scheme scales significantly better as the number of agents increases. The first two schemes seem to be "hitting a wall" after the number of agents is increased beyond 24. For the full crossover, however, adding more agents seems to contribute to the performance of the system all the way up to, and including, n = 96.
Table 1. Best (minimum) fitness after 1000 generations, for Scheme 1 (no crossover), Scheme 2 (population crossover) and Scheme 3 (full crossover)
Fig. 2. Run of 1st scheme (no inter-agent crossover) for different numbers of agents
Fig. 3. Run of 2nd scheme (population inter-agent crossover) for different numbers of agents
Finally, the ability of this scheme to perform well even when using a small number of agents can also be seen in figure 4.
5.3 Adaptation of the Mutation Rate

As a final investigation of how the mutation rate adapts in the full inter-agent crossover scheme, we plotted the progress of the best agent's mutation rate against the generations in a typical run using three agents. Figure 5 illustrates the results (again using a logarithmic y-axis). From this plot, we can see that the mutation rate drops exponentially in order to keep minimising the fitness, which agrees with our expectations.
Fig. 4. Run of 3rd scheme (full inter-agent crossover) for different numbers of agents
Fig. 5. Adaptation of the mutation rate (x-axis: generations; y-axes: fitness and mutation rate)
6 Conclusions

The results presented above are encouraging, as they show that this preliminary version of the architecture we propose is effective. By distributing the load among
multiple agents, the system manages to converge to near-optimal solutions in relatively few generations. The most important contribution, however, is the fact that, by applying the principle of natural selection to optimise the GA agents themselves, the evolutionary algorithm becomes self-adaptive and thus no tuning is required. By eliminating the need for tuning and thus taking the guesswork out of GA deployment, we make evolutionary optimisation appeal to a wider audience. In addition, the peer-to-peer architecture of the system provides benefits such as improved robustness and scalability.
7 Future Work

7.1 Complete GA Adaptation

As stated earlier, and in order to aid experimentation, only the mutation rate is currently adapted in our system. This is of course not very effective for a real-life application, where the full range of genetic algorithm parameters (population size, elite size, crossover rate, selection strategy, etc.) needs to be adapted as the evolutionary process progresses. This extension is relatively straightforward to implement, as the basic characteristics of the architecture's implementation remain unaffected.

7.2 Asynchronous Agent Operation

Currently, all agents in our platform work synchronously. This means that they all perform the same number of iterations before every inter-agent crossover stage, with faster agents waiting for the slower ones to catch up. When the platform is deployed in a network consisting of computational elements of similar capabilities, this strategy works fine. However, in networks with diversified computational elements, this scheme is obviously inefficient. We aim to modify the current coordination protocol to resolve this, by using time- or fitness-based cycle lengths rather than generation-based ones.

7.3 Extended Benchmarking

Although we have established that our platform performs significantly better than standard, canonical GAs (such as the basic MATLAB implementation we compared against), we can obtain a better picture of how our architecture compares with other systems in the field by performing more test runs and comparing results. For instance, the systems proposed by [28] and [14,25] seem to share some characteristics with our own system, even though the architectures differ in their particulars. Even though our focus is more on the openness of the system, it would be helpful to have an idea of the relative performance of our system compared with the status quo. However, at the time of writing, implementations of these systems were not readily accessible.

We also intend to perform more benchmarks using alternative test functions, such as the ones presented in [24,1,17]. This way we will be able to better assess the capability of our system to adapt to different classes of problem, and it will give us significant insight into which evolutionary behaviour (expressed as the evolutionary trajectory of each parameter) best suits each class of problem.
7.4 Additional Solvers

Finally, it will be interesting to take full advantage of the openness inherent to our architecture and LCC, by allowing additional kinds of solvers to be introduced in the system (e.g. gradient search, simulated annealing etc.). This will require the re-design of our protocol regarding the inter-agent crossover, or possibly the co-existence of more than one protocol in the system. We believe that the effort required will be justified, since, by extending our architecture in this way, we will effectively be creating an open, peer-to-peer, self-adaptive hybrid optimisation platform.
References
1. Ackley, D.H.: A Connectionist Machine for Genetic Hillclimbing. Kluwer Academic Publishers, Norwell (1987)
2. Alba, E., Troya, J.M.: A survey of parallel distributed genetic algorithms. Complexity 4(4), 31–52 (1999)
3. Arenas, M.G., Collet, P., Eiben, A.E., Jelasity, M., Merelo Guervós, J.J., Paechter, B., Preuß, M., Schoenauer, M.: A framework for distributed evolutionary algorithms. In: Merelo Guervós, J.J., Adamidis, P.A., Beyer, H.-G., Fernández-Villacañas, J.-L., Schwefel, H.-P. (eds.) PPSN 2002. LNCS, vol. 2439, pp. 665–675. Springer, Heidelberg (2002)
4. Bäck, T.: Self-adaptation in genetic algorithms. In: Proceedings of the First European Conference on Artificial Life, pp. 263–271. MIT Press, Cambridge (1992)
5. Belding, T.C.: The distributed genetic algorithm revisited. In: Proceedings of the 6th International Conference on Genetic Algorithms, pp. 114–121. Morgan Kaufmann Publishers Inc., San Francisco (1995)
6. Cantú-Paz, E.: A survey of parallel genetic algorithms. Calculateurs Paralleles 10(2) (1998)
7. Clune, J., Goings, S., Punch, B., Goodman, E.: Investigations in meta-GAs: panaceas or pipe dreams? In: GECCO 2005: Proceedings of the 2005 Workshops on Genetic and Evolutionary Computation, pp. 235–241. ACM, New York (2005)
8. Eiben, A.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation 3, 124–141 (2000)
9. Foster, I., Zhao, Y., Raicu, I., Lu, S.: Cloud computing and grid computing 360-degree compared. In: Grid Computing Environments Workshop, GCE 2008, pp. 1–10 (2008)
10. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading (1989)
11. Grefenstette, J.: Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man and Cybernetics 16(1), 122–128 (1986)
12. Hesser, J., Männer, R.: Towards an optimal mutation probability for genetic algorithms. In: Schwefel, H.-P., Männer, R. (eds.) PPSN 1990. LNCS, vol. 496, pp. 23–32. Springer, Heidelberg (1991)
13. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975)
14. Kisiel-Dorohinicki, M., Socha, K.: Crowding factor in evolutionary multi-agent system for multiobjective optimization. In: Proceedings of IC-AI 2001: International Conference on Artificial Intelligence. CSREA Press (2001)
15. Lim, D., Ong, Y.-S., Jin, Y., Sendhoff, B., Lee, B.-S.: Efficient hierarchical parallel genetic algorithms using grid computing. Future Gener. Comput. Syst. 23(4), 658–670 (2007)
16. Meyer-Nieberg, S., Beyer, H.-G.: Self-adaptation in evolutionary algorithms. In: Parameter Setting in Evolutionary Algorithms, pp. 47–76. Springer, Heidelberg (2006)
17. Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 3rd edn. Springer, London (1996)
18. Munawar, A., Wahib, M., Munetomo, M., Akama, K.: A survey: Genetic algorithms and the fast evolving world of parallel computing. In: 10th IEEE International Conference on High Performance Computing and Communications, pp. 897–902 (2008)
19. Nowostawski, M., Poli, R.: Parallel genetic algorithm taxonomy. In: Proceedings of the Third International, pp. 88–92. IEEE, Los Alamitos (1999)
20. Robertson, D., Giunchiglia, F., van Harmelen, F., Marchese, M., Sabou, M., Schorlemmer, M., Shadbolt, N., Siebes, R., Sierra, C., Walton, C., Dasmahapatra, S., Dupplaw, D., Lewis, P., Yatskevich, M., Kotoulas, S., de Pinninck, A.P., Loizou, A.: Open knowledge: semantic webs through peer-to-peer interaction. Technical Report DIT-06-034, University of Trento (2006)
21. Robertson, D.: International Conference on Logic Programming, Saint-Malo, France (2004)
22. Robertson, D.: A lightweight coordination calculus for agent systems. In: Declarative Agent Languages and Technologies, pp. 183–197 (2004)
23. Ross, P., Corne, D.: Applications of genetic algorithms. In: On Transcomputer Based Parallel Processing Systems, Lecture (1995)
24. Schwefel, H.-P.: Numerical Optimization of Computer Models. John Wiley & Sons, Inc., New York (1981)
25. Socha, K., Kisiel-Dorohinicki, M.: Agent-based evolutionary multiobjective optimisation. In: Proceedings of the Fourth Congress on Evolutionary Computation, pp. 109–114 (2002)
26. Tanese, R.: Distributed genetic algorithms. In: Proceedings of the Third International Conference on Genetic Algorithms, pp. 434–439. Morgan Kaufmann Publishers Inc., San Francisco (1989)
27. Tuson, A.L.: Adapting operator probabilities in genetic algorithms. Master's thesis, Evolutionary Computation Group, Dept. of Artificial Intelligence, Edinburgh University (1995)
28. Yoshihiro, E.T., Murata, Y., Shibata, N., Ito, M.: Self adaptive island GA. In: 2003 Congress on Evolutionary Computation, pp. 1072–1079 (2003)
CONTASK: Context-Sensitive Task Assistance in the Semantic Desktop

Heiko Maus 1, Sven Schwarz 1, Jan Haas 2, and Andreas Dengel 1,2

1 German Research Center for AI (DFKI GmbH), Kaiserslautern, Germany
2 Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany
{heiko.maus,sven.schwarz,andreas.dengel}@dfki.de, [email protected]
Abstract. In knowledge work, users are confronted with difficulties in remembering, retrieving, and accessing relevant information and knowledge for their task at hand. In addition, knowledge-intensive, task-oriented work is highly fragmented and, therefore, requires knowledge workers to effectively handle and recover from interruptions. The Semantic Desktop approach provides an environment to represent, maintain, and work with a user’s personal knowledge space. We present an approach to support knowledge-intensive tasks with a context-sensitive task management system that is integrated in the Nepomuk Semantic Desktop. The context-sensitive assistance is based on the combination of user observation, agile task modelling, automatic task prediction, as well as elicitation and proactive delivery of relevant information items from the knowledge worker’s personal knowledge space. Keywords: Task management, Proactive information delivery, Personal knowledge space, User observation, Agile task modelling, Semantic desktop, Personal information management.
1 Introduction

Corporate work routines have changed vastly in recent years, and today's knowledge workers are constantly presented with the challenge of negotiating multiple tasks and projects simultaneously [1]. Each of these projects and tasks typically requires collaboration with different teams and dealing with a wide variety of data, resources, and technologies. In contrast to traditional, static business processes, the task-oriented work of clerks today is often highly fragmented. Each interruption requires knowledge workers to mentally re-orientate themselves, and permanent task switches and disruptions carry a significant overhead cost [2]. Especially for complex, knowledge-intensive tasks requiring large quantities of related documents and resources, reorientation entails a substantial cognitive overhead. While efficient task execution requires processing the particularly task-relevant information at hand, today's knowledge workers also face the well-known information overload. An increasing amount of data is available, but it is spread over various information sources such as the email client, the address book, local and remote file systems, the web browser, wikis, and organizational structures. The efficient and successful processing of
a task depends on the quality of finding and selecting the most relevant information for the task at hand. This represents a source of errors in daily knowledge work: important information is not found, connections are overlooked, or relevant experts are not identified. The result is suboptimal problem solutions, unnecessary repetition of already accomplished tasks, or wrong decisions.
As a solution to these problems, this paper proposes a context-sensitive task management system named ConTask. It focuses on knowledge capturing, knowledge reuse, and interruption recovery. By tracking the user's actions, the system provides automatic means to intelligently elicit task-specific relevant information items and, thus, capture a task's context. This context is used to proactively deliver task-relevant knowledge to the user and to ensure the reuse of valuable task know-how. Thereby, ConTask aims at the following goals:
– Automatically capture created or consulted information objects and assign them to tasks, in order to capture task know-how and to structure the personal knowledge space in a task-centric way.
– Increase potential productivity for knowledge workers and reduce resource allocation costs by proactively providing relevant, task-related information and resources.
– Enable and ensure task-specific know-how reuse.
– Facilitate reorientation into an interrupted task by reducing the cognitive and administrative task switching overhead.
– Improve task-specific assistance by learning from knowledge workers' feedback.
The paper is structured as follows: the next sections introduce the scientific background and give an overview of related work. Section 4 explains the main ingredients and concepts of ConTask. A summary and outlook on future work conclude the paper.
2 Background

Various approaches identify business processes as a means for structuring a company's knowledge [3,4]. As business processes form the core operational sequences of every company, their efficiency is critical for its success. Knowledge workers are integrated into crucial parts of these business processes, and the quality of their (procedural) know-how decides between success and failure for the company. Within such business processes, we are mainly interested in supporting knowledge-intensive tasks (know-how capture, provision, and reuse) and in assisting multitasking, thereby supporting knowledge workers directly in their daily work.

Knowledge-intensive Tasks. Knowledge-intensive tasks in particular entail the challenge of retrieving, structuring, and processing information, e.g., for judging a case or for making crucial decisions. They rely on processing large quantities of relevant information while at the same time producing valuable knowledge to be reused in similar situations later on. To preserve this valuable knowledge, tasks should be utilized to structure a knowledge worker's personal knowledge space, which consists of various resources like documents, emails, and contact addresses, as well as real-life concepts such as persons, projects, or topics.
Typically, knowledge-intensive activities are explorative and not completely known a priori [5]. As many parts of their execution may not be predetermined, they cannot be completely modelled in advance. On this basis, we introduce the concept of weakly-structured workflows, consisting of knowledge-intensive activities, together with specific design decisions for applications supporting these workflows. They mainly incorporate two aspects: lazy and late modelling, and the strong coupling of modelling and execution of process models. Lazy modelling refers to the on-demand refinement of process models that are initially only partially specified. This pays off for weakly-structured workflows, as details of the execution of agile knowledge-intensive tasks are not known in advance. This aspect is strongly related to the coupling of modelling and execution of process models: starting with a partial model, weakly-structured workflows allow for dynamic refinement of the process model during its execution.

Our work resulted in the TaskNavigator [6,7], a browser-based workflow system. It supports weakly-structured workflows through agile task management for teams, proactive information delivery (PID) based on the task context (mainly consisting of task name, description, and attached documents, i.e., text-based), and process know-how capture and reuse. Evaluations have shown that a main drawback was the effort required from users to maintain their tasks, e.g., by uploading documents to the task represented in the browser. The work presented here continues the overall goal by integrating task support into the user's personal knowledge space and allowing a much easier management of tasks embedded on the personal desktop.

Multitasking. Nowadays knowledge workers are engaged in multiple tasks and projects in parallel (e.g., [8]). Several studies have shown that task-oriented knowledge work is highly fragmented [9]. Typically, knowledge workers spend only little time on a certain task before switching to another. Task switches and disruptions cause significant overhead costs (e.g., [2,10,11]). After an interruption, knowledge workers must reconstruct their task-specific mental state. This encompasses, amongst other details, memories around the task, including next steps to take, required resources, critical factors, and deadlines. In addition to the cognitive overhead, restructuring of the desktop and physical work environment is also often required. Resources such as documents, websites, emails, and contact addresses must be relocated and utilized. The retrieval of these various task-specific resources represents a challenging and time-consuming problem. Furthermore, the cognitive challenge of remembering all task-specific relevant information can cause significant difficulty for the successful completion of a given task. Short-term memory loss concerning critical resources or other task-related information can have major ramifications. The frequent interruption of knowledge-intensive tasks is often a contributing factor to workplace stress and frustration. Studies show that constant interruptions in these situations lead to changes in work rhythm, mental states, and work strategies [2]. Often this results in attempts to compensate for lost time through an accelerated and therefore even more stressful work pace. The higher workloads and additional effort associated with frequently switching tasks increase both pressure and frustration for today's knowledge workers.
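The lazy and late modelling of weakly-structured workflows described earlier in this section can be pictured with a minimal Python sketch. The class and method names below are illustrative assumptions and do not reproduce the ontology from [5]; the point is only that a task starts as a partial model and is refined with subtasks and attachments while it is being executed.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    # Late modelling: only a name is required when the task is created.
    name: str
    description: str = ""
    attachments: List[str] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)

    def refine(self, subtask_name: str) -> "Task":
        # Lazy modelling: subtasks are added on demand during execution.
        sub = Task(name=subtask_name)
        self.subtasks.append(sub)
        return sub

    def attach(self, resource_uri: str) -> None:
        # Attachments (documents, URLs, contacts) accumulate while working.
        if resource_uri not in self.attachments:
            self.attachments.append(resource_uri)

# Usage: start with a bare task and refine it while working on it.
report = Task(name="Write project report")
review = report.refine("Collect reviewer comments")
review.attach("file:///home/user/comments.doc")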
3 Related Work

Many different approaches focus on assisting knowledge workers with knowledge-intensive tasks and in multitasking work scenarios. The TaskTracer system [12] consists of a user observation framework collecting events from various office applications. The system utilizes the observed user actions and applies machine learning algorithms to automatically predict the user's current task and to associate accessed information items like files or web pages with the elicited task [13]. This enables a task-specific provision of these information items. The APOSDLE project [14] represents a similar approach. The project integrates task management, e-learning, knowledge management, and communication systems. APOSDLE utilizes user observation to support the task-centric provision of suitable learning material and to associate knowledge artefacts with corresponding tasks. The user observation framework is realized by software hooks on the operating system level. Observed user actions are reported to the task predictor component, a machine learning component that serves for task prediction and task switch detection. The OntoPIM project [15] suggests an architecture for a task information system including a monitoring system for user observation. Observed user actions are interpreted by an inference engine to elicit information relevant to the user's tasks and to proactively provide these items. This task-specific PID is integrated into a selected target application (e.g., into a web browser). Alternatively, the Windows context menu is enriched with task-related information. Instead of integrating the task-specific assistance into several applications, our approach aims at integrating the PID into a task management system. This reduces the number of applications that have to be adjusted and enables the agile task modelling to directly use the captured and elicited task-specific knowledge.

A task management system embedded in the personal knowledge space, along the lines of the vision we presented in [4], is KASIMIR [16]. It focuses on capturing, evolving, and providing process patterns from an organisational repository. The tasks within the processes are enriched with information items from the Nepomuk Semantic Desktop (see next section) by user interaction, similar to ConTask. Related work on identifying process patterns, but coming from the process management side and embracing task management, is the Collaborative Task Management (CTM) approach [17]. The main difference between both approaches and ConTask is that they lack the user observation and PID our system applies, while they go further in applying evolving process patterns in process management. We will investigate the combination of these approaches in the joint project ADiWa (http://www.adiwa.net), started in 2009.
4 Ingredients of ConTask

ConTask is based on the following base components to achieve context-oriented personal task management: a task management system, the Semantic Desktop, and a framework for retrieving the user context. Based on these components, ConTask supports the user with proactive information delivery and agile task modelling. It enables observing task management, provides task prediction, and allows for relevance feedback and learning. These components and concepts are detailed in the remainder of this section.
Task Management. As mentioned in Section 2, a task management system called TaskPad was developed in order to support knowledge workers in working on knowledge-intensive tasks directly on their desktops. TaskPad provides the possibility to work on personal tasks and to synchronize tasks from different sources such as TaskNavigator, where the tasks are part of agile workflows. TaskPad allows users to access, attach, and upload documents to TaskNavigator. As both TaskNavigator and TaskPad are fully RDF/S-based, they rely on the ontology for weakly-structured workflows developed in [5]. Besides this, TaskPad provides the usual task management functionality such as maintaining tasks, taking notes, attaching URLs, notes, or documents, and filtering the task list (the filters are based on SPARQL queries). Fig. 2 shows the task list as Task Diary and the Task Editor.

Semantic Desktop. To represent and maintain the user's personal knowledge space we use and integrate with the Nepomuk Semantic Desktop (http://nepomuk.semanticdesktop.org/), which transfers the idea of the Semantic Web to the user's local desktop (for a recent overview, see [18]). It serves to capture and represent the knowledge worker's personal mental models [19]. This personal knowledge space consists of real-world concepts such as persons, places, projects, or topics, as well as the connections and relations between them (see Fig. 1). Documents contain information about these concepts and represent the knowledge worker's individual background, tasks, or personal interests. In this context, the Personal Information Model Ontology (PIMO) serves to formalize and structure the personal knowledge space (a PIMO excerpt can be seen on the upper left side of Fig. 1 with PIMO classes). The PIMO is represented in RDF/S, the basic language of the Semantic Web, and is able to include different vocabularies and ontologies, i.e., it can be adapted to different domains. It is the core of the Semantic Desktop and provides the possibility to associate real-world concepts with resources, such as documents, emails, or contact details, for personal information management. For example, Fig. 1 shows personal notes taken during a meeting using a Semantic Wiki embedded in the Nepomuk Semantic Desktop. Here, the meeting itself is an instance of the PIMO class Meeting, together with linked concepts such as attendees, previous meetings, or resources such as the calendar entry in MS Outlook. The wiki text shown is mixed with concepts, such as the project it belongs to, topics mentioned in the meeting, etc. Using a concept within the text (e.g., by auto-completion) allows browsing to the concept from within the wiki text and automatically adds relations to the meeting instance. Furthermore, an ontology-based information extraction system (iDocument, see [20]) analyzes the text and proposes concepts that might fit the current thing (lower left tab). Thus, during everyday usage, the PIMO evolves with relevant concepts of the user's work. As the meeting is then linked with these concepts, among them the task itself, these links are also accessible from ConTask (cf. Fig. 1 and Fig. 2).
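Since both TaskNavigator and TaskPad are RDF/S-based and the task list filters are SPARQL queries, such a filter can be pictured as in the following sketch. It uses the rdflib library; the namespace and property names are invented for illustration and do not correspond to the actual TaskPad or PIMO vocabularies.

from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical task vocabulary; the real TaskPad ontology differs.
TASK = Namespace("http://example.org/task#")

g = Graph()
t = TASK.task42
g.add((t, RDF.type, TASK.Task))
g.add((t, TASK.name, Literal("Prepare project report")))
g.add((t, TASK.state, Literal("open")))

# A task list filter expressed as a SPARQL query.
OPEN_TASKS = """
PREFIX task: <http://example.org/task#>
SELECT ?t ?name WHERE {
    ?t a task:Task ;
       task:name ?name ;
       task:state "open" .
}
"""

for row in g.query(OPEN_TASKS):
    print(row.t, row.name)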
Fig. 1. Nepomuk Semantic Desktop: semantically-enriched meeting notes (some text obscured) with PIMO relations (among them the task accessible in ConTask as shown in Fig. 2)
User Context. The Context Service elicits the user's work context, which is a snapshot consisting of contextual elements with relevance to the user's present goal or task [21,22]. Each contextual element corresponds to an entity from the user's PIMO. Contextual elements also carry a value describing the certainty of actually belonging to the current context. As knowledge workers regularly switch their tasks and, hence, their work context, the Context Service maintains several so-called context threads. Each context thread represents a context snapshot associated with a certain task and contains all information items from the user's PIMO that are relevant to that task. To enable context-sensitive desktop applications, the Context Service provides an API with the capabilities to switch to a different context thread, to query all relevant information items of a certain thread, and to receive information about context thread switches. This API is used to inform the Context Service whenever the user switches to a different context thread and allows it to correctly maintain the context snapshots.
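The context thread API just outlined can be sketched as follows. Method names and the listener mechanism are assumptions for illustration; the actual Context Service interface is not specified in this paper.

from collections import defaultdict
from typing import Callable, Dict, List

class ContextService:
    # Keeps one context snapshot ("thread") per task and notifies listeners on switches.

    def __init__(self) -> None:
        # thread id -> {PIMO item URI: relevance value}
        self.threads: Dict[str, Dict[str, float]] = defaultdict(dict)
        self.current: str = "default"
        self.listeners: List[Callable[[str, str], None]] = []

    def switch_to(self, thread_id: str) -> None:
        # Explicit switch, e.g. triggered by the task management tool.
        old, self.current = self.current, thread_id
        for notify in self.listeners:
            notify(old, thread_id)

    def relevant_items(self, thread_id: str, n: int = 10) -> List[str]:
        # Query the most relevant PIMO items of a given context thread.
        items = self.threads[thread_id]
        return sorted(items, key=items.get, reverse=True)[:n]

    def add_switch_listener(self, listener: Callable[[str, str], None]) -> None:
        # Applications register here to be informed about context thread switches.
        self.listeners.append(listener)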
In this scenario, the User Observation Hub (UOH) serves as a technical means to automatically observe the user's actions (see also Fig. 5). It is used for gathering evidence about relevant concepts belonging to the user's current context. The UOH includes an extensive user action ontology formalizing all types of so-called native operations (NOPs) that are observable during daily knowledge work. The ontology comprises operations such as browsing a website, adding a bookmark, reading an email, or accessing a file in the file system. To gather these user actions, the user observation framework provides a set of installable observers, which report the corresponding user actions to the UOH. The observer framework includes plugins for Mozilla Firefox and Thunderbird and a Windows file system observer. Observed user actions are gathered in the UOH and distributed to registered listeners. The Context Service is such a designated user observation listener, receiving notifications about observed user actions.

As outlined, the allocation and retrieval of relevant information items represents a time-consuming challenge for today's knowledge workers. Thus, ConTask aims at increasing potential productivity for knowledge workers and reducing allocation costs by proactively providing relevant, task-related information and resources.

Proactive Information Delivery. The user's PIMO represents a range of concepts and resources the knowledge worker deals with during daily work. Therefore, elements in the worker's PIMO serve as the items for proactive information delivery (PID). Aiming at task-centric work support, the information items are provided in a task-centric manner. Thus, the PID structures the personal knowledge space in a task-oriented way. To automatically elicit task-specific relevant information items from the PIMO, ConTask utilizes the Context Service. Based on the user's interactions with desktop applications, such as browsing a website or writing an email, the Context Service elicits task-specific relevant information items using techniques of machine learning, entity recognition, and document similarity (for details see [23]). In addition, a History Service provides task-specific records of all accessed information items. Being registered as a UOH listener, the History Service maintains a detailed task-specific access history of PIMO concepts and resources (based on the observed user actions). Using the History Service and the Context Service, ConTask provides both directly accessed and automatically reasoned items.

Fig. 2 depicts a screenshot of ConTask, where the so-called PID Sidebar proactively provides task-centric access to relevant information items. The PID Sidebar is part of the Task Editor, which allows for easy consultation and modification of a task's properties. Below the task's name, status, and time constraints, the so-called task attachments represent the information items explicitly associated with the task. PID Sidebar and Task Editor are located on the right-hand side of the screenshot. ConTask provides further interaction possibilities on both PID items and task attachments, such as viewing an item-specific access history, opening resources with the associated application, or viewing and editing PIMO elements with the PIMO Editor. The left-hand frame of the screenshot shows the user's task list (the so-called Task Diary), which can be used to easily navigate or switch between tasks. While the Task Editor contains manually attached information items, the PID Sidebar shows additional, potentially relevant information items. The History Service is used to deliver directly accessed items, and the Context Service is used to propose automatically reasoned items.
Fig. 2. ConTask: Task Diary with open tasks; Task Editor with the current active task and the PID sidebar with proposed concepts from the PIMO and recently accessed resources (the context menu opens the PIMO editor in Fig. 1)
PID Categories. The PID Sidebar provides information items in the following three categories:
– Elicitation for Task. This category contains the most relevant information items from the user's PIMO with respect to the task. It comprises both directly observed items and elicited items that have not been directly accessed by the user.
– History for Task. This category contains all PIMO elements for which a user action has been observed during the execution of this task. It does not comprise any concluded or elicited elements, but only directly observed elements such as directly accessed files or websites. The items here stem from the History Service. Items already shown in Elicitation for Task are omitted.
– History. This category contains the most frequently accessed PIMO elements that are not contained in the first two categories, but that have been directly accessed by the user since a certain point in time. These items also stem from the History Service.
These categories were chosen based on the following reasoning. Elicitation for Task provides the most relevant elicited information items computed by the Context Service's machine learning techniques. The PID Sidebar supports the user by proposing potentially relevant items which have not been explicitly associated with the task at hand. The effectiveness of these algorithms is a critical factor for the acceptance of this service: low-quality proposals will merely distract and annoy the user, while good-quality proposals have the potential to increase the user's work performance and foster a successful task execution. Particularly for newly created tasks, there is not enough explicit information available that can be used to automatically learn what the task is about. As a consequence, the Context Service is not able to determine good proposals. For these cases, the PID Sidebar's category History for Task contains all information items the user has accessed while working on the task. According to the user observation, these items have been recently touched by the user while the task at hand was selected as the current task. The problem is that we cannot expect the user to make every task and every task switch explicit; consider interruptions or phone calls, for example. The task management tool does not know whether the user is actually working on the currently selected task. As a consequence, recently touched items cannot be attached to the task automatically.
Rather, ConTask proposes the items, allowing users to attach them with a simple drag&drop gesture. The paragraph Task Prediction below explains how ConTask nevertheless tries to keep track of the user's current task. As an example, consider the following scenario: the user accesses a website that is associated with the current task, but the Context Service does not yet determine this website as relevant, as it has to be further stimulated by being re-accessed by the user. In this case, the website is provided in the History for Task section. If the relevance value of the website increases based on further stimulation, the website is included in the Elicitation for Task category and therefore removed from History for Task.

A similar reasoning explains the History category. As elaborated, today's knowledge workers deal with multiple task switches during their daily work. To assure the correct association of user actions and accessed information items with the current task, ConTask uses automatic task prediction, which will be explained in one of the following sections. However, as these task determination approaches might sometimes fail in highly multitasking work scenarios, some user actions could be associated with the wrong task. Thus, the corresponding information items might be proactively provided with this task instead of the task they actually belong to. Manual task switches sometimes have fuzzy boundaries [13]: for instance, if a knowledge worker is reading a web page that is related to his current task but that leads him to switch to a new one, should the resource be associated with the old or with the new task? Additionally, if a knowledge worker explicitly consults a related task (and hence switches to that task) to reuse the know-how stored there, the History for Task of the originating task will not contain the consulted material. A solution for these cases is provided by the History category at the bottom of a task's PID Sidebar. It contains all information items which have been recently accessed by the user, but which have neither been associated with, nor elicited for, nor accessed during the task at hand. This overview of all recently accessed items allows an easy reuse of know-how across different tasks (via a simple drag&drop gesture, a resource can be added as a task attachment).
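Putting the three categories and their exclusion rules together, the PID Sidebar content for one task could be assembled roughly as follows. The service methods and the handling of rejected items are simplified assumptions, not the actual ConTask implementation.

def build_pid_sidebar(task_id, context_service, history_service, rejected, limit=10):
    # Elicitation for Task: most relevant items elicited for the task's context thread.
    elicited = [i for i in context_service.relevant_items(task_id, limit)
                if i not in rejected]
    # History for Task: items observed during this task, minus elicited and rejected ones.
    task_history = [i for i in history_service.items_for_task(task_id)
                    if i not in elicited and i not in rejected]
    # History: recently accessed items not covered by the first two categories.
    recent = [i for i in history_service.recent_items()
              if i not in elicited and i not in task_history and i not in rejected]
    return {
        "elicitation_for_task": elicited,
        "history_for_task": task_history[:limit],
        "history": recent[:limit],
    }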
Agile Task Modelling. Based on the proactively provided PIMO elements, ConTask enables agile, on-the-fly task modelling. As knowledge-intensive tasks cannot be completely designed or modelled before their execution, agile, lazy modelling was chosen to allow for task refinement during the execution process. Via the context menu or drag&drop, new items, e.g., proposed items from the PID Sidebar, can easily be added as task attachments. This enables knowledge workers to associate information objects with tasks and thereby to classify work knowledge into independent task information units. The result is a task-centric structuring of the user's personal knowledge space. As lazy modelling explicitly allows tasks to be enriched with relevant information during their execution, this approach reduces the up-front modelling effort. Starting new tasks does not require completely designed task models, and knowledge workers are not forced to enter additional information before actually starting a task. This aligns with suggestions in [3,24], where minimal analysis and initial modelling overhead are identified as one of the key requirements for successful business process-oriented knowledge management.

Furthermore, the task-oriented structuring of the user's personal knowledge space enables intuitive and direct process know-how reuse. For example, while working on a
report document for project x, the knowledge worker may remember an already completed similar report for project y. If a relevant information item of project y is actually relevant for project x as well, ConTask allows the user to view, attach, and reuse these items from one task in another with a few clicks. By explicitly attaching reused items to tasks, ConTask supports a lightweight capturing of task-specific knowledge and, hence, provides the basis for integrating these tasks in organizational workflow systems such as TaskNavigator. The explicitly attached and conserved task-specific items as well as the task-specific history also facilitate rapid reorientation when switching back to an interrupted task. ConTask reduces the mentioned cognitive and administrative overhead of remembering and reallocating task-specific relevant information items. By double-clicking attached or recent documents and resources, task-specific working states can be easily reconstructed and the task can be resumed without much delay. As the PID Sidebar merely proposes potentially relevant items, ConTask provides the possibility of rejecting non-relevant or unsuitable suggestions. The system remembers these decisions and does not propose rejected information items again. Both kinds of feedback, acceptance and rejection, represent evidence for adapting the context of a task and serve as relevance feedback for the task-specific PID. That way, user feedback enables automatic learning and system improvement [3,4,6].

Observing Task Management. During the execution of a task, the User Observation Hub (UOH) observes the user's task-specific behavior. Besides tracking actions inside office applications, ConTask also observes the user's explicit task management interactions such as reuse/open, drag&drop, and reject operations within the Task Editor and the PID Sidebar. Automatic, unobtrusive learning is applied to improve the task-specific PID. The observed user events are utilized for the following two goals:
– Relevance feedback on proactively provided information items.
– Automatic task switch detection based on the user's interaction with the system.
To integrate explicit task management operations into the user observation framework, the user action ontology has been enriched with NOPs for task operations. These additional NOPs form a task operations ontology capable of representing user interaction with an agile task management tool. Observable actions of the ConTask system which are passed to the UOH are, for example, switching to some task or giving relevance feedback on PID items. The task operations ontology is designed to represent NOPs from different task management applications. It comprises a minimal set of operations that are necessary for the purposes of relevance feedback on proactively provided information items and automatic task switch detection. It contains operations such as task creation and attribute modification, e.g., adding a task description or attachments. Further operations inform about accessing attachments or interacting with the PID Sidebar. A Task Observation Service inside ConTask describes the user's task actions according to the NOP ontology and reports them to the UOH. Fig. 3 presents the task operations ontology as a UML class diagram.
Fig. 3. Task Operations Ontology
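How explicit task management operations might be described as NOPs and reported to the User Observation Hub is sketched below. The event fields and the hub interface are simplified assumptions and do not reproduce the ontology of Fig. 3.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TaskNOP:
    # A native operation on a task, as observed by the Task Observation Service.
    kind: str                       # e.g. "TaskCreated", "AttachmentAdded", "PIDItemRejected"
    task_id: str
    resource: Optional[str] = None  # the attached or accessed information item, if any
    timestamp: datetime = field(default_factory=datetime.now)

class TaskObservationService:
    # Describes the user's task actions as NOPs and reports them to the hub.
    def __init__(self, hub) -> None:
        self.hub = hub              # the User Observation Hub; listeners register there

    def report(self, kind: str, task_id: str, resource: Optional[str] = None) -> None:
        self.hub.publish(TaskNOP(kind, task_id, resource))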
Task Prediction. As outlined, the Context and History Service form the basis of the task-specific PID. Both services observe the user's desktop activity and record or elicit relevant information items corresponding to a certain context thread. However, both services rely on explicit information about the currently processed task. If users are stressed or get interrupted, they will not use the task management tool for every tiny task deviation. Hence, not every task switch is technically observed. To compensate for this, ConTask contains a Task Elicitation Service realizing a task prediction. The Task Elicitation Service maps the tasks of the task management tool to the context threads maintained by the Context Service. The observed user actions are treated as evidence to predict and update the currently active context thread. If the Context Service detects a context switch, a corresponding task switch is also proposed to the user, and vice versa: if the user switches a task, the Context Service is informed to switch to the corresponding context thread, too. Thus, this service serves to automatically and unobtrusively determine the user's current task and to switch the Context and History Service to the corresponding thread. The detection of the currently processed task is based on the observed user interaction with ConTask, such as opening the Task Editor window or attaching relevant information items to a task. For example, if the most recently opened task is different from the last one, this is interpreted as a task switch. To realize this, the task operations ontology is divided into the following two categories:
– Operations with a Direct Implication on a Task Switch. These operations are interpreted as strong indications that the user works on a certain task. Based on that, any of these user actions immediately leads to the conclusion of a task switch. They comprise all write access operations on a task, such as adding an attachment or editing the task's description, as well as opening the Task Editor window.
– Operations with a Weak Implication on a Task Switch. These operations only lead to a task switch conclusion if the following condition holds: no other operation
with either weak or direct implication on a task switch occurs within a certain amount of time t. Operations with weak implications comprise the selection of a task in the task list and the event that occurs when the Task Editor becomes the active window.
The reasoning for the timeout associated with the selection of a task is that knowledge workers sometimes browse their task list by clicking on each single entry. This serves to get an overview of their tasks and to determine which tasks are most critical at the moment. To avoid that each selection during browsing triggers a task switch and therefore a context thread switch, selections are only interpreted as task switches after the timeout. A similar reason explains the timeout corresponding to the focus gain event of the Task Editor: as several Task Editor windows might be open at the same time, the user might quickly switch focus between two or more Task Editors to compare the corresponding tasks. The timeout avoids that every focus change directly determines a task switch.
The automatic task elicitation serves to precisely determine which user actions and corresponding information objects occurred in the context of which task. This aims at increasing the effectiveness of the Context and History Service. As the assistance is realized in an unobtrusive way, the user just needs to work and interact with ConTask, without having to explicitly tell the system to perform a task switch. Thus, the user is relieved from actively selecting the currently performed task. Since knowledge workers frequently perform task switches and face interruptions of their current task, automatic switch detection reduces interaction overhead and still guarantees that information items are associated with the correct task.
In addition to this, the Context Service also provides the possibility of automatically detecting whether the last user actions match better with a different context thread than the current one. In this case, the service notifies interested listeners about a potential switch. ConTask utilizes these switch detection capabilities. However, such a task switch cannot be performed automatically and, hence, the user is consulted via a specific popup listing potential task switches (for an example see Fig. 4). The window slides up from the bottom of the screen and only remains visible for a short amount of time. If the user does not interact within this timeframe, the popup disappears.
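The two-tier switch detection rule (direct vs. weak implication, with a timeout t for weak evidence) could be realised along the following lines. The operation names, the timeout value, and the polling-style promotion of pending switches are illustrative assumptions.

import time

DIRECT = {"AttachmentAdded", "DescriptionEdited", "TaskEditorOpened"}   # assumed names
WEAK = {"TaskSelectedInList", "TaskEditorFocusGained"}

class TaskSwitchDetector:
    def __init__(self, timeout_seconds: float = 5.0) -> None:
        self.timeout = timeout_seconds
        self.current_task = None
        self.pending = None            # (task_id, time of the weak evidence)

    def on_task_nop(self, kind: str, task_id: str, now: float = None):
        now = time.time() if now is None else now
        if kind in DIRECT:
            # Direct implication: switch immediately and discard pending weak evidence.
            self.pending = None
            self.current_task = task_id
        elif kind in WEAK:
            # Weak implication: switch only if no further task operation follows within the timeout.
            self.pending = (task_id, now)
        return self.current_task

    def tick(self, now: float = None):
        # Called periodically: promote a pending weak switch once the timeout has elapsed.
        now = time.time() if now is None else now
        if self.pending and now - self.pending[1] >= self.timeout:
            self.current_task, self.pending = self.pending[0], None
        return self.current_task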
Fig. 4. Potential Task Switch Window
Fig. 5. ConTask components and interrelations
Relevance Feedback and Learning. The Task Elicitation Service also utilizes the observed interactions with ConTask to transmit feedback to the Context Service. Actions such as the assignment of a PID item to a task are interpreted as positive feedback; they increase the relevance value of the item in the corresponding context thread. Conversely, the user's rejection of a PID item or the removal of an attached item from a task is interpreted as negative feedback. The relevance feedback serves to increase the Context Service's effectiveness. The cycle of proactively providing contextual information to the user and transmitting user feedback to the Context Service leads to a better synchronization of context thread and task: relevant information items from the context thread are proposed in the PID Sidebar. Some of them may be added to the task by the user (via drag&drop from the sidebar to the task attachments). In that case, the resulting feedback leads to increased relevance values of these items in the context thread.
Fig. 5 shows how user observation data is utilized in ConTask to realize a knowledge improvement cycle. As an extension to the mentioned user observation providers, such as Mozilla Firefox, ConTask utilizes the Task Observation Service to become observable itself. Interested listeners, such as the Task Elicitation Service, receive the transmitted task events from the User Observation Hub. On the basis of the received task NOPs, the Task Elicitation Service transmits relevance feedback to the Context Service and switches both the Context and the History Service to the thread belonging to the current task. The History and Context Service gain their data from the UOH and complete the loop by providing information items for the PID within ConTask.
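A minimal sketch of how such positive and negative feedback could be propagated to a context thread is shown below, reusing the ContextService structure sketched earlier; the additive weighting is an assumption, as the paper does not specify a concrete update rule.

def apply_feedback(context_service, thread_id: str, item: str, positive: bool, step: float = 1.0) -> float:
    # positive: the item was attached to the task (e.g. drag&drop from the PID Sidebar)
    # negative: the item was rejected or removed from the task's attachments
    items = context_service.threads[thread_id]
    delta = step if positive else -step
    items[item] = max(0.0, items.get(item, 0.0) + delta)
    return items[item]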
The Task Elicitation Service only depends on the task operations ontology and the assumptions on task-oriented work that are utilized for the purpose of task prediction. The History and Context Service only depend on the NOP ontology formalizing the user's desktop activity. Thus, any task management application that is compatible with the task operations ontology can be integrated into the described scenario realizing the knowledge improvement cycle. The necessary integration steps involve extending the system's user interface with observation calls to the Task Observation Service. If the task management application includes a PID component and allows for agile task modelling, the Context and History Service can be utilized to proactively provide relevant, task-specific PIMO elements to the user.

Feasibility Study. The current proof-of-concept implementation of ConTask delivers early indications that the system actually has the potential to assist the user while keeping the additional work at a minimum. In the long run, robust context identification is essential for keeping the assistance scalable: the context identification algorithm is expected to estimate the correct context in most cases. We created a ground truth by tagging a large set of observed user operations, manually assigning user actions to "contexts". A ten-fold cross validation on this ground truth data shows that 78% of the operations are identified correctly, 9% of the guesses are incorrect, and 13% of the cases are not identified at all. Striving for a best-effort strategy, the relatively high number of unidentified cases (13%) is not considered harmful for the user's actual work. An amount of 9% incorrect context guesses is not very high, but this is a critical value, as false identifications may lead to false context switch proposals and, hence, to disruptions of the user. One cause for the false identifications is that the user observation software does not recognize some of the user operations. Additionally, users mentally separated some contexts which were technically identical. Additional sensors providing evidence for additional contextual elements will reduce these problems. Hence, we will continuously enhance the user observation and context elicitation technology.
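The reported figures correspond to a simple per-operation scoring of the context identification against the tagged ground truth, roughly as in the following sketch; the evaluation code itself is not part of the paper, and unidentified cases are represented here as None.

def score_context_identification(ground_truth, predictions):
    # ground_truth: context label per observed operation
    # predictions: predicted label per operation, or None if no context was identified
    total = len(ground_truth)
    correct = sum(1 for g, p in zip(ground_truth, predictions) if p is not None and p == g)
    unidentified = sum(1 for p in predictions if p is None)
    incorrect = total - correct - unidentified
    return {
        "correct": correct / total,
        "incorrect": incorrect / total,
        "unidentified": unidentified / total,
    }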
5 Summary and Outlook

This paper addressed challenges in today's knowledge work: continuously increasing quantities of information, knowledge-intensive tasks, and highly fragmented multitasking work scenarios. The context-sensitive task management system ConTask was designed to address these challenges and to ease the knowledge worker's job. ConTask is integrated into the Semantic Desktop and combines task management with context-specific assistance. The assistance is based on the combination of user observation, automatic elicitation, and proactive delivery of relevant information items from the user's PIMO. ConTask enables agile task modelling for defining tasks on the fly, striving for a task-centric structuring of the personal knowledge space. Observation of the user's interaction with ConTask is utilized for relevance feedback and automatic task prediction to increase the precision of the PID while keeping the required task management to a minimum. Thereby, the system realizes a knowledge improvement and learning cycle.

As ConTask is only capable of detecting switches to already existing tasks, a possible improvement would be an algorithm for detecting that the user is working on a new task which is not yet reflected in the system. Similarly, proposals could be made for refining a task into subtasks based on the observations (e.g., by detecting that the involved
resources of a task can be separated into two topic clusters within the task). This would significantly support agile task modelling.

We are currently expanding the observed area to the physical desktop of the user by using a digital camera and applying image recognition algorithms to recognize user actions with paper documents on the desk [25]. So far, recognizable actions are placing, removing, and moving a paper document on the desk as well as arranging a pile, all of which enrich the user context. The user context is enriched with actions such as the user placing a known document of the PIMO onto the desk or placing an unknown one (including its extracted text). Users can assign piles to concepts or tasks of the PIMO and later browse the state of their physical desktop even from a concept or task. The next step will be a tighter integration of ConTask and the desk observation with specific actions for task management. To obtain better PID suggestions, we currently investigate embedding the ontology-based information extraction system iDocument to extract PIMO entities from observed text snippets as well as to contextualize them with the PIMO as background knowledge, as is done in the Nepomuk Semantic Desktop [20] and the TaskNavigator [7]. In the ADiWa project we investigate the usage of the PIMO-based user context to contribute enriched task context to dynamic business processes and to capture process know-how from process participants. We especially investigate how user tasks enriched with concepts from the PIMO, but also from group repositories, can contribute to knowledge-intensive business processes.

Acknowledgements. This work has been partly funded by the German Federal Ministry of Education and Research (BMBF) in the ADiWa project (01IA08006).
References

1. González, V.M., Mark, G.: Managing currents of work: multi-tasking among multiple collaborations. In: ECSCW 2005: 9th European Conference on Computer Supported Cooperative Work, pp. 143–162. Springer, Heidelberg (2005)
2. Mark, G., Gudith, D., Klocke, U.: The cost of interrupted work: more speed and stress. In: CHI 2008: SIGCHI Conference on Human Factors in Computing Systems, pp. 107–110. ACM, New York (2008)
3. Abecker, A., Hinkelmann, K., Maus, H., Müller, H.J. (eds.): Geschäftsprozessorientiertes Wissensmanagement. xpert.press. Springer, Heidelberg (2002)
4. Riss, U., Rickayzen, A., Maus, H., van der Aalst, W.: Challenges for Business Process and Task Management. Journal of Universal Knowledge Management (2), 77–100 (2005)
5. van Elst, L., Aschoff, F.R., Bernardi, A., Maus, H., Schwarz, S.: Weakly-structured workflows for knowledge-intensive tasks: An experimental evaluation. In: IEEE WETICE Workshop on Knowledge Management for Distributed Agile Processes (KMDAP 2003). IEEE Computer Society Press, Los Alamitos (2003)
6. Holz, H., Rostanin, O., Dengel, A., Suzuki, T., Maeda, K., Kanasaki, K.: Task-based process know-how reuse and proactive information delivery in TaskNavigator. In: CIKM 2006: ACM Conference on Information and Knowledge Management (2006)
7. Rostanin, O., Maus, H., Zhang, Y., Suzuki, T., Maeda, K.: Lightweight conceptual modeling and concept-based tagging for proactive information delivery. Ricoh Technology Report 2009, no. 35, Ricoh Co. Ltd., Japan (December 2009)
8. González, V.M., Mark, G.: "Constant, constant, multi-tasking craziness": managing multiple working spheres. In: CHI 2004: SIGCHI Conference on Human Factors in Computing Systems, pp. 113–120. ACM, New York (2004)
9. Czerwinski, M., Horvitz, E., Wilhite, S.: A diary study of task switching and interruptions. In: CHI 2004: SIGCHI Conference on Human Factors in Computing Systems, pp. 175–182. ACM, New York (2004)
10. Iqbal, S.T., Horvitz, E.: Disruption and recovery of computing tasks: field study, analysis, and directions. In: CHI 2007: SIGCHI Conference on Human Factors in Computing Systems, pp. 677–686. ACM, New York (2007)
11. Mark, G., González, V.M., Harris, J.: No task left behind?: examining the nature of fragmented work. In: CHI 2005: SIGCHI Conference on Human Factors in Computing Systems, pp. 321–330. ACM Press, New York (2005)
12. Stumpf, S., Bao, X., Dragunov, A., Dietterich, T.G., Herlocker, J., Johnsrude, K., Li, L., Shen, J.: The TaskTracer system. In: 20th National Conference on Artificial Intelligence, AAAI 2005 (2005)
13. Stumpf, S., Bao, X., Dragunov, A., Dietterich, T.G., Herlocker, J., et al.: Predicting user tasks: I know what you're doing! In: 20th National Conference on Artificial Intelligence, AAAI 2005 (2005)
14. Lokaiczyk, R., Faatz, A., Beckhaus, A., Goertz, M.: Enhancing Just-in-Time E-Learning Through Machine Learning on Desktop Context Sensors. In: Kokinov, B., Richardson, D.C., Roth-Berghofer, T.R., Vieu, L. (eds.) CONTEXT 2007. LNCS (LNAI), vol. 4635, pp. 330–341. Springer, Heidelberg (2007)
15. Lepouras, G., Dix, A., Katifori, A.: OntoPIM: From personal information management to task information management. In: SIGIR 2006 Personal Information Management Workshop (2006)
16. Grebner, O., Ong, E., Riss, U.: KASIMIR - work process embedded task management leveraging the Semantic Desktop. In: Multikonferenz Wirtschaftsinformatik, pp. 1715–1726 (2008)
17. Stoitsev, T., Scheidl, S., Flentge, F., Mühlhäuser, M.: From personal task management to end-user driven business process modeling. In: Dumas, M., Reichert, M., Shan, M.-C. (eds.) BPM 2008. LNCS, vol. 5240, pp. 84–99. Springer, Heidelberg (2008)
18. Grimnes, G.A., Adrian, B., Schwarz, S., Maus, H., Schumacher, K., Sauermann, L.: Semantic Desktop for the end-user. i-com 8(3), 25–32 (2009)
19. Sauermann, L., Bernardi, A., Dengel, A.: Overview and Outlook on the Semantic Desktop. In: 1st Workshop on The Semantic Desktop at ISWC 2005. CEUR Proceedings, vol. 175, pp. 1–19 (November 2005)
20. Adrian, B., Klinkigt, M., Maus, H., Dengel, A.: Using iDocument for document categorization in the Nepomuk Social Semantic Desktop. In: Pellegrini, T. (ed.) i-Semantics: Proceedings of the International Conference on Semantic Systems 2009. JUCS (2009)
21. Schwarz, S.: A context model for personal knowledge management applications. In: Roth-Berghofer, T.R., Schulz, S., Leake, D.B. (eds.) MRC 2005. LNCS (LNAI), vol. 3946, pp. 18–33. Springer, Heidelberg (2006)
22. Schwarz, S.: Context-Awareness and Context-Sensitive Interfaces for Knowledge Work Support. PhD thesis, University of Kaiserslautern (2010)
23. Schwarz, S., Kiesel, M., van Elst, L.: Adapting the multi-desktop paradigm towards a multi-context interface. In: HCP-2008 Proceedings, Part II, MRC 2008 – 5th International Workshop on Modelling and Reasoning in Context, pp. 63–74. TELECOM Bretagne (June 2008)
24. Holz, H., Maus, H., Bernardi, A., Rostanin, O.: From Lightweight, Proactive Information Delivery to Business Process-Oriented Knowledge Management. Journal of Universal Knowledge Management (2), 101–127 (2005)
25. Dellmuth, S., Maus, H., Dengel, A.: Supporting knowledge work by observing paper-based activities on the physical desktop. In: Proceedings of the 3rd International Workshop on Camera Based Document Analysis and Recognition, CBDAR 2009 (2009)
Support for Ontology Evolution in the Trend Related Industry Sector

Jessica Huster

Informatik 5 (Information Systems), RWTH Aachen University, Ahornstr. 55, 52056 Aachen, Germany
[email protected]
Abstract. Ontologies provide a shared and common understanding between people and application systems. Ontology-based knowledge management systems have proven to be useful in different domains. Due to the lack of support for quick and flexible evolution of the underlying ontologies, however, they are difficult to apply in creative and dynamic domains where domain-specific and technical knowledge is quickly evolving. A representative of such a domain is the home textile industry. In this paper we analyse the requirements of ontology evolution for this creative domain, using results from research on knowledge creation. We present a visual, ontology-based knowledge management system to support marketing experts and designers. The system allows for the integration of explicit requirements from the experts (top-down) as well as semi-automatically derived requirements (bottom-up) to change and evolve the ontology used. Keywords: Ontology evolution, Knowledge management, Text mining, Concept-drifts.
1 Introduction

Ontologies have proven to be useful in different fields such as information integration, search, and retrieval. They are used to support people in their knowledge-intensive tasks and build the basis for semantic search engines. Regarding the Semantic Web, ontologies with their definitions of concepts and relationships provide the backbone for structured access and exchange of shared knowledge [5], [6]. However, ontologies, once created, do not last forever, because they do not only encapsulate stable knowledge. In reality, the domains which are modelled by ontologies evolve, and so must the ontologies to stay useful [5], [21], [8]. There is a need for dynamic evolution of the conceptualisation to ensure reliability and effective support through ontologies. Ontologies, originally a term from philosophy related to the theory of existence, were defined by Gruber as 'a specification of a conceptualization'. They are also seen as 'a shared and common understanding that reaches across people and application systems' [4]. Despite these and other existing definitions of ontologies, a broad range of interpretations exists. These interpretations include simple catalogues, sets of structured text files, thesauri, taxonomies, and also sets of general logical constraints that enable automated reasoning [23]. The ontology used in this paper is a lightweight, terminological ontology using mainly sub-class relationships to describe the domain of the home textile industry, i.e., curtains, furnishing fabrics, and carpets.
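As an illustration of such a lightweight, terminological ontology built mainly from sub-class relationships, the following sketch defines a few home textile concepts with the rdflib library. The concrete class names are invented for illustration and do not reproduce the ontology used in the project.

from rdflib import Graph, Namespace, RDF, RDFS, Literal

HT = Namespace("http://example.org/hometextile#")
g = Graph()

# A small sub-class hierarchy for the home textile domain (illustrative only).
for child, parent in [
    (HT.Carpet, HT.HomeTextileProduct),
    (HT.Curtain, HT.HomeTextileProduct),
    (HT.FurnishingFabric, HT.HomeTextileProduct),
    (HT.Wool, HT.Material),
    (HT.Violet, HT.Colour),
]:
    g.add((child, RDF.type, RDFS.Class))
    g.add((child, RDFS.subClassOf, parent))

g.add((HT.Carpet, RDFS.label, Literal("Carpet", lang="en")))

# All direct sub-classes of HomeTextileProduct:
for c in g.subjects(RDFS.subClassOf, HT.HomeTextileProduct):
    print(c)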
The issue of adapting ontologies due to a certain need for change is addressed by several different, but also closely related, research disciplines. This field is summarised under the term ontology change and comprises several subfields, each focusing on another aspect of ontology change. Flouris et al. [6] identify 10 subfields in their work, namely: ontology mapping, morphism, matching, articulation, translation, evolution, debugging, versioning, integration, and merging. Regarding the Semantic Web, the field of ontology evolution is of particular importance, as distributed, heterogeneous, and dynamic domains represented by different ontologies are expected to cooperate in this setting. In the literature the term ontology evolution is used with different meanings. In the context of this paper, we adopt the definition by Flouris et al. [6]. They define ontology evolution as the 'process of modifying an ontology in response to a certain change in the domain or its conceptualization'. Our focus is on how a system has to be designed to allow flexible ontology evolution, i.e., how changes can be identified in order to start and continuously perform the process of modifying an ontology. We distinguish this evolution process from ontology versioning, which deals with maintaining several different, but related, versions of the same ontology. Creative domains are one example of quickly evolving domains in which new concepts and relationships are very frequently established. In this paper we consider the domain of the home textile industry as a representative of such creative and dynamic domains. Stakeholders in this domain have to flexibly adjust to emerging and declining trends. Therefore, the issue of modifying a domain ontology is important in this context. Additionally, methodologies for the visualisation of the knowledge evolution are required, as users need to be able to understand how the context and importance of certain concepts in the domain have changed. Evaluating existing evolution approaches showed that these approaches are not satisfactory and flexible enough to support the requirements of this industry. In this paper we present a knowledge management system using flexible ontology evolution to support this industry sector in monitoring trend-relevant sources. The system helps to identify drifts in the market and to adapt the underlying ontology accordingly. It is based on insights we obtained from research on the knowledge creation cycle of Nonaka and Takeuchi [17]. The remainder of the paper is structured as follows: the next section introduces the characteristics of the home textile sector. The following section describes the knowledge creation cycle from whose insights we derive our system design (cf. Section 4). Section 5 gives an overview of existing approaches to the topic of evolving ontologies. Section 6 presents the system in more detail. Finally, Section 7 presents the evaluation conducted in cooperation with carpet producing companies.
2 The Domain of the Home Textile Sector

The home textile sector comprises different companies whose products are applied in interior furnishing. Among them are producers of carpets, curtains, pillow cases, etc. This sector is a heavily trend-related industry. Producers have to adapt flexibly to changing preferences and consumer behaviour. If they misinterpret or even overlook those changes, their production planning will be faulty. As a consequence, non-marketable
products will remain in stock, while on the other hand existing market potential cannot be leveraged. Companies, and especially market analysts, have to monitor particular fields for recent trends that may impact the company's sales and success. It is essential to detect emerging topics early and to observe how they evolve over time. The earlier the producers identify particular techniques as well as new colour and material combinations, surfaces, and patterns for a new design, the better they are able to refine their products. Thereby, the producers are able to attract the interest of customers and surprise them positively with new designs. Nowadays, these products are mainly sold through innovation and design [1]. Hence, companies have to both seek trends and set them, by varying colouring, colour families, materials, patterns, and their combinations to create novelty and charm [19]. This includes the emergence of new, particularly metaphoric terms as well as shifts in the meaning of existing terms. A trend in this context is a logical effect of past and ongoing developments. Trends can be predicted based on statistics, the expertise of the persons involved in a particular domain, as well as a certain 'gut feeling' [19]. The process of trend detection is rather creative and weakly structured. We learned from a carpet producer that they arrange regular meetings in their company for the exchange of experiences and to discuss trend ideas, for example, sales meetings of representatives with the management, conversations with architects, etc. Sometimes, so-called mood boards are used during those meetings to visualize trend ideas in the form of pictures or material collages. Finally, trends can be, and are, derived from daily conversations at the point of sale with potential buyers. To support the home textile domain through a knowledge-based information system, trend analysis is the most important feature that has to be taken into account.
3 The Knowledge Creation Process

A widespread model in knowledge management that needs to be considered in this context is the framework of Nonaka and Takeuchi [17]. Based on their observations of the different understanding and handling of knowledge that Western and Asian companies rely on, they developed a model combining these diverse views. Tacit knowledge, which refers to the Asian way of knowledge comprehension, includes mental models such as paradigms, schemata and views reflecting individual perception, which are difficult to communicate. It is also related to concrete know-how, skills and crafts. Explicit knowledge, on the other hand, refers rather to the Western way of knowledge comprehension. It is about past events or objects, is more systematic and can be articulated using a formal language. Explicit knowledge is also referred to as codified knowledge. This distinction between tacit and explicit knowledge was originally introduced by Polanyi [18], who observed that knowledge covers more than the things we can articulate. Nonaka and Takeuchi combine these two views and describe knowledge creation as an oscillating transformation process between both knowledge types. Figure 1 shows the four knowledge transformation steps. Socialisation refers to the transformation of tacit into tacit knowledge. Knowledge is passed on to other people through exchanging and sharing experiences. Informal group discussions may help in this case, as well as observing particular situations, imitating them and sampling methods or procedures.
Fig. 1. The four phases of knowledge transformation according to Nonaka and Takeuchi [17]: Socialisation, Articulation/Externalisation, Combination and Internalisation
Externalisation describes the essential step from tacit into explicit knowledge, triggered by dialogue or collective reflection. Tacit knowledge is articulated and formalised through concepts, metaphors, analogies, hypotheses or models. Explicit knowledge can be stored in different kinds of media, such as documents, and thus forms the basis for further processing. The step of systematic processing is referred to as Combination. Knowledge stored in different media is reconfigured by combining, sorting, exchanging and categorising, leading to new explicit knowledge. The result of this process step is called ‘systemic knowledge’. Internalisation finally transforms explicit into tacit knowledge, resulting in operational knowledge: people put knowledge into their specific context and thereby improve their know-how and skills.

This cyclic process shows that new knowledge depends on the know-how of domain experts as well as on (written) domain data and its interchange. A system supporting the process of knowledge creation to evolve an ontology has to incorporate both sides in a periodic process. In the next section we derive the requirements for such a system in a domain like the home textile industry in more detail.
4 Deducing Requirements for the System Design

The previous section considered the general model of knowledge creation according to Nonaka and Takeuchi, which we now put into the context of the home textile sector. Our aim is to understand how an information system has to be designed to support the experts in this domain in their trend detection process while ensuring flexible evolution of the knowledge relevant in this context.

Currently, trend analysis is a rather unstructured process which relies heavily on the expertise and trend reception of the persons involved. The experts manually leaf or even flick through different fashion magazines to get an overview of the materials, colours, etc., as well as their combinations mentioned there. From this qualitative analysis they only get an impression, for example, ‘that there is a lot of violet colour to come’ or ‘there is a
lot of buzz about embossed leather’. A systematic analysis of the magazines is missing at this point. However, a trend prognosis comprises both the know-how and experience of experts and a systematic analysis of related data. Relevant questions for the designers when developing trend ideas are: ‘What kinds of materials, colours, surfaces, structures, designers or architects can be found in the magazines?’ ‘How does the frequency of their appearance develop over time?’ It is important to recognise new terms describing colours or surface structures. The important knowledge in this sector is the knowledge about trends as well as their development over time. This knowledge is based on terms describing colour families, colours, materials, surfaces, patterns, architect and designer names, etc. It can be modelled in an ontology to represent the domain and to efficiently support a knowledge information system for this domain. As new knowledge emerges continuously in such creative and dynamic domains, the system also has to allow adaptation of the ontology in a flexible way.

Trend detection is related to creating and identifying new knowledge and can therefore be associated with the knowledge cycle of Nonaka and Takeuchi to understand the general design of a system supporting continuous evolution of the underlying ontology. The knowledge cycle starts with the process of socialisation, where first trend ideas in the heads of domain experts are exchanged and experiences are shared. Designers and marketing experts work closely together, exchanging ideas while designing new products. The next step is externalisation, where the first ideas are shaped and further developed. Mood boards are sometimes used during this process. In the discussion conclusions are drawn, leading to new and innovative designs. The knowledge has thus evolved and possibly results in the requirement to adapt the underlying ontology. Hence, an appropriate system should provide features to directly include the trend ideas arising from such discussions in the ontology.

Explicit knowledge, available in the form of product catalogues, literature on exhibitions and fashion magazines, is an important source for getting an overview of the market. Combination is accomplished by qualitative analysis of different magazines or product catalogues. This transformation step of knowledge creation can be enriched into a systematic analysis of these sources: applying ontology-based text mining or other machine learning approaches supports the extraction of relevant information. An appropriate visual presentation of the analysis results helps the user access the information much more easily. The user may interact with the system through the additional features provided and is assisted in putting the knowledge into his own context to come up with new design suggestions. This process is related to internalisation, which is currently also based on reading fashion magazines and product catalogues. After internalisation the knowledge cycle starts from the beginning again.

The overall design according to the knowledge creation cycle is shown in Fig. 2. Domain experts exchange ideas and can explicitly formulate the necessary changes to the ontology. The ontology is then applied to efficiently support the knowledge system in analysing trend-relevant data sources. On the one hand the analysis results are used for developing new designs.
On the other hand, these results, derived from the underlying data sources by applying the ontology, are used to evolve the ontology itself.
Fig. 2. Continuous evolution of the underlying ontology during trend identification (domain experts formulate top-down requirements; relevant and trend-representing terms extracted from the sources form bottom-up requirements; the results are represented as ‘context stars’ giving an overview of emerging trends and supporting the search)
The enhanced ontology may also be changed explicitly by the users. In any case, the ontology is applied again when the system is used, and thus the knowledge cycle starts again. We call new requirements on the ontology which are explicitly formulated by the experts top-down requirements. They arise on the user side and result in a direct change of the ontology. Requirements identified through the analysis of data sources, in contrast, are bottom-up requirements, as these requirements are implicit and first have to be identified, for example by applying machine learning approaches. These terms are similar to the concepts of Bloehdorn et al. [2], who refer to explicit requirements leading to top-down changes in the ontology, and implicit requirements leading to bottom-up changes.

In accordance with these four transformation steps of knowledge creation, and to address the requirements for flexible evolution support during trend analysis, we designed an ontology-based system reflecting this knowledge cycle. Combination is supported by accessing digitised fashion and trend magazines, enabling a systematic evaluation of the colour families and material groups mentioned in these sources. Furthermore, implicit, i.e., bottom-up requirements are identified, leading to appropriate changes in the ontology when indicated. Marketing experts and designers can analyse the frequencies of colours, colour families, or material types from magazines and trend books and assess the development of colour and material statistics over time. This step relates to the internalisation of knowledge.

The first cycle starts from an initial domain ontology. This well-known knowledge about concepts like colour, material, structure or design of surfaces is modelled in the ontology and forms the stable ‘anchors’ for identifying the terminological development, which is a priori unknown. The ontology is applied in the system, enhancing the search for trend detection and the monitoring of market-relevant sources. Marketing experts and designers recognise new terms that evolve from the active use of these terms and concept combinations, and monitor their development over time. Based on these terms, more specific trend ideas and design suggestions can be developed. Terms describing colours or surface structures that become dominant in magazines or articles and are currently not
incorporated in the domain ontology are concept candidates for the ontology. These concept candidates can be seen as bottom-up requirements on the ontology. Once integrated into the ontology, they can be used for a more specific search on particular trends, further ensuring the reliability, accuracy and effectiveness of the search [13]. Step by step, the domain ontology evolves into a trend ontology representing the essential knowledge of the company. Fig. 2 gives an overview of the evolution and application cycle. The next section reports on existing evolution methods, followed by a more detailed description of our system TeCLA (Term-Context-Language Analysis).
5 Related Work

The previous section argued for a flexible evolution approach combining the acquisition of top-down as well as bottom-up requirements to modify the ontology accordingly. In the following, different existing ontology creation and evolution approaches are presented. They can be distinguished into community-based approaches and (semi-)automatic approaches that use computational methods. Community-based approaches are related to realising top-down requirements, as the user community explicitly pushes the changes into the ontology. In the case of (semi-)automatic approaches, the changes are identified (bottom-up) from the underlying data using different algorithmic approaches such as machine learning. However, many approaches focus on creating ontologies rather than on ontology evolution. This shows that these knowledge models have long been treated as static and that the encapsulated domain knowledge is assumed not to change [8].

An early work in this area on constructing ontologies collaboratively is presented by Domingue [3]. His aim was to support a (possibly widespread) community in constructing an ontology, as the ontology represents a common view. He presents two complementary web-based systems, Tadzebao and WebOnto. Tadzebao supports asynchronous and synchronous discussions on ontologies. WebOnto, in turn, provides features for collaborative browsing, creating and editing of ontologies.

The systems Ontoverse [22] and OntoWiki [10] follow a wiki-based approach. This approach explicitly focuses on the collaborative construction of ontologies and supports the cooperative design process. Term meanings and problems can be discussed through the system by the user community before these terms are integrated into the system. An important difference between these two systems is that Ontoverse has a role-based concept and restricts usage to specific user groups. It allows parallel development of ontologies by different user groups in separated workspaces. Access to OntoWiki, on the other hand, is open and not limited. OntoWiki is realised using a standard wiki, whereas Ontoverse developed a new wiki-based platform supporting the different steps of an ontology development process: a planning and conceptualisation phase; a phase of editing the structural data; and finally a maintenance phase, where updates, corrections and ontology enrichments are performed [22]. Both systems can also be used to further evolve existing ontologies. However, only explicit requirements are considered in these systems.

Holsapple and Joshi [12] suggest applying a structured procedure to the collaborative working process. This procedure uses a Delphi-like method which incorporates
feedback loops within the group. Step by step, the terms of the ontology are enriched and better understood. In contrast to the wiki-based systems above, they do not propose a support system; their aim is rather to support a coordinated and structured way of cooperation based on this methodology.

Computational methods analyse texts and aim at reducing the workload on the user side by identifying new terms not mentioned before and currently not linked to the ontology. Detecting changes in vocabulary is generally related to fields like emerging trend detection or topic evolution [7], [14]. An interesting approach helping to analyse term meaning and usage is presented by Kruse [15], though it is related to term evolution in general and not particularly to ontologies. He presents an interview-based decision support system to identify drifts in the meaning and usage of terms through comparison of interview results. A 3D visualisation of the interview results, spanning a semantic space, is created for each person. Kruse performed an interview with 100 people in 2007 and repeated the same study in 2008. He asked the people to state their attitude regarding car brands, their definition of status and moral concepts, as well as mobility in their daily life. The results of these two studies revealed a change in the definition of status, in contrast to the common understanding of status. Status, once directly related to big, representative and expensive cars, was and is no longer connected to these things. It is rather linked with environmental protection, social justice and appropriate functionality; a rather small car is preferred by the people in their daily life. The interview in 2007 already showed Kruse these changes in preferences, long before this change was realised by the market. Finally, in 2008 the press stated that the automotive industry had problems selling big and luxury cars. This was the outcome of the preference changes which had happened in people's minds months before and which had been identified in the surveys by Kruse.

Other approaches do not only extract term candidates or focus on term meaning and usage, but also try to discover conceptual structures. Maedche and Staab [16] present a framework that combines different methods to acquire concepts as well as to identify relationships between concepts from natural language, covering relationships both within and outside the ontological taxonomy. They applied their system in the domain of telecommunications. For mining a concept taxonomy they used, in the first step, domain-specific dictionaries in addition to domain-specific texts. Further relations between concepts are discovered based on generalised association rules. These approaches aim at an (at least partial) automation of ontology construction and are subsumed under the field of ontology learning.

To sum up, community-based approaches offer a platform for exchange and collaborative editing of ontologies. The ontology is therefore often presented visually in some tree-like structure. There is no support for identifying new terms; new requirements are only considered from the user side. In contrast, (semi-)automatic approaches extract terms and/or relations for ontology construction and evolution and reduce the workload on the user. The user often has the final decision whether the extracted items should be included or not. The final ontology is visualised in different ways.
Sometimes the elements added in the last step are marked accordingly. Explicit change requests from the user side are not integrated.
As already argued (cf. Sections 3 and 4), both sides (user and semi-automatic) have to be considered, as well as evolution within the working context, in order to establish flexible and continuous evolution. Additionally, it is important to support the user in accessing the relevant information through appropriate visualisation, so that changes in the knowledge can be identified and monitored before they are incorporated in the ontology. The experience of Kruse with his interview method also shows that changes are (unconsciously) realised by people before they reach printed media and are then consciously realised by the market or the public. By explicitly integrating changes from the user side, ideas can be tested and developed further. Hence, the experts are an important knowledge source. The graphical visualisation of analysis results supports the expert user in discovering changes in domain-specific knowledge in printed media and in identifying missing concepts in the ontology bottom-up. The realisation of our proposed system is presented in the next section.
6 System Overview

TeCLA is a semi-automatic, visual system supporting the experts in monitoring trend-relevant concepts in relevant fashion magazines. It allows intuitive access to upcoming colour and material combinations. After the identification of trends, the community of experts (marketing specialists, designers and trend scouts) is able to evaluate identified drifts in the concepts, match them with known concepts in the domain ontology, and possibly adapt the ontology accordingly. The experts are thus able to form their trend-specific ontology step by step.

It is especially important to include the domain experts in the process of ontology evolution, for several reasons [5], [10]. Firstly, the ontology aims at a consensual domain understanding and knowledge. Hence, the evolution process requires cooperation and exchange of information between different people, especially domain experts; this is also one basic step of knowledge creation. The strength of the community cannot be replaced by a single ontology expert. Secondly, the process stays under the control of the community, which enhances the acceptance of the resulting ontology, especially in such a creative domain. The interaction with the system supports knowledge creation and matches the rather creative and weakly structured way of working in this domain. In the following we report on the realisation and usage of the system.

6.1 Visualisation and Change Detection

The first step in using TeCLA is the configuration of the analysis matrix for a particular trend detection task (cf. Fig. 3, showing an example of the final matrix including results). The analysis matrix defines the group of magazines and articles as well as the period and aggregation level of time for which trend-relevant concepts shall be analysed in the magazines; the user can choose, for example, between monthly, quarterly, or yearly aggregation. The trend-relevant concepts, e.g. colours, materials or architects, are selected from the ontology by the user. Based on these concepts TeCLA computes the concept-term co-occurrences and represents them as term context stars (shown in the matrix cells in Fig. 3 for the concepts ‘green’ and ‘brick’), one for each selected concept in each cell of the analysis matrix.
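A minimal sketch of such a configuration object is given below; the class and field names are illustrative only (the paper does not prescribe an implementation), the magazine AIT and the concepts ‘green’ and ‘brick’ follow the example discussed with Figs. 3 and 4, and the period values are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AnalysisMatrix:
    """Configuration of one trend-detection run in the spirit of TeCLA."""
    magazines: List[str]       # group of magazines/articles to be analysed
    period: Tuple[str, str]    # start and end of the observation period
    aggregation: str           # "monthly", "quarterly" or "yearly"
    concepts: List[str]        # trend-relevant concepts selected from the ontology

matrix = AnalysisMatrix(
    magazines=["AIT"],
    period=("2009-07", "2009-10"),     # hypothetical observation period
    aggregation="monthly",
    concepts=["green", "brick"],
)
```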
Fig. 3. Trend detection matrix presenting term context stars
Fig. 4. Inked context terms according to their ontological category
The terms appearing in the context of the selected concepts are determined using methods of natural language processing. The magazines are first linguistically pre-processed: the tokenised texts are automatically annotated with part-of-speech tags that indicate the grammatical category of each word. By means of dictionaries, a matching concept from the ontology is attached to each known term (word sense tagging); see [20] for more details on the web-service-based architecture used in this system. As soon as the user has provided the list of concepts to be analysed in the magazines, target text fragments are identified with the help of the word sense tags, e.g., all sentences containing the concept ‘green’. We then use partial grammars that describe the possible positions of interesting terms in the context of a concept of interest, e.g., all adjectives that are related to the concept. Terms that match these grammar rules are extracted from the texts.
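To make this extraction step concrete, the following minimal Python sketch is given; it assumes the texts arrive already tokenised, POS-tagged and word-sense-tagged (as delivered by the web-service-based architecture of [20]) and reduces the partial grammar to a single illustrative rule, namely adjectives and dictionary-known terms occurring in the same sentence as the target concept. The names and the sample sentence are illustrative, not taken from TeCLA.

```python
from collections import Counter

# A pre-processed sentence is a list of (token, pos_tag, word_sense) triples,
# where word_sense is the ontology concept attached by the dictionary lookup, or None.
def context_terms(sentences, concept, adjective_tags=("JJ", "JJR", "JJS")):
    """Count terms co-occurring with `concept`; stands in for TeCLA's partial grammars."""
    counts = Counter()
    for sent in sentences:
        if not any(ws == concept for _, _, ws in sent):
            continue                            # sentence does not mention the concept
        for token, pos, ws in sent:
            if ws == concept:
                continue                        # skip occurrences of the concept itself
            if pos in adjective_tags or ws is not None:
                counts[token.lower()] += 1      # adjective or dictionary-known term
    return counts

sample = [[("the", "DT", None), ("light", "JJ", None), ("green", "JJ", "green"),
           ("curtain", "NN", "curtain"), ("matches", "VBZ", None),
           ("blue", "JJ", "blue"), ("glass", "NN", "glass")]]
print(context_terms(sample, "green"))   # Counter({'light': 1, 'curtain': 1, 'blue': 1, 'glass': 1})
```

Aggregating such counts per magazine group and time interval yields the data behind one cell of the analysis matrix.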
The final task is to visualise the extracted results. The graphical representation distinguishes between concepts and terms. Concepts are linked to the ontology and appear in the centre of the term context star. The terms that appear in the context of this concept surround it in the form of bubbles. The relative size of each context term bubble corresponds to the number of times this term appears with the considered concept. Suppose the user has analysed the context of the concepts ‘green’ and ‘brick’ in the magazine AIT, a professional journal on architecture, interior and technical solutions. In this magazine green and blue appear together, as these colours are often combined. In September this colour combination evidently grew in interest and became more popular compared to the months before. It disappeared in the subsequent month; instead, green drew attention in the shade of ‘light green’.

The concept of our term context stars is related to Heringer's [11] ideas for visualising lexical fields and to tag clouds, known from the Web 2.0 community [9]. This visualisation metaphor helps the users to easily recognise dominant concepts as well as the related term co-occurrences in the texts. Moreover, different term context stars of the same concept in different cells of the matrix can easily be compared regarding a change in occurrence frequency or a drift of context terms (cf. Fig. 3). The visualisation functionality was realised using a standard graphical programming library.

Given a visualisation of term context stars in TeCLA, the domain expert can click on concepts or context terms to show all the articles in the considered magazines that contain the respective concept and context terms. The concept, as well as the clicked term, is highlighted in the articles. This helps the users to get a quick insight into the more detailed context of the concept. This feature is therefore especially important for the daily work: ‘What exactly is written in the article?’, ‘Does it describe the interior of a restaurant or bar?’, etc.

6.2 Support for Ontology Evolution

To support the evolution of the knowledge model, a system feature allows the user to easily compare terms from the context star with concepts in the ontology. All context terms which have a connection to the domain ontology are then highlighted (cf. Fig. 4). The user gets a quick overview of which terms are already included in the ontology and which are not. The former can be used as search terms for further analysis, while the latter may be worth adding to the ontology. Additionally, the associated ontological category of each context term can be accessed by the experts. Fig. 4 shows that the terms ‘glass’, ‘steel’ and ‘concrete’ of the term context star ‘brick’ in the month of September are related to the ontological category Materials. The terms ‘blue’ and ‘orange’ are associated with the category Colors. Finally, ‘beams’, ‘house’, ‘wall’, ‘lounge’ and ‘building’ are related to Objects.

The marketing staff and the designers in the home textile industry especially asked for a feature which enables them to assess certain aspects more easily: ‘Is a colour concept found rather in combination with other colours or with materials?’, ‘Is a specific form or surface coloured in a certain shade and hue or realised with particular materials?’ All this information is important for developing ideas in their work as designers or marketing specialists. In July, for example, old bricks are used in some products (cf. Fig. 4).
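A possible data structure behind this highlighting step is sketched below: each bubble of a term context star carries its frequency and, if the dictionary links the term to the ontology, its ontological category; terms without a category are the concept candidates handed over to the evolution step described next. The category excerpt follows the Fig. 4 example, while the counts and the term ‘shilf’ are purely illustrative.

```python
ONTOLOGY_CATEGORY = {   # excerpt of the dictionary linking context terms to ontological categories
    "glass": "Materials", "steel": "Materials", "concrete": "Materials",
    "blue": "Colors", "orange": "Colors",
    "beams": "Objects", "house": "Objects", "wall": "Objects", "lounge": "Objects", "building": "Objects",
}

def build_context_star(concept, term_counts):
    """Return the bubbles of one term context star as (term, frequency, category) triples."""
    bubbles = [(term, freq, ONTOLOGY_CATEGORY.get(term))           # None marks a concept candidate
               for term, freq in sorted(term_counts.items(), key=lambda kv: -kv[1])]
    return {"concept": concept, "bubbles": bubbles}

star = build_context_star("brick", {"glass": 5, "steel": 3, "shilf": 2})
candidates = [term for term, _, category in star["bubbles"] if category is None]
print(candidates)   # ['shilf'] -> a new term worth proposing for the ontology
```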
Fig. 5. Changing a term in the ontology due to a shift in its meaning
In September old bricks are not only used to realise special designs but are also combined with other materials to achieve particular effects (cf. Fig. 4), for example with glass and steel. This can easily be seen by the user because the ‘material’ concepts are visually marked.

Having identified an emerging term which should be linked to the ontology, the user switches from the ‘analysis view’ to the ‘edit ontology’ view of the system (cf. Fig. 5). Here the user decides whether he wants to add a new concept or to change a term due to a drift in its meaning. For a new concept the user specifies which concept of the ontology should be the super concept and types in a name for the new concept (the URI is added by the system in the background) as well as its definition. In the case of a change in the meaning of a term, the user selects the option ‘change term’. Suppose the marketing staff and designers identified a shift in the colour brown: the tone formerly called brown is now called shilf. The expert therefore selects the concept brown and types in the new term (cf. Fig. 5). After committing the change by clicking the appropriate button, the change is directly visible in the ontology. In the case of adding a new concept, clicking the commit button means that a change request is sent to an ontology expert. The ontology expert checks the required update and possibly commits the change. The new version of the ontology is then made available to the system.
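The paper does not name the ontology API behind the ‘edit ontology’ view; the sketch below merely illustrates how the two operations, adding a concept and changing a term, could be realised with the rdflib library. The namespace, file names and concept URIs are assumptions of this example.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import OWL

HT = Namespace("http://example.org/hometextile#")       # hypothetical ontology namespace
g = Graph()
g.parse("trend_ontology.owl", format="xml")              # assumed local copy of the domain ontology

def add_concept(graph, name, definition, super_concept):
    """'Add new concept': the URI is generated in the background, as in TeCLA."""
    concept = HT[name.replace(" ", "_")]
    graph.add((concept, RDF.type, OWL.Class))
    graph.add((concept, RDFS.subClassOf, super_concept))
    graph.add((concept, RDFS.label, Literal(name)))
    graph.add((concept, RDFS.comment, Literal(definition)))
    return concept    # in TeCLA this would trigger a change request to the ontology expert

def change_term(graph, concept, new_label):
    """'Change term': relabel an existing concept after a shift in its meaning."""
    graph.set((concept, RDFS.label, Literal(new_label)))

change_term(g, HT.brown, "shilf")                        # the tone formerly called brown is now called shilf
g.serialize(destination="trend_ontology_v2.owl", format="xml")
```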
7 Evaluation

To evaluate our system we used observational methods such as thinking-aloud protocols and structured interviews to assess functionality, usefulness and usability. The users were marketing experts and designers from a carpet producer. They were given a system presentation of a few minutes so that they would be able to use the features according to the tasks given during the evaluation.
In the testing scenario the users were asked to perform a complete walk-through of the system. They performed an analysis of different trend-relevant concepts to identify emerging and descending trends and were asked to use the different features provided to support their work. They were asked to write down their ‘trend findings’. Afterwards they were given a final questionnaire to evaluate, in particular, the usability and usefulness of the different features and of the system as a whole.

The tests have shown that term context stars are a good visualisation for representing term co-occurrences. The provided interaction possibilities and features as well as the usability were rated as good. Trend lines (increasing and decreasing) can be identified quite easily based on the colour and size of a term bubble in a context star. Common terms in different stars are easily recognised and assessed using the comparison functionality of TeCLA. The underlying ontology can be modified both on the basis of the users' own ideas and on the basis of analysis results derived from the fashion magazines, allowing flexible adaptation. The users appreciated that they remain in control during evolution and that they can directly test ideas against the magazines. Knowledge from the magazines is made explicit through integration into the ontology, complementing the knowledge of the experts. A general remark from the users was to change the colour coding of term highlighting in the different features: instead of using an additional colour to mark the relevant terms, the irrelevant terms should be shaded in grey, which would make important terms easier to identify. The overall results given by the users were positive; they would all use the tool in similar contexts again.
8 Conclusions and Outlook

In this paper we presented an explorative approach for the constant evolution of a domain ontology in a creative, dynamic domain such as the home textile industry. Based on the insights into the knowledge creation cycle (Section 3), we developed a system design considering both tacit and explicit knowledge to perform the transformation steps enabling this evolution. The text mining analysis is coupled with a high degree of user interaction and allows top-down modification of the ontology. In addition to the systems for ontology evolution compared in Section 5, our system provides an intuitive visualisation that supports the observation of knowledge changes over a period of time. Popular concepts and co-occurring terms in the texts can be recognised and analysed over time. The identified drifts give the experts an overview of the market changes and help to evolve the ontology. The system allows for explicit and implicit modification. Thereby, the ontology can evolve with the domain, thus better supporting the designers in searching the magazines efficiently for further trends and trend changes.

The community in this case is formed by marketing experts, designers, trend scouts and product managers. This is a group of domain experts with sound knowledge who work closely together. They evaluate recognised upcoming and descending terms and decide what should be modified in the ontology accordingly. We do not provide explicit collaboration support in the system, as this group of experts is small and usually works closely together; socialisation is already well established. The ontology acquires and represents the knowledge explicitly.
During the whole development cycle, informal feedback was collected several times from our project partners from the domain of carpet production. It turned out early that an explorative, community-based approach with a high degree of user involvement is necessary to establish trust in the analysis results. The structured evaluation confirmed this approach. The user interaction helps to derive ideas on possible trend lines, taking into account the experience and background knowledge of the experts.

As mentioned in the beginning, the aspect of ontology versioning is closely related to the topic of evolution. In the trend sector it is also helpful to have access to older versions of the ontology and to be able to follow the changes made to it. Hence, our work will next focus on a history feature. This feature will list all the changes which were applied to obtain the current ontology. All changes can then be retraced by the experts and provide additional information on the evolution of trend concepts. The system design itself is flexible enough to exchange the underlying text mining approach or the visualisation. Thus it is possible to adapt the basic evolution approach described in Section 4 to other creative and dynamic domains.

Acknowledgements. The research presented in this paper has been funded by the AsIsKnown project (http://www.asisknown.org) within the Information Society Technologies (IST) Priority of the 6th Framework Programme (FP6) of the EU, and by the Research School within the Bonn-Aachen International Centre for Information Technology (B-IT).
References
1. Becella, M.: Produkte verkaufen sich nur über Innovation und Design. BTH Heimtex (2007)
2. Bloehdorn, S., Haase, P., Sure, Y., Voelker, J.: Ontology Evolution. In: Davies, J., Studer, R., Warren, P. (eds.) Semantic Web Technologies - Trends and Research in Ontology-based Systems, pp. 51–70. John Wiley, Chichester (2006)
3. Domingue, J.: Tadzebao and WebOnto: Discussing, Browsing, Editing Ontologies on the Web. In: 11th Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada (1998)
4. Fensel, D.: OIL: An Ontology Infrastructure for the Semantic Web. IEEE Intelligent Systems 16(2) (2001)
5. Fensel, D.: Ontologies: Dynamic Networks of Formally Represented Meaning. In: 1st Semantic Web Working Symposium, California, USA (2001)
6. Flouris, G., Manakanatas, D., Kondylakis, H., Plexousakis, D., Antoniou, G.: Ontology change: classification and survey. The Knowledge Engineering Review 23(2) (2008)
7. Gohr, A., Hinneburg, A., Schult, R., Spiliopoulou, M.: Topic Evolution in a Stream of Documents. In: Proceedings of the SIAM International Conference on Data Mining, Nevada, USA (2009)
8. Haase, P., Sure, Y.: State-of-the-Art on Ontology Evolution. Deliverable 3.1.1.b, Institute AIFB, University of Karlsruhe (2004)
9. Hassan-Montero, Y., Herrero-Solana, V.: Improving Tag-Clouds as Visual Information Retrieval Interfaces. In: International Conference on Multidisciplinary Information Sciences and Technologies, InSciT 2006, Mérida, Spain (2006)
10. Hepp, M., Bachlechner, D., Siorpaes, K.: OntoWiki: community-driven ontology engineering and ontology usage based on Wikis. In: WikiSym 2006: Proceedings of the 2006 International Symposium on Wikis, Odense, Denmark (2006)
11. Heringer, H.J.: Das Höchste der Gefühle - Empirische Studien zur distributiven Semantik. Verlag Stauffenburg, Tübingen (1998)
12. Holsapple, C.W., Joshi, K.D.: A collaborative approach to ontology design. Communications of the ACM 45(2), 42–47 (2002)
13. Klein, M., Fensel, D.: Ontology versioning for the Semantic Web. In: Proceedings of the 1st International Semantic Web Working Symposium, California, USA (2001)
14. Kontostathis, A., Holzman, L.E., Pottenger, W.M.: Use of Term Clusters for Emerging Trend Detection. Technical Report (2004)
15. Kruse, P.: Ein Kultobjekt wird abgewrackt. GDI Impuls, Wissensmagazin für Wirtschaft, Gesellschaft, Handel, Nr. 1 (2009)
16. Maedche, A., Staab, S.: Mining ontologies from text. In: Dieng, R., Corby, O. (eds.) EKAW 2000. LNCS (LNAI), vol. 1937, pp. 189–202. Springer, Heidelberg (2000)
17. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, Oxford (1995)
18. Polanyi, M.: The Tacit Dimension. Routledge & Kegan Paul, London (1966)
19. Sauerwein, T., Schilgen, H.J.: Geschäftsführer Verband Deutsche Heimtextilindustrie, Interview über Branchensituation, wahre Renner der Textilbranche und Nachhaltigkeit (2009)
20. Simov, K., Simov, A., Ganev, H., Ivanova, K., Grigorov, I.: The CLaRK System: XML-based Corpora Development System for Rapid Prototyping. In: Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), Lisbon, Portugal, pp. 235–238 (2004)
21. Stojanovic, L., Maedche, A., Motik, B., Stojanovic, N.: User-Driven Ontology Evolution Management. In: Gómez-Pérez, A., Benjamins, V.R. (eds.) EKAW 2002. LNCS (LNAI), vol. 2473, pp. 285–300. Springer, Heidelberg (2002)
22. Weller, K.: Kooperativer Ontologieaufbau. In: 28. Online-Tagung der DGI, Frankfurt a.M., Germany (2006)
23. Welty, C., Lehmann, F., Gruninger, M., Uschold, M.: Ontology: Expert Systems All Over Again? Invited panel at AAAI 1999: National Conference on Artificial Intelligence, Austin, Texas (1999)
Extracting Trustworthiness Tendencies Using the Frequency Increase Metric Joana Urbano, Ana Paula Rocha, and Eugénio Oliveira LIACC – Laboratory for Artificial Intelligence and Computer Science Faculdade de Engenharia da Universidade do Porto – DEI Rua Dr. Roberto Frias, 4200-465, Porto, Portugal {joana.urbano,arocha,eco}@fe.up.pt
Abstract. Computational trust systems are currently considered enabling tools for the automation and the general acceptance of global electronic business-to-business processes, such as the sourcing and the selection of business partners outside the sphere of relationships of the selector. However, most of the existing trust models use simple statistical techniques to aggregate trust evidences into trustworthiness scores, and do not take context into consideration. In this paper we propose a situation-aware trust model composed of two components: Sinalpha, an aggregator engine that embeds properties of the dynamics of trust; and CF, a technique that extracts failure tendencies of agents from the history of their past events, complementing the value derived from Sinalpha with contextual information. We experimentally compared our trust model with and without the CF technique. The results obtained allow us to conclude that the consideration of context is of vital importance in order to perform more accurate selection decisions.

Keywords: Situation-aware Trust; Dynamics of Trust; Multi-agent Systems.
Trying to cope with this question, the trust community is moving towards a second generation of models that explore the situation of the trust assessment in order to improve its credibility. However, few proposals have been made in this specific area (see [12], [13], [14], [15]). In this paper, we propose a situation-aware technique that allows the extraction of tendencies of agents' behavior.¹ This technique allows, for instance, detecting whether a given supplier has a tendency to fail or to succeed in contracts that are similar to the current business need (e.g. in terms of good, quantity and delivery time conditions). We performed experiments showing that this technique enhances traditional CTR systems by bringing context into the loop; i.e., it does not only consider whether a given supplier is generally trustworthy, but whether it is trustworthy in the specific contractual situation. Also, our approach differs from the situation-aware proposals mentioned above in that it does not imply the use of hierarchy-based structures (e.g. ontologies) and in that it is able to detect fine-grained, subtle dissimilarities between related situations. Moreover, it was designed to allow the effective estimation of trustworthiness values even when the trust evidences about the agent in evaluation are scarce.

The situation-aware technique we propose can be used in conjunction with any existing trust aggregation engine. In our experiments, we use Sinalpha, a sigmoid-like aggregator that we have developed ([17]) and that is distinguished from traditional trust proposals by embedding properties of the dynamics of trust. In this paper, we review the fundamental characteristics of Sinalpha and present our conclusions about the relevance of including such properties in trust aggregation engines.

Although we contextualized the use of our trust system in the sourcing/procurement part of the supply chain, agent-based trust and reputation systems are of general interest in many other domains (for instance, general business, psychology, social simulation, system resources' management, etc.), and apply to all social and business areas of society where trust is deemed of vital importance.

The remainder of this paper is structured as follows: Section 2 describes our study of the relevance of considering properties of the dynamics of trust in the aggregation engine of CTR systems. Section 3 describes the technique we developed in order to complement traditional CTR engines with situation-aware functionality. Section 4 presents the experiments we ran in order to evaluate the proposed situation-aware technique, and Section 5 concludes the paper.
2 Using Trust Dynamics in the Aggregation Engine

2.1 The Sinalpha Model

In [17], we described Sinalpha, a trust aggregation engine based on an S-like curve (Figure 1) that allows for an expressive representation of the following properties of the dynamics of trust:
¹ This paper is an extended version of the paper presented at ICEIS 2010, the 12th International Conference on Enterprise Information Systems [16].
• Asymmetry: stipulates that trust is hard to gain and easy to lose;
• Maturity: measures the maturity phase of the partner considering its trustworthiness, where the slope of growth can be different in different stages of the partner's trustworthiness;
• Distinguishability: distinguishes between possible different patterns of past behavior.

The choice of the S-like shape was based on the concept of the hysteresis of trust and betrayal, from Straker [18]. In this work, the author proposes a path in the form of a hysteresis curve where trust and betrayal happen in the balance between the trustworthiness of a self and the trust placed on the self. The S curve simplifies the hysteresis approach by using just one curve for both trust and betrayal and by considering three different growth/decay stages: creating trust (first third of the curve), trust is given (second third of the curve), and taking advantage (last third of the curve). It is also worth mentioning that the final Sinalpha mathematical formula, depicted in Figure 1 and explained in detail in [17], was based on Lapshin's work [19].

Therefore, the aggregation of the trust evidences of the agent in evaluation using the Sinalpha model presents the following characteristics: the trustworthiness value of the agent grows slowly in the presence of evidences with positive outcomes when the agent is not yet trustable, accelerates when the agent acquires some degree of trustworthiness, and slows down again when the agent is considered trustable (i.e., in the top right third of the curve). This allows for the definition of three different trust maturity phases, and constitutes the maturity property. The decrease upon evidences with negative outcomes follows the same logic, although the mathematical formula underlying the curve (cf. Figure 1) includes a parameter λ, which takes a different value depending on whether the outcome of the evidence is positive (λ = +1.0) or negative (λ = −1.5), permitting slower growth and faster decay. This constitutes the asymmetry property. Finally, the mathematics of Sinalpha's formula implies that aggregating the same number of positive and negative outcomes presented in different orders results in different trustworthiness values. This constitutes the distinguishability property.

Considering this last property, we have a somewhat different view than the one presented in [7], where the authors state that the aggregation of evaluations shall not depend on the order in which these evaluations are aggregated. The results we obtained in our experiments and discussed in the next section seem to support our conviction.
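The Sinalpha formula itself appears only in Figure 1, which is not reproduced here. The following minimal Python sketch therefore assumes a sin-based formulation (trustworthiness = δ·sin(α) + δ, with α advanced by λ·ω per evidence and clamped to one rising segment of the sine curve); the concrete values of δ, ω and the bounds of α are assumptions of this sketch, while the λ values of +1.0 and −1.5 are taken from the text above.

```python
import math

def sinalpha_aggregate(outcomes,
                       delta=0.5,            # assumed amplitude, keeps the score in [0, 1]
                       omega=math.pi / 12,   # assumed step width per evidence
                       lambda_pos=1.0,       # slower growth on fulfilled contracts
                       lambda_neg=-1.5):     # faster decay on violated contracts (asymmetry)
    """Aggregate a chronologically ordered list of contract outcomes
    (True = fulfilled, False = violated) into a trustworthiness score."""
    alpha_min = 3 * math.pi / 2               # assumed start of the rising sine segment
    alpha_max = 5 * math.pi / 2               # assumed end of the segment
    alpha = alpha_min                         # a newcomer starts with zero trustworthiness
    for fulfilled in outcomes:
        alpha += (lambda_pos if fulfilled else lambda_neg) * omega
        alpha = min(max(alpha, alpha_min), alpha_max)      # keep alpha on the curve
    return delta * math.sin(alpha) + delta                 # value in [0, 1]

# Ordering matters (distinguishability): same outcomes, different order, different score.
print(sinalpha_aggregate([True] * 6 + [False] * 3))   # ~0.04
print(sinalpha_aggregate([False] * 3 + [True] * 6))   # ~0.5
```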
One can argue that we could use other S-like curves instead of a sin-based one, such as the sigmoid curve. However, we intuitively feel that a sigmoid curve permits a probably too soft penalization of partners that proved to be trustable but failed the last n contracts. This can happen accidentally (e.g. due to an unexpected shortage of the good or to distribution problems), but it is also described in the literature as a typical behavior of deceptive provider agents, who tend to build up a trustworthy image using simple contracts and then violate bigger contracts, exploiting the acquired trustworthiness ([20]).

2.2 Relevance of Sinalpha's Trust Properties

In this section, we review the main results obtained when we experimentally compared Sinalpha to a weighted-mean-by-recency approach ([6]) that represents traditional statistical approaches. A description of the experiments performed is available in [17]. A more exhaustive comparison that we have conducted between trust models based on weighted means and heuristics-based models embedding properties of the dynamics of trust is presented in [21].

The first result we obtained shows that Sinalpha is more effective than the weighted means approach in selecting good partners (i.e. agents that have a higher probability of fulfilling a contract) and in avoiding bad partners (agents that have a higher probability of violating a contract). This is due to the maturity property of Sinalpha, where the entire historical path of the agent in evaluation is taken into account in the process of trust construction, meaning that agents have to accumulate several good experiences in the past until they are able to reach an average to high trustworthiness score. In opposition, the weighted means approach allows the selection of agents with fewer past evidences, which can promote uninformed decisions.

Another difference between the two approaches is related to the asymmetry property of Sinalpha, which allows it to identify and act more effectively upon partners that show the much undesired intermittent behavior, by giving more weight to the penalty of violations than to the reward of fulfilments. A related situation occurs when agents show bursty intermittent behavior, i.e. when they present long sequences of positive outcomes followed by long sequences of negative outcomes, and so on. The results have shown that, in this situation, the weighted means approach can forgive past long sequences of negative outcomes if these sequences are followed by a long period of inactivity and a single recent positive outcome. One could argue that this forgiveness issue is solved by increasing the size of the window used, i.e. the number of past evidences considered. However, in our experiments we found it hard to select the optimal window size, as it deeply depends on the frequency of the contracts (historical evidences) made in the past ([21]). Finally, the forgiveness question does not apply to Sinalpha, due to the action of the maturity property, although we realised that it has a somewhat bigger tendency than the weighted means approach to enter a burst of deceptive behavior and also that it can be slower in penalising good partners immediately after they invert their behavior.

In these experiments, we could not evaluate, however, the potential full benefit of using the Sinalpha round shape at the extremities against simpler curves that show similar trust dynamics properties (e.g. curves with a linear shape). Indeed, in [21] we
compared the performance of Sinalpha against a simpler curve that uses the same λ and ω parameters (cf. Figure 1) but lacks the soft, rounded shape at the ‘creating trust’ and ‘taking advantage’ phases. The results of these experiments show a similar performance of both curves in the tested scenarios. Therefore, we conclude that we need more complex models of the target population to further study the impact of the sigmoid-like shape of Sinalpha on its capability of distinguishing between partners. We leave this topic for future work.
3 Our Situation-Aware Trust Technique

3.1 Motivation for Situational Trust

Computational trust estimations help an agent to predict how well a given target agent will execute a task and thus to compare several candidate partners. However, there are some questions that a real-world manager would pose before making a decision that cannot be answered by simply aggregating available trust evidences into trustworthiness values. These questions somehow involve a certain level of intuition. We first analyse three scenarios that might occur in real-world business and that help to understand this notion.

In the first scenario, an agent may decide to exclude from selection a candidate partner with which it has never done business before but which it knows rarely fails a contract, just because the agent intuitively fears that this partnership would not be successful. For example, a high-tech company may fear to select a partner from a country of origin without a high-technology tradition, even though this partner has delivered high-quality work in the desired task in the recent past. We call this situation the intuitive fear. For this scenario, it would be desirable that the selector agent could reason taking into account additional contextual information about the characteristics of the entity represented by the candidate agent. For instance, the presence of key figures such as the annual turnover or the number of employees of the entity would allow the selector agent to know the entity better. Also, the establishment of argumentation between both parties is a real-world procedure that could be automated into the computational decision process.

In the second scenario, the agent may decide to exclude from selection a candidate partner that is currently entering the business, for which there is no trust or reputation information yet. This scenario deals with the problem of newcomers, for which there is no information about prior performance, and we name it absence of knowledge. The works in [6] and [15] suggest that in these cases the use of recommendations and institutional roles could be useful to start considering newcomers in the selection process. Although we do not address this situation in this paper, we consider using conceptual clustering of entities' characteristics in the future in order to generate profiles of business entities. In a second step, the profile of the newcomer would be compared with the profiles of business entities for which there is some trust information, and an estimation of the newcomer's trustworthiness would be inferred. This approach implies that a minimum of the business entities' characteristics is available, which constitutes a reasonable assumption for virtual organizations built upon electronic institutions and even for decentralized approaches where agents are able to present certificate-like information.
Finally, in the third scenario, the selector entity knows that a candidate partner is well reputed in fulfilling agreements in a given role and context, or even that it is generally trustworthy, but needs to know how well it would adapt to a (even slightly) different business context. For example, the evaluator knows that a given Asian supplier is a good seller of cotton zippers but is afraid that it could fail in providing high quantities of this material in a short period of time, because it does not usually transact with the supplier under these specific conditions. We name this situation the contextual ignorance.

In the next sections, we present a situation-aware trust technique that is able to extract tendencies of the behavior of agents in a contextualized way. This technique supports the evaluator agent in addressing the contextual ignorance question and can be used with any traditional trust aggregation model, such as the weighted means and the Sinalpha models referred to in Section 2. Next, we present the scenario and notation used throughout the section.

3.2 Scenario and Notation

Our scenario consists of a social simulation where, at every simulation round, trading client agents attempt to place orders for some type of textile fabric with the best available supplier agents, taking into account the estimated trustworthiness of the suppliers in the context of the specific orders. At the end of the round, a contract is established between the client agent and the selected supplier agent.

Formally, c is a client agent from the set C = {c1, c2, …, cn} of all clients considered in the simulation, and s is a supplier agent from the set S = {s1, s2, …, sm} of all suppliers considered in the simulation. Also, an order instantiates a business need that comprises the contractual context of the future transaction. For the sake of simplicity, we only consider here three contractual terms, the elements of the set CT = {fabric, quantity, dtime}. Regarding the domain values of the contractual terms, the sets Dfabric, Dquantity and Ddtime contain the possible values for the terms fabric, quantity and dtime, respectively. Therefore, a contractual context ctx is an ordered triple belonging to the 3-ary Cartesian product CTX = Dfabric × Dquantity × Ddtime. An example of a contractual context associated with an order is ctx = ⟨chiffon, 1080000, 7⟩.

At every simulation round, each one of the clients broadcasts its current need, specifying the corresponding contractual context ctx. In response, all suppliers that still have stock of the desired fabric issue a proposal. In this scenario, a proposal is simply the identification of the supplier that has proposed; i.e., instead of using selection parameters such as the ones associated with price or payment conditions, the suppliers are selected solely by their estimated trustworthiness. Then, after client agent c selects supplier agent s to provide the good specified in the contractual context ctx, both agents establish a contract specifying c, s and ctx. When the transaction is completed, with the supplier agent either succeeding or failing to provide the good under the established conditions, a contractual evidence is generated accordingly. In our scenario, an evidence is an ordered tuple ⟨c, s, ctx, o⟩ from the 4-ary Cartesian product C × S × CTX × O; these tuples define the set E^t of all evidences
generated until evaluation time t, where O = {true, false} is the set of all possible outcomes of a contract (i.e., the outcome takes the value true when the contract is fulfilled and false otherwise). In the same way, E_s^t ⊆ E^t is the subset of all evidences where s appears as the supplier counterparty. This means that E_s^t = {⟨c, x, ctx, o⟩ ∈ E^t : x = s} is the contractual history of agent s at evaluation time t. If supplier s has never transacted before, then E_s^t = ∅.

3.3 Our Situation-Aware Trust Model

Figure 2 illustrates the selection algorithm used by our clients at every simulation round in order to select the proposal that best fits their needs.
Fig. 2. The SELECTION algorithm
The client randomly picks a proposal from the set of received proposals and calculates the trustworthiness value of the proponent (lines 5 to 9). Then, for each one of the remaining proposals, it estimates the proponent's trustworthiness (lines 10 to 15) and updates the best proposal (lines 16 to 18). The algorithm returns the proposal from the supplier with the highest estimated trustworthiness for the specific context (line 19).

As can be observed from Figure 2, our trust model is composed of two components, as described earlier in this document. Indeed, the first component is SA(s) ∈ [0, 1], a function that aggregates all outcomes from the available evidences using the formula illustrated in Figure 1. The second component is our situation-aware technique, CF_c(s, ctx) ∈ [0, 1], a binary operator that measures the adequacy of the proponent supplier to the contractual context of the current need of the client. Therefore, the trustworthiness value of supplier s as estimated by client c is given in Equation 1.
Trust_c(s, ctx) = SA(s) · CF_c(s, ctx)     (1)
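Since the listing of Figure 2 is not reproduced legibly here, the following minimal Python sketch illustrates the selection logic just described. The data types and names are placeholders; the aggregate and fitness arguments stand for the Sinalpha and CF components of this paper, and the random pick of an initial proposal in the original listing is omitted, as iterating over all proposals is equivalent.

```python
from collections import namedtuple

Proposal = namedtuple("Proposal", "supplier")                      # a proposal only identifies its proponent
Evidence = namedtuple("Evidence", "client supplier ctx outcome")   # one element of E^t

def select_proposal(proposals, evidences_by_supplier, ctx, aggregate, fitness):
    """Return the proposal of the supplier with the highest estimated
    trustworthiness for the contractual context ctx (Equation 1)."""
    best_proposal, best_trust = None, -1.0
    for proposal in proposals:
        history = evidences_by_supplier.get(proposal.supplier, [])   # E_s^t, possibly empty
        sa = aggregate([e.outcome for e in history])                  # aggregation engine, e.g. Sinalpha
        cf = fitness(history, ctx)                                    # situational component, e.g. CF (Section 3.4)
        trust = sa * cf                                               # Equation 1
        if trust > best_trust:
            best_proposal, best_trust = proposal, trust
    return best_proposal
```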
Equation 1 is the same as saying that, at a given moment, an agent may be qualified as trustworthy in some situation and as untrustworthy in a (maybe slightly) different situation. Next, we describe our binary operator CF in more detail.

3.4 The CF Operator

We developed the CF operator, a technique that allows non-situational aggregation engines (such as Sinalpha or the ones based on weighted means) to make inferences about the trustworthiness of an agent based on context. It is an online, incremental technique that does not rely on predefined measures of similarity or on ontology-based inference, as the ones presented in [12], [13], [14] and [15] do. The algorithm of CF is presented in Figure 3.
Fig. 3. The CONTEXTUAL-FITNESS algorithm
From the figure above, we verify that the algorithm starts by putting all evidences of agent s that have outcome false in a separate class (line 5). This negative outcome can represent, for example, the past transactions of the supplier agent that triggered relevant contractual sanctions, although the meaning of such an outcome can be established by each individual client agent. Then, at line 6, a behavior tendency is extracted for this class using the Frequency Increase metric [22], presented in the following equation:

FI(v) = #(v, class) / #(class) − #(v, all) / #(all) ≥ α.    (2)
In the equation above, #(v, class) is the number of times that a given attribute value v appears in the class, #(class) is the total number of evidences in the class, #(v, all) is the number of times that the attribute value appears in all classes, and #(all) is the total number of evidences kept for the agent in evaluation. Therefore, by applying the Frequency Increase metric to the set of false evidences of agent s, the algorithm tests, one by one, which contractual term-value pairs can be considered relevant. The parameter α in the equation above is the required extent of frequency increase, and determines the granularity of tendency extraction. At the end of the procedure, the most significant contractual characteristics of the class of false evidences are extracted (line 6). It is worth mentioning that, depending
on the required extent of frequency increase and on the evidence set of the agent in evaluation, it is possible that the algorithm does not return any tendency. Finally, at lines 7 and 8, the extracted failure tendency is compared to the contractual context of the current order of the client. If there is a match, it means that the supplier has a tendency to fail this type of contract, and therefore the CF value (and the global trustworthiness value, cf. Equation 1) is zero. Otherwise, there is no evident signal that the supplier is inapt to perform the current transaction, and its final trustworthiness score is given by the Sinalpha output. Figure 4 illustrates an example of a match between the contractual context of a current order and the failure tendency of the agent in assessment. There, the client wants to purchase high quantities (1800000 meters) of chiffon with a short delivery time (seven days), as given by the contractual context of the order. At the same time, the supplier has a failure tendency in deliveries with short delivery times, independently of the fabric and the quantity considered, as given by the extracted tendency. Therefore, the final trustworthiness value of the supplier for this specific order is zero, as given by Equation 1, which strongly reduces the chance of the supplier being selected by the client for the current business transaction.
Fig. 4. An example of a match between the contractual context of an order and the false tendency behavior of an agent
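To make the tendency extraction and the matching step concrete, the sketch below gives one possible reading of the CONTEXTUAL-FITNESS algorithm and of the Frequency Increase test of Equation 2. The helper names are ours, and representing a partial tendency with None standing for "any value" in a term position is an assumption, not the paper's notation.

from typing import List, Optional, Tuple

Context = Tuple[str, str, str]             # (fabric, quantity, delivery time)
Evidence = Tuple[str, str, Context, bool]  # (client, supplier, context, outcome)

ALPHA = 0.25  # required extent of frequency increase (see Table 1)


def extract_failure_tendency(history: List[Evidence],
                             alpha: float = ALPHA) -> Optional[Context]:
    """Extract the term values whose frequency in the class of false (failed)
    evidences exceeds their overall frequency by at least alpha (Equation 2)."""
    failed = [e for e in history if e[3] is False]
    if not history or not failed:
        return None
    tendency = []
    for term in range(3):  # Fabric, Quantity, DTime
        relevant_value = None
        for value in {e[2][term] for e in failed}:
            in_class = sum(1 for e in failed if e[2][term] == value) / len(failed)
            overall = sum(1 for e in history if e[2][term] == value) / len(history)
            if in_class - overall >= alpha:
                relevant_value = value
        tendency.append(relevant_value)  # None means no relevant value for this term
    return tuple(tendency) if any(tendency) else None


def contextual_fitness(history: List[Evidence], ctx: Context) -> float:
    """Return 0 if the current context matches the extracted failure tendency, 1 otherwise."""
    tendency = extract_failure_tendency(history)
    if tendency is None:
        return 1.0
    matches = all(t is None or t == c for t, c in zip(tendency, ctx))
    return 0.0 if matches else 1.0

With the threshold α = 0.25 used in the experiments, a term value that appears noticeably more often among failed contracts than in the supplier's overall history is flagged as part of the failure tendency.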
The process just described, of extracting tendencies of negative behavior in order to prevent unfitted selection decisions, is a dynamic, incremental process that shall be repeated at every trustworthiness assessment. The benefits of such an approach are twofold: first, it allows the extraction of tendencies even when there are few trust evidences about the agent in evaluation; second, it allows capturing the variability of the behavior of the agent at any time, which is a desired requirement for trust models designed to operate in real-world environments. We make here a final remark about our trust technique: as described in this section, CF only deals with the negative class of the evidences of the agent in evaluation. However, we believe that the use of the positive class and the use of distinct degrees of fitness could allow refining our algorithm, and this constitutes ongoing work.
5 Experiments In order to evaluate the benefits of the proposed situation-aware trust model, we ran a series of experiments using the scenario described in Section 3.2. All experiments were run using the Repast tool (http://repast.sourceforge.net). 4.1 Experimental Testbed and Methodology The main parameters of the experiments are presented in Table 1.
Table 1. Configuration of the experiments

Parameter            Value
Fabrics              {Chiffon, Cotton, Voile}
Quantities           {Low, Medium, High}
Delivery Time        {Short, Medium, Big}
# buyers             20
# of sellers         50
Types of sellers     Uniform distribution over the types considered in the population
Seller stock         Up to 4 contracts per round
# rounds / # runs    60 / 20
CF α threshold       0.25
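Purely as an illustration, the configuration of Table 1 could be encoded as a plain dictionary in a simulation script; the key names below are ours and do not correspond to any actual Repast configuration format.

# Hypothetical encoding of the experimental configuration in Table 1.
EXPERIMENT_CONFIG = {
    "fabrics": ["chiffon", "cotton", "voile"],
    "quantities": ["low", "medium", "high"],
    "delivery_times": ["short", "medium", "big"],
    "n_buyers": 20,
    "n_sellers": 50,
    "seller_types": "uniform over the types considered in the population",
    "seller_stock_per_round": 4,   # up to 4 contracts per round
    "n_rounds": 60,
    "n_runs": 20,
    "cf_alpha": 0.25,              # CF frequency-increase threshold
}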
In these experiments, we wanted to evaluate whether the situation-aware technique would improve the ability of the trust system to select partners by taking into account the current business context. Therefore, we ran the same experiments using, first, just the Sinalpha component (the SA model), and then the enhanced version of Sinalpha that uses our proposed situation-aware technique (the CF model). At every simulation round, each client announces a business need in the form of a contractual context ctx. Every supplier that still has stock of the specified fabric manifests its intention to provide the material under the conditions specified by ctx. Then, the client selects the best proposal using as a criterion the estimated trustworthiness of each proponent; that is, it uses either SA or CF to perform the trustworthiness estimations, depending on the trust approach under evaluation. Finally, an outcome is generated for the transaction between the client and the selected supplier, based on the type of this supplier.

Table 2. Characterization of the different types and populations of suppliers

Type      Description
SHQT      Handicap on high quantities (prob. success 0.05 on handicap contracts, 0.95 otherwise)
SHDT      Handicap on short delivery time (0.05 / 0.95)
SHFB      Handicap on specific fabric (0.05 / 0.95)
SHFBQT    Handicap on fabric and high quantity (0.05 / 0.95)
SHFBDT    Handicap on fabric and short delivery time (0.05 / 0.95)
SHQTDT    Handicap on high quantity and short delivery time (0.05 / 0.95)
Good      95% probability of success
Fair      80% probability of success
Bad       50% probability of success
I-SHQT, I-SHDT, I-SHFB    Same as SHQT, SHDT and SHFB, with a probability of 66.7% of changing handicap at round 30

Population A comprises the types SHQT, SHDT and SHFB; population B comprises the inverting types I-SHQT, I-SHDT and I-SHFB; and population C comprises the nine types SHQT, SHDT, SHFB, SHFBQT, SHFBDT, SHQTDT, Good, Fair and Bad.
Populations of Suppliers. We used different types of suppliers and different populations in the experiments. Each type of supplier reflects its ability to fulfil or violate a contract, as described in Table 2.
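The sketch below illustrates how the outcome of a single transaction could be generated for the handicapped supplier types of Table 2; representing a handicap as a partial contractual context, and the helper names, are our own choices.

import random
from typing import Optional, Tuple

Context = Tuple[str, str, str]  # (fabric, quantity, delivery time)


def contract_outcome(handicap: Optional[Context], ctx: Context,
                     rng: random.Random) -> bool:
    """Succeed with probability 0.05 when the contract falls on the supplier's
    handicap (None entries match any value) and 0.95 otherwise, as in Table 2."""
    on_handicap = handicap is not None and all(
        h is None or h == c for h, c in zip(handicap, ctx))
    p_success = 0.05 if on_handicap else 0.95
    return rng.random() < p_success


# Example: an SHDT supplier (handicap on short delivery time) asked for a
# short-delivery contract of chiffon in high quantity.
rng = random.Random(0)
print(contract_outcome((None, None, "short"), ("chiffon", "high", "short"), rng))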
Evaluation Metrics. We used two different metrics to evaluate the trust models. The first one is the average utility of clients, which is the ratio of the number of contracts that were fulfilled by the suppliers over the total number of contracts, averaged over all clients and all rounds. The second metric is the average utility per round, which measures the same ratio on a per-round basis. Intuitively, the greater the ability of the trust model to distinguish between the dynamic behaviors of the proponent suppliers, the better decisions it will make and, consequently, the higher the utility it will obtain. 4.2 Results Figure 5 shows the results in terms of average utility for populations A, B and C.
Fig. 5. Average utility for populations A, B and C
In population A, CF clearly outperforms SA, with an average utility of 85.27% (standard deviation 1.68%) against the average utility of 70.80% (sd. 5.10%) of SA. In population B, where agents have a two-thirds probability of inverting their behavior at the middle round, CF still outperforms SA, with 80.54% of average utility (sd. 2.36%) against SA's 67.87% (sd. 4.98%). Finally, in population C, where there is a mix of several different types of suppliers, SA substantially improves its performance, achieving 86.22% of average utility (sd. 1.85%), although it still underperforms the CF approach, which achieves 88.92% of utility (sd. 1.43%). Another view of the results is given in Figure 6, where the utility results are given on a per-round basis. Analyzing the figure, we verify that in population A (dotted lines) the SA approach oscillates around an average of 14 (out of 20) successful contracts per round. In contrast, the CF approach shows an ability to learn the behavior of the suppliers, continually increasing the utility from the first rounds, when the evidences available about every supplier are still scarce. Concerning population B (dashed lines), we verify that CF has a much steeper fall in utility than the SA approach at round 30, when the behavior of most of the suppliers is inverted. However, we also verify that the utility of CF is always higher than the utility of SA for this population, and that CF starts recovering soon after round 30, showing its ability to dynamically update the tendencies of behavior in the presence of few new evidences.
Fig. 6. Utility per round for populations A, B and C
Finally, in population C (solid lines), we verify that both SA and CF start learning the behavior of the suppliers much earlier than in the previous experiments with populations A and B. In reality, this happens because population C is more diversified, with 4/9 of the population being generally good (suppliers of types Good, SHFBQT, SHFBDT and SHQTDT). What should be retained from these results is that CF outperforms SA even in the presence of such good suppliers. 4.3 Discussion From the results obtained in the experiments, we verify that the SA approach, representing traditional non-situation-aware trust models, is effective in differentiating between good and bad suppliers, i.e., suppliers that have a high and a low probability of fulfilling a contract, respectively. However, in real and open business environments, it is expected that the existing population of suppliers does not show this binary behavior, but is instead composed of suppliers that are better fitted to specific contractual characteristics and that can fail in other types of contracts. In this setting, populations A and B have shown that a situation-less trust model is not effective in differentiating between different suppliers' characteristics, with the clients selecting the same suppliers over and over again, occasionally failing the contracts for which those suppliers have a handicap. In contrast, we verify that our CF trust model is able to distinguish between suppliers that present different failure tendencies, and that this ability allows CF clients to make better decisions, as reflected in the higher utility they achieved in all the experiments with populations A, B and C. Moreover, we observed that CF is fast in detecting changes in the behavior of suppliers, as it dynamically updates the failure
tendencies it has extracted whenever new evidences become available. In the same way, it is able to extract tendencies even when the number of trust evidences about the supplier in evaluation is scarce.
5 Conclusions In this paper, we presented CF, a simple situation-aware technique that extracts failure tendencies from the history of past evidences of an agent, based on the Frequency Increase metric. This technique can be used with any traditional trust system in order to enhance the estimation of trustworthiness scores. Although other situation-aware approaches are now being proposed in the trust management field, we believe that our proposed CF technique presents some benefits over them. First, it can be used with any of the existing traditional trust aggregation engines. Second, it is an online process, meaning that it captures the variability in the agents' behavior in a dynamic way. Finally, it does not rely on ontology-based situation representations, and therefore the analysis of the similarity between the situation in assessment and the past evidences of the agent in evaluation does not require specific, domain-based similarity functions. It also allows for fine-grained dissimilarity detection (e.g., it distinguishes between the similar though different situations of providing one container of cotton in 7 or in 14 days) that can be hard to express using pre-defined distance functions. We evaluated the CF technique using Sinalpha, a traditional aggregation engine approach enhanced by the inclusion of properties of the dynamics of trust. We reviewed the benefits of embedding the properties of asymmetry, maturity and distinguishability in trust aggregation engines. However, we also concluded that the study of the benefits of Sinalpha's sinusoidal shape, which follows work in the area of psychology, needs proper data/models concerning the behavior of real-world organizations, and we will address the acquisition of such data sets in future work. Acknowledgements. This research is funded by FCT (Fundação para a Ciência e a Tecnologia) project PTDC/EIA-EIA/104420/2008. The first author enjoys a PhD grant with reference SFRH/BD/39070/2007 from FCT.
References 1. Ramchurn, S., Sierra, C., Godo, L., Jennings, N.R.: Devising a trust model for multi-agent interactions using confidence and reputation. Int. J. Applied Artificial Intelligence 18, 833–852 (2004) 2. Jøsang, A., Ismail, R.: The Beta Reputation System. In: Proceedings of the 15th Bled Electronic Commerce Conference, Sloven (2002) 3. Zacharia, G., Maes, P.: Trust management through reputation mechanisms. Applied Artificial Intelligence 14(9), 881–908 (2000) 4. Erete, I., Ferguson, E., Sen, S.: Learning task-specific trust decisions. In: Procs. 7th Int. J. Conf. on Autonomous Agents and Multiagent Systems, vol. 3 (2008) 5. Sabater, J.: Trust and Reputation for Agent Societies. Number 20 in Monografies de l’institut d’investigacio en intelligència artificial. IIIA-CSIC (2003)
6. Huynh, T.D., Jennings, N.R., Shadbolt, N.R.: An integrated trust and reputation model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems 13(2), 119–115 (2006) 7. Sabater, J., Paolucci, M., Conte, R.: Repage: Reputation and image among limited autonomous partners. Journal of Artificial Societies and Social Simulation 9, 3 (2006) 8. Castelfranchi, C., Falcone, R.: Principles of trust for MAS: cognitive anatomy, social importance, and quantification. In: Procs. Int. Conference on Multi-Agent Systems (1998) 9. Jonker, C.M., Treur, J.: Formal Analysis of Models for the Dynamics of Trust Based on Experiences. In: Garijo, F.J., Boman, M. (eds.) MAAMAW 1999. LNCS, vol. 1647, pp. 221–231. Springer, Heidelberg (1999) 10. Marsh, S., Briggs, P.: Examining Trust, Forgiveness and Regret as Computational Concepts. In: Golbeck, J. (ed.) Computing with Social Trust, pp. 9–43. Springer, Heidelberg (2008) 11. Melaye, D., Demazeau, Y.: Bayesian Dynamic Trust Model. In: Pěchouček, M., Petta, P., Varga, L.Z. (eds.) CEEMAS 2005. LNCS (LNAI), vol. 3690, pp. 480–489. Springer, Heidelberg (2005) 12. Tavakolifard, M.: Situation-aware trust management. In: Proceedings of the Third ACM Conference on Recommender Systems, pp. 413–416 (2009) 13. Neisse, R., Wegdam, M., Sinderen, M., Lenzini, G.: Trust management model and architecture for context-aware service platforms. In: Meersman, R., Dillon, T., Herrero, P. (eds.) OTM 2009. LNCS, vol. 5871, pp. 1803–1820. Springer, Heidelberg (2009) 14. Rehak, M., Gregor, M., Pechoucek, M.: Multidimensional context representations for situational trust. In: IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications, pp. 315–320 (2006) 15. Hermoso, R., Billhardt, H., Ossowski, S.: Dynamic evolution of role taxonomies through multidimensional clustering in multiagent organizations. In: Yang, J.-J., Yokoo, M., Ito, T., Jin, Z., Scerri, P. (eds.) PRIMA 2009. LNCS, vol. 5925, pp. 587–594. Springer, Heidelberg (2009) 16. Urbano, J., Rocha, A.P., Oliveira, E.: Refining the Trustworthiness Assessment of Suppliers through Extraction of Stereotypes. In: Filipe, J., Cordeiro, J. (eds.) ICEIS 2010 - Proceedings of the 12th International Conference on Enterprise Information Systems, AIDSS, Funchal, Madeira, Portugal, vol. 2, pp. 85–92. SciTePress (2010), ISBN: 978-989-8425-05-8 17. Urbano, J., Rocha, A.P., Oliveira, E.: Computing Confidence Values: Does Trust Dynamics Matter? In: Lopes, L.S., Lau, N., Mariano, P., Rocha, L.M. (eds.) EPIA 2009. LNCS (LNAI), vol. 5816, pp. 520–531. Springer, Heidelberg (2009) 18. Straker, D.: Changing Minds: in Detail. Syque Press (2008) 19. Lapshin, R.V.: Analytical model for the approximation of hysteresis loop and its application to the scanning tunneling microscope. Review of Scientific Instruments 66(9), 4718– 4730 (1995) 20. Schlosser, A., Voss, M.: Simulating data dissemination techniques for local reputation systems. In: Procs. of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1173–1174 (2005) 21. Danek, A., Urbano, J., Rocha, A.P., Oliveira, E.: Engaging the Dynamics of Trust in Computational Trust and Reputation Systems. In: Jędrzejowicz, P., Nguyen, N.T., Howlet, R.J., Jain, L.C. (eds.) KES-AMSTA 2010. LNCS, vol. 6070, pp. 22–31. Springer, Heidelberg (2010) 22. Paliouras, G., Karkaletsis, V., Papatheodorou, C., Pyropoulos, C.D.: Exploiting Learning Techniques for the Acquisition of User Stereotypes and Communities. In: Procs. of UM 1999 (1999)
FONTE: A Protégé Plug-in for Engineering Complex Ontologies Jorge Santos1, Luís Braga1, and Anthony G. Cohn2 1
Departamento de Engenharia Informática, Instituto Superior de Engenharia, Porto, Portugal {ajs,1050515}@isep.ipp.pt 2 School of Computing, Leeds University, Leeds, U.K. [email protected]
Abstract. Humans have a natural ability to reason about scenarios involving spatial and temporal information, but for several reasons the process of developing complex ontologies including time and/or space is still not well developed and remains a one-off, labour-intensive experience. In this paper we present FONTE (Factorising ONTology Engineering complexity), an ontology engineering methodology that relies on a divide-and-conquer strategy. The targeted complex ontology is built by assembling modular ontologies that capture temporal, spatial and domain (atemporal and aspatial) aspects. In order to support the proposed methodology we developed a plug-in for Protégé. Keywords: Ontologies, Knowledge engineering, Temporal/Spatial reasoning and representation.
1 Introduction Temporal and spatial concepts are ubiquitous in human cognition; hence, representing and reasoning about these knowledge categories is fundamental for the development of intelligent applications [1]. Despite the extensive research regarding the engineering of complex domain ontologies with time and/or space [2, 3, 4], this process is still not well developed and remains a one-off, labour-intensive experience, mainly because: i) the engineering process requires the consideration of several ontological issues (e.g., primitives, density, granularity, direction), often implying a complex trade-off between expressiveness and decidability; ii) domain experts often have an intuitive and informal perception of time and space, whereas the existing models of time and space are complex and formal; and iii) the temporal component introduces an extra dimension of complexity in the verification process, making it difficult to ensure system completeness and consistency. These issues have been considered in the development of FONTE (Factorising ONTology Engineering complexity), an ontology engineering methodology that relies on a divide-and-conquer strategy [5, 6]. This type of strategy has been successfully applied in the resolution of other complex problems [7] (e.g., mathematical induction or recursive algorithms in computer science). The targeted complex ontology is built by factorising concepts into their temporal, spatial and domain (atemporal and aspatial) aspects, and then assembling the temporally/spatially situated entity from
these primitive concepts. This is more similar to a Cartesian product than to a union of ontologies. Each of these component ontologies is built/acquired independently, allowing a factorisation of complexity. The assembly of the ontologies is performed through an iterative and interactive process that combines two types of inputs: i) human assembly actions between the component ontologies; and ii) automatic assembly proposals obtained from semantic and structural analysis of the ontologies. This process is propelled by a set of rules and a set of constraints. The set of rules drives a semi-automatic process proposing assembly actions; the set of constraints allows the assessment of which generated proposals are valid. A prototype tool implemented in Prolog was designed to support the previous version of the FONTE method (covering just time, not space), which provides the essential functionalities for the assembly process through a simple command-line interface [6]. This prototype was tested in the assembly of ontologies specified in F-Logic. In this paper, we present a plug-in for the Protégé platform that was designed to take advantage of the OWL format, in particular OWL-DL. Using OWL is an advantage since it is currently the standard language for the representation of ontologies; however, it does not allow some operations of temporal assembly that are based on the existence of generic axioms available in F-Logic, which were very rich in expressiveness. As described further in this paper, the assembly method uses a set of assembly rules that allow the tool behaviour to be defined. Additionally, a tool to facilitate the specification of meta-modeling rules was developed. The rest of the paper is organised as follows. First, we provide a summary of related work in section 2. Then we describe FONTE, the semi-automatic process of assembling two ontologies (section 3), detailing its main algorithm, data structures, and the assembly of classes and properties. Some examples of the engineering of temporal aspects in ontologies are presented to illustrate the potential of the proposed methodology. To this end, the temporal ontology Time-Entry and the domain ontology SWRC about the Semantic Web Research Community are used. We then describe the support tool (a Protégé plug-in) developed to drive the process, and the tool developed for editing assembly rules (section 4). Finally, in section 5 we present the conclusions and some possible directions for future work.
2 Related Work As mentioned above, temporal and spatial concepts are ubiquitous in human cognition. Representing and reasoning about these concepts is therefore fundamental in Artificial Intelligence, particularly when approaching problems and/or applications like planning, scheduling, natural language understanding, common-sense and qualitative reasoning, and multi-agent systems [8, 9]. A temporal representation requires the characterisation of time itself and of temporal incidence [10]. Space must be characterised by elements representing basic spatial entities and primitive spatial notions expressed over them [8]. Moreover, whereas the principal relations between temporal entities are based on ordering, in the case of space many more kinds of relations are possible due to the higher dimensionality, including richer mereotopological and directional relations [11]. An ontology is an explicit specification of a conceptualisation about a specific portion of the world [12]. The main purpose of ontologies is to provide formal representations of
models that can be easily shareable and understandable both by humans and machines. Ontologies have become an important topic of research and are used in many areas, including Knowledge Engineering [13]. The fast growth of the WWW has established a knowledge-sharing infrastructure, increasing the importance of Knowledge Engineering [14]; consequently, ontologies have gained renewed usage as artifacts within distributed and heterogeneous systems. The most recent development in standard ontology languages is OWL - Web Ontology Language (www.w3.org/TR/owl-features). This has three sub-languages (Lite, DL and Full) which present different degrees of expressiveness/decidability; OWL-DL (based on Description Logics) provides the most interesting and widely accepted trade-off between expressiveness and decidability. In recent years, different ontologies about time and space have been developed and are now available in the public domain. There are two types of such ontologies: specific ontologies about time and/or space like OWL-Time (www.w3.org/TR/owl-time), SWEET-Time and SWEET-Space (sweet.jpl.nasa.gov/ontology), and upper ontologies (also called general ontologies) that include components describing time and/or space like SUMO (www.ontologyportal.org), OpenCYC (www.opencyc.org), GUM (www.ontospace.unibremen.de/ontology/gum.html) and, more recently, COSMO (http://micra.com/COSMO/). There is a growing interest in the topic of modularity in ontology engineering [15, 16, 17], mainly because ontology engineering is a complex process that includes multiple tasks (e.g., design, maintenance, reuse, and integration of multiple ontologies). Modularity has been used to tackle complex processes such as:
– engineering a rule-based system by task analysis [18];
– engineering an ontology-based system by developing with patterns [19, 20] or developing sub-ontologies and merging them [21].
All these methods promote the idea of sub-dividing the task of building a large ontology by engineering, re-using and then connecting smaller parts of the overall ontology. The MADS system [22] also aims to support the engineering of temporal and spatial aspects, through a graphic system that supports an Entity-Relationship analysis. MADS allows the knowledge engineer to define temporal/spatial characteristics for the model concepts. However, this approach is very distinct from the one proposed by FONTE, because the temporal/spatial modeling actions are not generated in a semi-automatic mode, and the temporal and spatial theories are embedded in the application interface, so the ontology engineer is unable to use a specific theory of time and/or space.
3 FONTE Method The assembly process comprises two main building blocks. First, the specification of temporal and/or spatial aspects for a domain ontology (atemporal and aspatial) remains dependent on the conceptualisation of the ontology engineer. Second, in order to facilitate and accelerate the joint assembly of timeless and spaceless domain concepts with temporal and/or spatial notions, the interactive process is supported by heuristics for asking and directing the ontology engineer.
3.1 Assembling Algorithm The assembly process runs as depicted in figure 1. The process starts with an Initial Setup, in which some basic operations are performed, namely loading the ontologies to be assembled, loading a set of rules (one set for each ontology) to drive the process, and initialising some process parameters. The rules and parameters are defined separately from the tool in order to allow for adaptations to the particular needs of different time and/or space ontologies. However, the rules and parameters do not change when a new domain ontology is to be assembled. The Target Ontology initially corresponds to the union of the timeless and spaceless domain ontology with the time and space theory. The user may commence by restructuring some part of the domain ontology to include temporal and/or spatial aspects through defining and performing (what we call) task instances. Each task instance (either user-initiated or automatically proposed) aims to create a new temporal/spatial concept by assembling an atemporal/aspatial domain concept or role with a temporal/spatial one. When performing such restructuring task instances, a Structural Analysis aims to find related classes (e.g., sub- or super-classes in the domain ontology) and puts the appropriate task instances into the Proposed Task Instances. In the Structural Analysis step a set of tests is performed that restricts the set of possible task instances to plausible ones, which are then proposed by insertion into the Proposed Task Instances. As more information becomes available in subsequent iterations, the usefulness of the results provided by the structural analysis improves. In subsequent iterations the engineer decides whether to accept an automatically proposed task instance from the Proposed Task Instances. Alternatively, the user may take new initiatives and define and execute a new task instance from scratch. Later, a set of logical tests (Verify Consistency) is performed to detect the existence of any knowledge anomalies (e.g., circularity or redundancy). In the Execute Task Instance step the corresponding changes are made to the target ontology. The user may subsequently decide either to perform another iteration or to go to Conclude Process and accept the current Target Ontology as the final version. 3.2 Data Structures We have already informally used the notion of task to describe an action template (i.e., a generic task) that may be instantiated and executed in order to modify the current target ontology. A task is defined by a Rule and a Task Question. Rule. It defines the Assembly Task behaviour; it uses a set of keywords of structured programming (e.g., if, then, else) with the commonly expected semantics, some special keywords (do, propose and check, whose semantics we provide later in this section), and the invocation of other rules. Task Question. Before the execution of a task, the system prompts a task question in natural language to the engineer in order to determine whether the proposal should really be accepted or not, and in order to ask for additional constraints that the user might want to add. The task question is defined by a list of words and parameters used to compose a sentence in natural language.
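Purely as an illustration of the iterative process just described, and not of the plug-in's actual implementation, the following sketch shows one way the propose/accept/execute cycle could be organised; all names are ours, and the user interaction, structural analysis and consistency check are abstracted as callables.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

Trigger = Tuple[str, str]  # (trigger type, item id), e.g. ("class", "Person")


@dataclass
class Proposal:
    task: str                 # e.g. "assembleClass(Person, TemporalThing)"
    triggers: List[Trigger] = field(default_factory=list)
    weight: float = 1.0


def assembly_loop(user_action: Callable[[List[Proposal]], Optional[Proposal]],
                  structural_analysis: Callable[[Proposal], List[Proposal]],
                  consistent: Callable[[Proposal], bool],
                  execute: Callable[[Proposal], None],
                  done: Callable[[], bool]) -> None:
    """Iterate: the engineer creates or accepts a task instance, consistency is
    checked, the task is executed, and structural analysis spawns new proposals."""
    proposed: List[Proposal] = []
    while not done():
        chosen = user_action(proposed)          # create new or accept a proposal
        if chosen is None or not consistent(chosen):
            continue
        execute(chosen)                         # modify the target ontology
        for p in structural_analysis(chosen):   # propose related task instances
            same = next((q for q in proposed if q.task == p.task), None)
            if same:                            # merge triggers, reinforce weight
                same.triggers.extend(p.triggers)
                same.weight += p.weight
            else:
                proposed.append(p)
        proposed.sort(key=lambda q: q.weight, reverse=True)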
226
J. Santos, L. Braga, and A.G. Cohn
Ontology 1 (e.g., Time)
Begin Process
Parameters
Ontology 2 (e.g., Space)
Assembly Rules 1
Assembly Rules 2
Ontology N Assembly Rules N
Initial Setup
Domain Concepts Ontology Semantic Analysis
Structural Analysis
User: Create or Accept
Create New Task Instance
Proposed Task Instances Target Ontology
Accept Proposed Task Instance Verify Consistency
Constraints
Historic of Task Instances
Execute Task Instance
User: Iterate
Process Flow
Data Flow Conclude Process
Fig. 1. Assembly Process
In order to manage the various task instances, the assembling algorithm uses the following data structures: Proposed Task Instances. A list of tuples (TaskInstance, TriggersList, Weight, QuestionInstance) storing proposed task instances (also referred to as Proposals) together with the triggers that raised their proposal and their weight, according to which they are ranked on the task list. TriggersList. Denotes the list of items that have triggered the proposal. A trigger is a pair (TriggerType, TriggerId) where TriggerType has one of the values class, property, restriction or axiom, and TriggerId is the item identifier. For instance, the pair (class, Person) is a valid trigger. The list is useful to query for proposals raised by a specific item or TriggerType.
Weight. Since competing task instances may be proposed, Weight is used to reflect the strength of the proposal on the TaskList. Additionally, since a task instance may be proposed as a consequence of the assembly of different classes and/or properties, the weight is increased in order to reflect the probability of it being accepted. Historic of Task Instances. List of all tasks that were previously performed. This list is useful to allow the undo operation and to provide statistics about the assembly process. Task Constraints List. List of tuples (TaskInstance, Expression) storing logical constraints about previously performed task instances. do(TaskInstance). The function do performs logical tests over existing task constraints about TaskInstance. If there is no impediment, it executes the task instance and creates a corresponding entry in the Task History. propose(TaskInstance, Trigger, Weight). The function propose creates a proposal by asserting the corresponding tuple in the list of Proposed Task Instances. check(Condition). The function check performs a logical test in order to check whether the Condition is true or false in the scope of the participant ontologies. 3.3 Input Modular Ontologies In order to illustrate the assembly process, two ontologies will be used as building blocks for the target ontology: the temporal ontology Time-Entry and the domain ontology SWRC about the Semantic Web Research Community. Time-Entry (www.isi.edu/~hobbs/owl-time.html) is a sub-ontology of OWL-Time (see figure 2 for a UML-like depiction of an excerpt) that embodies concepts like Instant or Interval, often found in 'standard' ontologies like SUMO, and assumes a standard interpretation by representing time points and time intervals as real numbers and intervals on the real line. As mentioned before, a temporal representation requires the characterisation of time itself and of temporal incidence; these are represented in the Time-Entry ontology by TemporalEntity and Event, respectively. Temporal Entities. In the temporal ontology we used as a case study there are two subclasses of TemporalEntity: Instant and Interval. The relations before, after and equality can hold between Instants, respectively represented by the symbols ≺, ≻ and =, allowing the definition of an algebra based on points [23]. It is assumed that before and after are irreflexive, asymmetric, transitive and strictly linear. The thirteen binary relations proposed in Allen's interval algebra [24] can be defined in a straightforward way based on the previous three relations [25]. Events. There are two subclasses of Event, IntervalEvent and InstantEvent, making it possible to express continuous and instantaneous events. This temporal ontology provides the properties begins and ends, which allow the beginning and ending instants of an event to be captured. The assembly process can be used both for the development of ontologies with time from scratch and for the re-engineering of existing ones in order to include time. For our case study we have used the time-less SWRC - Semantic Web Research Community (www.ontoware.org/swrc/) ontology, which served as a seed ontology for the knowledge
[Figure 2 (diagram not reproduced) depicts an excerpt of the Time-Entry ontology: the class TemporalThing with the properties before, begins and ends, the classes TemporalEntity, Event, IntervalThing and InstantThing, the property inside, and the classes Instant, Interval, InstantEvent and IntervalEvent.]
Fig. 2. Excerpt of Time-Entry Ontology
[Figure 3 (diagram not reproduced) depicts an excerpt of the SWRC ontology: the classes Organization, University, Project, Person, Student, Employee and Topic, related by isA and hasA links and by the properties isAbout:Topic, member:Project and studiesAt:University.]
Fig. 3. Excerpt of SWRC Ontology
portal of OntoWeb. SWRC comprises 54 classes, 68 restrictions and 44 properties. In figure 3 we present an excerpt of the SWRC ontology that was used in order to elucidate the assembly process. 3.4 Assembly of Classes As mentioned before, system proposals are generated based on rules and constraints. In the initial phase, the engineer takes the initiative. From the initial modifications, some first proposals may be generated automatically, and from these, new proposals are spawned. Furthermore, the assembly of classes with temporal attributes needs to fulfill fewer constraints than the assembly of properties. Thus, proposals for modifications of classes are typically made first, and these are elaborated in this subsection. Figure 4 shows an excerpt of the SWRC ontology emphasising some of its classes and properties, namely the classes Project and Person, as well as the subclasses of the latter: Employee, Student, AcademicStaff and PhDStudent. The property supervises, and its inverse property supervisor, capture the relationship between
Fig. 4. Excerpt of SWRC Ontology
Algorithm 1. Assemble Class Task.
rule assembleClass(c1,c2)
  if c2=TemporalThing then
    do: createRelation(isA,c1,c2);
    do: assembleRelatedClasses(c1,c2);
    do: assembleRelatedProperties(c1,c2);
    propose: specializeClass(c1,c2);
end

Algorithm 2. Specialize Class Task.
rule specializeClass(c1,c2)
  if c2=TemporalThing then
    answer ← ask(Select subclass of #c1);
    if answer=IntervalThing then
      do: assembleClass(c1,time:IntervalThing);
    end
    if answer=InstantThing then
      do: assembleClass(c1,time:InstantThing);
    end
    if answer=Event then
      do: assembleClass(c1,time:Event);
    end
    ...;
end
AcademicStaff and PhDStudent; the property worksAtProject captures the notion that both an AcademicStaff and a PhDStudent can work on a given Project. For the running example here, we assume that a user links the class Person (from the SWRC ontology) with TemporalThing (from the Time-Entry ontology). This action triggers the execution of the rule assembleClass that subclasses a concept c1, viz. Person, from a concept c2, viz. TemporalThing. The corresponding task assembleClass (see algorithm 1) creates a new isA relation between Person and TemporalThing and then proposes further assembly tasks for related classes and properties. Additionally, a proposal is created in order to allow further specialisation of Person; depending on the engineering options, Person
could later be defined as IntervalThing, InstantThing, Event or TemporalEntity, as detailed in the specializeClass task (see algorithm 2). The reader may note that this result crucially depends on the temporal theory used, but the rules could be easily modified to accommodate other theories. Additionally, as mentioned before, the assembly rules do not need to be modified when a new domain ontology (e.g., medicine, power systems) is to be assembled, since they only depend on the time/space theory. 3.5 Assembly of Properties From the assembly of classes there follow proposals for the modification of properties (captured by OWL restrictions). For instance, when we assume that Person was previously modified to become a subclass of TemporalThing, it becomes plausible that the properties related to Person should also undergo changes. Also, because the isA relation is transitive, it is plausible to say that some/all of its subclasses (direct or indirect) are also temporal, so the properties related to them should undergo changes too. From the example, FONTE produces proposals for temporally assembling the following restrictions:
– supervises(AcademicStaff, PhDStudent);
– supervisor(PhDStudent, AcademicStaff);
– worksAtProject(AcademicStaff, Project);
– worksAtProject(PhDStudent, Project).
The changes occur in analogy to the tasks defined for the assembly of classes. In addition, however, further possibilities arise in order to constrain the life-time of the actual relationship by the life-time of the participating class instances. Thus, supervises(AcademicStaff, PhDStudent) is replaced by supervises(AcademicStaff, PhDStudent, Interval) and, possibly, further constraints on the time instant are added by the engineer. The most common approach to dealing with predicates of arity higher than two in languages like OWL is to reify the relationships through the extension of each relation into a concept that itself has binary relationships [15]. One of the approaches used to represent n-ary relations is the introduction of a new class (token) for a relation. This pattern is well documented (www.w3.org/TR/swbp-n-aryRelations/) and will be used in our use case example; note that the methodology we propose is independent of the approach used for processing temporal/spatial reification. Considering that a member of the academic staff may work at a project during some time, the restriction worksAtProject(AcademicStaff, Project) should be temporalized. Through the use of the chosen pattern, a new class is created/selected to play the role of token class (WorksAtProjectRel). This token captures the relation between AcademicStaff, Project and Interval, and it therefore has the following restrictions:
– has_value(WorksAtProjectRel, Project) (for reasoning purposes its inverse property is_value_for(Project, WorksAtProjectRel) is also defined);
– intDuring(WorksAtProjectRel, Interval).
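As a rough sketch of this reification pattern, and not of the plug-in's actual output, the following code builds the token class and its properties with rdflib; the namespace is hypothetical, and the class and property names follow the example in the figures.

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/swrc-time#")  # hypothetical namespace
g = Graph()

# Token class reifying the ternary relation worksAtProject(AcademicStaff, Project, Interval)
g.add((EX.WorksAtProjectRel, RDF.type, OWL.Class))

# AcademicStaff (and PhDStudent) are linked to the token through worksAtProject
g.add((EX.worksAtProject, RDF.type, OWL.ObjectProperty))
g.add((EX.worksAtProject, RDFS.range, EX.WorksAtProjectRel))

# The token points to the Project (with an inverse defined for reasoning purposes)...
g.add((EX.has_value, RDF.type, OWL.ObjectProperty))
g.add((EX.has_value, RDFS.domain, EX.WorksAtProjectRel))
g.add((EX.has_value, RDFS.range, EX.Project))
g.add((EX.is_value_for, RDF.type, OWL.ObjectProperty))
g.add((EX.is_value_for, OWL.inverseOf, EX.has_value))

# ...and to the Interval during which the relationship holds
g.add((EX.intDuring, RDF.type, OWL.ObjectProperty))
g.add((EX.intDuring, RDFS.domain, EX.WorksAtProjectRel))
g.add((EX.intDuring, RDFS.range, EX.Interval))

print(g.serialize(format="turtle"))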
IntervalThing isA Employee
isA
isA
Person
worksAtProject
isA
worksAtProject WorksAtProjectRel
AcademicStaff
Student isA PhDStudent
has_value
intDuring is_value_for
Project
Interval
Fig. 5. Excerpt of temporalised SWRC Ontology - property worksAtProject
Fig. 6. Excerpt of temporalized SWRC Ontology - property supervises
same property with the same range), the token class used to describe the temporal relation would be the same, avoiding the duplication of token classes. The final result of the temporal assembly is presented in figure 5. The restriction supervises(AcademicStaff, PhDStudent) may also be temporal, since a member of the academic staff supervises PhD students during a time-span. The process of temporally assembling this restriction is analogous to the one previously described; in addition, an extra restriction supervisor(PhDStudent, AcademicStaff) may be asserted to enable inverse reasoning and keep the structural logic. The restriction supervisor(PhDStudent, AcademicStaff) follows the same procedure. Hence, the final result of temporally assembling those two restrictions is presented in figure 6.
4 Protégé Plug-in and Rule Editor As described in section 3, a rule consists of the definition of the actions to be performed in the target ontology after the performance of an assembly action (e.g., the creation of an isA relation between a domain and a temporal concept). Due to the characteristics of the platform (Protégé), a rule can invoke either other rules (as defined in section 3) or functions. A function allows the performance of basic operations to manipulate the ontologies
[Figure 7 (diagram not reproduced) relates the main plug-in entities: Task, Proposal (with the lists of current, accepted and rejected proposals), Trigger, TaskContext, Rule, Function and OntologyConcept, the latter covering Class, Property, Restriction and Axiom.]
Fig. 7. FONTE plug-in architecture
(e.g., create, delete and modify classes or properties) and provides transparent access to the API functionalities of the Protégé platform. In order to facilitate the creation/editing of rules, a specific tool supported by a graphical interface was developed; details of this tool are presented in section 4.2. The FONTE plug-in architecture (see figure 7) relies on different abstraction levels, which present several advantages for the knowledge engineer:
– the knowledge engineer does not need to know the specificities of the Protégé API to manipulate the ontologies; in addition, Functions provide an abstraction level between Rules and the Protégé API, ensuring independence between the Rules and the Protégé API version;
– the Rules may be created/edited at run time and do not require the modification of the application and its consequent recompilation;
– different rule sets (which should be stored in distinct files) allow different temporal/spatial theories to be used in the assembly process in a flexible way.
4.1 FONTE Plug-in for Protégé
Protégé is one of the most widely used open-source ontology editors and knowledge-base frameworks, and it provides a powerful graphical interface. In order to support the iterative and interactive process used in FONTE, a plug-in for Protégé (version 3.4) was developed. This plug-in provides a set of functionalities, such as: i) linking concepts of the domain and temporal/spatial ontologies; ii) accepting, rejecting or even delaying the execution of a task; and iii) visualising statistics of the assembly process. As presented in figure 8, the plug-in presents two panels for the manipulation of ontologies (on the left-hand side) and a list of proposals (on the right-hand side). The leftmost panel contains the domain ontology (SWRC, which is timeless and spaceless); from this panel it is possible to access the class and property hierarchies. The other panel contains the temporal/spatial ontologies to be used as construction blocks for the production of the target ontology. The list of proposals contains the records of the task instances generated by the system. Details of this list are presented below.
Fig. 8. FONTE plug-in for Protégé
To set up the assembly process, the knowledge engineer needs to select the ontologies that will participate in the assembly process as well as the files containing the assembly rules for each ontology; these can be selected using the setup window (triggered by the setup button shown in the figure). All the tasks that are successfully performed (either triggered manually by a user-driven action or automatically by the structural analysis module) are added to a list containing the task instance history. Associated with each task instance proposal there is a question in natural language, a trigger list and the task weight. The question in natural language is composed of a phrase that summarises the proposal objective, instantiated with the elements contained in the task instance. The trigger list is composed of the elements that triggered the proposal. The weight provides an indication of the importance of each proposal; the higher the weight, the higher the possibility of the proposal being accepted during the assembly process. As the assembly process progresses, more proposals are generated. If different concepts happen to propose the same task instance, all the elements that have triggered that proposal are included in the trigger list and the proposal weight is increased to reflect its relevance. All the proposed task instances are stored in the list of proposals, which can be sorted by different criteria (e.g., id, trigger or weight). The user can then accept, reject, or even delay for later analysis, each of the proposals. In order to avoid overloading the knowledge engineer with useless proposals, the system defines that a proposal that has been rejected before cannot be proposed again; the proposals are therefore filtered by an auto-rejection mechanism. However, it should be noted that the knowledge engineer has the ability to recover a rejected proposal. In addition to the functionalities previously described, the plug-in also provides statistics about the assembly process and allows the production of assembly script files. The assembly process statistics summarise the results of the tool's performance, including the initial and current status of the domain ontology, the number of tasks that have been initiated by the user and how many proposals have been accepted or rejected. A script
file contains a sequence of performed tasks; this is particularly useful when the knowledge engineer needs to totally or partially repeat a certain set of tasks. 4.2 Rules Editor An application was developed to facilitate the creation of external rule files. It supports the knowledge engineer through a simple and interactive graphical interface. The file management system (see figure 9(a)) provides a graphical visualisation of the rules included in each file and offers several functionalities, such as: sorting the list by different criteria, modifying the order in which the rules are interpreted during the assembly process, visualising the rules in XML or pseudo-code, and also removing, editing or creating new assembly rules. The rule files and the rules included in each file can be enriched with a description. In addition, the tool has a mechanism to support the creation/editing of rules (see figure 9(b)). This mechanism alerts the knowledge engineer about potential consistency errors (e.g., using a non-declared variable) or warnings (e.g., declaring a variable that is not used). The variables can also be enriched with a description. In addition, and similarly to what happens in several programming languages, comments can be added to the code, which are ignored by the plug-in parser/interpreter. Several advantages accrue from the use of this tool, namely:
– simple and intuitive manipulation of the rule files;
– the XML code is automatically generated, without syntactic errors;
– easy management of the existing rule files;
– automatic verification of consistency in the creation/editing of rules.
(a) Assembly Rule File Manager.
(b) Editing an Assembly Rule.
Fig. 9. Assembly Rule Manager
5 Conclusions and Future Work In this paper we described FONTE, a method that supports the engineering of complex ontologies including temporal and/or spatial knowledge and that allows the factorisation of the process complexity by dividing the problem into parts: modelling the domain concepts ontology (atemporal and aspatial), modelling or acquiring the temporal and/or spatial ontology, and finally producing the target ontology by assembling these modular ontologies. A Protégé plug-in was developed in order to support the FONTE method, which allows FONTE to be used in an integrated form in the development of ontologies. The FONTE methodology works independently of the temporal/spatial theory, since it allows the definition/use of sets of assembly rules for each specific theory. A tool to support the creation/editing of these rule sets was also presented. The following tasks remain for future work:
– the generic characteristics of the proposed method should be tested with different spatial/temporal ontologies, including spatio-temporal ontologies (also called 4D ontologies);
– it would also be interesting to develop a functionality to predict the impact of the acceptance of a certain proposal;
– improving the generation of automatic proposals during the assembly process. This may be achieved through the use of semantic analysis, previously used with success in diverse ontology engineering processes (e.g., merging, mapping and alignment). This will allow the generation of more and better assembly proposals at an earlier stage of the process, consequently making it progressively more automated, given that the current FONTE version is limited to structural analysis of the class and property hierarchies;
– application of the assembly process to the automatic modification of rules for ontology querying such as SWRL (www.w3.org/Submission/SWRL/) and RIF (www.w3.org/2005/rules/wiki/RIF Working Group).
References 1. Harmelen, F., Lifschitz, V., Porter, B.: Handbook of Knowledge Representation. Elsevier, Amsterdam (2008) 2. Staab, S., Maedche, A.: Knowledge portals: Ontologies at work. AI Magazine 22, 63–75 (2001) 3. Vale, Z., Ramos, C., Faria, L., Malheiro, N., Marques, A., Rosado, C.: Real-time inference for knowledge-based applications in power system control centers. Journal on Systems Analysis Modelling Simulation (SAMS) 42, 961–973 (2002) 4. Milea, D., Frasincar, F., Kaymak, U.: An OWL-Based Approach Towards Representing Time in Web Information Systems. In: Procs of 20th Belgian-Dutch Conference on Artificial Intelligence, pp. 343–344 (2008) 5. Santos, J., Staab, S.: Engineering a complex ontology with time. In: 18th International Joint Conference on Artificial Intelligence (IJCAI), Acapulco/Mexico, pp. 1406–1407 (2003) 6. Santos, J., Staab, S.: FONTE - Factorizing ONTology Engineering complexity. In: The Second International Conference on Knowledge Capture (K-Cap 2003), Florida/USA, pp. 146– 153 (2003)
7. Cormen, T.H., Leiserson, C.E., Rivest, R.L.: Introduction to Algorithms. MIT Press, Cambridge (2000) 8. Stock, O.: Spatial and Temporal Reasoning. Kluwer Academic Publishers, Norwell (1997) 9. Fisher, M., Gabbay, D., Vila, L. (eds.): Handbook of Temporal Reasoning in Artificial Intelligence. Foundations of Artificial Intelligence Series, vol. 1. Elsevier Science & Technology Books (2005) 10. Vila, L., Schwalb, E.: A theory of time and temporal incidence based on instants and periods. In: Proc.International Workshop on Temporal Representation and Reasoning, pp. 21–28 (1996) 11. Cohn, A.G., Renz, J.: Qualitative Spatial Representation and Reasoning. In: Handbook of Knowledge Representation, pp. 551–596. Elsevier, Amsterdam (2007) 12. Gruber, T.: Towards Principles for the Design of Ontologies Used for Knowledge Sharing. In: Formal Ontology in Conceptual Analysis and Knowledge Representation, pp. 93–104. Kluwer, Dordrecht (1993) 13. Staab, S., Studer, R. (eds.): Handbook on Ontologies. Springer, Heidelberg (2004) 14. Studer, R., Decker, S., Fensel, D., Staab, S.: Situation and Perspective of Knowledge Engineering. In: Knowledge Engineering and Agent Technology, vol. 52. IOS Press, Amsterdam (2004) 15. Welty, C., Fikes, R., Makarios, S.: A reusable ontology for fluents in OWL. In: Proceedings of FOIS, pp. 226–236 (2006) 16. Lutz, C., Walther, D., Wolter, F.: Conservative Extensions in Expressive Description Logics. In: Veloso, M. (ed.) Procs. of Twentieth International Joint Conference on Artificial Intelligence (IJCAI 2007), pp. 453–458. AAAI Press, Menlo Park (2007) 17. Grau, B.C., Horrocks, I., Kazakov, Y., Sattler, U.: A logical framework for modularity of ontologies. In: Proc. IJCAI 2007, pp. 298–304. AAAI, Menlo Park (2007) 18. Schreiber, G., Akkermans, H., Anjewierden, A., Hoog, R., Shadbolt, N., Van de Velde, W., Wielinga, B.: Knowledge engineering and management, The CommonKADS Methodology. MIT Press, Cambridge (1999) 19. Clark, P., Thompson, J., Porter, B.: Knowledge patterns. In: KR 2000, pp. 591–600 (2000) 20. Staab, S., Erdmann, M., Maedche, A.: Engineering ontologies using semantic patterns. In: Procs. IJCAI 2001 Workshop on E-Business & the Intelligent Web (2001) 21. Noy, N., Musen, M.: PROMPT: Algorithm and Tool for Automated Ontology Merging and Alignment. In: MIT Press/AAAI Press (ed.) Proc. AAAI 2000, Austin, Texas (2000) 22. Parent, C., Spaccapietra, S., Zim´anyi, E.: Conceptual Modeling for Traditional and SpatioTemporal Applications: The MADS Approach. Springer-Verlag New York, Inc., Secaucus (2006) 23. Vilain, M., Kautz, H., Beek, P.: Constraint propagation algorithms: a revised report. Readings in Qualitative Reasoning about Physical Systems (1989) 24. Allen, J.: Maintaining knowledge about temporal intervals. Communication ACM 26, 832– 843 (1983) 25. Freksa, C.: Temporal reasoning based on semi-intervals. Artificial Intelligence 54, 199–227 (1992)
An Advice System for Consumer’s Law Disputes Nuno Costa1, Davide Carneiro1, Paulo Novais1, Diovana Barbieri2, and Francisco Andrade3 1
Department of Informatics, University of Minho, Braga, Portugal [email protected], {dcarneiro,pjon}@di.uminho.pt 2 Faculty of Law, Salamanca University, Salamanca, Spain [email protected] 3 Law School, University of Minho, Braga, Portugal [email protected]
Abstract. The development of electronic commerce in recent years has resulted in a new type of trade that traditional legal systems are not ready to deal with. Moreover, the number of consumer claims has increased, mainly due to the increase in B2C relations, and many of these claims are not getting a satisfactory response. Having this in mind, together with the slowness of the judicial system and the cost/benefit relation of legal procedures, there is a need for new, suitable and effective approaches. In this paper we use Information Technologies and Artificial Intelligence to point out an alternative way of solving these conflicts online. The work described in this paper results in a consumer advice system aimed at speeding up and simplifying the conflict resolution process, both for consumers and for legal experts. Keywords: Alternative dispute resolution, Online dispute resolution, Multi-agent systems.
include negotiation, mediation or arbitration and already take place away from courts. There is now an urgent need to port these methods from the real to the virtual world in order to make them suited to the new business models, resulting in faster and cheaper processes [1].
2 Alternatives to Courts

In most countries, litigation in court has some well-identified disadvantages. Namely, it is usually characterized as a slow and expensive process. These are the two main factors that keep disputant parties away from courts. However, there are other known disadvantages. The fact that it is a public process, with high public exposure, is also undesirable. In fact, parties generally like to maintain privacy about all aspects of the process, which is not always possible in litigation in court. Another major disadvantage is that the parties have an inferior role in the definition of the outcome. Indeed, the outcome is decided by a judge without the intervention of the parties: what the judge decides is the final outcome, regardless of the opinion or satisfaction of the parties. The fact that the judge is appointed instead of being agreed on by the parties is also a disadvantage: when parties select and agree on a third party for solving the dispute they are establishing the first point of agreement and taking the first step towards a mutually satisfactory outcome. In the search for efficient and valid alternatives to traditional litigation in courts that could attenuate the mentioned disadvantages, two main trends have emerged: Alternative Dispute Resolution (ADR) and Online Dispute Resolution (ODR). ADR includes methods such as mediation, negotiation or arbitration that basically aim at putting the parties into contact, establishing points of agreement and peacefully solving the conflict away from the courts. ODR, on the other hand, aims not only at using such methods in virtual environments but also at the development of technology-enabled tools that can improve the work of legal practitioners and the role of the parties in the whole process.

2.1 Alternative Dispute Resolution

Several methods of ADR may be considered, "from negotiation and mediation to modified arbitration or modified jury proceedings" [2]. In a negotiation process the two parties meet each other and try to reach an agreement through conversation and trade-offs, having in common the will to peacefully solve the conflict. It is a non-binding process, i.e. the parties are not obliged to accept the outcome. In a mediation process the parties are guided by a neutral third party, chosen by both, that acts as an intermediary in the dispute resolution process. As in negotiation, it is not a binding process. Finally, there is the arbitration process, which is the one most similar to litigation. In arbitration, a third, independent party hears the parties and, without their intervention, decrees a binding outcome. Although ADR methods represent an important step to keep these processes away from courts, there is still the need for a physical location in which the parties meet, which may sometimes be impracticable in the not so uncommon situations in which parties are from different and geographically distant countries. A new approach is therefore needed, one that uses the advantages of
already traditional ADR methods and, at the same time, relies on information technologies for bringing the parties closer together, even if in a virtual way.

2.2 Online Dispute Resolution

ODR uses new information technologies like instant messaging, email, videoconference, forums, and others to put parties into contact, allowing them to communicate from virtually anywhere in the world. The most basic settings of ODR systems include legal knowledge based systems acting as simple tools to provide legal advice, systems that try to put the parties into contact and also "systems that (help) settle disputes in an online environment" [3]. However, these rather basic systems can be extended, namely with insights from the field of Artificial Intelligence, specifically agent-based technologies and all the well-known advantages that they bring along. A platform incorporating such concepts will no longer be a passive platform that is simply concerned with putting the parties into contact [4]. Instead, it will start to be a dynamic platform that embodies the fears and desires of the parties, adapts to them accordingly, provides useful information on time, suggests strategies and plans of action, and estimates the possible outcomes and their respective consequences. It is no longer a mere tool that assists the parties but one that has a proactive role in the outcome of the process. This approach is clearly close to the second generation ODR envisioned by Chiti and Peruginelli, as it addresses the three characteristics enumerated in [4]: (1) the aim of such a platform does not end with putting the parties into contact but consists in proposing solutions for solving the disputes; (2) the human intervention is reduced; and (3) these systems act as autonomous agents. The development of second generation ODR, in which an ODR platform might act "as an autonomous agent" [4], is indeed an appealing way of solving disputes. ODR is therefore more than simply representing facts and events; a useful software agent that performs useful actions also needs to know the terms of the dispute and the rights or wrongs of the parties [4]. Thus, software agents have to understand law and/or the processes of legal reasoning and their eventual legal responsibility [5]. This kind of ODR environment thus goes much further than just transposing ADR ideas into virtual environments; it should actually be "guided by judicial reasoning", getting disputants "to arrive at outcomes in line with those a judge would reach" [6]. Although there are well-known difficulties to overcome at this level, the use of software agents as decision support systems points to the usefulness of following this path.
3 UMCourt: The Consumer Case Law Study

UMCourt is being developed at the University of Minho in the context of the TIARAC project (Telematics and Artificial Intelligence in Alternative Conflict Resolution). The main objective of this project is to analyze the role that AI techniques, and more particularly agent-based techniques, can play in the domain of Online Dispute Resolution, with the aim of making it a faster, simpler and more efficient process for the parties. In that sense, UMCourt results in an architecture upon which ODR-oriented services
may be implemented, using as support the tools being developed in the context of this project. These tools include a growing database of past legal cases that can be retrieved and analyzed, a well-defined structure for the representation of these cases and the extraction of information, and a well-defined formal model of the dispute resolution process organized into phases, among others. The tools mentioned are being applied in case studies in very different legal domains, ranging from divorce cases to labor law. In this paper, we present the work done to develop an instance of UMCourt for the specific domain of consumer law. As we will see ahead, the distributed and expansible nature of our agent-based architecture is the key factor in being able to develop these extensions, taking as a common starting point the core agents developed. In a few words, the consumer law process goes as follows. The first party, usually the buyer of the product or service, starts the complaint by filling in an online form. The data gathered is then analyzed by a group of agents that configure an intelligent system that has a representation of the legal domain being addressed and is able to issue an outcome. At the same time, other agents that make up the core of the platform analyze past similar cases and their respective outcomes, which are presented to the user in the form of possible outcomes, so that the user can have a more intuitive picture of what may happen during the process and therefore fight for better outcomes. At the end, a human mediator verifies the proposed solution. He can agree with it or he can change it. In both cases, the agents learn from the human expert. If the expert agrees with the proposed outcome, the agents strengthen the validity of the cases used; otherwise the opposite takes place. This means that the system is able to learn from both correct and incorrect decisions: failure driven learning [7]. The developed system is not to be assumed as a fully automatic system whose decisions are binding but as a decision support system aimed at decreasing the human intervention, allowing a better management of the time spent with each case and, nevertheless, still giving the human the decision-making role. The main objective is therefore to create an autonomous system that, based on previous cases and their respective solutions, is able to suggest outcomes for new cases. Among the different law domains that could be the object of our work in a project intended to analyze possible ways of solving disputes online, we chose consumer law. This choice was made after noticing that consumer claims in Portugal, particularly those related to the acquisition of goods or services, are most times not getting the solutions decreed in the Portuguese law, undoubtedly due to unfair access to justice, the high costs of judicial litigation versus the value of the product/service, and the slowness of the judicial procedure. All this generally leads the consumer to give up on the attempt to solve the conflict with the vendor/supplier. Taking all this into consideration, we believe that an agent-based ODR approach, with the characteristics briefly depicted before, is the path to achieve better, faster and fairer access to justice.

3.1 Consumer Law

As mentioned above, the legal domain of this extension to UMCourt is the Portuguese consumer law. Because this domain is quite a wide one, we restricted it to the problem of the sale of consumer goods and the respective warranty contracts. In this
field there is a growing amount of conflict arising between consumers and sellers/providers. In this context, the approach was directed to the modeling of concrete solutions for the conflicts arising from the supply of defective goods (tangible movable goods or real estate). We also thought it relevant to consider financial services as well as the cases in which there are damages arising out of defective products, although this is still work in progress. Regarding the boundaries that were established for this extension of UMCourt, we have tried to model the solutions for conflicts as they are depicted in Decree-Law (DL) 67/2003 as amended by DL 84/2008 (Portuguese laws). Based upon the legal concepts of consumer, supplier, consumer good and the concluded legal business, established in the above-referred DL and in Law 24/1996 (Portuguese law), we developed the logical conduct of the prototype, having in view the concrete resolution of the claims presented by the buyer. In this sense, we considered the literal analysis of the law, as well as the current and most followed opinions in both Doctrine and national Jurisprudence. During the development and assessment of the platform, we realized that the prototype can be useful in cases where the consumer (a physical person) [8] is acquiring the good for domestic/private use [9], or is a third acquirer of the good (Law 24/1996, article 2, nr. 1, and DL 67/2003, articles 1-B, a) and 4, nr. 6). Besides these cases, it also applies to situations in which the consumer has concluded a legal contract of acquisition, namely purchase and sale, a taskwork agreement, or the renting of a tangible movable good or real estate (DL 67/2003, articles 1-A and 1-B, b)). Still, contracting must take place with a supplier acting within the range of his professional activities, whether this is the producer of the good himself, an importer in the European Union, an apparent producer, a representative of the producer or even a seller (Law 24/1996, article 2, nr. 1 and DL 67/2003, article 1-B, c), d) and e)). Finally, the defect must have been claimed within the warranty period (DL 67/2003, articles 5 and 9), and the period in which the consumer is legally entitled to claim his rights towards the supplier has to be respected as well (DL 67/2003, article 5-A). Once the legal requirements are fulfilled, the solutions available to the consumer will be: repair of the good (DL 67/2003, articles 4 and 6); replacement of the good (DL 67/2003, articles 4 and 6); reduction of the price (DL 67/2003, article 4); resolution of the contract (DL 67/2003, article 4); or a statement that there are no rights to be claimed by the consumer (DL 67/2003, article 2, nrs. 3 and 4, and articles 5, 5-A and 6). These provisions have been modeled in the form of logic predicates and are part of the knowledge of the software agents, which use these predicates in order to make and justify their decisions.

3.2 Architecture

As stated before, the architecture of UMCourt is an agent-based one. The development of ODR tools that might act "as an autonomous agent" [15] is indeed an appealing way of solving disputes. Such tools imply that agents are capable of reading their environment (which comprises the parties, the problem domain and its characteristics, the norms and other parameters). Agents also need to have enhanced communication skills that allow them to exchange complex knowledge with both parties. Thus, agents
need a knowledge representation mechanism able to store the data gathered during all the phases of the process (which may include data about the norms addressed, the problem domain, the items in dispute, among others). Agents also need advanced cognitive skills for dealing with this information, eventually inferring conclusions and proposing strategies and advice for the parties. Additionally, agents are a tool suited for addressing some of the new challenges that the legal field is facing. UMCourt is built on such an architecture, as presented in Figure 1, in which a view of the core agents that form the backbone of the architecture is shown. The most notable services of this backbone are the ability to compute the Best and Worst Alternatives to a Negotiated Agreement (BATNA and WATNA, respectively) [10] and the capacity to present solutions based on the observation of previous cases and their respective outcomes [11].
Fig. 1. A simplified vision of the system architecture
The interaction of the user starts with registration in the platform and subsequent authentication. Through the intuitive dynamic interfaces, the user inputs the required information. After submitting the form, the data is immediately available to the agents, which store it in appropriate, well-defined XML files. This data can later be used by the agents for very different tasks: showing it to the user in an intuitive way, automatic generation of legal documents by means of XSL Transformations, generation of possible outcomes, creation of new cases, among others. Alternatively, external agents may interact directly with the platform by using messages that respect the defined standard. Table 1 shows the four high-level agents and some of their most important roles in the system. To develop the agents we are following the evolutionary development
methodology proposed by [12]. We therefore define the high-level agents and their respective high-level roles and iteratively break down the agents into simpler ones with more specific roles. The platform, without the extensions, is at this moment constituted by 20 simpler agents. The agents that are part of the extension will from now on be called extension agents. Between each of these phases, tests can be conducted to assess the behaviour of the overall system.

Table 1. A description of the high-level agents and their respective main roles

High-level Agent: Security
Description: This agent is responsible for dealing with all the security issues of the system.
Main Roles: Establish secure sessions with users; access levels and control; control the interactions with the knowledge base; control the lifecycle of the remaining agents.

High-level Agent: Knowledge Base
Description: This agent provides methods for interacting with the knowledge stored in the system.
Main Roles: Read information from the KB; store new information in the KB; support the management of files within the system.

High-level Agent: Reasoning
Description: This agent embodies the intelligent mechanisms of the system.
Main Roles: Compute the BATNA and WATNA values; compute the most significant outcomes and their respective likeliness; proactively provide useful information based on the phase of the dispute resolution process.

High-level Agent: Interface
Description: This agent is responsible for establishing the interface between the system and the user in an intuitive fashion.
Main Roles: Define an intuitive representation of the information of each process; provide an intuitive interface for the interaction of the user with the system; provide simple and easy access to important information (e.g. laws) according to the process domain and phase.
This means that the advantages of choosing an agent-based architecture are present throughout the whole development process, allowing us to easily remove, add or replace agents. It also makes it easy to add new functionalities to the platform later on, by simply adding new agents and their corresponding services, without interfering with the already stable services. This modular nature of the architecture also increases code reuse, making it easier to develop higher-level services through the composition of smaller ones. The extensibility of the architecture is also increased by the possibility of interacting with remote agent platforms as well as of developing extensions to the architecture, like the one presented in this paper. We also make use of the considerable number of open standards and technologies that are nowadays available for the development of agent-based architectures and that significantly ease the development, namely the FIPA standards and platforms such as JADE or Jadex [14].

3.3 Data Flow in the System

All modules that integrate the system comply with the current legislation on consumer law. When the user fills in the form to start a complaint, he indicates the type of good
acquired, the date of delivery and the date of denunciation of the defective good, also stipulating the date when the good was delivered for repair and/or substitution. He can also indicate the period of the extrajudicial conflict resolution attempt, if applicable. To justify these dates the user has to present evidence, in general the issued invoices, by uploading them in digital format. Concerning the defective good, he must indicate its specification and the probable causes of the defect. Finally, he has to identify the supplier type as being a producer or a seller. Once filled in, the form is submitted. Figure 5 shows a screenshot of the online form. When the form is submitted, a group of actions is triggered with the objective of storing the information in appropriate, well-defined structures (Figures 2 and 3). As mentioned before, these structures are XML files that are validated against XML Schemas in order to maintain the integrity of the data. All these files are automatically created by the software agents when the data is filled in. The extension agent responsible for performing these operations is the agent Cases.
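As an aside, the schema-validation step described above can be sketched in a few lines. This is only an illustration of the mechanism: the file names and the use of the lxml library are our own assumptions, not the actual UMCourt code.

from lxml import etree

def validate_case(xml_path, xsd_path):
    """Return True if the stored case file conforms to the given XML Schema."""
    schema = etree.XMLSchema(etree.parse(xsd_path))   # e.g. a hypothetical case_schema.xsd
    case_doc = etree.parse(xml_path)                  # e.g. a hypothetical case_0001.xml
    if not schema.validate(case_doc):
        print(schema.error_log)                       # lists the integrity problems found
        return False
    return True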
Fig. 2. Representation of the structure of tables that stores the information of each case
After all the important information is filled in and when a solution is requested, these and other agents interact. The agents BATNA and WATNA are started after all the information is provided by the parties through the interface (Figure 5). These agents then interact with the extension agents Cases and Laws in order to retrieve the significant information of the case and the necessary laws to determine the best and worst scenarios that could occur if the negotiation failed and litigation was necessary. The agent Outcomes interacts with the extension agent Cases to request all the information necessary to retrieve the most similar cases. All this information (WATNA, BATNA and possible outcomes) is then presented to the user in a graphical way so that it may be more intuitively perceived (Figure 4). In that sense, the likeliness is represented by the colored curves, which denote the area in which the cases are more likely to occur. A higher likeliness is denoted by a line that is more distant from the axis. To determine this likeliness, the number of cases in the region is used, as well as the type of case (e.g. binding or persuasive precedent, decisions of higher or lower courts) and even whether there are groups of cases instead of single cases, as sometimes highly similar cases are grouped to increase efficiency.
Fig. 3. Excerpts from an XML Schema file of a case
Fig. 4. The graphical representation of the possible outcomes for each party (each party's axis of increasing satisfaction, bounded by its WATNA and BATNA, and the resulting ZOPA)
The graphical representation also shows the range of possible outcomes for each of the parties in the form of the two big colored rectangles and the result of their intersection, the ZOPA (Zone of Potential Agreement) [13], another very important concept that allows the parties to see within which limits an agreement is possible (a minimal sketch of this intersection is given after this paragraph). The picture also shows each case and its position on the ordered axis of increasing satisfaction, in the shape of the smaller rectangles. Looking at this kind of representation of information, the parties are able to see that the cases are more likely to occur for each party when they are in the area where the colored lines are further away from the axis of that party. Therefore, the probable outcome of the dispute will be near the area where the two lines are closer. At this point, the user is in a better position to make a decision as he possesses more information, namely similar cases that have occurred in the past. In this position the user may engage in conversations with the other party in an attempt to negotiate an outcome, may request an outcome, or may advance to litigation if the WATNA is believed to be better than what could be reached through negotiation. If the user decides to ask the platform for a possible solution, the Reasoning extension agent will contact the extension agents Cases and Laws in order to get the information of the case and the laws that should be applied, and will issue an outcome.
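The sketch below illustrates the intersection behind the ZOPA display; it assumes that each party's possible outcomes are mapped onto a common numeric scale bounded by that party's WATNA and BATNA, and the values used are purely illustrative.

def zopa(range_p1, range_p2):
    """Each range is (watna, batna) on a shared scale; returns the overlapping
    interval, i.e. the Zone of Potential Agreement, or None if there is none."""
    low = max(min(range_p1), min(range_p2))
    high = min(max(range_p1), max(range_p2))
    return (low, high) if low <= high else None

print(zopa((200.0, 900.0), (350.0, 1200.0)))  # -> (350.0, 900.0)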
Fig. 5. Screenshot of the online form (in Portuguese)
The neutral third party, when analyzing the suggested outcome, may also interact with these agents to consult a specific law or aspect of the case. He analyses all this information and decides whether or not to accept the decision of the system. After the solution is verified, it is validated and presented to the user.

3.4 Example and Results

To better illustrate these processes, let us use as an example a fictitious case (Figure 6): a physical person that acquires a tangible movable good for domestic/private use. The concluded legal contract is of the purchase and sale type. The date of delivery of the good is October 22nd, 2009. The consumer found the defect in the good on October 26th, 2009, and the good was delivered for repair and/or substitution on October 30th, 2009. There was no extrajudicial conflict resolution attempt. As evidence, the user uploaded all invoices relative to the dates mentioned. Concerning the defect that originated the complaint, the user mentioned that the good did not meet the description that was made to him when it was bought. In this case, the supplier acts within the range of his professional activities and is the producer of the good. When a solution is requested, the system proceeds to the case analysis and reaches a solution. The good is within the warranty period: 11 days have elapsed, calculated as the difference between the date of delivery of the good and the current date.
Fig. 6. Excerpt from a fictitious case
The two-month limit counted from the date of defect detection has been respected: 7 days, calculated as the difference between the date the defect was found and the current date. Two years have not passed since the date of denunciation: 2 days, calculated as the difference between the date of denunciation and the current date, deducting the period during which the user was deprived of the good because of repair/substitution (since no date of delivery of the good after repair and/or substitution is declared, the default is the current date). The period of the extrajudicial conflict resolution attempt is also deductible, but in this case it does not apply. As the good was delivered for repair and/or substitution, the supplier has two choices: either to repair the good within 30 days at the most, without great inconvenience and at no cost (travel expenses, manpower and material) to the consumer, or to replace the good with an equivalent one. This still rather simplistic approach is very useful as a first step in the automation of these processes. The case shown here is one of the simplest ones, but the operations performed significantly ease the work of the law expert, allowing him to focus on higher-level tasks while simpler tasks that can be automated are performed by autonomous agents.
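For illustration, the date-based admissibility checks of this example could be expressed as a single predicate in the spirit of the logic predicates mentioned in Section 3.1. This is only a sketch: the two-year warranty period for movable goods is our reading of DL 67/2003, the evaluation date is assumed, and the names are hypothetical rather than taken from the actual agent code.

from datetime import date, timedelta

def claim_is_admissible(delivery, defect_found, denounced, today, days_deprived=0):
    # within the (assumed two-year) warranty period counted from delivery
    within_warranty = today <= delivery + timedelta(days=2 * 365)
    # denunciation within two months of detecting the defect
    denounced_in_time = denounced <= defect_found + timedelta(days=60)
    # two-year limitation counted from the denunciation, discounting the time
    # during which the consumer was deprived of the good (repair/substitution)
    elapsed = (today - denounced).days - days_deprived
    not_time_barred = elapsed <= 2 * 365
    return within_warranty and denounced_in_time and not_time_barred

# The fictitious case of Section 3.4, with an assumed evaluation date:
print(claim_is_admissible(date(2009, 10, 22), date(2009, 10, 26),
                          date(2009, 10, 30), date(2009, 11, 2)))  # -> True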
4 Conclusions

In the context of consumer law, only some aspects have been dealt with; the following still remain to be modeled: a) the situations covered by the Civil Code, when DL 67/2003 is not to be applied; b) the cases considered in DL 383/89 of damages arising from defective products; and c) the issues of financial services, namely concerning consumer credit. The work developed until now, however, is already enough to assist law experts, enhancing the efficiency of their work. The next steps are aimed at further improvements of the agents while at the same time continuing the extension to other aspects of consumer law that have not yet been addressed in this work. Specifically, we will adapt a Case-based Reasoning
Model that has already been successfully applied in previous work in order to estimate the outcomes of each case based on past stored cases. Acknowledgements. The work described in this paper is included in TIARAC Telematics and Artificial Intelligence in Alternative Conflict Resolution Project (PTDC/JUR/71354/2006), which is a research project supported by FCT (Science & Technology Foundation), Portugal.
References 1. Klamig, L., Van Veenen, J., Leenes, R.: I want the opposite of what you want: summary of a study on the reduction of fixed-pie perceptions in online negotiations. Expanding the horizons of ODR. In: Proceedings of the 5th International Workshop on Online Dispute Resolution (ODR Workshop 2008), Firenze, Italy, pp. 84–94 (2008) 2. Goodman, J.W.: The pros and cons of online dispute resolution: an assessment of cybermediation websites. Duke Law and Technology Review (2003) 3. De Vries, B.R., Leenes, R., Zeleznikow, J.: Fundamentals of providing negotiation support online: the need for developing BATNAs. In: Proceedings of the Second International ODR Workshop, Tilburg, pp. 59–67. Wolf Legal Publishers (2005) 4. Chiti, G., Peruginelli, G.: Artificial intelligence in alternative dispute resolution. In: Proceedings of LEA, pp. 97–104 (2002) 5. Brazier, T., Jonker, M., Treur, J.: Principles of Compositional Multi-agent System Development. In: Proceedings of the 15th IFIP World Computer Congress, WCC 1998, Conference on Information Technology and Knowledge Systems, pp. 347–360 (1998) 6. Muecke, N., Stranieri, A., Miller, C.: The integration of online dispute resolution and decision support systems. In: Expanding the horizons of ODR, Proceedings of the 5th International Workshop on Online Dispute Resolution (ODR Workshop 2008), Firenze, Italy, pp. 62–72 (2008) 7. Leake, D.B.: Case-Based Reasoning: Experiences, Lessons, and Future Directions. AAAI Press, Menlo Park (1996) 8. Almeida, T.: Lei de defesa do consumidor anotada, Instituto do consumidor. Lisboa (2001) (in portuguese) 9. Almeida, C.F.: Direito do Consumo. Almedina. Coimbra (2005) (in portuguese) 10. Notini, J.: Effective Alternatives Analysis in Mediation: “BATNA/WATNA” Analysis Demystified (2005), http://www.mediate.com/articles/notini1.cfm (last accessed on September 2010) 11. Andrade, F., Novais, P., Carneiro, D., Zeleznikow, J., Neves, J.: Using BATNAs and WATNAs in Online Dispute Resolution. In: Proceedings of the JURISIN 2009 - Third International Workshop on Juris-informatics, Tokyo, Japan, pp. 15–26 (2009) ISBN 4915905-38-1 12. Jennings, N., Faratin, P., Lomuscio, A., Parsons, S., Wooldridge, M., Sierra, C.: Automated Negotiation: Prospects, Methods and Challenges. Group Decision and Negotiation 10(2), 199–215 (2001) 13. Lewicki, R., Saunders, D., Minton, J.: Zone of Potential Agreement. In: Negotiation, 3rd edn. Irwin-McGraw Hill, Burr Ridge (1999) 14. Bellifemine, F., Caire, G., Greenwood, D.: Developing Multi-Agent Systems with JADE. John Wiley & Sons, Ltd., West Sussex (2007) 15. Peruginelli, G., Chiti, G.: Artificial Intelligence in alternative dispute resolution. In: Proceedings of the Workshop on the Law of Electronic Agents – LEA (2002)
SACMiner: A New Classification Method Based on Statistical Association Rules to Mine Medical Images

Carolina Y. V. Watanabe1, Marcela X. Ribeiro2, Caetano Traina Jr.3, and Agma J. M. Traina3

1 Federal University of Rondônia, Department of Computer Science, Porto Velho, RO, Brazil
2 Federal University of São Carlos, Department of Computer Science, São Carlos, SP, Brazil
3 University of São Paulo, Department of Computer Science, São Carlos, SP, Brazil
[email protected]
Abstract. The analysis of images for decision making has become more accurate thanks to the technological progress in acquiring medical images. In this scenario, new approaches have been developed and employed in computer-aided diagnosis in order to provide a second opinion to the physician. In this work, we present SACMiner, a new classification method that takes advantage of statistical association rules. It works with continuous attributes and avoids introducing into the learning model the bottleneck and inconsistencies caused by a discretization step, which is required by most associative classification methods. Two new algorithms are employed in this method: StARMiner∗ and the V-classifier. StARMiner∗ mines association rules over continuous feature values, and the V-classifier decides which class best represents a test image, based on the statistical association rules mined. The results comparing SACMiner with other traditional classifiers show that the proposed method is well-suited to the task of classifying medical images.

Keywords: Statistical association rules, Computer-aided diagnosis, Associative classifier, Medical images.
1 Introduction

The complexity of medical images and the high volume of exams per radiologist in a screening program can lead to a scenario prone to mistakes. Hence, it is important to enforce double reading and analysis, which are effective but costly measures. Computer-aided diagnosis (CAD) technology offers an alternative to double reading, because it can provide a computer output as a "second opinion" to assist radiologists in interpreting images. Using this technology, the accuracy and consistency of radiological diagnoses can be improved, and the image reading time can also be reduced [1]. Therefore, the need for classification methods to speed up and assist radiologists in the image analysis task has increased. These methods must be more accurate and demand low computational cost, in order to provide a timely answer to the physician. One promising approach is associative classification mining, which uses association rule discovery techniques to construct classification systems. In the image domain, images are usually submitted to processing algorithms to produce a (continuous) feature vector
representation of them. The feature vectors are introduced to association rule mining algorithms to reveal their intra- and inter-class dependencies. These rules are then employed for classification. In general, the association-rule based approaches reach higher values of accuracy when compared to other rule-based classification methods [2]. A preliminary version of this work was presented at ICEIS 2010 [3]. Here, we also present the new method called Statistical Associative Classifier Miner (SACMiner), aimed at breast cancer detection using statistical association rules. The method employs statistical association rules to build a classification model. First, the images are segmented and submitted to a feature extraction process. Each image is represented by a vector of continuous visual features, such as texture, shape and color. In the training phase, statistical association rules are mined relating continuous features and image classes. The rules are mined using a new algorithm called StARMiner*, which is based on the feature selection algorithm StARMiner, proposed by [4], to produce more semantically significant patterns. Unlike the other methods, StARMiner* does not require the discretization step. This avoids embedding the inconsistencies produced by the discretization process in the mining process and also makes the whole process faster. In the test phase, a voting classifier decides which class best represents a test image, based on the statistical association rules mined. The experiments comparing SACMiner with traditional classifiers show that the proposed method reaches high values of accuracy, sensitivity and specificity. These results indicate that SACMiner is well-suited to classify regions of interest of mammograms and breast tissue to detect breast cancer. Another advantage of SACMiner is that it builds a learning model that is easy to understand, making the user aware of why an image was assigned to a given class. Moreover, the proposed method has a low computational cost (linear in the number of dataset items) when compared to other classifiers. This paper is structured as follows. Section 2 presents concepts and previous work related to this paper. Section 3 details the proposed method. Section 4 shows the experiments performed to evaluate the method. Finally, Section 5 gives the conclusion and future directions of this work.
2 Background and Related Work

Associative classification is a recent approach in the data mining field that employs association rules to build predictive models to predict the classes of new samples. One of the major advantages of associative classification over other classification methods is the easily understandable model, based on association rules, that is employed in the classification. Generally, domain specialists feel more comfortable when using such an approach, because they can be aware of the model employed to automatically label the samples. Several associative classification techniques have been proposed in recent years. In general, these techniques use several different approaches to discover frequent itemsets, extract rules, rank rules, remove redundant and noisy rules, and weight the rules in the classification approach [5]. Associative classification approaches have three main phases: first, a traditional association rule mining algorithm is applied to a training dataset to mine the association rules; second, a learning model is built based on the mined rules; third, the learning model is employed to classify new data samples.
The problem of mining association rules consists in finding sets of items that frequently occur together in a dataset. It was first stated in [6] as follows. Let I = {i1, ..., in} be a set of literals called items. A set X ⊆ I is called an itemset. Let R be a table with transactions t involving elements that are subsets of I. An association rule is an expression of the form X → Y, where X and Y are itemsets. X is called the body or antecedent of the rule, and Y is called the head or consequent of the rule. Let |R| be the number of transactions in relation R, and let |Z| be the total number of occurrences of the itemset Z in transactions of relation R. The support and confidence measures (Equations 1 and 2) are used to determine the rules returned by the mining process.

Support = |X ∪ Y| / |R|    (1)

Confidence = |X ∪ Y| / |X|    (2)
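As an illustration only, the two measures can be computed directly from a toy transaction table; the items and transactions below are ours, not from the paper.

def support(itemset, transactions):
    # |X ∪ Y| / |R|: fraction of transactions containing every item of the rule
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(body, head, transactions):
    # |X ∪ Y| / |X|: among transactions containing the body, fraction also containing the head
    body_hits = [t for t in transactions if body <= t]
    return sum(1 for t in body_hits if head <= t) / len(body_hits) if body_hits else 0.0

R = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
X, Y = {"bread"}, {"milk"}
print(support(X | Y, R))    # 0.5
print(confidence(X, Y, R))  # 0.666...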
The problem of mining association rules, as it was first stated, involves finding rules on a database of categorical items that satisfy the restrictions of minimum support and minimum confidence specified by the user. This problem is also called traditional association rule mining and it involves finding rules that correlate categorical (nominal) data items. Many associative classifiers have been proposed in the literature, employing the traditional association rule mining and some variation in the first phase of the associative classification process. In [7], a frequent itemset determination based on the support is employed to find "Emerging Patterns". The variation in support values among different classes generates a score that is employed in the classification process. According to [8], associative classifiers based on traditional association rules have two major drawbacks: the high processing overhead resulting from the large number of association rules, and the overfitting problem, which is caused by the confidence-based rule evaluation measure employed in most associative classifiers. However, the direct application of associative classifiers to analyze medical images is another challenge, mainly regarding the continuous nature of the image features that should be employed to build the association-rule-based learning model of the classifier. In fact, association rules have been employed in image mining using discrete and categorical attributes. One of these works was presented in [9]. In this work, the images are previously segmented into blobs. The segmentation process aggregates pixels according to their similarity. After this process, a feature vector is generated to represent each blob. A distance function is applied to compare pairs of blobs from different images, and if they are considered similar, they are represented by the same object identifier. Each image is represented by a record composed of a set of object identifiers. The image records are submitted to a traditional association rule mining algorithm, such as Apriori [10], generating rules relating the object identifiers. Previous works applying association rules to classify mammograms were also developed, showing promising results, where the focus was the development of associative classifiers based on traditional association rules. For example, [11] presented an association rule method to classify mammograms based on categorical items. In this method,
a record combining three shape features and the image classification is generated for each image. The features are discretized into ten equal-sized intervals before applying the association mining algorithm. The rules are mined with the restriction of not having a classification item in the body of the rule. A new image is classified according to a kind of voting classifier, where the support and the confidence of the rules are employed to label the new images. The main problem of this technique is the equal-sized discretization process, which may embed inconsistencies in the data, increasing the error rate of the classifier. In [12], an associative classifier was presented to classify mammograms. In the preprocessing phase, images are cropped and enhanced using histogram equalization. Features of mean, variance, skewness and kurtosis are extracted from the images and, together with some other descriptors (e.g. breast position and type of tissue), compose the image features that are used in the process of association rule mining. The rules are mined using low confidence values and the classifier label is restricted so that it occurs only in the head of the rules. The associative classifier employed is based on the voting strategy, i.e. the classifier counts the number of rules that a new image satisfies and chooses its class. In [13], a method that employs association rules on a set of discretized features of mammogram images was proposed. The method uses a discretized feature vector and keywords from the image diagnosis to compose the image register. The training image registers were submitted to an association-rule mining algorithm, restricting the keywords to occur only in the head of the rule. The mined rules were submitted to an associative classifier to give a score for each keyword. Instead of returning just a class label, the method returns a set of keywords that describe a given image. If the score, calculated according to the mined association rules, is greater than a given value, the keyword is taken to compose the set of diagnosis keywords of the image; otherwise the keyword is discarded. In [2], a method for mammogram classification that uses a weighted association-rule based classifier is presented. First, the images are preprocessed and texture features are extracted from each region of interest. Second, the features are discretized and submitted to an association-rule algorithm. The produced rules are employed for mammogram classification. Images are represented by feature vectors of continuous values. In fact, most papers from the literature that work with continuous values require the discretization of continuous data before applying the association rule mining. However, the discretization process causes loss of information in the mining process. This may cause a significant decrease in the precision of the classifier, which is not desirable, especially when working with medical image analysis. Thus, an approach that handles quantitative values is more appropriate for working with images. In [14,4,15], procedures for mining quantitative association rules, which relate continuous-valued attributes, are presented. Statistical association rules were initially defined in [14]. An example of a statistical association rule is:

sex = female → hourly wage: mean = U.S. 12.07, hourly wage: overall mean = U.S. 9.09

A statistical association rule indicates a relationship or an interesting trend in a database.
The main purpose of statistical association rules is to locate subsets of the
database that have unexpected behaviors. Thus, a statistical association rule, in its most general form, can be expressed as:

subset of the population → interesting behavior

For a set of quantitative values, the best description of their behavior is their distribution. Aumann and Lindell [14] suggest the use of mean and variance to describe the behavior of a quantitative attribute and to ensure the mining of only interesting patterns. They define that a subset has an interesting behavior if its distribution is different from that of the rest of the population. This interesting behavior can be described in terms of various measures that reflect the statistical distribution of the subset, e.g. mean, median and variance. In this work, we propose to employ statistical association rules to improve computer-aided diagnosis systems without depending on discretized features, as described in the next section. Our method, called SACMiner, suggests a second opinion to the radiologists. Two algorithms were developed to support the method. The first one is the Statistical Association Rule Miner∗ (StARMiner∗), which mines rules selecting the features that best represent the images. The second algorithm is the Voting Classifier (V-Classifier), which uses the rules mined by StARMiner∗ to classify images. To validate the proposed method, we performed experiments using several datasets and present the results from three different datasets of medical images. We also compare SACMiner with other well-known classifiers from the literature. The results indicate that the statistical association rule approach presents high quality in the task of diagnosing medical images.
3 Proposed Method: SACMiner

The proposed method, called SACMiner, employs statistical association rules to suggest diagnoses of medical images as a second opinion to the radiologists. Two algorithms were developed to support the method. The first one is the Statistical Association Rule Miner∗ (StARMiner∗), which mines rules selecting the features that best represent the images. The second algorithm is the Voting Classifier (V-Classifier), which uses the rules mined by StARMiner∗ to classify images. As shown in Figure 1, the method works in two phases. The first one is the training phase, which includes the image representation of the training set and the statistical association rule mining executed by the proposed algorithm StARMiner∗. StARMiner∗ selects the most meaningful features and produces the statistical association rules. The second one is the test phase, in which the images from the test set are represented by feature vectors that are submitted to the classification step based on the mined association rules. The pipeline and the algorithm of the proposed method are presented in Figure 1 and Algorithm 1, respectively. The method works in two phases: training and test. In the first one, features are extracted from the images and placed in the corresponding feature vectors. This step includes the image pre-processing. After that, the feature vectors are the input to the SACMiner method. Two algorithms were developed to support the method: StARMiner∗ and the Voting Classifier (V-Classifier).
Algorithm 1. Steps of the Proposed Method
Require: Training images, a test image
Ensure: Report (class of the test image)
1: Extract features of the training images
2: Execute the StARMiner∗ algorithm to mine association rules
3: Extract features of the test image
4: Execute the classifier
5: Return the suggested report (class)
StARMiner∗ uses the feature vectors and the classes of the training images to perform statistical association rule mining. It selects the most meaningful features and produces the statistical association rules. In the test phase, the feature vectors from the test images are extracted and submitted to the V-Classifier, which uses the statistical association rules produced by StARMiner∗ to suggest a diagnosis class for the test image. We discuss each step of the SACMiner method in the following subsections.

3.1 The StARMiner∗ Algorithm

StARMiner∗ is a supervised classification model whose goal is to find statistical association rules over the feature vectors extracted from images, providing the attributes that best discriminate images into categorical classes. It returns rules relating feature intervals and image classes. Formally, let us consider xj an image class and fi an image feature (attribute). Let Vmin and Vmax be the limit values of an interval. A rule mined by the StARMiner∗ algorithm has the form:
fi[Vmin, Vmax] → xj    (3)
An example of a rule mined by StARMiner∗ is 5[−0.07, 0.33] → benignant mass. This rule indicates that images having the 5th feature value in the closed interval [−0.07, 0.33] tend to be images of benignant masses. Algorithm 2 shows the main steps of StARMiner∗. To perform the association rule mining, the dataset under analysis is scanned just once. StARMiner∗ calculates the mean and the standard deviation of each attribute and the Z value used in the hypothesis test. Two restrictions of interest in the mining process must be satisfied. The first restriction is that the feature fi must have a behavior in images from class xj different from its behavior in images from the other classes. The second restriction is that the feature fi must present a uniform behavior in every image from class xj. The restrictions of interest are processed in line 7. Let T be the image dataset, xj an image class, Txj ⊆ T the subset of images of class xj, and fi the i-th feature of the feature vector. Let μfi(Txj) and σfi(Txj) be, respectively, the mean and the standard deviation of feature fi in images from class xj; μfi(T − Txj) and σfi(T − Txj) correspond, respectively, to the mean and the standard deviation of the fi values of the images that are not from class xj. A rule fi[Vmin, Vmax] → xj is computed by the algorithm only if the rule satisfies the input thresholds Δμmin, σmax and γmin:
– Δμmin is the minimum allowed difference between the average of the feature fi in the images from class xj and in the remaining images of the dataset;
– σmax is the maximum standard deviation of fi values allowed in the class xj;
– γmin is the minimum confidence to reject the hypothesis H0.
The hypothesis H0 states that the means of the fi values inside and outside the class xj are statistically equal:

H0: μfi(Txj) = μfi(T − Txj)    (4)
The values of Vmin and Vmax are computed as:

Vmin = μfi − σmax    (5)

Vmax = μfi + σmax    (6)
StARMiner∗ has the interesting property that the maximum number of rules mined for a class xj is the total number k of image features. The complexity of this algorithm is Θ(ckN), where N is the number of instances of the dataset, k is the number of features and c is the number of classes. StARMiner∗ is based on the idea of the feature selection algorithm StARMiner. The main difference between the StARMiner and StARMiner∗ algorithms is that the latter has the advantage of mining more semantically meaningful rules. While StARMiner only relates classes to the features that best discriminate them, StARMiner∗ finds rules relating a class to the feature intervals where a particular behavior has occurred.
Algorithm 2: The StARMiner∗ Algorithm
Require: Database T: table of feature vectors {xj, f1, f2, ..., fn}, where xj is the image class and fi are the image features; thresholds Δμmin, σmax and γmin
Ensure: Mined rules
1: Scan database T;
2: for each class xj do
3:   for each feature fi do
4:     Compute μfi(Txj) and μfi(T − Txj);
5:     Compute σfi(Txj) and σfi(T − Txj);
6:     Compute Z value Zij;
7:     if (μfi(Txj) − μfi(T − Txj)) ≥ Δμmin AND σfi(Txj) ≤ σmax AND (Zij < Z1 OR Zij > Z2) then
8:       Write xj → fi[μfi − σmax, μfi + σmax];
9:     end if
10:   end for
11:   if any rule is found then
12:     Choose the feature fi whose Z value is the biggest
13:     Write fi[μfi − σmax, μfi + σmax] → xj;
14:   end if
15: end for
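The core of Algorithm 2 can be sketched as follows. This is not the authors' implementation: it takes the thresholds as described, uses a two-sample Z statistic with a two-tailed critical value derived from γmin (an assumption about how Z1 and Z2 are chosen), and omits the final per-class selection of the single feature with the largest Z value (lines 11-14).

import numpy as np
from scipy.stats import norm

def starminer_like(X, y, delta_mu_min, sigma_max, gamma_min):
    """X: (n_samples, n_features) array; y: class labels.
    Returns rules as (feature_index, v_min, v_max, class_label)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    z_crit = norm.ppf(1 - (1 - gamma_min) / 2)   # assumed two-tailed cut-off for H0
    rules = []
    for cls in np.unique(y):
        inside, outside = X[y == cls], X[y != cls]
        for i in range(X.shape[1]):
            mu_in, mu_out = inside[:, i].mean(), outside[:, i].mean()
            sd_in, sd_out = inside[:, i].std(ddof=1), outside[:, i].std(ddof=1)
            se = np.sqrt(sd_in**2 / len(inside) + sd_out**2 / len(outside))
            z = (mu_in - mu_out) / se if se > 0 else 0.0
            # thresholds of line 7: mean difference, in-class spread, rejection of H0
            if (mu_in - mu_out) >= delta_mu_min and sd_in <= sigma_max and abs(z) > z_crit:
                rules.append((i, mu_in - sigma_max, mu_in + sigma_max, cls))
    return rules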
3.2 The Proposed Classifier

We developed a classifier that uses the rules mined by StARMiner∗. The main idea is counting 'votes'. For each class, we count the number of rules that are satisfied. This count is normalized by the number of rules of the class. The output is the class that obtains the most votes. Algorithm 3 presents the V-Classifier method.

Algorithm 3: The V-Classifier
Require: Mined rules of the form fi[μfi − σmax, μfi + σmax] → xj, and a feature vector g from a test image, where gi are the features
Ensure: Report (class of the test image)
1: for each class xj do
2:   votexj = 0;
3:   for each feature fi do
4:     if gi is in [μfi − σmax, μfi + σmax] then
5:       votexj = votexj + 1;
6:     end if
7:   end for
8:   Divide votexj by the number of rules of the class xj;
9: end for
10: Return the class with max(votexj).
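Using the rule representation of the previous sketch, the V-Classifier of Algorithm 3 reduces to a few lines; again this is an illustrative sketch, not the authors' code.

from collections import defaultdict

def v_classify(rules, g):
    """rules: iterable of (feature_index, v_min, v_max, class_label);
    g: feature vector of a test image. Returns the class with the most votes."""
    votes, counts = defaultdict(float), defaultdict(int)
    for i, v_min, v_max, cls in rules:
        counts[cls] += 1
        if v_min <= g[i] <= v_max:
            votes[cls] += 1.0
    normalised = {cls: votes[cls] / counts[cls] for cls in counts}  # line 8 of Algorithm 3
    return max(normalised, key=normalised.get)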
We can observe that the computational cost of SACMiner is low, since StARMiner∗ is linear in the number of images (dataset items) and the V-Classifier is linear in the
number of rules. The low computational cost of the method is stressed by the fact that StARMiner∗ has the property that the maximum number of rules mined by a class xj is the total number k of image features.
4 Experiments

We performed several experiments to validate the SACMiner method. Here, we present three of them. Two are related to the task of suggesting diagnoses for Regions Of Interest (ROIs) of mammograms, considering benignant and malignant masses, and the third one is related to breast tissue classification and breast cancer detection using electrical impedance spectroscopy. We used two different approaches. In the first one, the experiments were performed using the holdout approach, in which we employed 25% of the images from the datasets for testing and the remaining images for training. The second approach was leave-one-out. To show the efficacy of the method, we compare it with well-known classifiers: 1-NN, C4.5, Naive Bayes and 1R. The 1-nearest neighbor (1-NN) is a classifier that uses the class label of the nearest neighbor, found using the Euclidean distance, to classify a new instance. C4.5 [16] is a classifier that builds a decision tree in the training phase. Naive Bayes [17] is a classifier that uses a probabilistic approach based on Bayes' theorem to predict the class labels. Finally, 1R [18] is a rule-based classifier that classifies an object/image on the basis of a single attribute (its rules are 1-level decision trees); it requires discrete attributes. To compare the classifiers, we compute measures of accuracy, sensitivity and specificity. The accuracy is the portion of cases of the test dataset that were correctly classified. The sensitivity is the portion of the positive cases that were correctly classified. The specificity is the portion of the negative cases that were correctly classified. An optimal prediction can achieve 100% sensitivity (i.e. it predicts all images from the malignant group as malignant) and 100% specificity (i.e. it does not predict any image from the benignant class as malignant). To compute these measures, let us consider the following cases:
– True positive: malignant masses correctly diagnosed as malignant;
– False positive: benignant masses incorrectly identified as malignant;
– True negative: benignant masses correctly identified as benignant;
– False negative: malignant masses incorrectly identified as benignant.
Let the number of true positives be TP, the number of false positives be FP, the number of true negatives be TN and the number of false negatives be FN. Equations 7, 8 and 9 present the formulas of accuracy, sensitivity and specificity, respectively.

accuracy = (TP + TN) / (TP + TN + FP + FN)    (7)

sensitivity = TP / (TP + FN)    (8)

specificity = TN / (TN + FP)    (9)
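These three measures translate directly into code; the counts used in the call below are illustrative only and are not results from the paper.

def evaluation_measures(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)      # Equation 7
    sensitivity = tp / (tp + fn)                    # Equation 8
    specificity = tn / (tn + fp)                    # Equation 9
    return accuracy, sensitivity, specificity

print(evaluation_measures(tp=140, tn=90, fp=9, fn=11))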
Experiment 1: The 250 ROIs Dataset. This dataset consists of 250 ROIs taken from mammograms collected from the Digital Database for Screening Mammography (DDSM). The dataset is composed of 99 benignant and 151 malignant mass images. In the image pre-processing step, the images were segmented using the improved EM/MPM algorithm proposed in [19]. This algorithm segments the images using a technique that combines a Markov Random Field and a Gaussian Mixture Model to obtain a texture-based segmentation. The segmentation of the images is accomplished according to a fixed number of different texture regions. In this experiment, we segmented the images into five regions. After the segmentation step, the main region is chosen for the feature extraction. This choice is based on the visual characteristic that all these ROIs are centered. Hence, our algorithm uses the centroid of the image to choose the main region. Figure 2 illustrates the pre-processing step.
Fig. 2. (a) Original image; (b) Image segmented in 5 regions; (c) Mask of the main region
For the segmented region, eleven shape-based features are extracted: area, major axis length, minor axis length, eccentricity, orientation, convex area, filled area, Euler number, solidity, extent and perimeter. It is important to highlight that the generated feature vector is quite compact, composed of only 11 values. In step 2, the feature vectors from the training image set were submitted to StARMiner∗ to mine statistical association rules. The algorithm mined the following rules:

A[−0.0120, 0.1770] → Benignant    (10)

C[−0.0075, 0.1825] → Benignant    (11)

F[−0.0133, 0.1767] → Benignant    (12)

L[0.2973, 0.4873] → Malignant    (13)
In these rules, A represents the tumor mass area feature; C, the convex area feature; F, the filled area feature; and L, the major axis length feature. These rules mean that masses whose area is in the interval [−0.0120, 0.1770], convex area in [−0.0075, 0.1825] and filled area in [−0.0133, 0.1767] tend to be benignant. On the other hand, masses whose major axis length is in [0.2973, 0.4873] tend to be malignant. For this experiment, we considered a confidence level of 90% for the Z-test and to compute the intervals of the rules. The four mined rules and the feature vectors of the test images were introduced to the classifier. The results using the holdout and the leave-one-out approaches are shown in Tables 1 and 2, respectively.
Analyzing Table 1, we observe that SACMiner presented the highest values of accuracy and specificity in the holdout approach. When we analyze the sensitivity, we note that Naive Bayes obtained the best result. However, when we consider its specificity as well, we observe that Naive Bayes has a low power to classify the benignant images. In Table 2, SACMiner led to the highest value of accuracy, together with the 1R classifier. In this case, the association rule approach is the best one to classify masses. One advantage of SACMiner over 1R is that SACMiner does not demand the data discretization step. Besides, SACMiner produced just four rules, while 1R produced eight. All the rules mined by 1R were from the major axis length feature (L), the second attribute of the feature vector, and they are described as:

if L < 0.1840 then Benignant    (14)
else if L < 0.2181 then Malignant    (15)
else if L < 0.2367 then Benignant    (16)
else if L < 0.2572 then Malignant    (17)
else if L < 0.2716 then Benignant    (18)
else if L < 0.3126 then Malignant    (19)
else if L < 0.3424 then Benignant    (20)
else if L ≥ 0.3424 then Malignant    (21)
Experiment 2: The 569 ROIs Dataset This dataset consists of 569 feature vectors obtained from the UCI Machine Learning Repository [20] (http://archive.ics.uci.edu/ml/datasets.html). These features describe characteristics of the cell nuclei present in the image.
Table 3. Comparison between SACMiner and other well-known classifiers using the holdout approach (classifiers compared: SACMiner, 1R, Naive Bayes, C4.5, 1-NN)
Features were computed from breast masses, which are classified as benignant or malignant. For each of the three cell nuclei, the following ten features were computed: mean of distances from center to points on the perimeter, standard deviation of gray-scale values, perimeter, area, smoothness, compactness, concavity, concave points, symmetry and fractal dimension. Thus, the feature vectors have 30 features, and the classes are distributed as 357 benignant and 212 malignant. In step 2, StARMiner∗ mined 19 rules for each class. The results using the holdout and the leave-one-out approaches are shown in Table 3 and Table 4, respectively. When we analyze the results using the holdout approach in Table 3, we can see that SACMiner achieves the highest values of accuracy, sensitivity and specificity. When we consider the results using the leave-one-out approach, we also observe that its accuracy is among the highest, matching that of 1-NN, and that it achieves the highest sensitivity. Experiment 3: The Tissue Breast Dataset Another kind of exam that can be used to discriminate breast tissue and especially to detect breast cancer is electrical impedance spectroscopy (EIS) [21]. A dataset represented by this approach was obtained from the UCI Machine Learning Repository, and it is composed of 106 feature vectors. Each vector is composed of 9 features and is classified into one of six classes of freshly excised tissue, considered using electrical impedance measurements: Carcinoma (21 cases), Fibro-adenoma (15), Mastopathy (18), Glandular (16), Connective (14), Adipose (22). The first three are pathological tissue classes and the last three are normal tissue classes. Details concerning the data collection procedure as well as the classification of the cases and the frequencies used are given in [21].
Table 5. Comparison of accuracy between SACMiner and other well-known classifiers using the holdout approach

Classifiers    Total   Carcinoma   Fibro-adenoma
SACMiner       0.885   1           0.75
1R             0.577   1           0
Naive Bayes    0.731   1           0.333
C4.5           0.769   1           0
1-NN           0.731   0.833       0
For this dataset, the classification consisted in discriminating among the classes. To evaluate the discrimination power of the method among the classes, we performed the holdout approach, in which 75% of the data were randomly selected for training the learning algorithms and the remaining data were used for testing. We computed the average accuracy obtained by the classifiers and the accuracy per class. In the training phase, the feature vectors were submitted to StARMiner∗, which mined 34 statistical rules, as follows:
Carcinoma: 4 rules
Fibro-adenoma: 8 rules
Mastopathy: 7 rules
Glandular: 7 rules
Connective: 5 rules
Adipose: 3 rules
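The 75/25 holdout protocol with per-class accuracy can be sketched as follows; since StARMiner∗ and the associative classifier are not publicly available, a Gaussian Naive Bayes learner is used here purely as a stand-in, and X, y denote the EIS feature vectors and tissue labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

def holdout_per_class_accuracy(X, y, seed=0):
    # 75% of the data for training, the remaining 25% for testing
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                              stratify=y, random_state=seed)
    model = GaussianNB().fit(X_tr, y_tr)              # stand-in learner
    cm = confusion_matrix(y_te, model.predict(X_te))
    per_class = cm.diagonal() / cm.sum(axis=1)        # accuracy (recall) per tissue class
    overall = cm.diagonal().sum() / cm.sum()          # overall accuracy
    return overall, per_class
```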
According to Table 5, SACMiner leads to the highest value of average accuracy in the holdout approach, reaching an accuracy rate of 88.5%. Moreover, SACMiner also leads in accuracy when we consider the accuracy per class, which means that SACMiner is better than, or performs the same as, the other methods in distinguishing Carcinoma, Fibro-adenoma, Mastopathy, Glandular and Connective tissues. Therefore, SACMiner is well suited to detect breast cancer compared with the other traditional methods from the literature. The improvement in accuracy is up to 11.6% compared with C4.5, which produced the second best result (76.9% accuracy). Moreover, the rules generated by StARMiner∗ compose a learning model that is easy to understand, making the user aware of why an image was assigned to a given class.
5 Conclusions In this paper we proposed SACMiner, a new method that employs statistical association rules to support computer-aided diagnosis for breast cancer. The results on real datasets show that the proposed method achieves the highest values of accuracy when compared with other well-known classifiers (1R, Naive Bayes, C4.5 and 1-NN). Moreover, the method shows a proper balance between sensitivity and specificity, being slightly more specific than sensitive, which is desirable in the medical domain, since it is more accurate at spotting the true positives.
Two new algorithms were developed to support the method, StARMiner∗ and V-Classifier. StARMiner∗ does not demand the discretization step and generates a compact set of rules to compose the learning model of SACMiner. Moreover, its computational cost is low (linear in the number of dataset items). V-Classifier is an associative classifier based on the idea of class votes. The experiments showed that the SACMiner method produces high values of accuracy when compared to other traditional classifiers. Thus, SACMiner helps to minimize the drawbacks of using association rules by employing the statistical approach, producing a compact set of rules with strong generalization power in the test phase. In addition, SACMiner produces rules that allow the comprehension of the learning process and, consequently, make the system more reliable for radiologists, since they can understand the whole classification process. Acknowledgements. We are thankful to CNPq, CAPES, FAPESP, Microsoft Research, the University of São Paulo and the Federal University of Rondônia for the financial support.
References 1. Arimura, H., Magome, T., Yamashita, Y., Yamamoto, D.: Computer-aided diagnosis systems for brain diseases in magnetic resonance images. Algorithms 2(3), 925–952 (2009) 2. Dua, S., Singh, H., Thompson, H.W.: Associative classification of mammograms using weighted rules. Expert Syst. Appl. 36(5) (2009) 3. Watanabe, C.Y.V., Ribeiro, M.X., Traina Jr., C., Traina, A.J.M.: Statistical Associative Classification of Mammograms - The SACMiner Method. In: Proceedings of the 12th International Conference on Enterprise Information Systems, vol. 2, pp. 121–128 (2010) 4. Ribeiro, M.X., Balan, A.G.R., Felipe, J.C., Traina, A.J.M., Traina Jr., C.: Mining Statistical Association Rules to Select the Most Relevant Medical Image Features. In: First International Workshop on Mining Complex Data (IEEE MCD 2005), Houston, USA, pp. 91–98. IEEE Computer Society Press, Los Alamitos (2005) 5. Thabtah, F.: A review of associative classification mining. Knowledge Engineering Review 22(1), 37–65 (2007) 6. Agrawal, R., Imielinski, T., Swami, A.N.: Mining Association Rules between Sets of Items in Large Databases. In: Proceedings of the 1993 ACM SIGMOD ICMD, Washington, D.C., pp. 207–216 (1993) 7. Dong, G., Zhang, X., Wong, L., Li, J.: CAEP: Classification by aggregating emerging patterns. In: Arikawa, S., Nakata, I. (eds.) DS 1999. LNCS (LNAI), vol. 1721, pp. 30–42. Springer, Heidelberg (1999) 8. Yin, X., Han, J.: Cpar: Classification based on predictive association rules. In: SIAM International Conference on Data Mining, pp. 331–335 (2003) 9. Ordonez, C., Omiecinski, E.: Discovering association rules based on image content. In: IEEE Forum on Research and Technology Advances in Digital Libraries (ADL 1999), Baltimore, USA, pp. 38–49 (1999) 10. Agrawal, R., Srikant, R.: Fast algorithms for mining association rules. In: Intl. Conf. on VLDB, Santiago de Chile, Chile, pp. 487–499 (1994) 11. Wang, X., Smith, M., Rangayyan, R.: Mammographic information analysis through association-rule mining. In: IEEE CCGEI, pp. 1495–1498 (2004)
12. Antonie, M.L., Zaïane, O.R., Coman, A.: Associative Classifiers for Medical Images. In: Zaïane, O.R., Simoff, S.J., Djeraba, C. (eds.) MDM/KDD 2002 and KDMCD 2002. LNCS (LNAI), vol. 2797, pp. 68–83. Springer, Heidelberg (2003) 13. Ribeiro, M.X., Bugatti, P.H., Traina, A.J.M., Traina Jr., C., Marques, P.M.A., Rosa, N.A.: Supporting content-based image retrieval and computer-aided diagnosis systems with association rule-based techniques. Data & Knowledge Engineering (2009) 14. Aumann, Y., Lindell, Y.: A statistical theory for quantitative association rules. In: ACM Press (ed.) The Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, California, United States, pp. 261–270 (1999) 15. Srikant, R., Agrawal, R.: Mining quantitative association rules in large relational tables. In: ACM SIGMOD International Conference on Management of Data, Montreal, Canada, pp. 1–12. ACM Press, New York (1996) 16. Quinlan, R.: C4.5: Programs for Machine Learning, San Mateo, CA (1993) 17. Domingos, P., Pazzani, M.: On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning 29(2-3), 103–130 (1997) 18. Holte, R.C.: Very Simple Classification Rules Perform Well on Most Commonly Used Datasets. Machine Learning 11, 63–91 (1993) 19. Balan, A.G.R., Traina, A.J.M., Traina Jr., C., Marques, P.M.d.A.: Fractal Analysis of Image Textures for Indexing and Retrieval by Content. In: 18th IEEE Intl. Symposium on Computer-Based Medical Systems - CBMS, Dublin, Ireland, pp. 581–586. IEEE Computer Society, Los Alamitos (2005) 20. Asuncion, A., Newman, D.: UCI Machine Learning Repository (2007) 21. Silva, J.E.d., Sá, J.P.M., Jossinet, J.: Classification of breast tissue by electrical impedance spectroscopy. Medical and Biological Engineering and Computing 38, 26–30 (2000)
A Hierarchical Approach for the Offline Handwritten Signature Recognition Rodica Potolea, Ioana Bărbănţan, and Camelia Lemnaru Technical University of Cluj-Napoca, Computer Science Department Baritiu st. 26, room C9, 400027, Cluj-Napoca, Romania {Rodica.Potolea,Ioana.Barbantan,Camelia.Lemnaru}@cs.utcluj.ro
Abstract. The domain of offline handwritten signature recognition deals with establishing the owner of a signature in areas where a person's identity is required. As the number of handwritten signatures constantly increases, it becomes harder to distinguish among the signature instances; therefore, methods must be found to maintain a good separation between the signatures. Our work is focused on identifying the techniques that could be employed when dealing with large volumes of data. In order to achieve this goal, we propose a hierarchical partitioning of the data by utilizing two dataset reduction techniques (feature selection and clustering) and by finding the classifier that is appropriate for each signature model. By applying our proposed approach on a real dataset, we obtained the best results for a dataset of 14 instances/class divided into 8 clusters, with a recognition accuracy of 92.11%. Keywords: Data mining, Signature recognition, Feature selection, Clustering, Accuracy, Data partitioning.
Despite the above-mentioned limitations, systems dealing with signature recognition are widespread and represent a rather cheap way of identification and authentication in diverse areas, such as the financial domain, where they are employed for authenticating banking documents. 1.2 Existing Solutions Signature systems are designed to provide answers based on several distinctive features that are extracted from the signatures. The way in which the features are obtained divides signature recognition systems into two categories: offline and online systems. Offline systems utilize static features that are extracted from the scanned image of the signature. Online systems use both static and dynamic features, which refer not to the signature itself but to the signing style, and are captured by a special device consisting of a tablet and a pen. The dynamic features can be the speed, number of strokes, pen pressure or timing information during the act of signing. A usage limitation of such systems is the need for the person's presence. In contrast to online systems, which capture more personal information about the individual, offline systems are more widespread because they are cheaper and the signatures are already available on paper. A new online method of gathering dynamic features from a signature is presented in [1]. The authors propose the usage of a glove, which offers a greater degree of freedom in writing than a pen with a tablet. The system detects forgeries based on the similarity between the claimed signature and the signatures existing in the database. In [2], an offline signature identification system is presented that uses Support Vector Machines to fuse multiple classifiers: Gaussian empirical rule, Euclidean and Mahalanobis distance. The signatures of 600 authors were used in the system, 65% of the signatures being original and the rest forgeries. The identification rate obtained for the fused classifiers is superior to each of the three methods used individually. A hybrid on/offline system is proposed in [3], where the scanned signatures are segmented based on the information obtained when the data was acquired dynamically using a digitizer. The system works using a pattern matching method, by overlapping every two signatures and computing a distance measure between them.
2 Proposed Model A signature recognition system operates in two phases: the training phase during which a model is built from the training data and the recognition phase when a new instance is presented to the model and a class is assigned to that instance. Each phase consists of several steps. Before the actual training phase in an offline system, each signer provides several repeatable signatures on paper and the electronic representation of the signatures is obtained by scanning those samples. Then some image preprocessing and feature extraction steps follow. The extracted features build up the feature vectors used to train the model. Optionally, feature reduction methods may be employed, to speed up the recognition process and improve the recognition rate. In the recognition phase, the system employs the model to determine the owner of a new signature, offered to the system as a signature feature vector.
2.1 Preprocessing Steps 2.1.1 Normalization Steps In an offline system the features are extracted from scanned signature images (Fig. 1) and feature vectors are computed for each signature instance. In order to have the same range of values for the corresponding features in the feature vectors, some normalization steps are required. The typical normalization techniques applied on images are: geometric normalization, image segmentation, noise removal, color correction and edge or contour extraction.
Fig. 1. Scanning
Fig. 2. Noised image
Fig. 3. Noise-free image
Fig. 4. One-pixel-width signature
Noise Removal. The noise introduced by the pens used to sign, or by the scanning process, is usually removed via a median filter (Fig. 2 and Fig. 3).
Color Correction. The color information is not important, as the interest is in having a clear distinction between the background and the signature, so the images are converted into black-and-white images. Contour Extraction. Because the signatures can be written with pens of different widths, a thinning process is required, such that all signatures reach one pixel width (Fig. 4). The thinning process may be implemented with Hilditch's algorithm [4]. 2.1.2 Feature Extraction Recognition is performed using several features extracted from the normalized signature image. Each image provides the values for a signature feature vector, which will further identify a signature in the system. The main categories of features representative of a signature are summarized in the following. Border features contain features extracted based on the extreme points that make up the signature. Concentration features represent the distribution of the pixels in the image. Number features are based on certain properties of neighboring pixels, and position features determine the position of the signature's center of mass. The last category, the angle features, determines the relative position of the signature with respect to a slant line or the degree of closeness between the individual letters, reported as angular values. 2.1.3 Feature Selection Feature selection is an important step in a data mining process in general, and in image processing and signature recognition in particular. In most cases it improves the performance of the whole process while reducing the search space. The basic idea of a feature selection method is to search through the space of all possible combinations of features and find an efficient (optimal if possible) subset of features with respect to the proposed goal (accuracy, speed, complexity, etc.) and problem (prediction, estimation, etc.). Due to the NP-hard nature of the problem, various heuristics were devised in order to obtain a fast acceptable solution. Theoretically, the irrelevant and
the redundant features should be eliminated [5]. In practice, the exact identification of those features is too time-consuming. Moreover, different classifiers deal quite well with different types of features. For instance, Decision Trees work well with redundant features but are biased by the irrelevant ones, while for Naïve Bayes quite the opposite occurs. Therefore, the selected feature subset could be suboptimal (for the sake of tractability) yet perform well with a given classifier. In order to determine the appropriate subset of features for a given dataset, several search methods can be applied, such as wrapper, filter and embedded methods. The wrapper and filter techniques compute the subset of relevant attributes on the entire set of attributes at once. While the wrapper methods evaluate the performance of the subset using a model, in the case of the filter algorithms a simpler, model-independent criterion is evaluated. Embedded methods refer to inducer-specific feature selection, such as the approaches present in decision trees, artificial neural networks, etc. [6]. Wrapper methods employ performance evaluations of a learning algorithm in order to estimate the worth of a given attribute subset. Although much slower than filters, wrappers have been shown to achieve significant performance improvements in classification problems [7]. Correlation-based Feature Selection (CFS) belongs to the filter methods. This filter evaluates the worth of a subset of features by considering the individual predictive ability of each feature along with the degree of redundancy between them; subsets of features that are highly correlated with the class while having low inter-correlation are preferred. Filter methods are independent of any classifier. Moreover, the comparative evaluations performed in [8] have shown that CFS achieves a performance comparable to the wrapper approach in terms of classification accuracy, while requiring a smaller amount of time to generate the attribute subset. When employing feature selection for a signature recognition system, the recommended techniques are the ones applied in the image and pattern recognition area. Principal Component Analysis (PCA) is one of the most popular methods used in data compression. This method decomposes the feature vector space into two subspaces: the principal subspace, consisting of the features considered as having the highest importance, and its orthogonal complement. The separation is performed by computing the eigenvalues corresponding to the feature vectors. The principal components represent the axes of the new feature space and are sorted in decreasing order of significance. The axes offer information about the variance, helping identify the patterns in the data [5]. The number of selected axes represents the number of selected features. There are three basic criteria for choosing the right features: the Eigenvalue-one criterion, the Scree test and the Proportion of variance accounted for. Each of them has advantages and specific limitations; however, the method proved to be sensitive to image processing in general. 2.2 Classification Step Several classification methods, such as Neural Networks, Support Vector Machines, Bayesian classifiers or Decision Trees, are known to perform well in pattern recognition models, hence in a signature recognition system. What these classifiers have in common is the separation of classes from each other. Clustering techniques, which make no assumptions about the structure of the data, are also widely used in this area [9].
The Bayes classifiers are probabilistic classifiers. The Naïve Bayes (NB) learner is a simplified form of the Bayes learners, relying on the feature-independence assumption. The posterior probability of each instance is computed and the instance is assigned to the class with the highest probability obtained [8]. The Bayesian classifiers are known to perform well in areas like character and speech recognition or video and image analysis [10]. The Neural Networks (NN) are based on the operations of biological neural networks. A distinctive characteristic of the NN is that they find the decision boundary between classes rather than modeling classes separately and then combining the separate models. From the class of NN, one of the most used classifiers in the handwriting recognition domain is the MultiLayer Perceptron (MLP). The k-nearest neighbor (kNN) classifier is an instance-based learner that postpones the process of building a model until a new instance is to be classified, thus generating a model faster than the MLP, and it computes the similarity between instances based on a distance measure. Support Vector Machines (SVM) can be used in text classification, as each word can be represented by a vector [11]. The Decision Tree classifiers are considered simple classifiers and are important as they offer a structured representation of the space and are robust to noise. Clustering solves learning problems by grouping instances that are similar to each other and dissimilar from the others. Through clustering, the outliers in a dataset can be observed. As opposed to prediction, where the class of an instance is to be predicted, in clustering the class is not specified and the goal is to group instances based on their resemblance to each other. 2.3 Model Tuning In a classification problem, besides algorithm selection, careful engineering of the available data to obtain an appropriate training set is a key element in ensuring that a good model is built. A factor which influences this process is the size of the available data. The time required to build a model and obtain the information required increases with the size of the training set. Moreover, and even more importantly, the classifier's performance is affected by the dataset size and the number of instances in a class [12]. There is no general formula to determine the number of instances per class and the number of classes in the dataset; they have to be specifically determined for each particular problem. Several experiments and studies have to be performed in order to determine this type of information. The signature recognition domain is no exception, and its performance is biased by several factors such as: dataset size, number of classes and number of instances in a class. 2.3.1 Number of Instances In a real signature recognition system, the number of signature samples that can be collected from one person is quite small. That is why an analysis of how the number of instances affects the performance of the system is required. In order to determine the most appropriate number of instances to be used for training the model, an analysis of the learning curve helps make a trade-off between performance and a feasible number of acquired signatures.
2.3.2 Dataset Size When dealing with large datasets, we are faced with the problem of distinguishing among a large number of classes (different signatures). In such cases, the performance of some classifiers may be affected, as it is harder to distinguish among numerous classes. Moreover, the time required to solve the problem might become unacceptably large. Thus, a dataset reduction method is recommended. To determine how to split a dataset into smaller datasets with fewer classes that are easier to separate, several techniques are available. Among them, clustering ensures a good trade-off between performance and the time required to solve the task.
3 Experiments 3.1 Problem Description Our initial model [13] for the signature recognition system consists of the following steps, presented in (Fig. 5). First, the signatures are collected on paper, the signers being asked to provide several samples (20) of their signature. The signatures are converted to electronic format via a scanning process, followed by several preprocessing steps. The noise contained in the images is removed by applying a median filter; then the images are binarized and a thinning algorithm is applied in order to obtain a one-pixel-width signature representation. Then the feature extraction step follows, where we extract the representative characteristics of the signatures, organized in feature vectors. Each feature vector represents an instance, and all the instances together build up the dataset. Several classifiers are used to train the model. Their performance is the recognition rate, reported as classification accuracy.
Fig. 5. System flow diagram
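The preprocessing chain above (noise removal, binarization, thinning) can be sketched with scipy/scikit-image; skeletonize() is used here only as a stand-in for Hilditch's thinning algorithm mentioned in Section 2.1.1, and the function name is ours:

```python
from scipy.ndimage import median_filter
from skimage import io, filters, morphology

def preprocess_signature(path):
    gray = io.imread(path, as_gray=True)                  # scanned signature image
    denoised = median_filter(gray, size=3)                # noise removal (median filter)
    binary = denoised < filters.threshold_otsu(denoised)  # binarization: ink pixels -> True
    return morphology.skeletonize(binary)                 # one-pixel-width representation
```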
Signature features should be distinctive and offer a good description of an individual's signature. We have extracted 25 features (23 from the literature, 2 originally proposed) and grouped them into the five categories presented in Table 1 [14]. The first category includes the two newly introduced features, which are distance based: the top-bottom Euclidean distance and the left-right Euclidean distance. They measure the Euclidean distance from the leftmost to the rightmost pixel and from the topmost to the bottommost pixel.
Table 1. Extracted features grouped into categories (* are the original features proposed)

Attribute categories                                        # of attributes   Category name
1. Features obtained from the extreme points                6+2*              Border features
2. Features extracted from the histogram                    6                 Concentration features
3. Features related to the number of pixels                 4                 Number features
4. Features obtained with respect to the pixel position     4                 Position features
5. Features having as result an angular value               3                 Angle features
The following features have been considered in each category:
• Border Features = {Width, Height, Left-right, Top-bottom, Area, Aspect ratio, Signature area, Width/Area} (Fig. 6 and Fig. 7)
• Concentration Features = {Maximum value of horizontal and vertical histogram, Number of local maximum of horizontal and vertical histogram, Top heaviness, Horizontal dispersion}
• Number Features = {Number, Edge points, Cross points, Mask feature}
• Position Features = {Sum of X and Y positions, Horizontal and vertical center of the signature}
• Angle Features = {Inclination, Baseline slant angle, Curvature} (Fig. 8)
Fig. 6. Border features
Fig. 7. Width, Height
Fig. 8. Inclination
3.2 Feature Selection In order to obtain an efficient subset of features to describe a signature, we have performed several experiments using feature selection strategies: a wrapper, a filter and PCA. The goal was to reduce the search space and remove noisy features, thus improving model robustness. PCA is assumed to select the optimal subset of features, but comparing the classification accuracies obtained with the 19 features selected by PCA and the top 19 features selected by the CFS and wrapper methods (12 common, and 7 the next best from either CFS or the wrapper), we notice that for our particular problem PCA does not prove to be an effective technique. Table 2 presents the results obtained for datasets of 20 instances/class and 84 classes, 14 instances/class and 76 classes, and 20 instances/class and 76 classes, when classifying using the 19 features selected with PCA and the top 19 selected with the CFS and wrapper methods. Table 2. Feature selection results
                20 inst/84 classes   14 inst/76 classes   20 inst/76 classes
PCA             77.5                 79.7                 80.02
Wrapper+CFS     86.33                87.77                87.78
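A minimal sketch of the two reduction strategies compared in Table 2, under our own assumptions: PCA projection to 19 components and a forward wrapper around a Naive Bayes learner (CFS itself has no scikit-learn counterpart, so only the wrapper side of the combination is illustrated):

```python
from sklearn.decomposition import PCA
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.naive_bayes import GaussianNB

def reduce_features(X, y, n_features=19):
    # PCA: project the 25 extracted features onto 19 principal axes
    X_pca = PCA(n_components=n_features).fit_transform(X)
    # Wrapper: forward selection of 19 features, scored by a NB model
    wrapper = SequentialFeatureSelector(GaussianNB(),
                                        n_features_to_select=n_features,
                                        direction="forward", cv=5)
    X_wrapper = wrapper.fit_transform(X, y)
    return X_pca, X_wrapper
```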
Table 3. Feature subsets obtained by the wrapper and filter methods employed

Commonly selected features (12): Number, Width, Aspect Ratio, Signature Area, Maximum value of the vertical histogram, Number of local maxima of the horizontal histogram, Number of edge and cross points, Mask features, Top-bottom Euclidean distance, Slant angle, Gradient

Independently selected features
  Wrapper method: Vertical center of gravity, Sum of Y positions, Maximum of horizontal histogram, Number of local maxima of the vertical histogram, Top heaviness, Horizontal dispersion
  Filter method: Height, Area, Left-right Euclidean distance

Commonly removed (4): Horizontal center of gravity, Ratio of width to area, Sum of X positions, Curvature

Independently removed features
  Wrapper method: Height, Area, Left-right Euclidean distance
  Filter method: Vertical center of gravity, Sum of Y positions, Maximum of horizontal histogram, Number of local maxima of the vertical histogram, Top heaviness, Horizontal dispersion
In Table 3 the behavior of the wrapper method is compared to that of the CFS method. The subset of features we decided to use further in our experiments is composed of a combination of features selected by the two methods. We started with the commonly selected features and repeatedly added the remaining features. Based on the influence each added feature had on the classification accuracy, we decided whether to keep it in or remove it from our subset. The final subset contains 19 features, including one of the two features we proposed. We have performed the experiments using several individual classifiers. Among them, the Naïve Bayes and MultiLayer Perceptron classifiers proved to have the best behavior on our dataset. In both cases – using the entire set of features and the subset obtained with feature selection – the NB classifier outperforms the MLP classifier, both in terms of accuracy and of the time required to train the model, which is why we use the NB classifier in our next experiments. The mean classification accuracy obtained by the NB classifier is 84.79% (79.96% for the MLP) for the entire set of features and 91.40% when using the reduced set of features. 3.3 Scalability of the Problem Theoretically, the more signature instances are acquired from a single person, the better the system's recognition performance. However, in a real system it is not feasible to collect as many as 20 signature instances from a person. We analyzed the way the number of instances affects the model, in order to determine how dimensionality affects performance. Moreover, in practice, the number of classes increases constantly. Therefore, we performed several experiments to study the learning curve behavior with different numbers of instances in a class and different numbers of classes. We have analyzed the learning curve for the number of classes, starting from 5 classes and progressively increasing the amount by 5 classes, up to 84 classes (Fig. 9). We noticed that as the number of classes increased, the classification accuracy decreased. So we considered splitting the dataset into two smaller datasets. With 55
classes in a dataset, we tried to identify the optimal number of instances required in a class. From the behavior of the learning curve, we observed that for more than 11 instances/class a robust model could be built. The experiments are presented in more detail in [13].
Fig. 9. Learning curve with increasing number of classes and 20 instances/class
Fig. 10. Learning curve with increasing number of instances/class and 55 classes
Fig. 11. Clustering using different number of instances/class and 84 classes
In our current work we performed a series of experiments in which we considered fewer classes and different numbers of instances/class when training the model, similar to the one conducted in (Fig. 10). We started from 5 classes and increased their number for each experiment with an increment of 5 classes. The experiments were performed with datasets containing 11 up to 16 instances/class. As shown in (Fig. 11), the accuracy decreases as the number of classes increases, so in order to obtain a better model, it should be built with fewer classes. Moreover, it can be observed that the peak performances are achieved when classification/recognition is performed with a dataset containing 14 instances/class. 3.4 Hierarchical Partitioning of Data Based on Clustering Since a degradation of the classification accuracy is noticed as the number of classes increases, to maintain high classification accuracy, a technique to keep more control
over the number of classes is to be employed. We considered a hierarchical approach: splitting the data into smaller datasets via a clustering method and subsequently supplying each subset to the NB classifier for building classification sub-models. When a new instance arrives and needs to be classified, the hierarchical classifier first clusters the instance to find the best classification sub-model for it. It then presents the instance to the specific classification sub-model, which assigns the appropriate class label to the instance.
Fig. 12. Generic training process of the hierarchical classifier
Fig. 13. Generic classification process for the hierarchical classifier
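The training and classification processes of Fig. 12 and Fig. 13 can be sketched as follows; KMeans and GaussianNB from scikit-learn stand in for the Simple k-Means and Naïve Bayes implementations used in the paper, and the class name is ours:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

class HierarchicalNB:
    """Cluster the training data, then train one NB sub-model per cluster."""
    def __init__(self, n_clusters=8):
        self.kmeans = KMeans(n_clusters=n_clusters, random_state=0)
        self.sub_models = {}

    def fit(self, X, y):
        clusters = self.kmeans.fit_predict(X)
        for c in np.unique(clusters):
            idx = clusters == c
            self.sub_models[c] = GaussianNB().fit(X[idx], y[idx])
        return self

    def predict(self, X):
        # Route each new instance to its cluster, then use that cluster's sub-model.
        clusters = self.kmeans.predict(X)
        return np.array([self.sub_models[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(clusters, X)])
```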
The training phase of the hierarchical approach consists of splitting the dataset into clusters and building models with the NB classifier for each subset. We conducted 5 different experiments on a real dataset with 76 classes, with different numbers of instances/class (11, 14, 16, 18, 20), analyzing both the influence of the clusters and that of the number of instances/class on the recognition rate. We used the Simple k-Means clustering technique for splitting the dataset into clusters. Starting from the entire dataset (k=1), we evaluated the behavior of the system while increasing the number of clusters employed. As the time increased exponentially (from minutes when k<7, to hours for 7 clusters and days when k>7), the evaluation for k>9 became impractical. The subsets were evaluated with the NB classifier using a 10-fold cross-validation method. The results indicate that none of the evaluated dataset settings (i.e. number of instances per class) outperforms all the others for all types of partitioning (i.e. number of clusters evaluated). However, most of them seem to have several local maxima. (Fig. 14) clearly shows the existence of partitioning intervals in which different pairs of settings perform the best. Theoretically, how the instances are grouped into clusters should affect the performance of the entire system. Defining the impurity as the number of instances that do not belong to the cluster containing the largest number of instances from the given class, we have performed an analysis of the impurity of the clusters. Generally, as the number of instances per class increased, the impurity of the clusters also increased. The cluster impurity affects the second step of the training process: a large impurity means that the instances of a class are scattered along several clusters, making that class more difficult to learn in the classification sub-model corresponding to that cluster. A small cluster impurity is therefore preferable. As a consequence, when
Fig. 14. Performance analysis of the sub-models induced from clustering with 1-9 clusters, on the datasets having 11, 14, 16, 18 and 20 instances per class
tuning the number of instances per class to use for training and the number of clusters into which the classes should be split, a trade-off between the accuracy and the impurity has to be made. The results indicate that we can achieve the same performance if we decrease the size of the classification sub-problems when the number of available instances per class is relatively small (we achieved very similar performance with 14 instances per class and 8 clusters as with 20 instances per class and 7 clusters). This suggested that, as the number of classes increases, we might have to consider a 2-stage clustering process, so as to obtain relatively small sub-classification problems. Also, we need to consider the time aspect. The speed of building the models for 7 and 8 clusters is significantly better than the speed of building a 9-cluster model. For the second step of the hierarchical approach, a specific classifier can be used for each cluster, to build the classification sub-models. We conducted our experiments using the classifiers previously proposed in our preliminary investigations [15]. Each cluster was evaluated with the specified classifiers and the classifier corresponding to each cluster was selected based on the best accuracy obtained. In (Fig. 17) the logical workflow of the experiments is presented. We propose this strategy as we want to answer the following questions: Q1: Are the models adequate for each cluster? and Q2: Is the generic feature set the most appropriate for each cluster? To find the answers, we have to first check whether NB is indeed the most suitable learner in each case, and secondly to identify specific feature subsets for each data subset (isolated in the clustering process). First, we performed a classification using the Naïve Bayes learner. Then, in order to boost the accuracy of the system, two strategies have been followed: either use a different classifier for each cluster and select the most appropriate one, or use feature selection and determine the subset of features that should be used for each cluster with the NB classifier. If the first option seems more successful, we could improve it by applying feature selection to those learners too.
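The impurity measure defined above can be sketched as a small helper; the array-based encoding of cluster assignments and class labels is our assumption:

```python
import numpy as np

def cluster_impurity(clusters, labels):
    """For each class, count the instances that fall outside the cluster holding
    most of that class; the sum over all classes is the impurity."""
    impure = 0
    for cls in np.unique(labels):
        counts = np.bincount(clusters[labels == cls])
        impure += counts.sum() - counts.max()
    return impure
```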
Fig. 15. Curves representing the local maxima when partitioning into clusters
Fig. 16. Curves representing the impurity of clusters for different # of inst./class and 76 classes
Table 4. Optimal number of instances per class and the corresponding number of clusters
Number of instances/class             14      20
Optimal number of clusters            8       7
Minimum number of classes/cluster     4       4
Maximum number of classes/cluster     17      20
Mean number of classes/cluster        9.5     10.85
Accuracy                              ~91%    ~91%
Fig. 17. Logic flow of the experimental work
Experiment 1: Naïve Bayes accuracy. First, we determined the recognition rates for each of the clusters by employing the Naïve Bayes classifier. The results are presented in Table 5, column 3.
Table 5. Accuracy of different classifiers applied to clusters with 14 instances/class
Experiment 2: Specialized classifier for each cluster. Next, we experimented with different classifiers in order to determine the classifier that generates the best recognition rate for each of the clusters. We employed 7 classifiers and determined their classification accuracies: BayesNet (BN), Naïve Bayes (NB) – as it proved to be the best classifier in our previous work [13], [14] – Support Vector Machines (SVM), since it achieved the highest recognition rates in [16], Neural Networks (MultiLayer Perceptron, MLP), the kNN classifier, as it is a widely used method in handwritten recognition systems ([17], [18]), and the C4.5 and Random Forest (RF) decision trees, as they are known to perform a good separation between classes. The Bayesian and Decision Tree classifiers also have the advantage of being fast learners, as compared to the SVM or the MLP. We used the Weka [19] implementations of the classifiers: SMO for the SVM learner, J4.8 for the C4.5 decision tree and IBk for the kNN learner. All classifiers have been evaluated with their default parameters. For the kNN classifier we evaluated the system with three different values for k: 1, 3, 5. We report only the best results, those obtained for k=1. We noticed that the NB classifier performs the best on most clusters. The results obtained when applying different learners for each cluster are presented in Table 5, for datasets with 14 instances/class. The best results for each of the clusters are presented in bold. Based on the best results obtained in the previous experiments, we employed feature selection on each of the clusters with its specialized classifier. We used the wrapper attribute evaluator with the learner that achieved the best performance and Best First as the subset search method, with the default parameters. With feature selection, the average classification accuracy on the clusters increased by up to 2%. The results are presented in Table 6 for the 14 instances/class dataset. Experiment 3: Feature selection with Naïve Bayes. First, we applied feature selection using the NB classifier in the wrapper for each of the clusters. With the subset of features determined, we built a new model with NB in each cluster and evaluated the performance obtained. In most cases, an improvement compared to the experiments with the entire set of features was noticed (Table 6, column 5). Experiment 4: Feature selection with specialized classifiers for the clusters combined the two strategies: build a different learner for each cluster (the one attaining the best performance), and apply feature selection as a preprocessing step (with a wrapper method). The results are presented in Table 6, column 6. In (Fig. 18) it is shown how a specialized classifier improves the classification accuracy. The first column from the left represents the classification employed with the Naïve Bayes
classifier, the second the classification accuracy obtained with specialized classifiers, and the last two the same evaluations on the reduced sets of features obtained with the feature selection method. The values of the accuracy are presented in Table 6.
Fig. 18. Improvements when employing different classifiers for the clusters with and without feature selection, for the dataset containing 14 instances/class

Table 6. Improvements with feature selection on the clustered dataset: 8 clusters, 14 instances/class

Cluster no.   No. classes/cluster
1             12
2             6
3             3
4             16
5             10
6             19
7             6
8             4
Total         76
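Experiment 2 amounts to a per-cluster model selection loop; a sketch with scikit-learn estimators standing in for the Weka implementations (the candidate set and the cluster_data layout are our assumptions):

```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

CANDIDATES = {"NB": GaussianNB(), "SVM": SVC(),
              "1-NN": KNeighborsClassifier(n_neighbors=1),
              "C4.5-like tree": DecisionTreeClassifier(),
              "RF": RandomForestClassifier()}

def best_learner_per_cluster(cluster_data, cv=10):
    """cluster_data: dict mapping cluster id -> (X, y) for that cluster."""
    chosen = {}
    for c, (X, y) in cluster_data.items():
        scores = {name: cross_val_score(est, X, y, cv=cv).mean()
                  for name, est in CANDIDATES.items()}
        chosen[c] = max(scores, key=scores.get)   # keep the most accurate learner
    return chosen
```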
For cluster 3 in (Fig. 18) we obtained three increments. The first increment, of 4.58%, represents the improvement obtained when we change the classifier from the Naïve Bayes to the Support Vector Machines. The second increment, of 5.93% with respect to the classification with the Naïve Bayes, is obtained when classification is performed with the Naïve Bayes classifier on the reduced set of features. The third increment, of 8.32%, is the difference between the accuracy obtained when employing the specialized classifier (SVM) on the reduced set of features and the accuracy obtained with the Naïve Bayes classifier. The second cluster has a similar behavior, yet less improvement. For the other clusters there is only one increment, because for both classifications, with and without feature selection, the classifier used is the Naïve Bayes. The interpretation of the results is that, when having a small number of classes, feature selection yields a small set of attributes, meaning that the separation between the instances of the same class is easily done. When employing the SVM
classifier on the same dataset, the number of selected features is larger. The average accuracy for each set of experiments is computed as the mean of the per-cluster accuracies weighted by the number of classes in each cluster. After generating all the values, we computed an increment of 1.62% compared with our previous model.
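A small sketch of this weighted average (the numbers would be the per-cluster accuracies and class counts from Table 6; the values below are placeholders only):

```python
def weighted_mean_accuracy(per_cluster_accuracy, classes_per_cluster):
    # Each cluster's accuracy is weighted by the number of classes it contains.
    total_classes = sum(classes_per_cluster)
    return sum(a * n for a, n in zip(per_cluster_accuracy, classes_per_cluster)) / total_classes

# Example with placeholder values:
# weighted_mean_accuracy([0.92, 0.90], [12, 6]) -> 0.9133...
```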
4 Conclusions and Future Work This paper presents a new method of hierarchical partitioning of data using clustering, a method that helps increase the accuracy of a previously proposed signature recognition system [13]. Another improvement of the recognition rate is obtained by applying different classifiers on each of the obtained clusters and employing the feature selection preprocessing method. On our dataset we determined, via experiments, the number of clusters to split a dataset into. The results have shown that peak performances are obtained on a 14 instances/class dataset using 8 clusters and on a 20 instances/class dataset using 7 clusters. Then, as the clusters contain similar classes, we determined the suitable classifier for each of the clusters. Our experiments show that the Naïve Bayes learner was selected as the most appropriate learner for most of the clusters, as it achieves the best accuracies. When applying feature selection on the data subsets in each cluster, we selected the subset of features that best characterizes each of them. When classification was performed with the reduced set of features, improvements in the recognition rate were noticed. With the two employed methods, clustering and the feature selection preprocessing step, we obtained an increase in accuracy of 1.62% over our previous model on a dataset containing 14 instances/class and of 3.23% on a dataset with 20 instances/class. A problem we should address is how to obtain a uniform splitting of instances into clusters. When using SimpleKMeans, instances from the same class can belong to different clusters. We currently solved this issue by assigning all the instances of a class to the cluster containing the largest number of instances from that class. As future work we propose implementing a hierarchical decomposition of the data, by increasing the number of classes and performing two clustering steps: the first one clusters the given dataset, and then a second clustering step is applied on the generated partitions.
References 1. Sayeed, S., Kamel, N., Besar, R.: A Novel Approach to Dynamic Signature Verification Using Sensor-Based Data glove. American Journal of Applied Sciences 6(2), 233–240 (2009) 2. Kisku, D.R., Gupta, P., Sing, J.K.: Offline Signature Identification by Fusion of Multiple Classifiers using Statistical Learning Theory. International Journal of Security and Its Applications 4, 35–45 (2010) 3. Zimmer, A., Ling, L.: A Hybrid On/Off Line Handwritten Signature Verification System. In: Proceedings of the 7th International Conference on Document Analysis and Recognition, vol. 1, p. 424 (2003) 4. Azar, D.: Hilditch’s Algorithm for Skeletonization. Pattern Recognition Course, Montreal (1997)
5. Han, J., Kamber, M.: Data Mining, Concepts and Techniques, 2nd edn. Morgan Kaufmann, San Francisco (2006) 6. Vidrighin, C., Muresan, T., Potolea, R.: Improving Classification Accuracy through Feature Selection. In: Proceedings of the 4th IEEE International Conference on Intelligent Computer Communication and Processing, ICCP, Cluj-Napoca, Romania, pp. 25–32 (2008) 7. John, G.H., Kohavi, R., Pfleger, K.: Irrelevant Features and the Subset Selection Problem. In: Machine Learning: Proceedings of the Eleventh International Conference, pp. 121–129. Morgan Kaufmann Publishers, San Francisco (1994) 8. Hall, M.A.: Correlation based Feature Selection for Machine Learning. Doctoral dissertation, Department of Computer Science, The University of Waikato, Hamilton, New Zealand (2000) 9. Stephens, S., Tamayo, P.: Supervised and Unsupervised Data Mining Techniques for the Life Sciences, pp. 35–37. Oracle and Whitehead Institute/MIT, USA (2003) 10. Pauplin, O., Jiang, J.: A Dynamic Bayesian Network Based Structural Learning towards Automated Handwritten Digit Recognition. In: Graña Romay, M., Corchado, E., Garcia Sebastian, M.T. (eds.) HAIS 2010. LNCS, vol. 6076, pp. 120–127. Springer, Heidelberg (2010) 11. Brank, J., Grobelnik, M., Milic-Frayling, N., Mladenic, D.: Feature selection using linear Support Vector Machines. Microsoft Research, Microsoft Corporation (2002) 12. Weiss, G., Provost, F.: Learning when Training Data are Costly: The Effect of Class Distribution on Tree Induction. Journal of Artificial Intelligence Research 19, 315–354 (2003) 13. Bărbănţan, I., Vidrighin, C., Borca, R.: An Offline System for Handwritten Signature Recognition. In: Proceedings of IEEE ICCP, Cluj-Napoca, pp. 3–10 (2009) 14. Bărbănţan, I., Lemnaru, C., Potolea, R.: A Hierarchical Handwritten Offline Signature Recognition System. In: 12th International Conference on Enterprise Information Systems, Funchal, Madeira, Portugal (2010) 15. Bărbănţan, I., Potolea, R.: Enhancements on a Signature Recognition Problem. In: Proceedings of IEEE ICCP, Cluj-Napoca, pp. 141–147 (2010) 16. Ozgunduz, E., Senturk, T.: Off-line Signature Verification and Recognition by Support Vector Machines. In: 13th European Signal Processing Conference (2005) 17. Azzopardi, G., Camilleri, K.P.: Offline handwritten signature verification using Radial Basis Function Neural Networks. WICT Malta (2008) 18. McCabe, A., Trevathan, J., Read, W.: Neural Network-based Handwritten Signature Verification. Journal of Computers 3(8) (2008) 19. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA Data Mining Software: An Update. SIGKDD Explorations 11(1) (2009)
Meta-learning Framework for Prediction Strategy Evaluation Rodica Potolea, Silviu Cacoveanu, and Camelia Lemnaru Technical University of Cluj-Napoca, Romania {rodica.potolea,camelia.lemnaru}@cs.utcluj.ro, [email protected]
Abstract. The paper presents a framework which brings together the tools necessary to analyze new problems and make predictions related to the learning algorithms' performance, and which automates the analyst's work. We focus on minimizing the system's dependence on user input while still providing the ability to guide the search for a suitable learning algorithm through performance metrics. Predictions are computed using different strategies for calculating the distance between datasets, selecting neighbors and combining existing results. The framework is available for free use on the Internet. Keywords: Meta-learning framework, Data set features, Performance metrics, Prediction strategies.
on the former problem(s) is expected to achieve similar performance on the new problem. To make use of this approach, the expert should evaluate a large amount of data with various techniques. Also, the success of the selected learning algorithm on the new problem depends on the expert's strategy for selecting similar problems. Creating a framework which brings together the tools necessary to analyze new problems and make predictions related to the learning algorithms' performance would automate the analyst's work. This results in a significant speed-up and an increased reliability of the learning algorithm selection process. Such a framework enables the discovery of a good classifier regardless of the user's knowledge of the data mining domain and of the problem context (for users who cannot decide which metric is best suited for their context). We have already proposed such a tool in [4] and developed an implementation based on classifiers provided by the Weka framework [17]. Its main goal is to automatically identify the most reliable learning schemes for a particular problem, based on the knowledge acquired about existing data sets, while minimizing the work done by the user and still offering flexibility. The initial focus was on selecting a wide range of data set features and on improving the classifier prediction accuracy. Also, new data sets are added easily, which enables continuous performance improvement of the system.
2 Related Work Aha [1] proposes a system that constructs rules which describe how the performance of classification algorithms is determined by the characteristics of the data set. Rendell et al. [12] describe a system VBMS, which predicts the algorithms that perform better for a given classification problem using the problem characteristics (number of examples and number of attributes). The main limitation of VBMS is that the training process runs every time a new classification task is presented to it, which makes it slow. The approach applied in the Consultant expert system [14] relies heavily on a close interaction with the user. Consultant poses questions to the user and tries to determine the nature of the problem from the answers. It does not use any knowledge about the actual data. Schaffer [13] proposes a brute force method for selecting the appropriate learner: execute all available learners for the problem at hand and estimate their accuracy using cross validation. The system selects the learner that achieves the highest score. This method has a high demand of computational resources. STATLOG [10] extracts several characteristics from data sets and uses them together with the performance of inducers (estimated as the predictive accuracy) on the data sets to create a meta-learning problem. It then employs machine learning techniques to derive rules that map data set characteristics to inducer performance. The limitations of the system include the fact that it considers a limited number of data sets. Moreover, it incorporates a small set of data characteristics and uses accuracy as the sole performance measure. P. B. Brazdil et al. propose in [3] an instance-based learner for obtaining an extensible system which ranks learning algorithms based on a combination of accuracy and time.
3 Model This section presents a formal model of our automated learner selection framework (Fig. 1). The basic functionality of a learner selection system is to evaluate, rank and suggest an accurate learning algorithm for a new problem submitted to the system together with the user’s requirements. The suggested algorithm is expected to induce a model which achieves the best performance in the given context. The user involvement is limited to providing the input problem (i.e. the data set which the user needs to classify) and specifying the requirements. The result is presented to the user as a list of learning algorithms, arranged in decreasing performance order. Recommending more than a single learning algorithm is important as it allows users to decide on the specific strategy to follow (from the trade-off between speed and performance, to the availability of tools, or existing knowledge to deal with necessary techniques). The system also minimizes the time it takes to provide the results by organizing tests in a queue. The queue is ordered according to priority and it pushes slower tests to the end.
Fig. 1. Framework model
The process of obtaining the predictions is roughly divided into selecting the similar problems and obtaining predictions from those similar problems. In order to provide accurate predictions for new data sets, the system relies on the stored problems and the solutions (classifier + performance) obtained on those problems. The system has the ability to increase its knowledge by adding new problems and the corresponding solutions. To do this, an important functionality is the ability to run and evaluate learning algorithms. This occurs mainly in the initialization phase of the system. After a significant number of problems have been collected, the prediction phase produces reliable outcomes. In this phase, a new problem is submitted to the system, along with its requirements. The system finds similar problems in its collection. The similarity refers to the resemblance, in terms of meta-features, between the problem under evaluation and the problems in the collection, and is evaluated by computing a distance between the analyzed problem and the stored problems. A subset of the nearest stored problems is selected as neighbors of the analyzed problem. The results obtained by the learning algorithms for every neighbor problem are known (they represent background knowledge, stored in the collection together with the data sets and their meta-features). The performance score of each classifier is obtained by evaluating the results obtained by that classifier from the perspective of the
user requirements. The framework then predicts the performance score for the classifier on the analyzed problem as a combination of the performance scores obtained by that classifier on the neighboring problems. The final list of recommended learning algorithms is ordered by their predicted performance scores. An extension of the initialization phase runs during the system idle time, when no predictions are performed. More specifically, the system trains and evaluates models on each new problem and saves the results. Thus the system's knowledge increases and its prediction capabilities improve.

3.1 Meta-features
In order to estimate the similarity between problems, a series of meta-features are extracted from the data sets. The meta-features we employ in this framework can be divided into four categories. One is focused on the type of the attributes in the data sets: the total number of attributes of a data set, the number of nominal attributes, the number of boolean attributes and the number of continuous (numeric) attributes. Another category is focused on analyzing the properties of the nominal and binary attributes of the data sets: the maximum number of distinct values for nominal attributes, the minimum number of distinct values for nominal attributes, the mean of distinct values for nominal attributes, the standard deviation of distinct values for nominal attributes and the mean entropy of discrete variables. Similar to the previous category, the next category focuses on the properties of the continuous attributes of the data sets. It includes the mean skewness of continuous variables, which measures the asymmetry of the probability distribution, and the mean kurtosis of continuous variables, which characterizes the peakedness of the probability distribution. A final category gives a description of the dimensionality of the data set. It contains the overall size, represented by the number of instances, and imbalance rate information. The mean and maximum imbalance rates of the classes in the data set are computed (in case there are only 2 classes, the mean and maximum imbalance rates are equal). We have chosen to include the imbalance information since most real-world problems are imbalanced, i.e. one class is represented by a smaller number of instances compared to the other classes. Moreover, studies indicate that, in such problems, the imbalance ratio is the most important factor which affects classifier performance [7][8][9][10].

3.2 Classifier Metrics
The performance score of a classifier depends on the problem requirements provided by the user. When designing the system, we have focused on minimizing its dependence on the user input. However, we still need to provide the user with a method of guiding the search for a suitable learning algorithm. For this purpose, the system provides nine metrics, divided into three categories, as proposed in [5]. The metrics in Table 1, along with a general purpose metric, are described in this section. Most classifier performance metrics are generated from the confusion matrix produced by the induced model on a test sample. The confusion matrix is the most precise indicator of the model performance. However, it is relatively difficult to follow, especially for multi-class problems. Therefore, simpler metrics, derived from the confusion matrix, are preferred.
Table 1. Performance metrics
While most users are concerned with the accuracy of the generated models, in some cases they might prefer to focus on improving a different performance criterion (for instance to maximize the sensitivity or specificity). This difference usually comes from the different costs associated to specific errors, or from some particularity of the problem (such as imbalanced data, high cost of acquiring instances from a particular class, unknown or dynamic distribution of data). The accuracy of a classifier is the percentage of test instances that are correctly classified by the model (also referred to as the recognition rate). It has been the most widely employed metric in the early (theoretical) stage of data mining research, together with its complement – the error rate. Even in current approaches they are often employed when assessing the performance of new learning schemes, being appropriate metrics for real-world, balanced problems. However, when dealing with imbalanced or cost-sensitive data, for example, they provide an insufficient measure of the performance, due to their symmetric nature. In such cases, metrics which provide a more directed focus are required. One such metric, which focuses on the recognition of the minority class, is the true positive rate: the proportion of positive cases that were correctly identified. Also known as sensitivity (in ROC curves) and recall (in information retrieval), it favors models which focus on identifying the positive cases, even if this leads them to misclassify a number of negative cases as positive. The corresponding measure for the negative class is the true negative rate, also called specificity, computed as the proportion of negative examples correctly identified. These two measures give more specific information than the accuracy alone. However, when comparing two or more classifiers on imbalanced problems, besides estimating how exact they are in the identification process, it is important to track their relevance, i.e. how many of the examples identified as belonging to a given class actually belong to that class. This is captured by the positive predictive value, or precision: the proportion of instances classified as positive that are actually positive. The system also includes the false positive rate and the false negative rate as metrics. The ROC graph is a representation of the true positive rate (sensitivity) as a function of the false positive rate (1-specificity). A model built by a classifier generates a single point in the graph. The entire curve is built by generating several models (i.e. points in the ROC space). This can be done by repeatedly modifying the decision threshold of the classifier. ROC curves allow for the comparison of several classifiers. However, by simply analyzing the ROC curve, one can only get an approximate comparison. A more exact measure is provided by the area under the curve (AUC). It is a scalar measure which should be maximized; therefore, a curve is better than another if it has a larger AUC. In case of a perfect classifier, the area under the ROC curve is 1, because the false positive rate is 0 while the true positive rate is 1.
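To make the relationship between these metrics and the confusion matrix concrete, the following sketch computes them for a binary problem. It is an illustration only, not the framework's implementation: the matrix layout (true positives in the top-left cell) and the class and method names are our own assumptions.

/** Illustrative computation of binary classification metrics from a 2x2 confusion matrix.
 *  Layout assumption: m[0][0]=TP, m[0][1]=FN, m[1][0]=FP, m[1][1]=TN. */
public final class BinaryMetrics {

    private final double tp, fn, fp, tn;

    public BinaryMetrics(long[][] m) {
        tp = m[0][0]; fn = m[0][1]; fp = m[1][0]; tn = m[1][1];
    }

    public double accuracy()          { return (tp + tn) / (tp + tn + fp + fn); }
    public double truePositiveRate()  { return tp / (tp + fn); }   // sensitivity / recall
    public double trueNegativeRate()  { return tn / (tn + fp); }   // specificity
    public double precision()         { return tp / (tp + fp); }   // positive predictive value
    public double falsePositiveRate() { return fp / (fp + tn); }
    public double falseNegativeRate() { return fn / (fn + tp); }

    /** Geometric mean of the true positive and true negative rates. */
    public double geometricMean() {
        return Math.sqrt(truePositiveRate() * trueNegativeRate());
    }

    public static void main(String[] args) {
        // Hypothetical confusion matrix: 40 TP, 10 FN, 5 FP, 45 TN.
        BinaryMetrics m = new BinaryMetrics(new long[][] {{40, 10}, {5, 45}});
        System.out.printf("acc=%.3f tpr=%.3f tnr=%.3f prec=%.3f gm=%.3f%n",
                m.accuracy(), m.truePositiveRate(), m.trueNegativeRate(),
                m.precision(), m.geometricMean());
    }
}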
Besides the fundamental metrics the system evaluates for a data set, some combined metrics are available as well. The geometric mean metric is used to maximize the true positive and the true negative rate at the same time. The generalized geometric mean (1) maximizes the geometric mean while keeping the difference between the true positive and the true negative rate at a minimum (i.e. the negative and the positive classes have the same importance).

(1)
Problem requirements, provided in terms of performance metrics, represent the way users set the focus of the search for a learning algorithm. When a classifier's performance is measured, the resulting score is computed from the model's confusion matrix or area under the ROC curve, based on the requirements. Both the confusion matrix and the area under the ROC curve are computed in the initialization phase, when evaluating the performance of a trained model (using 10-fold cross validation). To accommodate users from different areas of expertise who are interested in discovering a generally good classifier, we propose a metric which combines the accuracy, the geometric mean and the area under the ROC curve. It is obtained by computing the average of the three above-mentioned metrics, each being the best in its category, as observed in [5].

3.3 Distance Computation
The distance computation phase consists of calculating the distances between the currently analyzed data set and all the data sets stored in the system database. The distance is computed by using the data set meta-features (all numeric values) as coordinates of the data set. By representing a data set as a point in a vector space, the distance can be evaluated using any metric defined on a vector space. The first distance computation strategy considered is the normalized Euclidean distance (E). The Euclidean distance is the ordinary distance between two points in space, as given by the Pythagorean formula, and the computed distances are normalized to the [0,1] range. Another distance evaluation available in our framework is the Chebyshev distance (C). The Chebyshev distance is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension. In our system, the distance between two data sets is therefore the largest difference between their meta-features. These distances are also normalized. The advantage of the Chebyshev distance computation strategy is that it takes less time to compute the distances between data sets. A possible problem with the Chebyshev distance is that it allows one single feature to represent a data set. One single largest feature might not offer enough description of the data set to lead to accurate neighbor selection and final predictions. As a solution we propose a variation of the Chebyshev distance (C3), where the largest three differences between features are selected and their mean is computed. By using this strategy, computing the distance between
data sets is more efficient than using the Euclidean distance, and we use more information about the data set than in the Chebyshev distance.

3.4 Neighbor Selection
Neighbor selection decides which data sets influence the performance predictions for the analyzed data set. While selecting closer data sets as neighbors is justified, how many data sets should be selected is an open problem. If we choose a fixed number of neighbors, we make sure the prediction computation always takes the same amount of time. We have implemented the Top 3 neighbor selection strategy (T3), which selects the 3 closest data sets as neighbors. Selecting the closest n neighbors is a sensible decision for a strategy, but there might be cases in which the closest n data sets are still quite far. Another approach would be setting a threshold distance, beyond which a data set is not considered a neighbor anymore. When using a fixed threshold value, we do not always find neighbors for a data set. Without neighbors, the system is not able to compute the performance predictions. A solution is a variable threshold, depending on the average distance between every two data sets in the system at the moment a new data set arrives. This solution assumes continuously updating the average distance and holding a different threshold value for each of the distance computation strategies, which is time-consuming. We chose to implement several strategies similar to the variable threshold considered above, but we compute the threshold value using the distances between the analyzed data set and all the data sets in the system and select as neighbors only data sets closer than this threshold. These strategies select a variable number of neighbors every time, and the performance prediction computation time increases considerably. In the Above-Mean neighbor selection strategy (MEA) we consider the mean of the distances between the input data set and the rest of the data sets as the threshold. Above-Median neighbor selection (MED) chooses the closer half of the data sets as neighbors. Above-Middle neighbor selection (MID) computes the mean of the distances to the closest and furthest data sets and uses it as the threshold.

3.5 Prediction Computation
In the last phase the classifier performance predictions are generated. Voting strategies define the way neighbor data sets influence the final predictions. Each neighbor casts a performance score for each classifier in the system. The performance score for a classifier depends on the performance of its model on the neighbor data set, evaluated from the point of view of the metric selected by the user. These scores are combined to obtain the final score for each classifier. This final score is the actual predicted performance for that classifier, and the classifiers are ordered decreasingly based on it. We have implemented two voting strategies. The first one is equal, or democratic, voting (D). Each neighbor selected in the previous phase predicts a performance for the current classifier. We sum all the performances and divide the sum by the number of neighbors. The result is the predicted performance of the current classifier. Each neighbor has the same influence in deciding the final performance of a classifier. The second voting strategy is weighted voting (W). For this, the distances between the analyzed data set and its neighbors act as the weights of the neighbor votes – a closer data set has a higher weight and more influence on the final prediction.
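As a rough illustration of Sections 3.3-3.5, the sketch below implements the three distance strategies, one of the threshold-based neighbor selection strategies (Above-Mean) and the two voting strategies over meta-feature vectors. The class and method names are ours, the meta-feature vectors are assumed to be already extracted and normalized to [0,1], and the concrete weighting formula used in weighted voting is an assumption, since the exact formula is not specified here.

import java.util.*;

/** Illustrative sketch of the distance, neighbor selection and voting strategies
 *  (Sections 3.3-3.5). Meta-feature vectors are assumed to be normalized to [0,1]. */
public class PredictionPipeline {

    /** Normalized Euclidean distance (E). */
    static double euclidean(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s / a.length);          // divide by the dimension so the result stays in [0,1]
    }

    /** Chebyshev distance (C): the largest per-feature difference. */
    static double chebyshev(double[] a, double[] b) {
        double max = 0;
        for (int i = 0; i < a.length; i++) max = Math.max(max, Math.abs(a[i] - b[i]));
        return max;
    }

    /** Top 3 Chebyshev distance (C3): the mean of the three largest per-feature differences.
     *  Assumes at least three meta-features. */
    static double top3Chebyshev(double[] a, double[] b) {
        double[] diff = new double[a.length];
        for (int i = 0; i < a.length; i++) diff[i] = Math.abs(a[i] - b[i]);
        Arrays.sort(diff);
        int n = diff.length;
        return (diff[n - 1] + diff[n - 2] + diff[n - 3]) / 3.0;
    }

    /** Above-Mean neighbor selection (MEA): keep data sets closer than the mean distance. */
    static List<Integer> aboveMean(double[] dist) {
        double mean = Arrays.stream(dist).average().orElse(0);
        List<Integer> neighbors = new ArrayList<>();
        for (int i = 0; i < dist.length; i++) if (dist[i] < mean) neighbors.add(i);
        return neighbors;
    }

    /** Equal (democratic) voting (D): the plain mean of the neighbors' scores.
     *  scores[i] is the performance obtained by the candidate classifier on neighbor i. */
    static double democraticVote(List<Integer> neighbors, double[] scores) {
        double sum = 0;
        for (int i : neighbors) sum += scores[i];
        return neighbors.isEmpty() ? 0 : sum / neighbors.size();
    }

    /** Weighted voting (W): closer neighbors contribute more to the predicted score. */
    static double weightedVote(List<Integer> neighbors, double[] dist, double[] scores) {
        double num = 0, den = 0;
        for (int i : neighbors) {
            double w = 1.0 - dist[i];            // one possible weighting; the exact formula is an assumption
            num += w * scores[i];
            den += w;
        }
        return den == 0 ? 0 : num / den;
    }
}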
4 System Description and Experimental Work
4.1 System Design
The system is divided into three main parts: entities, services and the web controllers. Entities (Fig. 2) are used to store information in the database. They are designed to break up raw data and keep the elementary items which might be informative.
Fig. 2. System entities
A data set for which a classification process is required is represented by a number of instances (the dimension of the data set). Each instance contains the values of the features characterizing the target concept, together with the value for which the prediction is expected. Each data set has an owner. A data set is stored only with the specific acknowledgement of the owner, who is the only one having further access to that data. Sample objects are created from a data set using sampling without replacement. The sample instances represent only a fraction of the instances in the data set. Each sample object stores its instances, the fraction size and a reference to the data set. Samples are created automatically when a data set is loaded into the system. Their size ranges from 10% to 100% of the data set, in 10% steps, so every data set has ten samples, each with the same distribution as the original data set. The data set neighbor object is used to store a link between two data sets, and represents a one-way relationship. A set of neighbors is a set of data sets and is defined by the distance type and the neighbor selection strategy. The data set neighbor object stores references to the data set and its neighbor. The distance type and neighbor selection strategy used to discover the neighbor are also stored in the data set neighbor object, along with the distance between the data set and the neighbor. After all neighbors of a data set were found, a weight is computed for each of them. The weight is used when computing predictions using a weighted voting strategy. The classifier object stores the available classifiers of the system in the database. Classifier objects do not have owners and are used across the application for classification tasks created by all users. The classifier object stores a name, a description and a Java classifier object. The Java classifier object is saved as a binary object to the database. It is possible to use any Java object as a classifier, but currently the system only works with objects extending the Weka AbstractClassifier object. The prediction, test and performance objects are very similar to each other. They all store information specific to one of the three processes that a user can perform. Each of the three objects stores a reference to a sample and a classifier. A prediction object is used to store predictions. Since computing them is a time-consuming process, storing them and loading them when necessary is the best choice. Predictions are computed
by using performance information from the neighbors of the data set. Predictions are grouped into prediction sets. A prediction set is defined by the distance type, neighbor selection strategy, the performance metric and voting strategy used to compute the prediction. The prediction object also stores the predicted score for the sample-classifier combination. Test objects are used for test management. They store the creation date and the date when they were added to the queue. A test can be created as an express test: it runs before regular tests, ensuring fast preliminary predictions. The statuses a test can have are: NEW, QUEUED, FAILED or SUCCESSFUL. NEW tests have just been created and not run yet. When the user decides to run a test, the test status is changed to QUEUED, making the test a target for the queue service. If a test fails, the system retries to run the test a number of times. If no result is obtained after the maximum number of retries has been reached, the test is marked as FAILED. If the result is obtained, the test is marked as SUCCESSFUL. A performance object is the result of successfully running a test. The performance object stores the results obtained when running a classifier on a sample: the accuracy, the confusion matrix, the area under the ROC curve, and the training and testing times. The system uses property objects to save preferences in the database. A property object contains the name and the value of the property. Property objects are used to save the minimum and maximum values for each specific feature of each data set. System services handle all the logic of the application: computing meta-features, running classifiers and saving information to the database are all done here. The data set service handles database operations for data set and data set neighbor objects. It is also responsible for computing the data set features, but creating the data set samples and finding the data set neighbors are handled by two different services. The data set sample service handles the creation of samples by selecting instances without replacement from the data set. The data set distance service computes the distance between data sets and finds the neighbors of a data set. While the property service is a general service designed to handle database operations for any system properties, it is currently used specifically for saving the minimum and maximum values of every data set feature to the database. These values are used in the distance computation service, to normalize the distance between two data sets, and also in the neighbor selection process. The classifier service only handles database operations on classifier objects. In order to run classifiers, the system first creates tests and saves them in the database using the test service. When running a test, the test service passes the classifier object and the sample object of the test to the runner service. The runner service evaluates the data set using the given classifier within a 10-fold cross validation process. The runner service uses the performance service to extract performance objects from the evaluation object obtained in the data set testing process. The performance service also computes the average of the performances obtained on each of the ten folds of the sample. Predictions for a data set are generated by the prediction service. A set of predictions is computed for a data set using performances registered on its neighbors.
In order to select the correct neighbor set, the prediction service needs the distance type and the neighbor selection strategy. After obtaining the performance objects from the selected neighbor set, the prediction service uses the metric type and voting strategy
provided to compute the predictions. The prediction service uses the metric service to obtain a score from a performance object. This score is saved in the prediction objects generated by the prediction service. The queue service loads the next queued test. Tests are ordered in the queue firstly by their priority (express tests are at the top of the queue) and secondly by the time they were added to the queue. When a test fails and has to be rerun, it is re-inserted at the end of the queue. Keeping the queue in the database increases the system robustness: in case the server crashes, the state of the queue is not lost. Queue operation is handled by a queue object. The queue object constantly checks the database for the next queued test. When a test is found, a queue task object is created and run in a separate thread. If a queue task is running, the queue object does not check the database for new tests. Instead it measures the time elapsed since the task was started. If the task execution time exceeds a threshold, the queue object closes the task, marks the test as failed and starts the next test. Failed tests are rerun a number of times, and each time the queue object increases the allowed time. This way the system runs the quicker tests first, postponing slower tests and ensuring the availability of a large number of results in a short time. The user service handles database operations on user objects. Along with the user service, the security service handles user login and password validation.

4.2 Workflow
The system offers a web interface which first authenticates users and then offers them access to a number of functionalities. Although users have access only to their own data sets, the suggestions are made based on a global evaluation. They can choose to upload a new data set to the system, generate and view predictions on their data sets, or run tests and verify those predictions. A new data set can be added using the upload page. Currently, only .arff data sets can be uploaded. When a new data set is loaded, the system automatically generates its samples and searches for a first set of neighbors. The user needs to specify a distance type and a neighbor selection strategy. This page is handled by the data set controller. The data set features page displays information about the data set. It lists the data set features and the minimum and maximum values for each of these features. The data set neighbors page displays all the neighbors found for a data set. A new set of neighbors can be generated using a different distance type and neighbor selection strategy. In case a new neighbor set is generated over an existing one, the old set is overwritten. If new data sets were added to the database since a user uploaded his data set, closer neighbors can be discovered using this procedure. Both these pages are handled by the data set controller. The predictions page (Fig. 3) displays predictions for a data set. The inputs for this page are the distance type, neighbor selection strategy, performance metric and voting strategy used to compute the predictions. Predictions are displayed in a table with columns for sample sizes and rows for classifiers. This page highlights the classifiers that obtained the best predictions. The page is handled by the predictions controller.
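As a concrete illustration of the queue behavior described in Section 4.1 (express tests first, a per-task time limit, and retries with an increasing allowance), the following sketch shows one possible implementation; the class names, the retry limit and the base time limit are illustrative assumptions and do not come from the actual system.

import java.util.*;
import java.util.concurrent.*;

/** Illustrative sketch of the test queue: express tests first, then FIFO,
 *  with a time limit that grows on every retry of a failed test. */
class TestQueueSketch {

    static class Test {
        final String name;
        final boolean express;
        long queuedAt = System.currentTimeMillis();
        int retries = 0;
        Test(String name, boolean express) { this.name = name; this.express = express; }
    }

    // Express tests first; ties broken by the time the test was added to the queue.
    private final PriorityQueue<Test> queue = new PriorityQueue<>(
            Comparator.comparing((Test t) -> !t.express).thenComparingLong(t -> t.queuedAt));

    private static final int MAX_RETRIES = 3;          // illustrative value
    private static final long BASE_LIMIT_MS = 60_000;  // illustrative value

    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    void submit(Test t) { queue.add(t); }

    /** Runs the next queued test, enlarging the allowed time on each retry. */
    void runNext(Runnable evaluation) throws InterruptedException {
        Test t = queue.poll();
        if (t == null) return;
        long limit = BASE_LIMIT_MS * (t.retries + 1);   // each retry gets a larger allowance
        Future<?> task = worker.submit(evaluation);
        try {
            task.get(limit, TimeUnit.MILLISECONDS);     // SUCCESSFUL
        } catch (TimeoutException | ExecutionException e) {
            task.cancel(true);
            if (++t.retries <= MAX_RETRIES) {
                t.queuedAt = System.currentTimeMillis(); // re-inserted at the end of the queue
                queue.add(t);
            }                                            // otherwise the test stays marked FAILED
        }
    }
}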
The tests generation page is similar to the predictions page, allowing the user to start tests for a data set and to select which tests should be run in express mode. The page can accept as inputs the information necessary for the predictions page and use this information to generate a prediction that aids in selecting express tests. The test list page displays the tests in the queue that belong to the user. It also displays the failed tests and allows the user to reschedule them. These pages are handled by the tests controller. The results page displays test results from a performance perspective. A score is computed for each test result using the provided performance metric. The compare page allows for the comparison of predictions and results from the perspective of one of the metrics in the system. It displays the result, the prediction and the difference between them for each classifier and data set sample. Negative differences are marked in red and positive differences in green. A negative difference denotes that the system predicted a better performance than the one that was actually obtained, or that the prediction is not trustworthy. A positive difference suggests that the system had enough information to predict an achievable performance.
Fig. 3. Predictions page
A typical usage scenario would be to upload a data set and compute its neighbors using the desired distance type and neighbor selection strategy. Then, the user can request predictions using the desired metric and voting strategy, and obtain a first recommendation on which classifiers perform best on the given data set. The third typical step is to run tests on the data set. This is a time-consuming operation and depends on the number of tests already queued. The user has the option to save some time by selecting a number of tests and running them in express mode. The best scenario is selecting the tests that have good predictions and running them in express mode in order to check the accuracy of the predictions. The user can return to the application at a later time to check the test results using different metrics and compare them with the predictions.
4.3 System Features
When first proposing a framework for classifier selection in [4], we focused on selecting a wide range of data set features and improving classifier prediction time. In an attempt to improve the prediction capabilities of our framework, we automated the voluntary addition of new data sets. The prediction was performed by a kNN classifier which computed the distances between the analyzed data set and all data sets stored in the system. It then selected the three closest data sets as neighbors and estimated the predicted performance as the mean of the performances obtained by the classifiers on the neighboring data sets. As an improvement of the initial data set classification procedure, we considered the possibility of using different strategies when selecting the neighboring problems and analyzed how the neighbors affect the final performance predictions. We are also trying to answer the question of how different strategies behave when making performance predictions for different metrics. The ideal outcome would be to find a set of approaches that gives the best results for all metrics, and always use them. However, if the selection of a given tactic influences the predictions for different metrics generated by the framework, we may consider always using the best strategies for the metric selected by the user. We have divided the classifier performance prediction process into three phases: distance computation, neighbor selection and prediction computation (or voting). For each of these phases we propose and test several strategies.

4.4 Experimental Workflow
We have initialized our system with 19 benchmark data sets that range from very small to medium sizes (up to 6000 instances) [15]. Also, the following classifiers are available: Bayes network, Naive Bayes, decision trees, neural network and support vector machines (using their implementations provided by Weka). We have performed evaluations with all the possible combinations of the implemented strategies for distance computation, neighbor selection and performance score prediction (Table 2).

Table 2. Strategy combinations
Distance computation strategy: E = Normalized Euclidean; C = Normalized Chebyshev; C3 = Normalized Top 3 Chebyshev
Neighbor selection strategy: T3 = Top 3; MID = Above Middle; MED = Above Median; MEA = Above Mean
Voting strategy: D = Equal (Democratic); W = Weighted
Example: C3-MID-D means that the Normalized Top 3 Chebyshev strategy is used to compute the distance between data sets, the Middle distance is used as the threshold for neighbor selection, and Equal Voting is used to determine the performance predictions.
For a test, we selected a performance metric and executed the following steps:
1. Select a strategy combination.
   a. Select a data set and use it as the analyzed data set.
      i. Use the remaining 18 data sets as the data sets stored in the system.
      ii. Use the selected strategy combination to predict performances.
      iii. Compare the predicted performances with the actual performances obtained in the initialization stage on the selected data set.
   b. Select the next data set.
2. Compute the deviation mean and the absolute deviation mean on all data sets and classifiers for this strategy.
3. Select the next strategy combination.
We have applied the above procedure for the following metrics: accuracy, geometric mean, generalized geometric mean, area under ROC and the general purpose metric. The deviations have been normalized for each metric. In total, 456 prediction tests for each selected metric have been performed. We have computed the deviation between the predicted and true performance as the difference between the performance prediction and the actual performance. In case the system predicted that a classifier achieves a higher performance than it actually obtained, this value is negative. The absolute deviation between a performance prediction and the actual performance is the absolute value of the difference between the two.

4.5 Results
This section presents the results of the evaluations performed with our framework to find the combination of strategies that works best in predicting performance scores for the learning algorithms in the system. The goal is to select the strategies that minimize the deviation means for all the metrics. By studying the absolute deviation mean results (Fig. 4), we observe that the voting strategies do not influence the final predictions very much. Weighted voting (W) obtains better results than democratic voting (D), but in most cases the difference is so small that it does not justify the additional resources needed to compute the weight of each neighbor. Moreover, we observe that the distance computation and neighbor selection strategies that obtain the smallest absolute deviations are, in order: (1) Top 3 Chebyshev distance with Top 3 neighbor selection, (2) Chebyshev distance with Top 3 neighbor selection and (3) Euclidean distance with Above Middle neighbor selection. By analyzing the deviation mean results (Fig. 5) we can infer more details on the way each of the three selected strategy combinations works. Our first choice, Top 3 Chebyshev distance with Top 3 neighbor selection, has negative deviation means for the accuracy and generalized geometric mean metrics. From here we deduce that the strategy combination is overly optimistic on these metrics – most of the time it will predict performances that are not met when we derive the model and evaluate it. We would prefer that our system slightly underestimates the performance of a model on a new data set. We can observe that the second strategy combination, Chebyshev distance with Top 3 neighbor selection, makes optimistic predictions on the exact same metrics. This strategy combination looks like the best choice when predicting classifier performance evaluated with the general metric, but is not appropriate for the other metrics in the system. The strategy combination with the best results on all metrics is
Euclidean distance with Above Middle neighbor selection and democratic voting. This combination obtains positive deviation means for most metrics, having a very small negative deviation for the geometric mean. This is the preferred behavior for a system and we can conclude that this is the best combination of strategies.
Fig. 4. Absolute deviation mean
Fig. 5. Deviation mean
Fig. 6. General purpose metric deviation mean
We can observe from Fig. 6 that the deviation mean of the general purpose metric is close to the average deviation means of the other metrics. Therefore, we can confirm the conclusion in [5] that a general-purpose metric has the best correlation with the other metrics.
5 Conclusions
This paper describes the architecture of an automated learner selection framework (found at http://193.226.5.232:8080/). We focus on the enhancements considered and tested for the system described in [4]. These enhancements consist in increasing the
data set pool, adding new performance metrics and meta-features, and improving the prediction accuracy of the system. The addition of metrics widens the evaluation criteria and allows a more problem-specific assessment of classifiers. Two of the newly added metrics, the generalized geometric mean and the general purpose metric, represent original proposals. Moreover, the proposed general-purpose metric suggests a new approach for dealing with input data sets that have no associated metric. Another enhancement was the addition of new benchmark data sets. By increasing the number of data sets available to the system we improve the outcome of the neighbor estimation step. We also implemented the infrastructure for adding complex prediction strategies. We implemented and evaluated 24 strategy combinations for computing the final performance predictions for classifiers. The analysis of the results suggests as a best strategy the Euclidean distance with Above Middle neighbor selection and Democratic voting. This strategy predicts values close to the actual performance, without surpassing it. The tests also reveal that the voting strategies do not significantly influence the final results. Our present focus is on discovering the most relevant data set features, by performing feature selection on the meta-data set or deriving new and more relevant features [11]. We attempt to improve the Above Mean neighbor selection strategy by computing and constantly updating a mean distance between every two data sets in our database. Limiting the neighbor selection strategy as the number of problems in the system increases is another present concern. Another improvement is the possibility of generating a "best-possible" model. For this we intend to use different classifiers, each classifier optimized to increase the true positive rate on a certain class, thus maximizing the prediction power of the model.
References
1. Aha, D.W.: Generalizing from Case Studies: A Case Study. In: Proceedings of the Ninth International Conference on Machine Learning, pp. 1–10 (1992)
2. Bensusan, H., Giraud-Carrier, C., Kennedy, C.J.: A Higher-Order Approach to Meta-Learning. In: Proceedings of the ECML 2000 Workshop on Meta-Learning: Building Automatic Advice Strategies for Model Selection and Method Combination, pp. 33–42 (2000)
3. Brazdil, P.B., Soares, C., da Costa, J.P.: Ranking Learning Algorithms: Using IBL and Meta-Learning on Accuracy and Time Results. Machine Learning 50, 251–277 (2003)
4. Cacoveanu, S., Vidrighin, C., Potolea, R.: Evolutional Meta-Learning Framework for Automatic Classifier Selection. In: Proceedings of the 5th International Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, pp. 27–30 (2009)
5. Caruana, R., Niculescu-Mizil, A.: Data Mining in Metric Space: An Empirical Analysis of Supervised Learning Performance Criteria. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 69–78 (2004)
6. Giraud-Carrier, C., Bensusan, H.: Discovering Task Neighbourhoods Through Landmark Learning Performances. In: Proceedings of the Fourth European Conference on Principles and Practice of Knowledge Discovery in Databases, pp. 325–330 (2000)
7. Japkowicz, N., Stephen, S.: The Class Imbalance Problem: A Systematic Study. Intelligent Data Analysis 6(5), 429–450 (2002)
8. Kalousis, A.: Algorithm Selection via Meta-Learning. PhD Thesis, Faculte des sciences de l'Universite de Geneve (2002)
9. Linder, C., Studer, R.: Support for Algorithm Selection with a CBR Approach. In: Proceedings of the 16th International Conference on Machine Learning, pp. 418–423 (1999)
10. Michie, D., Spiegelhalter, D.J., Taylor, C.C.: Machine Learning, Neural and Statistical Classification. Ellis Horwood Series in Artificial Intelligence (1994)
11. Niculescu-Mizil, A., et al.: Winning the KDD Cup Orange Challenge with Ensemble Selection. In: JMLR Workshop and Conference Proceedings 7 (2009)
12. Rendell, L., Seshu, R., Tcheng, D.: Layered Concept Learning and Dynamically Variable Bias Management. In: Proceedings of the 10th International Joint Conference on AI, pp. 308–314 (1987)
13. Schaffer, C.: Selecting a Classification Method by Cross-Validation. Machine Learning 13, 135–143 (1993)
14. Sleeman, D., Rissakis, M., Craw, S., Graner, N., Sharma, S.: Consultant-2: Pre- and Post-processing of Machine Learning Applications. International Journal of Human Computer Studies, 43–63 (1995)
15. UCI Machine Learning Data Repository, http://archive.ics.uci.edu/ml/ (last accessed January 2010)
16. Vilalta, R., Giraud-Carrier, C., Brazdil, P., Soares, C.: Using Meta-Learning to Support Data Mining. International Journal of Computer Science & Applications, 31–45 (2004)
17. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn. Morgan Kaufmann Publishers, Elsevier Inc. (2005)
Part III
Information Systems Analysis and Specification
Process Mining for Job Nets in Integrated Enterprise Systems Shinji Kikuchi1, Yasuhide Matsumoto1, Motomitsu Adachi1, and Shingo Moritomo2 1 Fujitsu Laboratories Limited, 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan 2 Fujitsu Limited, 1-5-2 Higashi-Shimbashi, Minato-ku, Tokyo 105-7123, Japan {skikuchi,ymatsumo,moto.adachi,moritomo}@jp.fujitsu.com
Abstract. Batch jobs such as shell scripts are used to process large amounts of data in large scale enterprise systems. They are cascaded via certain signals or files to process their data in the proper order. Such cascaded jobs are called “job nets”. In many cases, it is difficult to understand the execution order of batch jobs in a job net because of the complexity of their relationships or because of lack of information. However, without understanding the behavior of batch jobs, we cannot achieve reliable system management. In this paper, we propose a method to derive the execution pattern of the job net from its execution logs. We developed a process mining method which takes into account the concurrency of batch job executions in large scale systems, and evaluated its accuracy by a conformance check method using job net logs obtained from an actual large scale supply chain management system. Keywords: Process mining, Batch job, Job net, Integrated enterprise system, Behavior analysis.
certain time such as overnight or at the end of the month. After a job finishes, it can invoke another job and hand over its processing results via files or signals output from the previous job. By invoking a job from another job running on a different server or subsystem, we can choreograph some subsystems to process their common data in the proper order, as described in Fig. 1. Therefore, we can say that these batch jobs play important roles in bridging the gap between subsystems and connecting them so that the whole system can process data properly. We call a set of batch jobs concatenated and executed in a defined order a "job net". It is, however, extremely difficult to understand the behavior and the execution orders of the jobs in these kinds of "tangled" job nets, because the clues to solving the problem are scattered everywhere. For example, even if job scheduling information is stored in several job net manager systems for the invocation of jobs, it might be managed by the administrator of a subsystem or by each individual department. Because of this "silo" management, access to this kind of information from outside the department might be prohibited. In addition, in many cases, the information regarding the triggers (files and signals) invoking the jobs is embedded in the job's script or the program itself. Deriving the information regarding the triggers from program code analysis is practically impossible. For these reasons, it is difficult to understand the behavior of interrelated batch jobs. This problem can worsen in the case of the integration of larger systems, such as in mergers and acquisitions (M&A). However, without understanding the behavior of job nets, we cannot achieve reliable service management, such as predicting the finishing time of jobs or determining which job was the root cause when the execution of jobs is delayed. Therefore, there is a strong need for a technique for understanding the behavior of job nets. Against this background, we developed an analysis method to derive a model of job nets representing their execution order from the job net log recording their execution results, by using a process mining technique. In this method, we improve the Heuristic Miner process mining algorithm by taking into account the concurrent execution of jobs. We then applied our method to job net logs derived from an actual SCM system and evaluated the accuracy of our approach by a conformance check method. The rest of this paper is organized as follows. First, in Section 2, we survey related work. Next, Section 3 explains our job net mining algorithm in detail. We then show how it works through a case study in Section 4 using an actual set of log data and evaluate its performance. Following this, Section 5 concludes the paper and outlines future challenges.
Fig. 1. Job nets connecting different systems via files and signals
2 Related Work
One of the major techniques for deriving the behavioral characteristics of systems is the process mining approach [2]. Process mining is a method of extracting information about a process from its execution results (event logs) in order to construct a process model that can represent the behavior of systems or processes. The process model can be represented by state transition systems such as Markov models or Petri nets. Various algorithms for process mining have been proposed so far, such as the alpha-algorithm [3] and the genetic algorithm [4]. These algorithms are intended for application to the analysis of business processes usually executed by human beings and consisting of fewer than a dozen events. The computational time for these algorithms therefore tends to increase rapidly with the number of events per process. While this does not matter when the process consists of only a small number of events, we cannot apply these methods directly to job net analysis, since the job nets in large scale systems can consist of hundreds of jobs. The computational time for the Heuristics Miner algorithm [5] is relatively small because of its simplicity and straightforwardness. It is, however, possible that this simple algorithm cannot achieve sufficient accuracy in job net analysis for large systems, where we have to take into account the possibility that many jobs are executed concurrently. There is therefore a strong need for an algorithm that is specialized for job net mining so as to achieve both short computational time and sufficient accuracy.
3 Job Net Mining Method
In this section we explain our job net mining method in detail. First, we define its data structure. Next, we explain our mining algorithm based on the Heuristic Miner algorithm with some improvements for taking concurrency in batch job execution into account. Then, we explain how the accuracy of our mining method can be evaluated through a conformance check approach.

3.1 Data Structure
Fig. 2 summarizes the input and output data for our approach. As explained in Section 1, in many cases we cannot obtain or determine the location of the information defining the schedules or relationships of the batch jobs. Therefore, we assume here that we can obtain only the job net event logs which are output as the execution results of these jobs. This kind of log is relatively easy to obtain, since it is usually created so that the administrators of job nets can diagnose their behavior after a problem has occurred. We also assume that the start time and end time of each job are recorded in the job net logs. For simplicity, we assume here that the granularity of the timestamps is 1 second and each job is executed no more than once per day. In our analysis, we define the time window (e.g. overnight, from 0:00 am to 6:00 am) on which we focus attention. Then we extract the data within the time window to be used for our analysis. We refer to the sequence of log data for a job net executed in the time window on a particular day as an instance of the day.
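A minimal sketch of this grouping step, assuming a simple record format (timestamp, job name, start/end flag) for the log; the window of 0:00-6:00 is the one used as an example above.

import java.time.*;
import java.util.*;

/** Illustrative grouping of job net log records into daily "instances"
 *  restricted to a time window (here 0:00-6:00, as in the running example). */
class InstanceExtractor {

    /** One log record: timestamp plus the start or end event of a job. */
    record LogRecord(LocalDateTime time, String job, boolean start) {}

    static final LocalTime WINDOW_START = LocalTime.of(0, 0);
    static final LocalTime WINDOW_END   = LocalTime.of(6, 0);

    /** Returns, for each day, the ordered sequence of events that fall inside the window. */
    static Map<LocalDate, List<LogRecord>> extractInstances(List<LogRecord> log) {
        Map<LocalDate, List<LogRecord>> instances = new TreeMap<>();
        for (LogRecord r : log) {
            LocalTime t = r.time().toLocalTime();
            if (!t.isBefore(WINDOW_START) && t.isBefore(WINDOW_END)) {
                instances.computeIfAbsent(r.time().toLocalDate(), d -> new ArrayList<>()).add(r);
            }
        }
        instances.values().forEach(list -> list.sort(Comparator.comparing(LogRecord::time)));
        return instances;
    }
}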
Fig. 2. Input and output data for job net mining
The output from our method is a job net model representing the common patterns of event orders emerging in many instances. Here we assume that each event is either the start or the end of a job recorded in the logs. The model contains order relations between each preceding event and a set of (likely) following events. It can be represented by tables or directed graphs, as shown in the right hand part of Fig. 2. If a preceding event has more than one possible following event, we should determine those branches as either an AND-fork or an XOR-fork. The AND-fork means that all of the following events will occur after the preceding event, while the XOR-fork means that only one of the following events will occur after the preceding event.

3.2 Mining Algorithm
Since a large number of batch jobs may be executed simultaneously in large scale systems consisting of many servers, our analysis has to take concurrency into account in order to achieve sufficient accuracy. We therefore developed an algorithm consisting of the following three steps. First, we determine the set of jobs which are likely to start at the same time from the timestamps recorded in the log. Next, we derive the order of events using the Heuristic Miner algorithm. Finally, we modify the Heuristic Miner results using the information regarding concurrent jobs derived in the first step. The details of these steps are as follows.

Step 1: Concurrent job detection from timestamps
In the first step, we determine the set of jobs which start at almost the same time, for reasons such as the preceding job triggering several following jobs, or jobs happening to be scheduled to start at the same time by different administrators. We use the following evaluation function to determine whether jobs J_i and J_k are likely to start at the same time.
p(J_i, J_k) ≡ N(|S(J_k) − S(J_i)| < τ_p) / N(J_i)    (1)

N(J_i) represents the number of instances including an execution of job J_i. N(|S(J_i) − S(J_k)| < τ_p) represents the number of instances in which the difference between the start times of J_i and J_k is smaller than the threshold τ_p sec. We can say that J_i and J_k tend to start at the same time if p(J_i, J_k) is close to 1. Using equation (1), we define the set c(J_i) of jobs which are likely to start at the same time as job J_i.
c(J_i) ≡ {J_k | p(J_i, J_k) > τ_c}    (2)
This means that if p(J_i, J_k) is larger than τ_c, J_k is included in c(J_i).

Step 2: Event order analysis by Heuristics Miner
Heuristics Miner [5] is a process mining algorithm which derives patterns in the order of events from event logs, independent of the events' timestamps. This method determines the existence of consecutive order relations between events using the following function e_i ⇒_W e_k.
e_i ⇒_W e_k ≡ (|e_i >_W e_k| − |e_k >_W e_i|) / (|e_i >_W e_k| + |e_k >_W e_i| + 1)    (3)
The function |e_i >_W e_k| represents a count of the instances in which event e_i's next event was e_k. Here, we take into account only the order of events, independent of timestamps. We consider that there is an order relation between events e_i and e_k when the function e_i ⇒_W e_k is over a given threshold. In our analysis, we adopt an all-activities-connected heuristic that derives at least one preceding event for each event. Here, we define two thresholds: (1) the dependency threshold τ_D and (2) the relative-to-best threshold τ_R. If (e_i ⇒_W e_k) > τ_D, we conclude that there is an order relation between events e_i and e_k. If event e_k does not have any preceding event e_i such that (e_i ⇒_W e_k) > τ_D, we select an event e_x such that (e_x ⇒_W e_k) ≥ (e_y ⇒_W e_k) for any other event e_y. We then consider that there is an order relation between events e_i and e_k if (e_i ⇒_W e_k) ≥ (e_x ⇒_W e_k) − τ_R. Next, we use the following function e_i ⇒_W e_j ∧ e_k to determine whether the order relations e_i ⇒_W e_k and e_i ⇒_W e_j from the same event e_i represent an AND-branch or an XOR-branch.
e_i ⇒_W e_j ∧ e_k ≡ (|e_j >_W e_k| + |e_k >_W e_j|) / (|e_i >_W e_j| + |e_i >_W e_k| + 1)    (4)
If the value of this function is larger than the threshold τ_A, we assume that the two relations are AND-branches, meaning that both following events will eventually occur after the preceding event e_i. Otherwise, we conclude that they are XOR-branches, i.e. that only one of the following events will occur after the preceding event.
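The measures of equations (1), (3) and (4) reduce to simple counting once the instances have been scanned; the sketch below assumes that the pairwise counts have already been collected from the instances, and the array and method names are our own.

/** Illustrative computation of the measures used in Steps 1 and 2.
 *  follows[a][b] counts the instances in which event a was directly followed by event b
 *  (|e_a >_W e_b|); nearStart[i][k] counts the instances in which jobs i and k started
 *  within tau_p seconds of each other; runs[i] is N(J_i). */
class MiningMeasures {

    /** Equation (1): p(J_i, J_k). */
    static double startsTogether(int[][] nearStart, int[] runs, int i, int k) {
        return runs[i] == 0 ? 0 : (double) nearStart[i][k] / runs[i];
    }

    /** Equation (3): the Heuristics Miner dependency measure e_i =>_W e_k. */
    static double dependency(int[][] follows, int i, int k) {
        return (double) (follows[i][k] - follows[k][i]) / (follows[i][k] + follows[k][i] + 1);
    }

    /** Equation (4): the AND measure for two relations e_i =>_W e_j and e_i =>_W e_k. */
    static double andMeasure(int[][] follows, int i, int j, int k) {
        return (double) (follows[j][k] + follows[k][j]) / (follows[i][j] + follows[i][k] + 1);
    }

    /** Branch type decision for a split from e_i to e_j and e_k (threshold tau_A). */
    static boolean isAndBranch(int[][] follows, int i, int j, int k, double tauA) {
        return andMeasure(follows, i, j, k) > tauA;
    }
}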
In our analysis, we assume each event to be either the start event or the end event of a job. In the remainder of this paper, we denote job J_i's start event and end event by e_i^S and e_i^E respectively.

Step 3: Adjustment for concurrency
After determining the sets of concurrent jobs in Step 1 and the jobs' order relations in Step 2, we adjust the results of the latter by those of the former in Step 3. Fig. 3 shows the general concept of the adjustment. Here we suppose that Step 1 determined that the set of jobs J_k1, J_k2, J_k3 and J_k4 start at the same time. The corresponding start events of these jobs are represented in the dotted rectangle by e_k1^S, e_k2^S, e_k3^S, and e_k4^S respectively. We also suppose that the order relation from a preceding event e_i to the start event of one of these jobs (e.g. J_k1) is determined by Step 2, as shown by the arrow in the left hand part of Fig. 3. In such cases, while any job in the set of following jobs (J_k1, J_k2, J_k3 and J_k4) can start after the preceding event e_i, the relations between event e_i and the following events other than e_k1^S are not correctly detected by the Heuristics Miner algorithm. It is difficult for Heuristics Miner to correctly determine such concurrencies because the occurrence of several events at almost the same time can be recorded in the logs in random order. In order to solve this problem, we adjusted the model derived in Step 2 using the results of Step 1 as follows:
(1) Select a relation e_i ⇒_W e_k^S and the set of jobs c(J_k) which start at the same time as the start event e_k^S of job J_k.
(2) Establish order relations from the preceding event e_i to the start events of the jobs in c(J_k).
(3) Designate the relations thus established as AND-branches.
The result of this adjustment can be seen on the right hand side of Fig. 3. By performing this adjustment in our model construction, we can take into account the concurrent job information which may be overlooked by the Heuristic Miner algorithm.
Adjustment in Step 3
Jobs start at the same time
Jobs start at the same time
ekS1
ekS1 ekS2 ei
AND-branch
ekS2
ei ekS3
ekS3
ekS4
ekS4
Fig. 3. Adjustment in Step 3
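Building on those measures, the Step 3 adjustment can be sketched as follows. The representation of the model as a map from each event to its following events, the ":S" suffix marking start events, and the method names are simplifying assumptions made only for this illustration.

import java.util.*;

/** Illustrative sketch of the Step 3 adjustment: whenever an order relation points to the
 *  start event of a job, relations to the start events of all jobs that tend to start at the
 *  same time are added and marked as AND-branches. */
class ConcurrencyAdjustment {

    /** model.get(e) is the set of events that may follow event e; andBranches.get(e) marks
     *  which of them were designated as AND-branches. Start events are encoded as "Job:S". */
    static void adjust(Map<String, Set<String>> model,
                       Map<String, Set<String>> andBranches,
                       Map<String, Set<String>> concurrentJobs /* c(J_k) from Step 1 */) {
        for (Map.Entry<String, Set<String>> entry : model.entrySet()) {
            String predecessor = entry.getKey();
            for (String follower : new ArrayList<>(entry.getValue())) {
                if (!follower.endsWith(":S")) continue;            // only start events are adjusted
                String job = follower.substring(0, follower.length() - 2);
                for (String other : concurrentJobs.getOrDefault(job, Set.of())) {
                    String otherStart = other + ":S";
                    entry.getValue().add(otherStart);               // (2) add the missing relations
                    andBranches.computeIfAbsent(predecessor, p -> new HashSet<>())
                               .add(otherStart);                    // (3) mark them as AND-branches
                    andBranches.get(predecessor).add(follower);
                }
            }
        }
    }
}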
3.3 Conformance Check
In order to evaluate the accuracy of our mining algorithm, described in Section 3.2, we use a conformance check [6, 7], which evaluates how well the process models derived by a process mining algorithm express the patterns emerging in event logs. This is done by "replaying" the instances of the logs on the obtained models and detecting inconsistencies between the model and the logs. The general concept of the conformance check is shown in Fig. 4. First, we prepare a process model derived from a process mining algorithm. We also prepare instances of logs for evaluation of their conformance with the model. Next, we replay on the model, one by one, the events recorded in the instances. In this replay, we predict the candidates for the next events following each preceding event by referring to the process model. For example, in the case shown in Fig. 4, after the first event e_1 occurs in instance A, we predict that the next event will be either e_2 or e_3, because these events are the following events for e_1 in the process model. Likewise, after the second event e_2 occurs, we predict that one of the events e_3, e_4, or e_7 will be the third event. Here event e_3 still remains as one of the expected next events, since the links e_1 → e_2 and e_1 → e_3 are AND-branches, meaning that both e_2 and e_3 can occur after the preceding event e_1. Next, we check whether or not each prediction is correct. We conclude that the model conforms to the instance if the i-th event recorded in the instance is included in the (i−1)-th set of expected next events predicted by the model. In Fig. 4, while the first three events (e_1, e_2 and e_4) are predicted correctly, the occurrence of the fourth event e_6 is not predicted by the model, because it is not included in the third set of expected next events. If the number of such events, those not expected by the model, is
Fig. 4. Conformance Check
small, we can conclude that the model fits well with the given instance. This fitness can be evaluated by the following “fitness” function which is simplified from the original functions [6, 7] so that it suits the conditions in our job net analysis.
f = 1 − ( Σ_{i=1}^{k} m_i ) / ( Σ_{i=1}^{k} n_i )    (5)
In this fitness function, k represents the number of instances used for the evaluation, n_i is the number of events recorded in the i-th instance and m_i is the number of events which are not predicted correctly by the given process model. A value of the function close to 1 indicates that the model fits well with the given instances. At the same time, if we include all the events of the instance in the set of expected next events, we can always achieve a high value for the fitness function. This, however, would be meaningless because it does not narrow down the set of possible next events. Therefore, the smaller the number of expected next events derived from a model, the more appropriately that model represents the structure of the process, and the more valuable it is. To evaluate this characteristic, we use the following "appropriateness" function, which has also been tailored to our purpose.
a = ( Σ_{i=1}^{k} n_i (M − x_i) ) / ( (M − 1) · Σ_{i=1}^{k} n_i )    (6)
M is the number of events emerging in the model and x_i represents the average number of expected next events in the replay of the i-th instance. If the model can always narrow down the expected next events to just one event, the value of the appropriateness function is 1. When checking the conformance of the model with the instances, we evaluate both the fitness and appropriateness functions.
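The replay and the two functions can be sketched as follows; the model representation and the simplification of keeping every previously expected event in the expected set until it occurs (which is how the AND-branches of Fig. 4 behave) are our own assumptions, and XOR-branches are not modelled explicitly.

import java.util.*;

/** Illustrative replay-based conformance check computing the fitness (5) and
 *  appropriateness (6) values for a set of instances. */
class ConformanceCheckSketch {

    /** model.get(e) = events that may follow e; instances = ordered event lists per day. */
    static double[] check(Map<String, Set<String>> model, List<List<String>> instances) {
        Set<String> allEvents = new HashSet<>(model.keySet());
        model.values().forEach(allEvents::addAll);
        int M = allEvents.size();                      // number of events emerging in the model

        long totalEvents = 0, totalUnexpected = 0;
        double weightedExpected = 0;                   // sum over instances of n_i * x_i

        for (List<String> instance : instances) {
            Set<String> expected = new HashSet<>();
            long unexpected = 0, expectedSizeSum = 0;
            for (int i = 0; i < instance.size(); i++) {
                String e = instance.get(i);
                if (i > 0) {                           // the first event of an instance is taken as given
                    expectedSizeSum += expected.size();
                    if (!expected.contains(e)) unexpected++;   // event not predicted by the model
                }
                expected.remove(e);                    // simplified replay: AND siblings stay expected
                expected.addAll(model.getOrDefault(e, Set.of()));
            }
            int n = instance.size();
            double x = n > 1 ? (double) expectedSizeSum / (n - 1) : 0;  // average expected-set size x_i
            totalEvents += n;
            totalUnexpected += unexpected;
            weightedExpected += (double) n * x;
        }
        double fitness = 1.0 - (double) totalUnexpected / totalEvents;                     // equation (5)
        double appropriateness =
                (M * (double) totalEvents - weightedExpected) / ((M - 1.0) * totalEvents); // equation (6)
        return new double[] { fitness, appropriateness };
    }
}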
4 Experiment
4.1 Setup
We evaluated our approach using the following setup. First, we collected job net log data from an actual SCM system. This system was created by interconnecting 18 servers fulfilling different roles such as marketing, production management, and logistics. Of these 18 servers, we picked out the data recorded in the five main servers, on which many of the job nets are executed. For evaluation, we prepared the two sets of data specified in Table 1: Log A is data obtained overnight on weekdays in June and Log B is data obtained for the same days and times in July. Each job’s start/end timestamp is recorded in the data. In order to evaluate whether our approach is able to predict the order of job executions correctly, we constructed the job net model from Log A and separately checked its conformance with Log A and with Log B. In addition, in order to evaluate the effectiveness of our mining algorithm, we compared the results of our approach (using all of the steps 1, 2 and 3 in Section 3.2) with the
Table 1. Log data used for experiments (duration, number of days, time window, and average number of jobs for Log A, weekdays in June, and Log B, weekdays in July)
Heuristic Miner algorithm (using Step 2 only). For the thresholds, we used τ_p = 1 (sec), τ_c = 0.5, τ_D = 0.8, τ_R = 0.1, and τ_A = 0.1.
4.2 Results
We implemented our algorithm in Java and executed the experiments described in the previous subsection by using a PC with Windows XP Professional Edition, 4.3 GHz CPU, and 1 GB memory. The job net mining task in each experiment finished within 10 minutes. Since 3,356 individual jobs were recorded in Log A, the number of events (job start and end events) in the job net models constructed in each experiment was 6712. Fig. 5 shows a part of the derived model drawn by Graphviz [8] with the arrow attributes (AND or XOR) omitted for simplicity. Table 2 summarizes the results of the experiments. Comparing the numbers of unexpected events in Heuristic Miner (Cases 1 and 2) with the numbers in our approach
Table 2. Results of the experiments
Case 1 (Heuristic Miner): model from Log A (June), checked against Log A (June); number of events (avg.) 2035.2; unexpected events (avg.) 246.4; expected next events (avg.) 64.3; fitness 0.879; appropriateness 0.991
Case 2 (Heuristic Miner): model from Log A (June), checked against Log B (July); number of events (avg.) 2054.7; unexpected events (avg.) 273.5; expected next events (avg.) 64.8; fitness 0.867; appropriateness 0.991
Case 3 (our approach): model from Log A (June), checked against Log A (June); number of events (avg.) 2035.2; unexpected events (avg.) 93.7; expected next events (avg.) 75.2; fitness 0.954; appropriateness 0.989
Case 4 (our approach): model from Log A (June), checked against Log B (July); number of events (avg.) 2054.7; unexpected events (avg.) 122.1; expected next events (avg.) 76.4; fitness 0.941; appropriateness 0.989
(Cases 3 and 4), it can be seen that the latter are much smaller than the former. This results in a higher value of the fitness parameter for our approach than for Heuristic Miner. Furthermore, the numbers of expected next events and the appropriateness values in both algorithms are almost the same. Therefore, we can conclude that a more precise model can be constructed through our approach than through the Heuristic Miner algorithm alone, without much impact on the appropriateness parameter. In addition, the difference between the results produced by the same algorithm (Cases 3 and 4) is quite small. Therefore, we can conclude that our algorithm is able to predict the behavior of the job nets in July, using the model constructed from the logs recorded in June, with the same precision as in the case where the log data used for model construction and for conformance checking are the same.
4.3 Visualization Function
We implemented a web-based visualization function to visualize the job net analysis results. Fig. 6 shows a screenshot. It outputs the job net structure analysis results in the following way. First, we input each day's job net execution logs to this function. Next, it derives the order relations between jobs using our proposed algorithm. Then, the relation information is input to Graphviz, which calculates the position of the node representing each job. The positions of these nodes are ordered from the top of the graph to the bottom by the average time instants at which these jobs are executed. Finally, this function takes Graphviz's output and displays the job net structure graph using the Adobe Flex framework [9]. This tool helps job net administrators comprehend the progress of job net processes and identify the root cause of problems in the following ways. First, this function can change the colors of the nodes representing jobs depending on the execution status of these jobs. For example, the nodes painted in purple in Fig. 6 represent the jobs whose executions on a certain day were excessively delayed compared with the average execution time. This graphical representation enables us to identify the sets of delayed jobs at a glance and determine the possible root cause of the chained delays from the execution order relations between these jobs. Next, this function can "replay" the job net execution status on the screen. For example, if we assign a certain past time instant to the function, it calculates each job's status (e.g., not executed yet, executing, finished) at that time and displays the nodes in different colors depending on their status. Therefore, it can output the jobs' execution status at any time instant. By changing the colors of these nodes continuously along the time series, we can easily see the progress of job net executions on a certain day. This helps us understand the behavior of job nets and determine the cause of problems.
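To illustrate the pipeline described above, the sketch below shows how derived order relations and per-job delay information might be turned into a Graphviz DOT description with delayed jobs highlighted; the input structures, the delay criterion, the color choice and the function name are illustrative assumptions, not the actual tool's code.

```python
# Sketch (assumed inputs): emit a Graphviz DOT graph for a job net, coloring jobs
# whose duration exceeds the average by a given factor, similar in spirit to the
# delayed-job highlighting of Fig. 6.

def job_net_to_dot(order_relations, durations, avg_durations, delay_factor=1.5):
    """order_relations: iterable of (predecessor_job, successor_job) pairs.
    durations / avg_durations: dicts mapping job name to duration in seconds."""
    lines = ["digraph jobnet {", "  rankdir=TB;"]
    jobs = {j for pair in order_relations for j in pair}
    for job in sorted(jobs):
        delayed = durations.get(job, 0.0) > delay_factor * avg_durations.get(job, float("inf"))
        color = "purple" if delayed else "lightgray"
        lines.append(f'  "{job}" [style=filled, fillcolor={color}];')
    for pred, succ in order_relations:
        lines.append(f'  "{pred}" -> "{succ}";')
    lines.append("}")
    return "\n".join(lines)

relations = [("jobA", "jobB"), ("jobA", "jobC"), ("jobB", "jobD")]
dot_text = job_net_to_dot(relations,
                          durations={"jobA": 60, "jobB": 420, "jobC": 55, "jobD": 80},
                          avg_durations={"jobA": 62, "jobB": 200, "jobC": 50, "jobD": 75})
print(dot_text)  # the text can be passed to the "dot" command to compute a layout
```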
Fig. 6. Screenshot of visualization tool
The tool also provides basic functions such as scrolling and zooming in and out, so we can easily focus on the part of the graph we are interested in.
5 Conclusions
We proposed a job net mining method to derive the execution order of job nets from their logs. In this method, we identify the set of jobs executed at the same time. Using this information, we then modify the job net model derived by the Heuristic Miner algorithm. Through conformance checking using the log data of job nets executed in an actual SCM system, we confirmed that our method enables construction of a job net model that represents the order relations between jobs more accurately and appropriately than the model obtained through Heuristic Miner alone. We are considering the following future work. First, we plan to improve our visualization tool for concise visualization of the structure and characteristics of job nets. Since it is difficult for system administrators (humans) to understand the relationships between over 1000 events in a single directed graph, we need a method of extracting the important part of the model or abstracting its structure in order to make it more understandable. Next, using the proposed approach, we plan to develop a method of predicting the finishing times of job nets. Since one of the biggest concerns of many job net administrators is whether or not the job nets will finish within the deadline, this function will help them manage their job nets more efficiently.
Finally, we plan to develop a method for analyzing the model derived by our approach. For example, when failures or delays occur in job net execution, the job representing the root cause can be detected by backtracking through the order relations in the derived model. In addition, by measuring the execution durations of jobs, the critical path, which takes a large amount of time to finish, can be detected. This information is useful for reorganizing job nets so as to reduce their execution times. With these analysis techniques, we will be able to improve reliability in the management of large-scale integrated complex computer systems. Acknowledgements. We would like to thank Masaru Ito for his help in collecting job net data and for giving us much useful advice.
References
1. Fujitsu SystemWalker Operation Manager v13.3, http://www.fujitsu.com/global/services/software/systemwalker/products/operationmgr/
2. van der Aalst, W.M.P., Reijers, H.A., Weijters, A.J.M.M., van Dongen, B.F., Alves de Medeiros, A.K., Song, M., Verbeek, H.M.W.: Business Process Mining: An Industrial Application. Information Systems 32(5), 713–732 (2007)
3. van der Aalst, W.M.P., Weijters, A.J.M.M., Maruster, L.: Workflow Mining: Discovering Process Models from Event Logs. IEEE Transactions on Knowledge and Data Engineering 16(9) (2004)
4. van der Aalst, W.M.P., Alves de Medeiros, A.K., Weijters, A.J.M.M.: Genetic process mining. In: Ciardo, G., Darondeau, P. (eds.) ICATPN 2005. LNCS, vol. 3536, pp. 48–69. Springer, Heidelberg (2005)
5. Weijters, A.J.M.M., van der Aalst, W.M.P., Alves de Medeiros, A.K.: Process Mining with the Heuristics Miner-algorithm. In: BETA Working Paper Series, WP 166, Eindhoven University of Technology (2006)
6. Rozinat, A., van der Aalst, W.M.P.: Conformance Checking of Processes Based on Monitoring Real Behavior. Information Systems 33(1), 64–95 (2008)
7. Rozinat, A., van der Aalst, W.M.P.: Conformance Testing: Measuring the Fit and Appropriateness of Event Logs and Process Models. In: Proceedings of the First International Workshop on Business Process Intelligence (BPI 2005), pp. 1–12 (2005)
8. Gansner, E., North, S.: An open graph visualization system and its applications to software engineering. Software – Practice & Experience 30(11), 1203–1233 (2000)
9. Adobe Flex Framework, http://labs.adobe.com/technologies/flex/
Identifying Ruptures in Business-IT Communication through Business Models
Juliana Jansen Ferreira 1,2, Renata Mendes de Araujo 1,2, and Fernanda Araujo Baião 1,2
1 Research and Practice Group in Information Technology (NP2Tec), Federal University of the State of Rio de Janeiro (UNIRIO), Rio de Janeiro, RJ, Brazil
2 Graduate Program in Informatics (PPGI), Federal University of the State of Rio de Janeiro (UNIRIO), Av. Pasteur 458, Urca, Rio de Janeiro, RJ, Brazil
{juliana.ferreira,renata.araujo,fernanda.baiao}@uniriotec.br
Abstract. In scenarios where Information Technology (IT) becomes a critical factor for business success, Business-IT communication problems raise difficulties for reaching strategic business goals. Business models are considered an instrument through which this communication may be held. This work argues that business model communicability (i.e., the capability of a business model to facilitate Business-IT communication) influences how the Business and IT areas understand each other and how IT teams identify and negotiate appropriate solutions for business demands. Based on semiotic theory, this article proposes business model communicability as an important aspect to be evaluated for making the Business-IT communication cycle possible, and to support this evaluation a set of communicability tags is proposed to categorize communication ruptures identified during the evaluation of business model communicability. Keywords: Information systems specification, Communicability, Communication ruptures, Business modeling, Business-IT alignment, Business-IT communication, Communicability evaluation.
areas have the same understanding about the business context. Business models are considered an instrument through which the IT area can share the same understanding of the business area of their working contexts [2]. Research indicates the use of business modeling as a facilitator of communication for IS specification, helping the interaction between the stakeholders, both from the business and IT sides, the business analyst and the IT analyst [2][1][21][15][3][5]. The IT analyst needs to understand the business reality so that he can specify an IS capable of supporting the business needs. The research question is: "how to improve the understanding of the business context by IT analysts to provide the alignment in the IS specification from business models?" Considering the business model as a means through which the communication between business analyst and IT analyst takes place, the capability of a business model to facilitate the communication (which we will call communicability) may be considered an important feature for Business-IT alignment to be effective. The main issue here is how to improve the understanding of the business context by the IT analyst so that he/she can specify ISs aligned with the business contexts represented in business models. IS specification, in this work, concerns the identification of the main IS functional requirements and data conceptual models. This research proposes the use of Semiotics as an approach to the evaluation of business model communicability. Semiotics, which is applied and discussed in many areas of research such as psychology, anthropology, and philosophy, is the study of signs, the relations among those signs and what they mean together [17][4]. With this purpose in mind, semiotic concepts are applied. This research proposes the application of semiotic concepts to business models, considering those models as a set of signs and their relations that have meaning for those who model, analyze or use the models as a communication instrument. In this research, the focus is the business model as the message being communicated by the business analyst to the IT analyst to specify an IS, helping the interaction between the stakeholders from the IT and business sides. This work also proposes the definition of communication ruptures, as defined by the semiotic theory of the HCI (Human-Computer Interaction) area - Semiotic Engineering - used to evaluate communicability features in system interfaces [6], applied to business model communicability evaluation. This paper defines business model communicability and explains how it is related to semiotic foundations and Semiotic Engineering. We propose a set of communication ruptures which may be found by IT analysts when considering business models for IS requirements specification. The set of communication ruptures helps in identifying specific points of the business model where communicability needs to be improved, and in determining the business model communicability level. The ruptures serve as guidance for planning actions that should be carried out at modeling time, when the business models are being designed, so as to produce business models that are bound to be powerful communication instruments for Business-IT alignment. Section 2 defines concepts of business modeling and their relation to IS specification. Section 3 presents the concepts of semiotics and their application.
Section 4 defines communicability for business models, while Section 5 presents the proposed set of communicability tags and the case study results. Section 6 concludes the paper and outlines contributions, limitations and future work.
2 Business Models for Information Systems Specification
Business modeling may have different objectives. Some approaches focus on business process improvement [21], others consider the need to identify requirements to develop ISs that support the business models [15][3][5], and others consider business model automation [7][12]. In the present research, the business model is understood as an instrument to support the communication between the stakeholders of an IS implementation, business analyst and IT analyst (Fig. 1), when there exists a business context and need which demands IT solutions and support.
Fig. 1. Business models as an instrument of communication between business and IT
We focus on the communication between an IT analyst and a business analyst during IS specification, when the IT analyst uses the business model as the representation of how the business occurs. In this situation, the business analyst, who created the business model, is trying to communicate to the IT analyst the business context into which the IS should be inserted. In those scenarios, the IT analyst interacts with the business analyst through the business model. There are several methods for specifying ISs from business models [1][21][15][3][5][7]. All of them assume that the business models must communicate to the IT analyst what is necessary for the specification of the IS that will support that business model. The business models must communicate the business context, presenting what the business analyst would inform the IT analyst of so that the IS specification can be aligned with the business needs. Therefore, it is important to evaluate whether the business models that are being produced can be an effective communication instrument for specifying ISs. The focus is to evaluate how well the business model is able to communicate, rather than the business model per se. Its communicability can be identified and evaluated when the IT analyst is using the business model during IS specification. This communicability evaluation should be able to identify communication ruptures - the moments when the business model was not able to communicate, or some
miscommunication occurred. The number and type of the communication ruptures resulting from this evaluation can be used as input for improving the business model regarding its communicability, making it a better communication tool between business and IT (Fig. 2). This work attempts to identify a set of communication ruptures which can be further used both as a reference for communicability evaluation methods and as guidelines for the definition of business modeling heuristics for communicability.
Fig. 2. Business model communicability evaluation providing improved Business-IT alignment for IS specification
3 Semiotic Theory and its Application
Semiotics is a multidisciplinary theory (related to various areas of knowledge, such as psychology, anthropology, philosophy, linguistics and others) focusing on signs, their relations and communicability. A sign is "something that stands for something or to someone in some capacity" [15]. It may be understood as a discrete unit of meaning, and includes words, images, gestures, scents, tastes, textures, sounds – essentially, all possible ways in which information can be communicated as a message by any sentient, reasoning mind to another. Semiotics is related to the human impressions on the meaning of things in the world, but also concerns the communication (intent) held with the use of those signs and their relations [8][17]. Peirce [17] defines a sign as a triad: representation, reference and meaning. The representation is how the sign is presented in a given language, the reference relates to the existence of the sign in the real world, and the meaning is the interpretation (semantic comprehension) that people build in their minds when they are exposed to a representation of the reference. From a semiotic perspective, it does not make sense to mention representation without reference and meaning. One example of the application of semiotics is Semiotic Engineering, a research field of the HCI (Human-Computer Interaction) area. Semiotic Engineering emphasizes the ability of designers to communicate their intent through interactive
interface discourse [6]. The Communicability Evaluation Method (CEM) proposed in [19] evaluates and enhances interaction in software applications by identifying communication ruptures, so that the interface communicability can be improved. The communication ruptures identify the points where the interface can be improved to be more communicative [19]. The core of CEM is a set of communicability tags defined to categorize the communication ruptures, or breaks, during the user's interaction with the system interface. There are thirteen (13) communicability tags defined in CEM. Each one is related to a main characteristic associated with the communication rupture observed during the user's interaction with the system interface. Communication ruptures are categorized as temporary, partial or complete (Table 1).
Table 1. CEM communicability tags adapted from [19] (tag; category; characteristic)
"I give up." (complete rupture): the user is aware of the rupture.
"Looks fine to me." (complete rupture): the user is not aware of the rupture.
"Thanks, but no, thanks." (partial rupture): the user does understand (but does not adopt) the design solution offered.
"I can do otherwise." (partial rupture): the user does not understand (and therefore does not adopt) a design solution that is offered.
"Where is it?" (temporary rupture): the user cannot find an expression of the interface language to say (to the system) what to do.
"What happened?" (temporary rupture): the user does not notice or does not understand what the agent of the designer (the interface) is saying.
"What now?" (temporary rupture): the user cannot even formulate what he/she would have meant to say at this time.
"Where am I?" (temporary rupture): although using appropriate elements of the interface language, the user speaks them in the wrong context.
"Oops!" (temporary rupture): the user used a wrong expression (which means something else, or means nothing).
"I can't do it this way." (temporary rupture): the conversation followed a course of signs that will not lead to anything.
"What's it?" (temporary rupture): the user seeks clarification through implicit metacommunication (for example, by inspection or "tip" calls).
"Help!" (temporary rupture): the user seeks clarification through explicit metacommunication (request for help).
"Why doesn't it?" (temporary rupture): the user seeks clarification through experimentation and reasoning (repeating and examining interactions, or certain types of interaction, to understand why they are not going well).
Taking a business model as a set of signs according to semiotic theory, its communicability may be further investigated towards its effective usage for communication. Communication ruptures – as defined by HCI Semiotic Engineering – may be applied to business models, since they function as the interface between the business analyst and the IT analyst, where the communication for an IS specification is held.
4 Communicability Evaluation of Business Models for IS Specification
Applying the concept of communicability to business models for IS specification, we define the communicability of a business model as the capability of the business model to facilitate the communication between business analyst and IT analyst during an IS specification. We argue that business model communicability directly influences the ability of an IT analyst to understand the business model as it was designed by the business analyst. If the IT analyst understands the business context represented by the business model, the probability that he/she elaborates an IS specification aligned with the business needs increases. The need to evaluate communicability is also identified for business models. As the business analyst is the designer of the communication of business models, the messages to the IT analyst must be evaluated, looking for communication ruptures which can be inputs for improving business model communicability. A communication rupture of a business model is a point in the business model where the model was not able to communicate, or where the communication of some information or understanding necessary for the specification of the IS was incomplete or incorrect. Ruptures are identified during the interaction of the IT analyst and the business analyst through the business models. Ruptures can be categorized into temporary, partial or complete, in a way analogous to the rupture categories proposed by CEM. Temporary ruptures occur when sense-making (reasoning, interpretation, communication) is interrupted and the IT analyst is aware of the wrong usage of sign communication. Partial ruptures occur when the IT analyst is aware of the sign representation in the business model, but decides not to use it. Complete ruptures occur when the IT analyst is aware of the miscommunication, and decides to settle with his/her knowledge of the domain or leave the artifact incomplete. Being aware of communication ruptures, a communicability diagnosis of business models for IS specification can be formulated and used as input for improving business model communicability. Therefore, CEM [19] was used as a reference and inspiration for business model evaluation for information systems specification.
5 Communication Ruptures of Business Models in IS Specification
During this research, no categorization of communicability ruptures for this context was found in the literature. Therefore, our research strategy was to perform exploratory studies,
using the CEM communicability tags (Table 1) as reference, to identify an initial set of communication rupture tags (communicability tags) to categorize the communication ruptures of the specification of ISs from business models, as described in Section 5.1. After defining the initial set of communicability tags, this set was validated through three case studies in which business model communicability was observed and communication ruptures were identified. The goal was to verify whether the communication ruptures identified could be categorized by one of the proposed communicability tags.
5.1 Business Models Communicability Tags for IS Specification
Three (3) exploratory studies were performed to observe and investigate communication ruptures between IT analyst and business analyst through the business model, while the IT analyst tries to specify an IS. The studies' domain is related to a process of real estate management. The business context is that of a large organization that needs to manage its real estate assets, regarding tax payment, ownership regularization, real estate documentation (like real estate writ, environment taxes, and ownership transfers), and real estate documentation and tax pendencies. The exploratory studies scenario was defined as follows: IT analysts' profiles – the IT analysts selected for the studies are indicated in Table 2.
Table 2. IT analysts' profiles in the exploratory studies (knowledge of business modeling; level of experience on IS specification)
ES I: Theoretical; High
ES II: Theoretical and practical; High
ES III: Theoretical; Low
The observer – the observer was an IT analyst with business modeling experience. The observer also had experience in communicability evaluation related to HCI. The observer's objective was to identify and register the communication ruptures during the study. Tasks to perform – the IT analysts were asked to elaborate a class diagram and a use case specification (both in UML notation) from the business model. The presentation format of the final artifacts was chosen by the IT analyst, so it would not be a difficulty factor that could cause false communication ruptures. Business model presentation – the business model was represented in a document called the "business process book", or simply the "book". The book is a document composed of process flows, process and activity descriptions, element descriptions such as documents, business rules, input/output information, and business terms. This representation format was well known to the IT analysts, so it was not a difficulty factor that could cause false communication ruptures. Business models domain – the business model domain chosen for the studies was known by the IT analysts. Business model types – the main models used in the study were the eEPC (Extended Event-Driven Process Chain), which represents the business process
workflows, and the FAD (Function Allocation Diagram), which details one activity considering its input/output information or artifacts, its performer and any other relevant information [11]. The idea when defining the exploratory studies scenario was to prevent other factors from causing communication ruptures during the studies and influencing the results. We wanted to focus on the communication ruptures associated with the interaction between the IT analyst and the business models. The exploratory studies were analyzed using the observer's records, where verbalizations by the IT analyst and questions by the observer to the IT analyst were used to characterize the communication ruptures identified during the studies. As an illustration of the study records, the partial execution of one exploratory study is described below. Exploratory study partial execution record. The IT analyst started the task searching for the classes that would compose the diagram. The IT analyst narrated that she searched for domain concepts in the business model used for the study. She explored the business process book (the book), looking into the process and activity names and descriptions, trying to identify the domain concepts. During this task, at some moments the IT analyst had doubts related to candidate domain concepts: "Is this a domain concept for sure? What does it mean?" This kind of questioning happened more than once; sometimes the answer was discovered right away through another description or model presented in the book ("Yes, this is a domain concept!"), but at other times the question remained and the IT analyst decided whether or not to consider the concept a domain concept, but without confirmation from the business model ("I'm not sure, but I think this is it!"). The IT analyst looked for relationships and candidate methods in the process and activity descriptions. Some doubts related to relationships were narrated: "What composes a real estate history? Which are its attributes? Where is this information? Where is it?" Looking further into the book, she found an activity related to real estate history analysis. From this activity description, she found the answer to her questions about real estate history. While looking for relationships among concepts, there were doubts about the relation among real estate manager, real estate and real estate pendency: "Is the property manager also responsible for the property pendency? Is this it?" The class diagram changed during the book exploration due to new information found while reading the book, which changed the way the IT analyst understood concept definitions: "Oops! This is not what I thought it was. Let me change the diagram". The analyst narrated that real estate pendency and the solution of real estate pendency seemed to have a relationship, but it was not clear to her. She looked through the book hoping to find some information that could clarify this relationship, but with no success. So she decided to leave this relationship out of the class diagram: "I give up! I do not know if this relationship should exist or not, so I will leave it as it is, with no relationship". Considering the three (3) exploratory studies, some observations and considerations were placed under discussion. Although the IT analyst profiles were distinct,
this difference did not seem to influence the communication ruptures observed. The same level of difficulty was observed for all three IT analysts. The "done point", when the IT analyst considered the artifact completed, was defined by each analyst. Since there are multiple paths to elaborate an artifact and the criteria to consider an artifact complete are subjective, the "done point" was not the same for all studies. Again, since there are multiple paths to elaborate an artifact, the anticipation of potential communication ruptures was limited. Since the communication ruptures are related to the path which the "conversation" takes, the ruptures were identified during the evaluation and afterwards, by analyzing the records registered by the evaluator or observer. Of the thirteen (13) tags of CEM used as reference to categorize the communication ruptures, seven (7) were not observed during the exploratory studies: "What happened?", "What now?", "Where am I?", "Help!", "Why doesn't it?", "Looks fine to me." and "Where is it?". The first six (6) tags have in common their relationship with communication ruptures associated with the IS responses to user actions. Since the interaction between the IT analyst and the business model is limited, one of those limitations being the lack of reaction from the model, the absence of those tags is understandable. The tag "Where is it?" was renamed to "Where can I find it?" due to differences in the rupture characteristics between interface interaction and business model interaction. Although the verbalization of this rupture during business model interaction was similar to interface interaction, the characteristics had fundamental differences that led us to rename the tag in the context of IS specification from business models. Four (4) new tags were defined to identify communication ruptures that could not be associated with previously existing tags: "Oops… Not found!" and "I can do it my way." were defined to tag complete ruptures, while "Is it?" and "Where can I find it?" were defined to tag temporary ruptures. The tag "Is it?" was sometimes substituted by complete category tags: "I give up." and "I can do it my way.". The tag "Oops… Not found!" showed no occurrence in the exploratory studies, but the communication rupture associated with it can be inferred from comments at various times during the exploratory studies. Some temporary ruptures were resolved by the existence of other elements that contributed to understanding; in the absence of those elements, the temporary tag would be replaced by the new complete tag "Oops… Not found!", given that the element is not found in the business model. Table 3 presents the set of ten (10) business model communicability tags for IS specification defined after the execution of the three exploratory studies, their categories, main characteristics and the number of occurrences of each tag in each exploratory study (ES I, ES II or ES III).
5.2 Communicability Tags Validation
To validate the proposed set of communicability tags defined after the exploratory studies, three (3) case studies were performed. For the study executions, dependent and independent variables were defined. Two (2) dependent variables were used: DV1 - the viability of associating the communication ruptures identified with the proposed communicability tags, and DV2 - the incidence of each communicability tag associated with communication ruptures.
Table 3. Set of business model communicability tags for IS specification defined after the exploratory studies (tag; category; characteristic; occurrences in ES I / ES II / ES III)
"I give up." (complete): in the absence of further explanation on a business element (like the lack of a term or business rule), the IT analyst "gives up" from that point of the "conversation". Occurrences: ES I 1, ES II 2, ES III 1.
"I can do it my way." (complete): the IT analyst does not get the response he wants and decides how to "answer" his own question from his knowledge of the domain, without the support of the business model. He infers without the support of the model. Occurrences: ES I 3, ES II 2, ES III 2.
"Oops… Not found!" (complete): the IT analyst realizes that a model element expected to be found available in the model is not present. This tag marks the absence of an element, graphic or textual, of the business model. Occurrences: ES I *, ES II *, ES III *.
"Thanks, but no, thanks." (partial): the IT analyst believes that he can get the response he wants in a certain way, but knowingly chooses otherwise ("preference"). Occurrences: ES I 4, ES II 4, ES III 2.
"I can do otherwise." (partial): the IT analyst does not get the response he wants and decides how to "answer" his own question from other concepts found in the business model. He infers with the support of the model. Occurrences: ES I 2, ES II 4, ES III 3.
"I can't do it this way." (temporary): the IT analyst realizes during the IS specification (development of the IT artifact) that the decisions made to specify the IS are not correct and should be reconsidered. He changes the artifact. Occurrences: ES I 2, ES II 1, ES III 1.
"Where can I find it?" (temporary): the IT analyst needs certain information about a business element, but it is not located at that time. He may find it later, or not find it at all ("Oops… Not found!"). Occurrences: ES I 6, ES II 5, ES III 4.
"Oops!" (temporary): the IT analyst realizes that he made a mistake in understanding a business element and corrects his error. Occurrences: ES I 3, ES II 1, ES III 1.
"What's it?" (temporary): the IT analyst needs the support of other business elements to understand an element (e.g., use of business terms, descriptions and so on). Occurrences: ES I 4, ES II 2, ES III 2.
"Is it?" (temporary): the IT analyst needs the support of other business elements to confirm his understanding of a business element. Occurrences: ES I 5, ES II 6, ES III 2.
The first was used to verify whether the set of tags was applicable in an evaluation of business model communicability, and the second was used to verify the incidence of communicability tags, which could indicate that a tag characteristic is too generic or misunderstood and that the tag and its characteristics should be re-examined. These variables were measured through the observations made during the execution of the case studies and the analysis of the material produced in those studies. Seven (7) independent variables were used: UV1 - experience of the IT analyst in business modeling, and his experience related to the concepts regarding the presentation format of business models; UV2 - the IT analyst's understanding of the business context of the models; UV3 - experience of the IT analyst in IS specification, using the class diagram and use case specification; UV4 - experience of the evaluator in assessments of communicability; UV5 - the ability of the evaluator in identifying communication ruptures; UV6 - the evaluator's understanding of the characteristics of the communicability tags; and UV7 - complexity of the business models. These variables were evaluated using knowledge of the profiles of the evaluators and IT analysts, and observations made during the preparation and execution of the case studies. The execution of the case studies was conducted in a similar way to the exploratory studies, where the evaluator asked the IT analyst to narrate his/her rationale for the task of elaborating the artifacts for the IS specification from business models. The evaluator also examined the artifact produced, in addition to questioning the IT analyst during the task, seeking to investigate possible communication ruptures. Also, the
study scenarios were defined to avoid false ruptures. After identifying the communication ruptures, they were categorized based on the proposed set of communicability tags and analyzed according to the variables defined for the case studies. In case studies I and III, the evaluator was the researcher who proposed the communicability tags in this work. In case study II, the evaluator was another IT analyst with theoretical and practical experience in business modeling, in order to verify whether another analyst could understand the tags and apply the evaluation, identifying the communication ruptures and associating these ruptures with the proposed communicability tags. For the three (3) case studies, the IT analysts had participated in the design of the business models used. The case studies were performed using a set of business models related to a funding process of a federal university in Brazil, such as funding solicitation for academic material acquisition.
Table 4. Summary of dependent variable results (DV1; DV2)
Case study I: VIABLE; UNIFORM DISTRIBUTION
Case study II: VIABLE; UNIFORM DISTRIBUTION
Case study III: VIABLE; UNIFORM DISTRIBUTION
The dependent variable values were defined as follows: VIABLE / NOT VIABLE - whether all the ruptures could be associated with one of the proposed tags; and UNIFORM DISTRIBUTION / NON-UNIFORM DISTRIBUTION - how the incidence of tags was distributed and whether the incidence of a tag indicated a situation that needed to be analyzed. The independent variable assessments for the case studies are presented in Table 5.
Table 5. Summary of independent variable assessments (values for case studies I, II and III)
UV1: HIGH; HIGH; HIGH
UV2: HIGH; HIGH; HIGH
UV3: HIGH; HIGH; HIGH
UV4: HIGH; LOW; HIGH
UV5: HIGH; MEDIUM; HIGH
UV6: HIGH; MEDIUM; HIGH
UV7: LOW; HIGH; HIGH
The independent variable values were defined as high, medium or low. The consolidation of the communicability tags identified in the case studies is presented in Table 6. Of the ten (10) proposed tags, the "Oops… Not found!" and "What's it?" tags did not occur in the case studies. According to the characteristics of these tags, their absence in the case studies may be due to the IT analysts' extensive knowledge of the business models. Also, because of this domain and model knowledge, some ruptures were avoided by the IT analysts during the interaction with the models.
Table 6. Consolidation of communicability tags from case studies (tag; category; occurrences in CS I / CS II / CS III; total)
"I give up." (complete): 0; 1; 2; total 3
"I can do it my way." (complete): 1; 1; 0; total 2
"Oops… Not found!" (complete): 0; 0; 0; total 0
"Thanks, but no, thanks." (partial): 1; 1; 2; total 4
"I can do otherwise." (partial): 1; 1; 1; total 3
"Where can I find it?" (temporary): 3; 3; 8; total 14
"Is it?" (temporary): 2; 3; 5; total 10
"I can't do it this way." (temporary): 1; 2; 5; total 8
"Oops!" (temporary): 2; 0; 1; total 3
"What's it?" (temporary): 0; 0; 0; total 0
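The consolidation shown in Table 6 is a simple tally of tagged ruptures per case study; the sketch below illustrates one possible way to represent the tag taxonomy of Table 3 and compute such totals programmatically (the data layout, the example observations and the function name are assumptions for illustration only).

```python
from collections import Counter

# Sketch (assumed data layout): the tag taxonomy of Table 3 and a tally of the
# ruptures observed per case study, analogous to the totals column of Table 6.
TAG_CATEGORY = {
    "I give up.": "complete",
    "I can do it my way.": "complete",
    "Oops... Not found!": "complete",
    "Thanks, but no, thanks.": "partial",
    "I can do otherwise.": "partial",
    "Where can I find it?": "temporary",
    "Is it?": "temporary",
    "I can't do it this way.": "temporary",
    "Oops!": "temporary",
    "What's it?": "temporary",
}

def consolidate(ruptures_per_study):
    """ruptures_per_study: dict mapping a study id to the list of tags observed in it."""
    totals = Counter()
    for tags in ruptures_per_study.values():
        totals.update(tags)
    return totals

# Illustrative (hypothetical) observations, not the data of the actual case studies.
observed = {"CS I": ["Where can I find it?", "Is it?", "Oops!"],
            "CS II": ["Where can I find it?", "I give up."]}
for tag, count in consolidate(observed).most_common():
    print(f"{tag} ({TAG_CATEGORY[tag]} rupture): {count}")
```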
The complexity of the models did not influence the communicability. The number of ruptures tagged was larger for the more complex model, but the characteristics of the tags were very similar to those of the less complex model. The communication ruptures observed were also similar for IT analysts with different profiles, indicating that the IT analyst profile did not influence the communication with the business model. Some inference ruptures were identified during the case studies ("I can do it my way."), but others might have been lost, since the main means of evaluation was the IT analyst's narration. The names of the business elements seemed to be a common reference for understanding the elements. This raised a question about the importance of naming the elements while modeling the business. No tags were directly related to it, but it was an interesting observation for future consideration. The high incidence of temporary communicability tags ("Where can I find it?", "Is it?" and "I can't do it this way.") in the three case studies indicates that the communication process is being accomplished through the interpretation of the business model by the IT analyst. Along the communication, he has several questions that end up being answered as he interprets and understands the business context that the model shows.
6 Conclusions and Future Work
The results of the three case studies showed that the proposed set of communicability tags was applicable. The tags may be used as a reference for creating a communicability evaluation method for business models used in IS specification. This research proposes a shift in the focus of communication evaluation to business models. This communication takes place when an IT analyst seeks to understand the business context through a business model, during the specification of an IS to support the business process. The objective is to evaluate how well the business model is able to communicate for the specification of an IS. The object of evaluation is communicability, the communication capability of the business model, and not the model itself. Issues related to the quality of the business model influence the communicability of these models and thus were considered as part of evaluating their communicability. As the idea of communicability evaluation of business models for IS specification was inspired by a communicability evaluation method from HCI (CEM) [19], where metacommunication is the object of the evaluation, the usage of tags and the analogy of communicability consider the communication made through the business models as the object of that evaluation. There is communication between the business analyst and the IT analyst through the business model during IS specification. This requires adjustments to CEM and its conceptual components, the communicability tags and their characteristics. This work defined the communicability of business models and points to improvement directions in business model design. It also contributes by applying communicability evaluation to the domain of business models for IS specification. Therefore, IS specification may be improved by looking at the IT analyst's understanding of the business problem at the specification phase, preventing future problems with the IS. Finally, this work improves Business-IT alignment research with a semiotic view of the relationship between business and IT and defines the business model as an instrument of communication between people. Future work includes the evaluation of the proposed ideas in different scenarios, using other modeling methodologies and notations for business process modeling. Further case studies will explore the use of communicability tags for business models for IS specification in other scenarios, the definition of a communicability evaluation method for business models used in IS specification and, with this method, the investigation of the evolution of business model communicability, and finally the investigation of the relation between model quality and communicability. Acknowledgements. The authors want to thank CAPES and CNPq - Brazilian funding agencies - for partially supporting this research.
References
1. Barjis, J., Augusto, J.C., Ultes-Nitsche, U.: Towards more adequate EIS. Science of Computer Programming 65(1) (2006); Special Issue on: Increasing Adequacy and Reliability of EIS
2. Barjis, J.: The Importance of Business Process Modeling in Software Systems Design. Journal of The Science of Computer Programming 71(1), 73–87 (2008)
3. Bittencourt, R., Araujo, R.M.: Identificando Expectativas de Qualidade de SIs com o apoio de Modelos de Negócio (Identifying IS quality needs using Business Models). In: Workshop de Gestão de Processos de Negócio (Brazilian Business Process Management Workshop) (2008) (in Portuguese)
4. Chandler, D.: Semiotics: The Basics. Routledge, New York (2002)
5. Cruz, P.O.: Heurísticas para identificação de requisitos de sistemas de informação (Heuristics for requirements' identification for information systems). Dissertation of M.Sc. NCE/UFRJ, Rio de Janeiro, RJ, Brasil (2004) (in Portuguese)
6. De Souza, C.S.: The Semiotic Engineering of Human-Computer Interaction. MIT Press, Cambridge (2005)
7. Dehnert, J., Van Der Aalst, W.M.P.: Bridging the gap between business models and workflow specifications. International Journal of Cooperative Information Systems 13(3), 289–332 (2004)
8. Eco, U.: A Theory of Semiotics. Macmillan/Indiana University Press, London/Bloomington (1976)
9. Ekstedt, M., Johnson, P., Plazaola, L., Silva, E., Vargas, N.: An Organizational-Wide Approach for Assessing Strategic Business and IT Alignment. In: Proceedings of the Portland International Conference on Management of Engineering and Technology (PICMET 2005), Portland, USA (2005)
10. Eriksson, H., Penker, M.: Business Modeling with UML: Business Patterns at Work. John Wiley & Sons, Chichester (2000)
11. IDS Scheer: ARIS Method, 2087 p (2003)
12. Iendrike, H., Dos, S., Araujo, R.M.: Projeto de Processos de Negócio visando a automação em BPMS (Business Process Project towards BPMS automation). In: Workshop Brasileiro em Gestão de Processos de Negócio (Brazilian Business Process Management Workshop). XIII Brazilian Symposium on Multimedia and Web. SBC, Porto Alegre (2001) (in Portuguese)
13. Dehnert, J., Van der Aalst, W.M.P.: Bridging the gap between business models and workflow specifications. International Journal of Cooperative Information Systems 13(3), 289–332 (2004)
14. Luftman, J., Kempaiah, R.: An Update on Business-IT Alignment: "A Line" Has Been Drawn. MIS Quarterly Executive 6(3) (2007)
15. MacKnight, D., Araújo, R.M., Borges, M.R.: A Systematic Approach for Identifying System Requirements from the Organizational Business Model. In: II Brazilian Symposium on Information Systems (2005)
16. Marques, C., Sousa, P.: Getting into the misalignment between Business and information Systems. In: The 10th European Conference on Information Technology Evaluation, Madrid, Spain (2003)
17. Peirce, C.S.: Collected Writings. In: Hartshorne, C., Weiss, P., Burks, A.W. (eds.) 8 vols. Harvard University Press, Cambridge (1931-1958)
18. Plazaola, L., Silva, E., Vargas, N., Flores, J., Ekstedt, M.: A Metamodel for Strategic Business and IT Alignment Assessment. In: Conference on Systems Engineering Research. University of Southern California, USA (2006)
19. Prates, R.P., De Souza, C.S., Leitão, C.F., da Silva, E.J.: The Semiotic Inspection Method. In: VII Brazilian Symposium on Human Factors in Computing Systems (2006)
20. Sharp, A., Mcdermott, P.: Workflow Modeling: Tools for Process Improvement and Application Development, p. 345. Artech House, Norwood (2008) ISBN: 1-58053-021-4
21. Yu, E.: Models for supporting the redesign of Organizational Work. In: Proceedings of the Conference on Organizational Computing Systems (COOCS 1995), pp. 225–236 (1995)
A Business Process Driven Approach to Manage Data Dependency Constraints
Joe Y.-C. Lin and Shazia Sadiq
School of Information Technology & Electrical Engineering, The University of Queensland, Brisbane, Australia
{jlin,shazia}@itee.uq.edu.au
Abstract. A major reason for the introduction and subsequent success of Business Process Management (BPM) and related tools is their ability to provide a clear separation between process, application and data logic. However, in spite of the abstraction value that BPM provides, a seamless flow between the technology layers has not been fully realized in mainstream enterprise software. The result of this disconnect is disparity (and even conflict) in enforcing various rules and constraints. In this paper, we address the problem of the disconnect between the data-relevant constraints defined within business process models and the data dependency constraints defined in the data layer. We propose a business process (model) driven approach wherein such constraints can be modelled at the process level, and enforced at the data level through a (semi-)automated translation into DBMS native procedures. The simultaneous specification ensures consistency of the business semantics across the process and data layers. Keywords: Business process management, Data flow, Constraint modelling.
Fig. 1. Building Blocks of Process-Enabled Enterprise Systems
• Business processes are primarily captured through modeling and business logic is primarily implemented through coding of application components.
• Application components have minimal direct awareness of one another and also have minimal direct awareness of "where and how" they are being utilized in the BPM layer.
• BPM takes the primary responsibility to achieve business objectives through configuration, coordination, collaboration, and integration of application components.
• Clear mapping between the design time conceptual modeling environment to capture 'real life' business processes and the runtime execution environment supported by IT infrastructure.
• Similar BPM principles are applied in achieving intra-application, application to application, system to system, as well as business to business integration.
In spite of the abstraction value that BPM provides through explicit articulation of process models, a seamless flow between the data, application and process layers has not been fully realized in mainstream enterprise software, thus often leaving process models disconnected from underlying business semantics captured through data and application logic. The result of this disconnect is disparity (and even conflict) in enforcing various rules and constraints in the different layers. In this paper, we propose to synergise the process and data layers through the introduction of data dependency constraints. These constraints can be modelled at the process level, thus providing the benefits of abstraction and clarity of business semantics. At the same time, we propose an automated translation of these constraints into DBMS native procedures. The simultaneous and consistent specification ensures that disparity between the process and data logic can be minimized. The remainder of the paper is organized as follows: We first present a detailed discussion on related work in section 2, which encompasses data dependency constraints in general as well as the management of data dependency and data flow in BPMSs. We will then
introduce in section 3, two types of data dependency constraints that characterize certain notions of data dependency in business processes. These are presented within a typical architecture of a BPMS. We will demonstrate that the constraints cannot be easily modelled in current business process modelling languages and will provide a discussion on their properties. We present in section 4, an automated translator of the constraints into DBMS native procedure for constraint enforcement in the data layer, and finally discuss the main contributions and future extensions of this work in section 5.
2 Related Work
Historically, one of the first successes in data integrity control was the invention of referential integrity enforcement in relational database systems [1]. The generality of this solution, based on a formal definition of a class of constraints, made this data management concept uniformly applicable (independently of application domain), thus eliminating large numbers of data integrity errors. Since then, data dependency constraints have been widely studied, with many classes of constraints introduced. In [2] the authors proposed a class of integrity constraints for relational databases, referred to as conditional functional dependencies (CFDs), and studied their applications in data cleaning. In contrast to traditional functional dependencies (FDs), which were developed mainly for schema design, CFDs aim at improving the consistency of data by enforcing bindings of semantically related values. The authors in [3] classify the recent research on CFDs and indicate that it has generally followed three directions. The first involves reasoning about CFDs' axiomatization, consistency, and implications, such as [2], [4]. The second studies the problem of estimating the confidence of CFDs by employing CFDs to characterize the semantics of data [2], [3]. The third uses CFDs for data cleaning [5]. In this paper, we aim to extend the data dependency constraints of process enabled systems through the use of CFDs in business process models. In general, the process model is a definition of the tasks, ordering, data, resources, and other aspects of the process. Most process models are represented as graphs mainly focussed on the control flow perspective of activity sequencing and coordination, such as Petri nets [6], [7], [8]. In addition, some process models (often in the scientific rather than business domain) focus on the data flow perspective of the process, i.e. data-centric approaches. The importance of a data-centric view of processes is advocated in [9] and [10]. In [9], the authors promote an "object view" of scientific workflows where the data generated and used is the central focus; while [10] investigates "attribute-centric" workflows where attributes and modules have states. Further, a mixed approach was proposed by [11] which can express both control and data flow. [12] and [13] use a product-driven case handling approach to address some concerns of traditional workflows, especially with respect to the treatment of process context or data. [14] proposed document-driven workflow systems where data dependencies, in addition to control flows, are introduced into process design in order to make process design more efficient. Another approach, called the Data-Flow Skeleton Filled with Activities (DFSFA), is proposed in [15] to construct a workflow process by automatically
building a data-flow skeleton and then filling it with activities. The DFSFA approach uses data dependencies as the core objects without mixing data and activity relations. [16] proposes a conceptual framework for advanced modularization and data flow by describing a workflow language which introduces four language elements: control ports, data ports, data flow, and connectors. In their view, a workflow's data flow is specified separately from its control flow by connecting tasks' data ports using a first-class data flow construct. Also worth mentioning is the work on data flow patterns [17], in particular the internal data interaction pattern named Data Interaction – Task to Task (Pattern 8). It refers to the ability to communicate "data elements" between one task instance and another within the same case, and provides three approaches, namely a) Integrated Control and Data Channels, b) Distinct Control and Data Channels, and c) No Data Passing, which uses a global shared repository. [18] argues that the activity-centred paradigm of existing WfMSs is too inflexible to provide data object-awareness and discusses major requirements needed to enable object-awareness in process management systems. Despite these contributions from research in modelling the data flow perspectives of business processes, widely used industry standards such as BPMN only show the flow of data (messages) and the association of data artefacts to activities; that is, they do not express the data flow (logic) below the Data Object level. It can be observed that data artefacts can have interdependencies at a low level of granularity which, if not explicitly managed, can compromise the integrity of the process logic as well as corrupt underlying application databases. We propose to use concepts and contributions from research in data integrity management through data dependency constraints to overcome this limitation in business process models. Our work is focussed on a specific class of data dependencies that have the capacity to not only enrich the process model, but also provide a means of enforcing the constraints across all layers of the process enabled enterprise system, namely process, application and data. The next section details our approach.
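To make the notion of a conditional functional dependency referenced above concrete, the following hedged sketch checks a single constant CFD against a set of tuples; the dictionary-based representation and the example values are simplifying assumptions and do not reproduce the formalism of [2].

```python
# Sketch (assumed representation): a constant conditional functional dependency
# (CFD) is given by a condition pattern and the attribute values it implies;
# tuples matching the condition but not the implied values violate the CFD.

def cfd_violations(tuples, condition, implied):
    """condition / implied: dicts of attribute -> required constant value."""
    return [t for t in tuples
            if all(t.get(a) == v for a, v in condition.items())
            and any(t.get(a) != v for a, v in implied.items())]

bookings = [
    {"date": "10/Mar/09", "hotel": "Hilton", "discount": "10%"},
    {"date": "10/Mar/09", "hotel": "Hilton", "discount": "0%"},   # violates the CFD
    {"date": "11/Mar/09", "hotel": "Hilton", "discount": "0%"},   # condition not matched
]
print(cfd_violations(bookings,
                     condition={"date": "10/Mar/09", "hotel": "Hilton"},
                     implied={"discount": "10%"}))
```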
3 Data Dependency Constraints for Process Models

We present in Figure 2 a reference BPM architecture to provide the background for managing data dependency constraints through BPM. Our aim is to demonstrate the above-mentioned layers, namely Data logic, Business or Application logic and Process logic, within the architecture:
• The Data logic components provide repositories for business and corporate data as well as documents, mails, content management system data, etc.
• The Business logic components provide business application functionality through various types of applications; the coordination of these applications is achieved through the web-based tools provided by the BPM Suite or via custom-developed interfaces with the BPM tools.
• The BPM Suite provides the core BPM functionalities, which include two main parts: the Business Modeller and the Workflow Application Service.
Fig. 2. BPM reference architecture
In Figure 2, it is worth identifying the differences between Process Relevant Data and Application Data. Process Relevant Data is used by the Business Process Management System (in addition to other uses) to determine the state transitions of a process instance, for example pre- and post-conditions, transition conditions, etc. Such data may affect the choice of the next activity and may be manipulated by related applications as well as by the process engine. On the other hand, Application Data is application specific and strictly managed by the applications supporting the process instance. In terms of the data flow pattern Data Interaction – Task to Task in [17], Process Relevant Data corresponds to the third category, i.e. the use of a “Global Shared Repository”.

In the context of the above architecture, we propose to introduce the modelling and enforcement of two classes of data dependency constraints through the BPMS. We identify these as the Change Dependency Constraint and the Value Dependency Constraint. To understand the semantics behind the constraints, consider the following scenario. Assume a hotel booking system introduces special booking rates into the process, where three specific data elements entered in the activities Select City, Select Hotel and Check Out are named A, B and C respectively. Suppose we would like to specify a constraint that ensures that if A and B take certain values, the value of C is predetermined. For example, if Date = “10/Mar/09” and Hotel = “Hilton”, then Discount = 10%. This constraint would guarantee that the data in the applications associated with the process is synchronized with the process definition, while also allowing the “condition values” to be modified dynamically without changing the process definition. The specified values do not dictate the value of every instance, but rather the options among the
Fig. 3. Example scenario
possible combinations; the dashed line in Figure 3 implies this weak relationship. Current business process models do not have the features to support the specification of such a “value dependency”.

Similarly, we can observe that changes in data values can also have dependencies. For example, assume the user decides to spend his/her membership reward points for a further discount; the system would then automatically deduct the amount from the balance at checkout. This constraint enforces the automatic calculation of the Process Relevant Data “Reward $”; we term such a constraint a “change dependency”.

In the following discussion, we provide a means of specifying the constraints using the well-known notion of data tableaus borrowed from database integrity constraint management. The tableaus allow us to specify constraints in a concrete manner, as well as reason about their properties. However, we first need to present some background concepts on process schema and instance, as well as data tableaus.

Definition 1 (Process Schema). A tuple P = (N, C, D, L, K) is called a process schema with:
• N is a finite set of Nodes. Each node n ∈ N has a type T ⊆ E ∪ A ∪ G, where E, A and G are pairwise disjoint (E ∩ A = φ, A ∩ G = φ, E ∩ G = φ); E denotes the set of Event types (e.g., Start, End, etc.), A denotes the set of Activity types (e.g., User, Manual, Service, etc.) and G denotes the set of Gateway types (e.g., AND-SPLIT (Fork), XOR-SPLIT (Choice), AND-JOIN (Join), XOR-JOIN (Merge)).
• C is a set of Connecting objects or Control Flow. The connect relation C ⊆ N × N is a precedence relation (note: nsrc → ndest ≡ (nsrc, ndest) ∈ C).
• D is a set of process data elements. Each data element d ∈ D has an atomic data type (e.g., String, Number, etc.).
• L ⊆ N × D is a set of data links between node objects and data elements. For the purpose of this research, we assume the link exists at the point of node completion, i.e., the value of a data element equals the value stored in the database at the end of the activity (node).
• Each link l ∈ L can be represented by a pair <n, d>, with node[l] or n[l] = n, where n ∈ N is the node of l, and data[l] or d[l] = d, where d ∈ D is the data element of l.
• K: C → TC(D) ∪ φ assigns to each control flow an optional transition condition, where TC(D) denotes the set of all valid transition conditions on data elements from D.

Definition 2 (Process Instance). A process instance I is defined by a tuple (PI, NSPI, VPI), where:
• PI := (NI, CI, DI, LI) denotes the process schema of I, which is determined during runtime, where NI denotes the node set, CI the control flow set, DI the data element set and LI the data element link set.
• NSPI describes the node states of I: NSPI: NI → {Initial, Scheduled, Commenced, Completed}.
• VPI denotes a function on DI, formally VPI: DI → DomDI ∪ {Undefined}. This means each data element d ∈ DI has a value either from the domain DomDI or the value Undefined if it has not been stored yet.
• In particular, we denote by V[LI]PI the values of the data element link set of process instance PI, which is a function on LI, formally V[LI]PI: LI → DomDI ∪ {Undefined}, with LI = NI × DI.

Definition 3 (Data Tableau). A data tableau TLI is a tableau with all attributes in L’, referred to as the value pattern tableau of L’ or V[L’], where for each l in L’ and each tuple t ∈ TLI, t[l] is either a constant in the domain Dom(d) of l, or an unnamed variable ‘_’.
• L’ ⊆ L; therefore the maximum number of attributes in TLI equals |L|.
• t[l] = ‘_’ means that the value can be anything within Dom(d) ∪ {Undefined}.
• For example, a tableau can be presented as in the following Table 1.

Table 1. Tableau for Definition 3
<n1, d1>  <n2, d1>  <n3, d1>
_         _         _
10        10        10
This table implies that the value of d1 can be anything within Dom(d1) throughout n1 to n3, but if <n1, d1> = 10, then the values of d1 at n2 and n3 must remain consistent.

3.1 Constraint Specification

Using the notion of data tableaus from above, we can specify value and change dependency constraints as shown in Figures 4 and 5 respectively. In Figure 4 a data dependency is defined through the value relationship between multiple data items. The Tableau T represents the conditional values of Hotel, Date and
Discount at the tasks SelectHotel, SelectDate and Checkout respectively. The Tableau suggests a conditional rule such that if Hotel equals Hilton and Date equals 10/Mar/2009, then the Discount will be 10%; otherwise the data will not be accepted. In this example, instance 1 does not satisfy this rule, therefore its data is invalid. In Figure 5 another type of data dependency is given, which defines the conditions under which a data value can be changed. The example defines the conditional values of Reward$ at “SelectDate”, “Use Reward$”, “Don’t Use Reward$” and
“Checkout” respectively. The Tableau suggests a conditional rule such that if Reward$ at SelectDate equals $M, the path “Use Reward$” is taken and BookingFee$ equals $F, then Reward$ at Checkout equals $(M-F). In this example, instance 4 does not satisfy this rule, therefore its data is invalid.

Together, the above two examples demonstrate a new type of constraint which we collectively refer to as a “Conditional Data Dependency”. We define a Conditional Data Dependency as follows:

Definition 4 (Conditional Data Dependency or CDD). A conditional data dependency φ is a pair (F: X → Y, T), where
• X, Y ⊆ L are sets of links,
• F: X → Y is a standard Data Link Dependency; F ⊆ L × L is a precedence relation (note: lfrom → lto ≡ (lfrom, lto) ∈ F),
• alternatively, we can represent a data link dependency f as <ni, dp> → <nj, dq>, where node[lfrom] = ni, data[lfrom] = dp, node[lto] = nj, data[lto] = dq,
• T is a tableau with all attributes in X and Y, referred to as the pattern tableau of φ, where for each l in X or Y and each tuple t ∈ T, t[l] is either a constant in the domain Dom(d) of l, or an unnamed variable ‘_’.

In particular, we define:
Definition 5 (Value Dependency Constraint). A Value Dependency Constraint φ is a pair (F: X → Y, T), where for all <ni, dp>, <nj, dq>, … in X and Y, ni ≠ nj implies dp ≠ dq. This means the Tableau T defines the relationships between the values of multiple data elements.

Definition 6 (Change Dependency Constraint). A Change Dependency Constraint φ is a pair (F: X → Y, T), where for all <ni, dp>, <nj, dq>, … in X and Y, ni ≠ nj implies dp = dq. This means the Tableau T defines the changes of value of the same data element.

3.2 Constraint Analysis

We observe that the constraint specification exhibits certain properties, namely Subset, Transitivity, Union, Decomposition and Pseudo-transitivity. Understanding these properties is essential to provide a non-redundant and conflict-free specification. Although it is not the aim of this paper to present a detailed analysis of the constraints or verification algorithms, we present below a summary of the properties in order to better understand the semantics of the constraint specification.

Subset: One Conditional Data Dependency can subsume another. Given two CDDs, F1: [X1 → Y1], T1 and F2: [X2 → Y2], T2, F1 ⊇ F2 iff X1 ⊇ X2 and Y1 ⊇ Y2, and ∀ tuples t2 ∈ T2, ∃ tuple t1 ∈ T1 such that t1 ⊇ t2.

Transitivity: Given two CDDs, F1: [X → Y], T1 and F2: [Y → Z], T2, we can derive F3: [X → Z], T3 such that ∀ tuples t2 ∈ T2, ∃ tuple t1 ∈ T1 such that t1[Y] = t2[Y]. Therefore, from the property of CDD Transitivity, we can define a new operator ⊕ which merges two CDDs into one.
⊕ Merge: Given two CDDs, F1: [X1 → Y1], T1 and F2: [X2 → Y2], T2, F1 ⊕ F2 = F3: [X3 → Y3], T3 iff X3 = X1 ∪ X2 and Y3 = Y2 and ∀ tuples t3 ∈ T3, ∃ tuple t1 ∈ T1 and t2 ∈ T2 such that t3[X3] = t1[X1] and t1[Y1] = t2[Y2] and t3[Y3] = t2[Y2].

Union: Given two CDDs, F1: [X → Y], T1 and F2: [X → Z], T2, we can derive F3: [X → YZ], T3 such that ∀ tuples t3 ∈ T3, ∃ tuple t1 ∈ T1 and t2 ∈ T2 such that t1[X] = t2[X] = t3[X] and t3[YZ] = t1[Y] ∪ t2[Z]. Therefore, from the property of CDD Union, we can define a new operator ⊗ which merges two CDDs into one.

⊗ Join: Given two CDDs, F1: [X1 → Y1], T1 and F2: [X2 → Y2], T2, F1 ⊗ F2 = F3: [X3 → Y3], T3 iff X1 = X2 = X3 and Y3 = Y1 ∪ Y2 and ∀ tuples t3 ∈ T3, ∃ tuple t1 ∈ T1 and t2 ∈ T2 such that t1[X1] = t2[X2] = t3[X3] and t3[Y3] = t1[Y1] ∪ t2[Y2].

3.3 Summary

To summarize the above, we are proposing a new type of data dependency constraint to model dependencies within process relevant data. We call such a constraint a “Conditional Data Dependency”. The CDD extends the current process modelling specification by introducing a tableau to specify the data dependency. Such a constraint allows us to define business rules that ensure data integrity from the process layer down to the data layer.

While the specification of CDDs allows us to specify additional data constraints, the correctness of the specification is also important. A number of conflicts may emerge in the constraint specification, namely a) Invalid Data Link Attributes, b) Conflict between a Data Link Dependency and the Control Flow, c) Conflict between the tuples within the Tableau, d) Conflict between Data Flow and Control Flow, etc. For example, consider a CDD ψ1 = (<Select Date, Date> → <Checkout, Discount>, T1), where T1 consists of two pattern tuples (10/Mar/2009, 10%) and (10/Mar/2009, 20%). Then no instance Pi can possibly satisfy ψ1. Indeed, for any tuple t in Pi, the first pattern requires that if the date equals 10/Mar/2009 the discount will be 10%, which contradicts the second pattern value of 20%. Such a conflict is a typical conflict between the tuples within the Tableau. Since it is not within the scope of this paper to discuss methods for detecting and resolving these constraint conflicts, we refer to the works in [19] and [2], which can be used as a road map for the implementation of verification algorithms. The design of specific verification algorithms for the proposed constraints is also part of our future work. In the remainder of the paper we assume that a non-redundant and conflict-free constraint specification is available to the BPM system in the form of a data tableau.
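To make the tableau semantics more concrete, the following is a minimal illustrative sketch (in Java) of how a pattern tableau with wildcard entries could be checked against instance data, and how the simplest form of tuple-level conflict, the one exhibited by ψ1, could be detected. It is not part of the Chameleon prototype described in the next section, all class and method names are hypothetical, and a full verification algorithm would also need to handle wildcards in the antecedent.

import java.util.*;

/** Sketch of a CDD pattern tableau with wildcard entries ('_'), split into X (antecedent) and Y (consequent). */
class PatternTableau {
    final List<String> xAttrs, yAttrs;                 // e.g. ["<SelectDate, Date>"], ["<Checkout, Discount>"]
    final List<String[]> xTuples = new ArrayList<>();  // pattern values over X, one entry per tuple
    final List<String[]> yTuples = new ArrayList<>();  // pattern values over Y, one entry per tuple

    PatternTableau(List<String> xAttrs, List<String> yAttrs) {
        this.xAttrs = xAttrs; this.yAttrs = yAttrs;
    }

    void addTuple(String[] xValues, String[] yValues) {
        xTuples.add(xValues); yTuples.add(yValues);
    }

    // '_' matches any value; a constant matches only itself
    private static boolean matches(String[] pattern, List<String> attrs, Map<String, String> inst) {
        for (int i = 0; i < pattern.length; i++) {
            String p = pattern[i], v = inst.get(attrs.get(i));
            if (!"_".equals(p) && !p.equals(v)) return false;
        }
        return true;
    }

    /** CDD semantics: whenever the X-part of a tuple matches the instance, its Y-part must match too. */
    boolean satisfiedBy(Map<String, String> instanceValues) {
        for (int t = 0; t < xTuples.size(); t++) {
            if (matches(xTuples.get(t), xAttrs, instanceValues)
                    && !matches(yTuples.get(t), yAttrs, instanceValues)) {
                return false;
            }
        }
        return true;
    }

    /** Simplified conflict check: identical constant X-parts that prescribe different Y-parts. */
    List<int[]> conflictingTuplePairs() {
        List<int[]> conflicts = new ArrayList<>();
        for (int a = 0; a < xTuples.size(); a++) {
            for (int b = a + 1; b < xTuples.size(); b++) {
                if (Arrays.equals(xTuples.get(a), xTuples.get(b))
                        && !Arrays.equals(yTuples.get(a), yTuples.get(b))) {
                    conflicts.add(new int[]{a, b});
                }
            }
        }
        return conflicts;
    }
}

With the two pattern tuples of ψ1 over <SelectDate, Date> and <Checkout, Discount>, conflictingTuplePairs would report the pair, since both tuples share the constant antecedent 10/Mar/2009 but prescribe different discounts.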
4 Implementation and Evaluation

In this section we demonstrate how a proof of concept can be built for the above approach. The objective is to demonstrate the specification of the constraints at the process level, and their enforcement at the data level. We present the proof of concept through a lightweight implementation of a workflow engine, Chameleon, built using Microsoft Windows Workflow Foundation (WWF). Figure 6 shows an overview of the Chameleon 3 architecture.
Fig. 6. Chameleon 3 architecture
Fig. 7. WWF Rule Set for Value Dependency
It can be observed that the architecture is derived from the reference BPM architecture introduced in Section 3. The Chameleon 3 Process Modelling Tool is built on the Windows Workflow Foundation Designer Tools with extended functionality. One of the most useful features of Windows Workflow Foundation technology is that it allows us to implement customized User Activity types, which enables us to give activities extra properties and functionality; such extensions allow us to access the underlying Application Data and Process Relevant Data at the modelling level.

At design time, the WWF GUI process designer tool has access to any pre-built User Activity library we implement. We then use the designer tool to create the process model and export the process definition to an XML-like language called XOML. This process definition is then imported into the Chameleon Suite using the Web Admin tool hosted by Microsoft IIS Server. The WWF services interpret the XML process definition and store the model as a process template in a Process Repository database hosted by Microsoft SQL Server. At runtime, the Admin User creates and manages instances, for which the engine provides scheduling services, persistence of a workflow's state, transaction handling and other services, such as mechanisms for communicating with software outside the workflow. The Workflow Users then use the web-interfaced Chameleon Workflow Application and perform tasks on the application or web forms corresponding to the work items on their worklists.

In order to support the specification and implementation of the proposed Conditional Data Dependency, we developed the following enhancements to the Chameleon 3 architecture.

1. A Conditional Data Dependency Designer Tool that offers an intuitive user interface for specifying the constraints. This tool is an add-on to the existing WWF Designer that captures the constraints as tableaus, as shown in the previous section, and exports them to the required XML format.

2. An automatic translator that converts the specified CDDs into rules in XML format and subsequently inserts the WWF built-in “Policy Activity” into the XML file before the model is imported into the Chameleon Suite.

3. A simple procedure to translate a CDD constraint into a WWF native rule set is as follows (an illustrative sketch of this translation is given after this list):

   begin
     For each tuple in Tableau T
       Writeln(If)
       For each data value in X
         Writeln(dp = Vp)
         If not last value in X
           Writeln( and)
         End If
       End For
       Writeln(Then)
       For each data value in Y
         Writeln(dq = Vq)
         If not last value in Y
           Writeln( and)
         End If
       End For
     End For
   End.
4. The above procedure generates a WWF native rule which can be processed by the WWF service. This service automatically generates the underlying data validation code; hence data integrity is enforced at runtime as intended. Figure 7 shows an example of the translation of the Value Dependency described in Figure 4, which generates a rule set for the native WWF “Policy Activity”. A policy activity is simply a programmatic check: if a condition holds, then a specified action is executed.
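As a rough illustration of the translation logic in item 3 above, the following sketch emits the if-then text for a single tableau tuple. It is only a sketch in Java; the actual prototype generates WWF rule sets in XML on the .NET stack, and the class and method names here are hypothetical.

import java.util.List;

/** Illustrative translator from a CDD pattern tuple to an if-then rule string (hypothetical names). */
class CddRuleTranslator {

    /** Mirrors the procedure in item 3 for one tuple: "If x1 = v1 and ... Then y1 = w1 and ...". */
    static String translateTuple(List<String> xAttrs, List<String> xValues,
                                 List<String> yAttrs, List<String> yValues) {
        StringBuilder rule = new StringBuilder("If ");
        for (int i = 0; i < xAttrs.size(); i++) {
            rule.append(xAttrs.get(i)).append(" = ").append(xValues.get(i));
            if (i < xAttrs.size() - 1) rule.append(" and ");
        }
        rule.append(" Then ");
        for (int i = 0; i < yAttrs.size(); i++) {
            rule.append(yAttrs.get(i)).append(" = ").append(yValues.get(i));
            if (i < yAttrs.size() - 1) rule.append(" and ");
        }
        return rule.toString();
    }

    public static void main(String[] args) {
        // Value dependency of Figure 4: Hotel = Hilton and Date = 10/Mar/2009 implies Discount = 10%
        String rule = translateTuple(
                List.of("Hotel", "Date"), List.of("Hilton", "10/Mar/2009"),
                List.of("Discount"), List.of("10%"));
        System.out.println(rule);
        // Prints: If Hotel = Hilton and Date = 10/Mar/2009 Then Discount = 10%
    }
}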
5 Conclusions

One of the biggest challenges in current large-scale enterprise systems is overcoming the growing disconnect, and consequent disparity, between various parts of the system. In this paper, we have attempted to address the disparity in data integrity constraints that may arise due to a disconnect between business process models and the underlying databases and related applications. We have provided a means of specifying two types of data dependency constraints on process relevant data. Further, we have provided a proof of concept on how the constraint specification can be utilized simultaneously at the process as well as the data level, thus minimizing the opportunity for disparity between them. Although we briefly mentioned in this paper the importance of the analysis and verification of the constraints, ensuring the correctness of the specification (i.e. non-redundant and conflict-free) remains an interesting and challenging extension of this work. We also envisage further extension of the prototype implementation, namely Chameleon 3, to include a smart CDD builder.
References

1. Date, C.J.: Referential Integrity. In: Proc. of 7th Int. Conf. on VLDB, Cannes, France, September 9–11, pp. 2–12 (1981)
2. Fan, W., Geerts, F., Jia, X., Kementsietsidis, A.: Conditional Functional Dependencies for Capturing Data Inconsistencies. ACM Transactions on Database Systems 33(2), article 6 (2008)
3. Cormode, G., Golab, L., Korn, F., McGregor, A., Srivastava, D., Zhang, X.: Estimating the Confidence of Conditional Functional Dependencies. In: 35th SIGMOD International Conference on Management of Data, pp. 469–482 (2009)
4. Bravo, L., Fan, W., Geerts, F., Ma, S.: Extending dependencies with conditions. In: International Conference on Very Large Data Bases (2007)
5. Cong, G., Fan, W., Geerts, F., Jia, X., Ma, S.: Improving data quality: Consistency and accuracy. In: International Conference on Very Large Data Bases (2007)
6. Aalst, W.M.P., Hofstede, A.H.M.: Verification of workflow task structures: A Petri-net-based approach. Information Systems 25(1), 43–69 (2000)
7. Object Management Group/Business Process Management Initiative, Business Process Modelling Notation, http://www.bpmn.org/
8. Object Management Group, Unified Modelling Language, http://www.uml.org/
9. Ailamaki, A., Ioannidis, Y., Livny, M.: Scientific workflow management by database management. In: Int. Conf. on Statistical and Scientific Database Management, pp. 190– 199 (1998) 10. Hull, R., Llirbat, F., Simon, E., Su, J., Dong, G., Kumar, B., Zhou, G.: Declarative workflows that support easy modification and dynamic browsing. In: Int. Joint Conf. on Work Activities Coordination and Collaboration, pp. 69–78 (1999) 11. Medeiros, C., Vossen, G., Weske, M.: WASA: a workflow-based architecture to support scientific database applications. In: Revell, N., Tjoa, A.M. (eds.) DEXA 1995. LNCS, vol. 978. Springer, Heidelberg (1995) 12. Reijers, H., Limam, S., van der Aalst, W.M.P.: Product-based Workflow Design. Management Information Systems 20(1), 229–262 (2003) 13. Aalst, W.M.P., Weske, M., Grünbauer, D.: Case handling: a new paradigm for business process support. Data and Knowledge Engineering 53, 129–162 (2005) 14. Wang, J., Kumar, A.: A framework for document-driven workflow systems. In: van der Aalst, W.M.P., Benatallah, B., Casati, F., Curbera, F. (eds.) BPM 2005. LNCS, vol. 3649, pp. 285–301. Springer, Heidelberg (2005) 15. Du, N., Liang, Y., Zhao, L.: Data-flow skeleton filled with activities driven workflow design. In: 2nd Int. Conf. on Ubiquitous Information Management and Communication, pp. 570–574 (2008) 16. Joncheere, N., Deridder, D., Van Der Straeten, R., Jonckers, V.: A Framework for Advanced Modularization and Data Flow in Workflow Systems. In: Bouguettaya, A., Krueger, I., Margaria, T. (eds.) ICSOC 2008. LNCS, vol. 5364, pp. 592–598. Springer, Heidelberg (2008) 17. Russell, N., ter Hofstede, A.H.M., Edmond, D., van der Aalst, W.M.P.: Workflow Data Patterns: Identification, Representation and Tool Support. In: Delcambre, L.M.L., Kop, C., Mayr, H.C., Mylopoulos, J., Pastor, Ó. (eds.) ER 2005. LNCS, vol. 3716, pp. 353–368. Springer, Heidelberg (2005) 18. Künzle, V., Reichert, M.: Towards Object-Aware Process Management Systems: Issues, Challenges, Benefits. In: Halpin, T., Krogstie, J., Nurcan, S., Proper, E., Schmidt, R., Soffer, P., Ukor, R. (eds.) BPMDS 2009. LNBIP, vol. 29, pp. 197–210. Springer, Heidelberg (2009) 19. Sun, S., Zhao, J., Nunamaker, J., Sheng, O.: Formulating the Data-Flow Perspective for Business Process Management. Information Systems Research 17(4), 374–391 (2006)
Using Cases, Evidences and Context to Support Decision Making

Expedito Carlos Lopes1, Vaninha Vieira2, Ana Carolina Salgado3, and Ulrich Schiel1

1 Federal University of Campina Grande, Computing and Systems Department, P.O. Box 10106, Campina Grande, PB, Brazil
2 Federal University of Bahia, Computer Science Department, Salvador, BA, Brazil
3 Federal University of Pernambuco, Informatics Center, Recife, PE, Brazil
{expedito,ulrich}@dsc.ufcg.edu.br, [email protected], [email protected]
Abstract. Evidence-Based Practice (EBP) represents a decision-making process centered on justifications based on relevant information contained in scientific research proofs found on the Internet. Context is a type of knowledge that supports identifying what is or is not relevant in a given situation; however, the integration of evidence and context is still an open issue. Besides, EBP procedures do not provide mechanisms to retain strategic knowledge from individual solutions, which could facilitate the learning of decision makers while preserving the evidences used. On the other hand, Case-Based Reasoning (CBR) uses the history of similar cases and provides mechanisms to retain problem solutions. This paper proposes the integration of the CBR model with EBP procedures and Context to support decision making. Our approach includes an extended conceptual framework to support the development of applications that combine cases, evidence and context, preserving the characteristics of usability and portability across domains. An implementation in the area of crime prevention illustrates the usage of our proposal.

Keywords: Case-based reasoning, Evidence-based practice, Context, Decision making, Conceptual framework.
knowledge from individual solutions. This history could facilitate the learning of different end-users in the future, preserving the evidences used, since these can later be modified or removed from the Internet. On the other hand, an important model from the Artificial Intelligence area is Case-Based Reasoning (CBR). CBR uses the history of similar cases to support decision making, providing mechanisms to retain problem solutions in a Case Base [4]. Since both EBP procedures and CBR techniques are important to support decision-making, the integration of these two paradigms constitutes an interesting research topic for supporting problem solving, especially for complex problems.

Besides, according to Dobrow et al. [5], EBP procedures represent a decision-making process centered on justifications of relevant information. Defining relevant information is not an easy task. Context is a type of knowledge used to support the definition of what is or is not relevant in a given situation [6]. Applying EBP procedures to a particular patient case, for example, implies considering different contextual information regarding the generation of evidences and the patient case itself. Thus, “the two fundamental components of an evidence-based decision are evidence and context, and the decision-making context can have an impact on evidence-based decision-making” [5]. However, the integration of evidence and context is still an open issue and, in fact, evidence retrieval with contextual information can facilitate the reuse of evidence-based decision-making justifications involving similar situations. Systems that use context apply it to filter and share more useful information so that this information can meet users’ needs. Thus, context is a significant tool to optimize a system’s performance and to reduce search results. Filtering mechanisms avoid more explicit interactions between the user and the application [6, 7].

This paper proposes the integration of the Case-Based Reasoning model with Evidence-Based Practice procedures and the use of context to support filtering mechanisms in decision making when the solution is gathered outside the organizational environment through research evidence. Since EBP can be applied to different areas (e.g. Health in general, Education, Social Work, Crime Prevention and Software Engineering in Computer Science), we extended a conceptual framework that represents the integration of evidence and context to incorporate a case structure justified by evidence, preserving the characteristics of usability and portability in domains that use EBP. This framework supports system designers in the conceptual modeling phase, providing more agility, transparency and cohesion between models. To illustrate the usage of our proposal, we implemented the extended framework in the area of crime prevention.

The rest of the paper is organized as follows. The key concepts regarding the main themes of this work are described in Section 2. Section 3 presents the extended conceptual framework, using UML, and the integration of EBP and context into the CBR model. Section 4 presents the application of the framework in the area of Crime Prevention. Related works are described in Section 5. Finally, Section 6 presents our conclusions and directions for further work.
2 Background This section defines context and provides an overview of Evidence-Based Practice and Case-Based Reasoning.
2.1 Context

There are several definitions of context. A classical (and widely cited) definition is proposed by Dey and Abowd [8], which states that context is “any information that characterizes the situation of an entity, where this entity is a person, place or object considered relevant in the interaction between the user and an application”. Context can also be seen as a set of information items (e.g. concepts, rules and propositions) associated with an entity [6]. An item is considered part of a context only if it is useful to support the resolution of a given problem. This item corresponds to a contextual element, defined as “any data, information or knowledge that enables one to characterize an entity on a given domain” [6]. Contextual information is acquired in one of the following ways: (i) given by the user, whether from persistent data sources or from profiles; (ii) obtained from a knowledge base; (iii) obtained by means of deriving mechanisms; or (iv) perceived from the environment [9]. It is usually identified through the dimensions why, who, what, where, when and how [10].

Another important concept is related to the attention focus of the decision maker. One step in the task execution or problem-solving process is known as the focus. The contextual elements should have a relevant relationship to the focus of a human agent (or software agent). In general, the focus is what determines which contextual elements should be instantiated [10].

2.2 Evidence-Based Practice

According to Thomas and Pring [11], in general, information labeled as evidence is information whose collection addressed concerns about its validity, credibility and consistency with other facts or evidence. In relation to its credibility, the authors categorize evidence in three ways:
1. Based on professional practice, as in a clinical or crime examination;
2. Generated by a process involving scientific procedures with a proven history of producing valid and reliable results, for example a collection performed by a biomedical professional;
3. Based on published research that corresponds to critical reviews of the area, such as a randomized clinical trial.

“Evidence” in EBP, also called “research evidence”, corresponds to the third category above and means a superior type of scientific research proof, such as that generated through a systematic review1 at the highest level. These published studies are available in reliable databases, usually found on websites on the Internet, and are carried out by independent research groups [3]. This is the concept of evidence applied in this paper. Evidence-Based Practice (EBP) involves complex decision-making, based on available research evidence and also on characteristics of the actor of the problem, his/her situation and preferences.
1 A systematic review (SR) is a review that presents meticulous research and critical evaluations of primary studies (case study, cohort, case series, etc.), based on research evidence related to a specific theme. It contains analyses of qualitative results from studies conducted in distinct locations and at different times. A meta-analysis is an SR with qualitative and quantitative characteristics [2].
In Medicine, EBP's primary focus is to provide effective counseling to help patients with terminal or chronic illnesses make decisions in order to cure the illness, or to extend or increase the quality of their life [2]. What is objectively sought is “the integration of best evidence from research, clinical skill and preferences of the patient, regarding their individual risks and the benefits of proposed interventions” [3]. In crime prevention, EBP involves correctional practice that has been proven through scientific research, aimed at reducing the recidivism of offenders. EBP primarily considers the offender's risk and need principles, in addition to the motivation, treatment and responsibility principles [12]. The EBP focus in the education area is on improving the quality of research and evaluation of education programs and practices, and hence the diffusion of information from the educational research field to be used by professionals or policy makers [11].

We generalize the EBP steps as follows:
1. Transforming the need for information into a question that can be answered;
2. Identifying the best evidence to answer the question;
3. Critically analyzing the evidence to answer:
• Is it valid (appropriate methodology and proximity to the truth)?
• Is it relevant (size and significance of the observed effects)?
• Can it help (applicable in professional practice)?
4. Integrating the critical analysis with professional skills and the values and cultural aspects of the actor of the problem, answering:
• How much can the evidence help this particular actor (expectation from the suggested intervention)?
• Is it adaptable to the actor's goals and preferences (similarity between the sample of the study and the profile of the actor)?
• How much safety can be expected (test results presented in the document)?
5. Evaluating the efficiency and effectiveness of the results of each step for future improvement.

In step 1, the question is usually formulated with components called PICO: Problem (and/or actor), Intervention, Comparison of interventions, and Outcome [3]. In step 2, the identification of the best evidence is made mainly by considering the type of study, the information source provider, the data sources considered in the study, the types of intervention presented, results, references and sample. In steps 3 and 4, the questions were adapted from Heneghan and Badenoch [13], and their answers represent contextual information that supports decision-making.

2.3 Case-Based Reasoning

Case-Based Reasoning is a kind of reasoning that searches for the solution to a given problem through a comparative analysis of previous realities with a similar new reality [4]. A case is the primary knowledge element, structured as a combination of problem features and the actions associated with its resolution. It comprises three main parts: (i) description of the problem - in general, it presents the characteristics of its
occurrence, the intentions or goals to be achieved, and constraints (conditions that must be considered); (ii) solution - expresses the derived solution to the problem; and (iii) result - corresponds to feedback on what happened as a consequence of the implemented solution, including lack of success. The case can be enriched with other information, such as solving strategies (or a set of steps), justifications for the decisions that were taken, and system performance when handling the Case Base [14].

CBR systems use a structure for the representation and organization of cases, called Case Memory, which is formed by the Case Base and the mechanisms for accessing the base. To be more easily and quickly retrieved, cases are indexed according to a set of characteristics that represent an interpretation of a specific situation. Indexing is an instrument whose function is to guide the similarity assessment among cases [14]. Aamodt and Plaza [15] established the basic cycle of CBR processing, which can be described as: given a problem, obtain relevant previous solutions (retrieve), adapt them to the current problem (reuse), propose a solution with validation (revise), and store the new case with its solution (retain). Thus, CBR is not only a computational technique, but also a methodology for guided decision making [16].
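As a rough, generic illustration of this case structure and of the retrieve step of the cycle (not the authors' implementation, which relies on the JColibri framework described in Section 4.2), the following Java sketch stores cases as feature maps and ranks them by a simple exact-match similarity; all names are hypothetical.

import java.util.*;

/** Minimal case: problem features, solution and result (hypothetical structure). */
record Case(Map<String, String> problemFeatures, String solution, String result) {}

class CaseBase {
    private final List<Case> cases = new ArrayList<>();

    void retain(Case c) { cases.add(c); }                     // "retain" step of the CBR cycle

    /** "Retrieve" step: rank stored cases by the fraction of matching problem features. */
    List<Case> retrieve(Map<String, String> newProblem, int k) {
        return cases.stream()
                .sorted(Comparator.comparingDouble((Case c) -> -similarity(c.problemFeatures(), newProblem)))
                .limit(k)
                .toList();
    }

    private static double similarity(Map<String, String> stored, Map<String, String> query) {
        if (query.isEmpty()) return 0.0;
        long matches = query.entrySet().stream()
                .filter(e -> e.getValue().equals(stored.get(e.getKey())))
                .count();
        return (double) matches / query.size();               // simple exact-match feature similarity
    }
}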
3 Cases, Evidences and Context to Support Decision Making

In order to integrate the three concepts and apply them as a guide for decision making, we first extend the conceptual framework presented in Lopes et al. [17], which represents evidence and context, to incorporate a case structure justified by evidences. This integration facilitates the reuse of evidence already applied and, consequently, the basic CBR cycle needed to be adapted, as will be shown in this section.

3.1 Incorporating Case into the Conceptual Framework

The primary aim of this conceptual framework is to provide a class structure that represents information related to EBP procedures, while taking into consideration information about its decision-making context. The domain analysis was done in juridical, medical and educational environments, and includes: bibliographical research, specific legislation research, analysis of real cases collected and interviews with decision-makers. Figure 1 presents the extended conceptual framework.

The classes that involve context are based on the definitions given by Vieira [6]. The focus is treated as an association of a task with an agent, which has a role in problem resolution. ContextualEntity represents the entities of the application conceptual model and is characterized by at least one contextual element. A contextual element is a property that can be identified by a set of attributes and relationships associated with a ContextualEntity [6]. The association between Focus and ContextualElements determines what is relevant for a focus. Characteristics attributed to the dimension (ContextType) and the method of acquiring contextual elements (AcquisitionType) are considered in the framework. Contextual sources (SourceType) may be internal or external to the decision-making environment (e.g., the patient's medical records, a document with evidence obtained from websites).
Fig. 1. The conceptual framework extended to integrate cases, evidences and context
Regarding EBP and its representation classes, the starting point is the observation of a problem, motivated by an actor, to be decided upon by an agent. Each problem is associated with an inquiry that is initiated by a formulated question (see step 1 of the EBP procedures in Section 2.2) and completed with a self-evaluation of the research performance and suggestions for the future (see step 5 of the EBP procedures), whose information is instantiated in the Research class. Each domain in which EBP is applied has a list of different types of questions (QuestionType), for example: “diagnosis” and “prognosis” in the medical area, “drug testing” and “occurring disorders” in the area of crime prevention, and “educational research” in education.

During the research for evidences, several searches can be performed to retrieve documents. For the Seek class, the expression and the type of search (SeekType) - “title”, “author” or “subject” - must be present. InformationSource represents the independent research groups that generate documents with evidence, such as the Cochrane Collaboration (medical area) and the Campbell Collaboration (areas of education and crime prevention). Springer Verlag does not generate evidence, but hosts documents with evidence.
Each document presents a type of study that can occur in all domains (e.g. systematic review, case study) or that is more present in a specific domain (cohort - in the medical area; narrative - in crime prevention; action-research - in education). Systematic review and meta-analysis are studies of second degree; the remainder are of first degree [2]. The association between type of study and degree of study is represented by the type attribute (Study_Degree) in the Document class. In the medical area, Evidence-Based Medicine Guidelines are clinical guidelines for primary care combined with the best available evidence. The framework is extendible from the perspective of using adapted guidelines as a type of study.

After selecting the found evidences, the agent (decision maker) will choose the one that seems the most appropriate (step 2 of EBP), which is instantiated in the Evidence class. The result of the critical analysis - in other words the validity, relevance and applicability of the best evidence (step 3 of EBP) - corresponds to contextual information. Relevance is a contextual element in Document, while applicability (practical utility) is in Evidence. Thus, Document and Evidence are specializations of ContextualEntity.

The Intervention class is the result of an association among the Problem, Actor and Evidence classes. It contains a description of a decision made (intervening solution) where information about the associated classes has been considered, including preferences, values and cultural aspects (e.g. conduct, behaviour) of the actor with the presented problem (step 4 of EBP). A preference is a contextual element and hence Actor is a specialization of ContextualEntity.

Regarding the case structure, highlighted in Figure 1, the Case class is composed of the Problem and Intervention component classes and aggregates the Result and Learning classes. Intervention represents the case solution and it has a relation with the Evidence class, which corresponds to the justification of the solution. The Result and Learning classes are extensions of the framework. Result contains information about the obtained outcome and its analysis. As research is part of the learning process, the Research class is a component of the Learning class and holds the descriptions of the solution procedures and of the search performance in the Case Base.

3.2 Adapting EBP Procedures and Context to the CBR Cycle

Aiming to incorporate EBP procedures into the CBR cycle, we have considered some points. In order to support the reuse of evidences in domains that use EBP, the classical case structure needs to be extended to incorporate the justification of the solution (the research evidences found), becoming a case justified by evidence. Besides, information about learning in EBP (retrieved document history, the decision maker's self-evaluation and suggestions for the future) must be present in the new case structure. In relation to contextual elements, information related to the actor's profile (e.g. behavior) and preferences (e.g. treatment options and durability in Evidence-Based Medicine) and to the decision maker's profile (e.g. expertise) will be considered. Consequently, the CBR cycle needs to be adapted according to Figure 2, which contains the following activities:

Retrieve and Filter - obtain cases with problems similar to the new case and apply filtering mechanisms to show cases solved by decision makers with profiles (e.g. expertise or areas of interest) similar to that of the new case's decision maker;
Fig. 2. The adapted CBR cycle
Transform - build the PICO question for the presented problem. It corresponds to step 1 of the EBP;

Site Retrieval - it comprises: (i) the retrieval of documents with found evidences regarding the keywords of the PICO question, applying Information Retrieval techniques with similarity metrics and ranking schemes; and (ii) the selection of the best evidence, considering information from the source provider and aspects presented in the document (e.g. type of study, evidence, data sources, etc.). It corresponds to step 2 of the EBP;

Evaluate Critically - during this activity, the methodological aspects and the relevance of the results of the studies presented in the document must be evaluated. In addition, the practical applicability of the best evidence must be considered, corresponding to step 3 of the EBP;

Integrate - the indication of evidence adaptation, based on the intervention proposed in the documents, and the measures of expectation and safety of this adapted intervention must be integrated with the actor's goals and preferences - this is step 4 of the EBP;

Reuse - this is related to the construction of new solutions. For this it is necessary to consider: (i) the solutions of similar cases (or parts of them); (ii) the history of the same actor's cases (recurrence makes the solution more complex); (iii) the evidence integrated with the actor's goals and preferences; and (iv) the actor's profile;

Revise - evaluate and test the recommended solution to determine its correctness, utility and robustness;
Self-evaluate - a self-evaluation of the building of the PICO question, the evidence search performance and the choice of the best evidence - this is the last step of the EBP;

Retain - corresponds to the learned case that is added to the Case Base. Suggestions for future research, comments about strategies (or procedures) for the solution and the system performance over the Case Base (Retrieve and Filter activity) can be registered.
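A minimal sketch of the case-justified-by-evidence structure and of the Retrieve and Filter activity is given below. It assumes an in-memory case base, keyword overlap as the similarity criterion and an exact match on the decision maker's expertise as the contextual filter; all names are hypothetical, and the actual prototype is built on JColibri (Section 4.2).

import java.util.*;
import java.util.stream.Collectors;

/** Case justified by evidence: problem, intervention (solution), supporting evidence, result and learning. */
record EvidenceCase(String problemKeywords, String decisionMakerExpertise,
                    String intervention, String evidenceSummary,
                    String result, String learningNotes) {}

class RetrieveAndFilter {

    /**
     * Retrieve and Filter activity: keep cases whose problem keywords overlap the new problem,
     * then filter by the contextual element "expertise" of the decision maker.
     */
    static List<EvidenceCase> run(List<EvidenceCase> caseBase,
                                  Set<String> newProblemKeywords,
                                  String newDecisionMakerExpertise) {
        return caseBase.stream()
                .filter(c -> overlaps(c.problemKeywords(), newProblemKeywords))
                .filter(c -> c.decisionMakerExpertise().equalsIgnoreCase(newDecisionMakerExpertise))
                .collect(Collectors.toList());
    }

    private static boolean overlaps(String storedKeywords, Set<String> queryKeywords) {
        Set<String> stored = new HashSet<>(Arrays.asList(storedKeywords.toLowerCase().split("\\s+")));
        return queryKeywords.stream().map(String::toLowerCase).anyMatch(stored::contains);
    }
}

For instance, with the new case's judge expertise set to “drug crimes”, cases solved by judges with other expertises would be filtered out, mirroring the reduction illustrated later in Figure 6.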
4 Applying the Extended Conceptual Framework in the Area of Crime Prevention

In this section we present the application of the extended conceptual framework. First, we represent cases, evidences and context in an integrated way and, subsequently, we show an implementation of a case justified by evidence for the area of Crime Prevention. The Pernambuco state court (Brazil) was chosen for the application of the framework because of its pioneering work on “restorative justice” and “therapeutic justice”, themes inherent in Evidence-Based Crime Prevention. The main requirements are: to judge cases through judicial sentences, and to make interventions based on support programs for the involved participants with the objective of avoiding recidivism. This work corresponds to the second requirement.

4.1 Integrating Case, Context and Evidence in the Area of Crime Prevention

The instantiated framework is presented in Figure 3, enriched with stereotypes corresponding respectively to the concepts ContextualElement and ContextualEntity. With respect to the focus, each EBP procedure corresponds to a task. The following tasks were identified: (i) “make a juridical question”; (ii) “find the best juridical research evidence”, based on the designation of juridical site providers, types of study and search expressions associated with the given question; (iii) “make a critical analysis of the best evidence found”; (iv) “integrate the best evidence found with the values and preferences of the participant with the presented problem”; and (v) “do a self-evaluation of the judge's performance”, to measure all the tasks of EBP. “Translator” and “designer” are the respective roles for tasks (i) and (ii); “intervenor” for task (iv), and “evaluator” for the other tasks.

To model the first focus, we consider the JuridicalFact and Participant classes and the Victim and Defendant subclasses, which appear highlighted in the illustration. Figure 3 represents the modeling of all investigated foci. The association between Judge and JuridicalFact brings up the judges that make interventions based on support programs. The characterization of the problem is given through the constitution of the juridical facts and the circumstances that motivated the offender, which are represented in the JuridicalFact class. To facilitate information retrieval based on problems, key terms related to the juridical fact will be instantiated in the JuridicalFact class. The offender's personal data are represented in the Defendant subclass, inherited from Participant. In several cases there are also victims; thus, Defendant and Victim are specializations of Participant.
Fig. 3. The framework extended applied to the area of crime prevention
The formulation of a question is based on data from the participant, possible interventions (programs such as parent counseling, support for victims of crime, cyber abuse of children and adolescents, etc.) and desired results. The question and its corresponding type are instantiated in the JuridicalResearch class. The historic attribute in this class should include the number of documents accepted and rejected. Searches for evidences should mention the period of validity of the documents requested from each reliable site (start and end). For the ResearchedDocument class, the required attributes (besides the contextual elements) are: location (URI/URL), title, author, keywords, publication and sample of the study (participants, age interval, geographic and temporal aspects, etc.). Searches for secondary studies should be conducted on the Campbell Collaboration's and
Springer websites. Primary studies should be obtained from the websites of Courts (federal or state) and from reliable electronic journals in the country (JusNavigandi, National Association of Therapeutic Justice, etc.). The homePage attribute value is the reference to the JuridicalEvidProvider class, which holds judicial evidences. The Evidence class should contain a summary of the found evidences and the suggested interventions contained in the document. Information about the priority solution, which contains the proposals of evidence-based intervention, must be present in the RestorativeIntervention class.

The following contextual entities and elements were identified for the Crime Prevention area:
1. validity (ResearchedDocument) - indicates whether the document should be selected, based on its quality and methodological rigor;
2. relevance (ResearchedDocument) - indicates whether the set of outcomes in the document, often presented in statistical format, is consistent and significant;
3. applicability (Evidence) - indicates whether the evidence has practical utility in general;
4. sample (ResearchedDocument) - denotes contextual aspects of the presented study and serves to match against the participants' contextual information;
5. abilities (Participant) - represents the actor's skills (profile), and is used to find mutual affinities with intervention programs (e.g. revenue);
6. availability (Participant) - registers the availability preferences, measured in days and shifts. A participant with a good availability chart has more alternatives and higher chances of fulfilling the intervention on the schedule defined by the Judge;
7. adaptability (RestorativeIntervention) - indicates the degree of coherence in the application of the evidence to the profile (including abilities) and preferences (availability) of the participant;
8. safety (RestorativeIntervention) - denotes the percentage of safety that the decision maker has in applying the specific evidence to a particular participant;
9. expectation (RestorativeIntervention) - refers to the percentage of expected support from the use of the evidence in relation to the participant;
10. expertAffinity (Judge) - identifies a relation of expertise from the Judge's profile on a given subject matter (e.g. homicide), representing mutual affinities among judges;
11. subjectSimilarity (ResearchedDocument, Judge) - refers to the percentage of similarity between the keywords in a document and the subjects of interest of the Judge;
12. recurrence (Defendant, JuridicalFact) - indicates whether the defendant is a first-time offender or not; recurrence increases the complexity of the problem;
13. risk (Defendant) - comes from juridical and psychosocial evaluations (profile). Behavior data, conduct and the description of the fact, especially for recurrent cases, are the basis for measuring the degree of risk;
14. complexity (RestorativeIntervention) - comes from the juridical evaluation and represents the degree of difficulty that the judge had in solving the case and in indicating the intervention program. Recurrence and risk increase this element;
15. situation (JuridicalFact, RestorativeIntervention) - indicates whether the problem is ongoing or solved.
Fig. 4. Case representation model for the area of crime prevention
Considering cases, it was necessary to represent the case justified by evidence according to Figure 4, because the case-based reasoner depends heavily on the case structure and its contents to operate. The JuridicalFact and RestorativeIntervention classes represent the case problem and solution, respectively. The intervention (description in the RestorativeIntervention class) is based on the evidence that serves as justification (summary) for the solution. The outcome description and discussion are provided for in the Result class. The procedures of the interventions (intervProc) and the performance of the search in the Case Base (casebasePerf) are presented in the Learning class, which is complemented with data from the JuridicalResearch class.

4.2 Implementation

We present the aspects related to the implementation of an example adapted from a real case in the crime prevention domain involving an alternative penalty - a model for infractions of minor and moderately offensive potential (e.g. contravention or illegal weapon possession). It deals with a new modality, face-to-face restorative justice, in which a victim who suffered an assault with a weapon from an alcoholic offender receives support. In order to illustrate the technical viability of the proposed integration, a prototype was developed using the Java language. It interacts with the framework JColibri 2.1 [18], which includes filter-based retrieval (FilterBasedRetrievalMethod) for cases that satisfy a query expressed in a subset of SQL. JColibri is an open source implementation that provides extensions for thesaurus inclusion, textual and semantic searches, and techniques for information extraction, among other features. For case storage we used the database manager PostgreSQL, version 8.3.

Figure 5 presents some arguments used to find cases justified by evidence in the Court's Case Base. The similarity functions used in JColibri were, respectively, Maxstring (juridical fact description) and Equal (juridical fact participant - we used the initials of the authors' names). We applied Salton's cosine formula [19], used in Information Retrieval, for the keyword similarity between the formulated query and the retrieved documents with evidence. We performed two information retrievals. Without using contextual elements, the results, containing several cases, are presented in Figure 6a (the expertise d.c. means “drug crimes”, c.a.c. “crimes against child” and hom. “homicide”). When contextual information parameters were used as a filter, a considerable reduction was observed in the selected cases (see Figure 6b). The Judge's expertise in the new case is “drug crimes”.
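As a reminder, Salton's cosine formula scores a query q against a document d as cos(q, d) = (q · d) / (‖q‖ ‖d‖) over their term vectors. The sketch below is an illustrative Java implementation over simple term-frequency vectors; it is not the prototype's code, and the example strings are hypothetical.

import java.util.*;

/** Illustrative keyword similarity using Salton's cosine formula over term-frequency vectors. */
class CosineSimilarity {

    static Map<String, Integer> termFrequencies(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String term : text.toLowerCase().split("\\W+")) {
            if (!term.isEmpty()) tf.merge(term, 1, Integer::sum);
        }
        return tf;
    }

    /** cos(q, d) = (q . d) / (|q| * |d|); returns 0 when either vector is empty. */
    static double cosine(Map<String, Integer> q, Map<String, Integer> d) {
        double dot = 0.0;
        for (Map.Entry<String, Integer> e : q.entrySet()) {
            dot += e.getValue() * d.getOrDefault(e.getKey(), 0);
        }
        double normQ = Math.sqrt(q.values().stream().mapToDouble(v -> v * v).sum());
        double normD = Math.sqrt(d.values().stream().mapToDouble(v -> v * v).sum());
        return (normQ == 0 || normD == 0) ? 0.0 : dot / (normQ * normD);
    }

    public static void main(String[] args) {
        // Hypothetical PICO keywords vs. a retrieved document summary
        double score = cosine(
                termFrequencies("victim assault face-to-face restorative justice"),
                termFrequencies("effects of face-to-face restorative justice on victims of crime"));
        System.out.printf("cosine similarity = %.2f%n", score);
    }
}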
Fig. 5. Data for case retrieval with filtering mechanisms
Fig. 6. Retrieved cases justified by evidence: without filter (6a) and with filtering (6b)
Fig. 7. Data for searching evidences in Springer Verlag’s database
Fig. 8. Evaluate the best evidence found
Therefore, after analyzing the presented cases, the judge did not consider them sufficient to support a solution, and he searched for evidences on the Internet. The research began with the question containing: the problem and victim (a woman with a psychological problem who was assaulted), the intervention (face-to-face sessions), the comparison of interventions (face-to-face sessions versus conventional processes) and the outcome (beneficial effects). The sources Campbell Collaboration and Springer Verlag were chosen and their home pages were obtained. Figure 7 shows the data for the second search, regarding documents published between 2005 and 2010. As shown in Figure 8, the document with the best evidence found was evaluated, and its information was manually extracted from the websites and filled into the form. The integration of the best evidence found with the goal and preferences of the victim is presented in Figure 9.
Fig. 9. Evaluate the integration between evidence with goal and preferences of the victim
Fig. 10. Case generation
Data from the victim were entered and are compatible with the best evidence found. To conclude, the case generation is presented in Figure 10. To do this, the decision maker needs to enter the intervention, the results and data related to the learning process (whether relative to the EBP or to the solution procedures and Case Base performance). The victim agrees to participate in face-to-face meetings with the offender, provided that they take place at a previously scheduled time and in the presence of authorities. This case is justified because many of the presented defendants suffered from violence in the past, and crime victims could turn into offenders in the future [20].
5 Related Works In this section we present some related work on the combination of themes involving EBP, Context and CBR. Dobrow et al. [5] emphasize the treatment of evidence with context. In a theoretical approach about Evidence-Based Decision-Making for health policy, the authors present a conceptual framework regarding the role of context in the evidence introduction, interpretation and application for decision-making support. Kay et al. [21] describe ONCOR, an ontology- and evidence-based approach applied to contexts. They provide an approach to build ontology of places, devices and sensors in ubiquitous computing in building environment. Locations, activities, services and devices are considered in ONCOR in order to treat context history to model indoor pervasive computing places.
A work combining EBP with CBR presents a knowledge-based system that interacts with the Web, called CARE-PARTNER [22]. Its purpose is to support users in tasks involving the clinical care of cancer patients who have undergone transplants. The system applies reasoning over knowledge sources of expert committees, cases and guidelines for clinical practice. CARE-PARTNER also considers negative feedback for its learning effect. In [23], the authors developed a CBR-based music recommendation system that uses users' demographics and behavioral patterns, also considering their context. The system identifies the user and collects weather data from a web service. Then, the system retrieves the users (similar to the identified user) who listened to music in the same context (profiles and weather) to select music for recommendation. This work uses a database and case bases (available music, and the user's profile and listening history, respectively), and data from the Web.

These related works consider a combination of themes and were developed for specific domains, but none of them has the perspective of integrating the three concepts and providing extensibility to several domains.
6 Conclusions and Future Work This article proposes the extension of a conceptual framework that facilitates the development of applications centered on EBP, with the consideration of context, to incorporate the case concept and the reuse of solutions justified by evidence previously applied in domains that use EBP. It also proposes the incorporation of context and EBP procedures into the CBR processing cycle in order to support decision making. To do this, it was necessary to represent the integration of context and evidence in a classic case structure. The extended classes of the framework and the integration representation were presented and evaluated in the area of crime prevention. Contextual information related to the EBP in this domain was modeled. With a practical implementation for the Pernambuco state court, Brazil, we showed how CBR techniques and EBP procedures can be used to support a judge's decision making. In addition, we observed that using contextual information in the Case Base makes the retrieval and filtering mechanisms more effective. Future research encompasses: (i) the incorporation of mechanisms to support group decision making; (ii) the creation of a semi-automatic Evidence-Oriented Information Extractor; and (iii) the development of a computational tool for risk assessment.
References 1. Loriggio, A.: De onde vem os problemas. Negocio Editora, São Paulo (2002) 2. Friedland, D.J., Go, A.S., Davoren, J.B., Shlipak, M.G., Bent, S.W., Subak, L.L., Mendelson, T.: Evidence-Based Medicine: A Framework for Clinical Practice. McGraw-Hill, USA (1998) 3. Sackett, D.L., Straus, S.E., Richardson, W.S., Rosenberg, W., Haynes, R.B.: Evidence-based medicine: how to practice and teach EBM. Elsevier Health Sciences, New York (2001)
4. Pal, S., Shiu, S.C.K.: Foundations of Soft Case-Based Reasoning. Wiley-Intersciense Publication, New Jersey (2004) 5. Dobrow, M.J., Goel, V., Upshur, R.E.G.: Evidence-based health policy: context and utilization. Social Science & Medicine 58(1), 207–217 (2004) 6. Vieira, V.: CEManTIKA: A Domain Independent Framework for Designing ContextSensitive Systems. Doctorate thesis, Federal University of Pernambuco, Brazil (2008) 7. Bunningen, A.: Context Aware Querying - Challenges for data management in ambient intelligence. Doctorate thesis, University of Twente (2004) 8. Dey, A.K., Abowd, G.D.: A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction (HCI) Journal 16(2-4), 97–106 (2001) 9. Henricksen, K., Indulska, J.: Developing Context-Aware Pervasive Computing Applications: Models and Approach. Pervasive and Mobile Computing Journal 2(1), 37–64 (2006) 10. Brézillon, P.: Context modeling: Task model and practice model. In: Kokinov, B., Richardson, D.C., Roth-Berghofer, T.R., Vieu, L. (eds.) CONTEXT 2007. LNCS (LNAI), vol. 4635, pp. 122–135. Springer, Heidelberg (2007) 11. Thomas, G., Pring, R.: Evidence-Based Practice in Education. Open University Press, UK (2004) 12. Warren, R.K.: Evidence-Based Practice to Reduce Recidivism: Implications for State Judiciaries (2007), http://works.bepress.com/roger_warren/1 13. Heneghan, C., Badenoch, D.: Evidence-Based Medicine Toolkit. Blackwell Publishing Ltd., Oxford (2006) 14. Kolodner, J.: Case-Based Reasoning. Morgan Kaufmann Publishers, San Mateo (1993) 15. Aamodt, A., Plaza, E.: Case-Based Reasoning: foundational issues, methodological variations and system approaches. AI Communications 7(1), 39–59 (1994) 16. Belecheanu, R., Haque, B., Pawar, K.S., Barson, R.: Decision Support Methodology for Early Decision Making in New Product Development: A Case Based Reasoning Approach. University of Nottingham (1999), http://citeseer.ist.psu.edu/323023.html 17. Lopes, E., Schiel, U., Vieira, V., Salgado, A.: A Conceptual Framework for the Development of Applications Centred on Context and Evidence-Based Practice. In: Filipe, J., Cordeiro, J. (eds.) ICEIS 2010. LNBIP, vol. 73, pp. 331–347. Springer, Heidelberg (2011) 18. Bello-Tomás, J.J., González-Calero, P.A., Díaz-Agudo, B.: JColibri: An object-oriented framework for building CBR systems. In: Funk, P., González Calero, P.A. (eds.) ECCBR 2004. LNCS (LNAI), vol. 3155, pp. 32–46. Springer, Heidelberg (2004) 19. Salton, G.: Automatic Information Organization and Retrieval. McGraw-Hill, New York (1968) 20. Sherman, L.W., Strang, H., Angel, C., Woods, D.J., Barnes, G.C., Inkpen, N., Bennett, S.B., Rossner, M.: Effects of face-to-face restorative justice on victims of crime in four randomized, controlled trials. Journal of Experimental Criminology 2(3), 407–435 (2006) 21. Kay, J., Niu, W.T., Carmichael, D.J.: ONCOR: Ontology- and Evidence-based Context Reasoner. In: Intelligent User Interface – IUI 2007, pp. 290–293. ACM, Honolulu (2007) 22. Bichindaritz, I., Kansu, E., Sullivan, K.M.: Case-Based Reasoning in CARE-PARTNER: Gathering Evidence for Evidence-Based Medical Practice. In: Smyth, B., Cunningham, P. (eds.) EWCBR 1998. LNCS (LNAI), vol. 1488, pp. 334–345. Springer, Heidelberg (1998) 23. Lee, J.S., Lee, J.C.: Context Awareness by Case-Based Reasoning in a Music Recommendation System. In: Ichikawa, H., Cho, W.-D., Chen, Y., Youn, H.Y. (eds.) UCS 2007. LNCS, vol. 4836, pp. 45–58. Springer, Heidelberg (2007)
An Adaptive Optimisation Method for Automatic Lightweight Ontology Extraction Fabio Clarizia, Luca Greco, and Paolo Napoletano Natural Computation Laboratory Department of Information Engineering and Electrical Engineering University of Salerno, Via Ponte Don Melillo 1, 84084 Fisciano, Italy {fclarizia,lgreco,pnapoletano}@unisa.it http://nclab.diiie.unisa.it
Abstract. It is well known how the use of additional knowledge, coded through ontologies, can improve the quality of the results obtained, in terms of user satisfaction, when seeking information on the web. The choice of a knowledge base, as long as it is reduced to small domains, is still manageable in a semi-automatic mode. However, in wider contexts, where a higher scalability is required, a fully automatic procedure is needed. In this paper, we show how a procedure to extract an ontology from a collection of documents can be completely automatised by making use of an optimization procedure. To this aim, we have defined a suitable fitness function and we have employed a Random Mutation Hill-Climbing algorithm to explore the solution space in order to evolve a near-optimal solution. The experimental findings show that our method is effective. Keywords: Lightweight ontology, Topic model, Random mutation hill-climbing.
1 Introduction The idea of taking advantage of additional knowledge, which might be captured through a kind of informal knowledge representation, to retrieve relevant web pages in informational querying which are closer to user intentions and so improve the overall level of user satisfaction, has been broadly considered in [5]. The method proposed in [5] is mainly based on the definition of a kind of informal knowledge, which has been called an informal lightweight ontology and which can be semi-automatically inferred from documents through the probabilistic topic model described in [19]. Such a knowledge representation consists of concepts (lexically indicated by words) and the links between them, and it can be exploited to obtain a greater specialisation of user intention and thus reduce those problems inherent in the ambiguity of language. The proposed technique has been developed as a web-based server-side application and has been validated, first, through a comparison of retrieval performance (precision/recall) with a customised version of a web search engine (Google), and secondly, through a comparison based on human judgement carried out in order to evaluate the performance on a continuous scale. The experiments have been conducted in different contexts and, for each context, different groups of human beings have been asked
to assign judgments of relevance, for each web page collected, by unifying the results obtained from both the proposed search engine and the classic one. The results obtained have confirmed that the proposed technique certainly increases the benefits in terms of relevance, so better satisfying the user intentions. However, the most critical part of this approach concerns the ontology building and in particular two aspects: the choice of the informal knowledge (documents) feeding the builder and the identification of the best parameters of the algorithm which extracts the ontology itself. The first critical aspect can be solved by asking a group of experts to select documents which deal with the topic under test and which are adequate to answer informational queries on that topic [5]. The second aspect should be taken into account more formally. In fact, the topology of the graph representing an ontology changes with the value of three parameters (in section 4 we will discuss these parameters in greater depth). This implies that the score, which is assigned by the entire system to documents retrieved by means of that ontology, changes with these parameters. The choice of such values, as long as it is reduced to small domains, can still be taken in a semi-automatic mode, for instance by following an empirical reasoning, but if the automatic extraction of ontologies is to be scalable to wider contexts, then a fully automatic procedure is needed. In this work we focus on the aspect of the ontology building process related to the problem of the optimisation of the score function. To this aim we have defined a suitable fitness function and we have employed a Random Mutation Hill-Climbing algorithm to explore the solution space in order to evolve a near-optimal solution. The experimental findings show that our method is effective. The paper is organised as follows: in section 2 we briefly introduce the application developed and tested in [5], focusing in particular on the scoring process (Section 2.3); in section 3 we formally introduce the ontology building process and we consider the problem of parameter settings (Section 3.2); in section 4 we consider the problem of score optimisation, discussing in greater depth the score function embedded in the application and showing how the topology of the ontology can significantly vary with the parameters; and finally, in section 5, we show the experimental results that support our discussion and confirm that an optimisation procedure provides effective solutions.
2 iSoS: A Traditional Web Search Engine with Advanced Functionalities The proposed technique considers the exploiting of informal lightweight ontologies to improve the quality of the search results of a classic web search engine performing informational queries. In order to set up the appropriate framework to prove the effectiveness of our approach we have developed a web-based server-side application, namely iSoS lite, in Java and Java Server Pages, which includes a customised version of the open source API Apache Lucene. Thus it is composed of several main parts, amongst which we include: web crawling, indexing, searching and scoring.
2.1 Web Crawling Each web search engine works by storing information about web pages retrieved by a Web crawler, a program which essentially follows every link that it finds by browsing the Web. Due to hardware limitations, we have not implemented a crawling system, but a smaller environment has been created in order to evaluate performance: the crawling stage has been performed by submitting a specific query to the famous web search engine Google (www.google.com), and by extracting the URLs from the retrieved results. As a consequence, the application downloads the corresponding web pages which are collected in specific folders and finally indexed. 2.2 Indexing The main aim of the indexing stage is to store statistics about words to make the search for them more efficient in terms of the speed required to retrieve them. A preliminary document analysis step is needed in order to recognise the tag, metadata and informative contents: this step is often referred to as the parsing step. A standard Lucene indexing procedure considers a document sequence where each document, represented as an integer, is a field sequence, with every field containing indexed words; such an index belongs to the inverted index family because it can list, for each word, the documents that contain it. A correct parsing procedure helps to set up well categorised sets of fields, thus improving the subsequent searching and scoring. Nevertheless, there are different approaches to web page indexing. For example, it must be said that some engines do not index whole words but only their stems. The stemming process reduces inflected words to their root form and this is a very common element in querying based systems such as Web search engines, since words with the same root are supposed to bring similar informative content. In order to avoid the indexing of common words such as prepositions, conjunctions and articles which do not bring any additional information, a stop-words filtering should also be adopted. Since stemmed words indexing and stop-words filtering often result in a lack of search precision, although they could help to reduce the index size, they are not the choice of important search engines (like Google). In the case of the proposed application, we have developed a customised Lucene analyser which allows the indexing of both words and their stems without stop-words filtering; it is possible then to include in the searching process ontologies composed of stemmed words and thus to optimise the ontologybased search without penalising the original query precision. 2.3 Searching and Scoring The heart of a search engine lies in its ability to assign a rank or an order to documents that match a query. This can be done through specific score computations and ranking policies. Several IR operations (including the scoring of documents related to a query, document classification and clustering) often rely on the vector space model where documents are represented as vectors in a common vector space [3]. For each document w a vector V (w) can be considered, containing a component, for each word (or term) of the vocabulary, which is represented by the term frequency-inverse document frequency (tf-idf ) weight . The tf-idf weighting process assigns to term w a weight in a document
Table 1. An example of an ontology for the topic Renaissance Art. Refer to Fig. 1(c) for further details.

Concept_i   Concept_j   Relation factor (ψij)
artist      pittur      0.546743
artist      pittor      0.730942
artist      oper        0.535678
···         ···         ···

Concept_i   Words       Relation factor (ρis)
artist      numer       0.490548
artist      cos         0.511323
artist      concezion   0.533455
···         ···         ···
w given by tf-idf_{w,w} = tf_{w,w} × idf_w, where tf_{w,w} is the term frequency and idf_w is the inverse document frequency. In this model, a query can also be represented as a vector V(q), allowing the scoring of a document w by computing the following cosine similarity:

S(q, w) = C(q) \cdot N_q(q) \cdot \sum_{q \in q} \big( tf_{q,w} \cdot idf_q^2 \cdot B_q \cdot N(q, w) \big)   (1)

where tf_{q,w} correlates to the term’s frequency, defined as the number of times term q appears in the currently scored document w. Documents that have more occurrences of a given term receive a higher score. The default computation for idf_q is idf_q = 1 + \log(M / df_q), with M being the total number of documents and df_q the total number of documents in which the term q appears. This means that rarer terms give a higher contribution to the total score. C(q) is a score factor based on how many of the query’s terms are found in the specified document. Typically, a document that contains more of the query’s terms will receive a higher score than another document with fewer query terms. N_q(q) and N(q, w) are normalising factors used to make scores between queries comparable, therefore we will not explain them. B_q is a search boost factor of term q in the query q as specified in the query text (see query syntax next). Due to the boost factor we can assign a greater weight to a term of the query. Lucene embeds very efficient searching and scoring algorithms based on both this model and the Boolean model. In order to perform the ontology-based search, we have customised Lucene’s querying mechanism. Since our ontologies are represented as pairs of related words where the relationship strength is described by a real value (Relation factor), we have used the Lucene Boolean model and term boosting faculties to extend the original query with contributions from the ontology. Table 1 shows an example of an ontology representation for the topic Renaissance Art; including such an ontology in the searching process would basically result in performing a query like:
((artist AND pittor)^0.730942) OR ((artist AND concezion)^0.533455) ...
that means searching the pair of words artist AND pittor with a boost factor of 0.730942 OR the pair of words artist AND concezion with a boost factor of 0.533455 and so on.
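To make the ontology-based query expansion concrete, the following sketch shows how such a boosted Boolean query string could be assembled from concept–word pairs and their relation factors (as in Table 1). The class and method names are hypothetical, and the resulting string is intended to follow the standard Lucene query syntax rather than to reproduce the exact iSoS implementation.

// Minimal sketch (hypothetical names): building a boosted Boolean query string
// from ontology pairs such as those of Table 1, using the standard Lucene query
// syntax (AND, OR and the ^boost operator).
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class OntologyQueryBuilder {

    /** One edge of the ontology: two related (stemmed) words and their relation factor. */
    public static class OntologyPair {
        final String first, second;
        final double relationFactor;   // used as the boost B_q
        OntologyPair(String first, String second, double relationFactor) {
            this.first = first;
            this.second = second;
            this.relationFactor = relationFactor;
        }
    }

    /** Extends the original query with one boosted clause per ontology pair. */
    public static String expand(String originalQuery, List<OntologyPair> pairs) {
        StringBuilder query = new StringBuilder("(" + originalQuery + ")");
        for (OntologyPair p : pairs) {
            query.append(" OR ((")
                 .append(p.first).append(" AND ").append(p.second)
                 .append(")^")
                 .append(String.format(Locale.US, "%.6f", p.relationFactor))
                 .append(")");
        }
        return query.toString();
    }

    public static void main(String[] args) {
        List<OntologyPair> pairs = new ArrayList<>();
        pairs.add(new OntologyPair("artist", "pittor", 0.730942));
        pairs.add(new OntologyPair("artist", "concezion", 0.533455));
        // Prints: (arte rinascimentale) OR ((artist AND pittor)^0.730942)
        //         OR ((artist AND concezion)^0.533455)
        System.out.println(expand("arte rinascimentale", pairs));
    }
}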
3 A Probabilistic Approach to Informal Lightweight Ontology Building In literature, different approaches have been used to build ontologies: manual, semiautomatic and automatic methods [16,4]. Among semiautomatic and automatic methods we can distinguish those based on Machine Learning techniques from those based on pure Artificial Intelligence (AI) theories. Nevertheless, the great majority of existing methods rely on the concept of ontology as it is commonly acknowledged in computer science, that is an ontology is an explicit specification of conceptualisations [11]. This is often represented as a directed graph whose nodes represent concepts and edges represent relations between concepts. Furthermore, the AI community considers that the backbone of an ontology graph is a taxonomy in which the relations are is-a, whereas the remaining structure of the graph supplies auxiliary information about the modelled domain and may include relations like part-of, located-in, is-parent-of, and many others [12]. Although today there is no consensus on what concepts are [14], they are often lexically defined, and we refer to them by using natural language names. Therefore, when ontologies are visualised, their nodes are often shown with corresponding natural language concept names. However, whatever the definition of ontology, we can affirm that knowledge organising systems (e.g. classification systems, thesauri, and ontologies) should be considered as systems basically organising concepts and their semantic relations. Starting from these considerations on the main aspects of knowledge definition, some questions arise: 1. How formal should the definition of the concepts and their relations be in order to be universally shared and accepted? 2. How much human help do you need to build such a universally shared knowledge? The formality of an ontology can be understood in two ways: the degree of formality and expressivity of the language used to describe it; and how formal are the information sources used to build such a knowledge. Based on this consideration we can form a continuum of kinds of ontologies (as described in [20]), starting from terms and web directories (also called lightweight), and continuing to rigorously formalised logical theories. However, most of the specifications do agree that an ontology should be defined in a formal language, which in practice usually means a logic-based language suitable for automatic reasoning3, and should be fed with formal knowledge, that is the involvement of several experts. Nevertheless, the informality of an ontology is mainly related to the nature of the information which feeds the knowledge definition: has the information been, automatically or semi-automatically, extracted from some sources? If so, then this information contributes to form a kind of informal knowledge. As a consequence we can say that a kind of ontology is well determined by two aspects: 3
Note that it does not mean that such a language cannot be probabilistic (see what has been demonstrated in [15]).
1. The formality, as well as the expressivity, of the language used to describe and represent it, which answers the first question; 2. The definition of the formality of the information sources, which answers the second question. In our opinion, the most widely accepted definition of ontology, which considers formality in both these aspects, appears not to be suitable to describe the informal knowledge, for example, that we can automatically infer from text documents. On the contrary, the definition of a kind of informal lightweight Ontology seems to provide a more flexible structure and applicability to an informal knowledge description of a context [10], [13], [17]. The main idea here is to introduce both the representation of a Graph of Concepts (GCs), which considers both the definition of a Concept (see Fig. 1) and a method to automatically learn such a representation from documents. We show that a GCs is a kind of informal representation and, since it is automatically learnt from documents, it can be considered as a kind of informal lightweight Ontology4 . 3.1 A Graph of Concepts Let us define a Graph of Concepts as a triple GC = N, E, C where N is a finite set of nodes, E is a set of edges weighted by ψij on N , such that N, E is an a-directed graph (see Fig. 1(a)), and C is a finite set of concepts, such that for any node ni ∈ N there is one and only one concept ci ∈ C. The weight ψij can be considered as the degree of semantic correlation between two concepts ci is-relatedψij -to cj and it can be considered as a probability: ψij = P (ci , cj ). Since the relations between the nodes can only be of the type is-related-to then this representation can be considered as a lightweight conceptualisation that we call O. The probability of O given a parameter τ can be written as the joint probability between all the concepts. By following the machine learning theory on the factorisation of undirected graph [1], we can consider such a joint probability as a product of potential functions where each of these functions can be considered as the weight ψi j. Then we have P (O|τ ) = P (c1 , · · · , cH |τ ) =
\frac{1}{Z} \prod_{(i,j) \in E_\tau} \psi_{ij}   (2)
where H is the number of concepts, Z = \sum_{O} \prod_{(i,j) \in E_\tau} \psi_{ij} is a normalisation constant and the parameter τ can be used to modulate the number of edges of the graph. Each concept ci can be defined as a rooted graph of words vs and a set of links weighted by ρis (see Fig. 1(b)). The weight ρis can measure how far a word is related to a concept, or in other words how much we need such a word to specify that concept. We can consider such a weight as a probability: ρis = P(ci|vs). The probability of the concept given a parameter μ, which we call ci, is defined as the factorisation of the potential functions ρis:
In the following it may happen that we refer to an informal lightweight ontology using the word ontology.
Fig. 1. 1(a) Theoretical representation of a Graph of Concepts. The weight ψij represents the probability that two concepts are semantically related. 1(b) Graphical representation of a Concept. The weight ρin represents the probability that a word is semantically related to a concept. 1(c) Graph of Concepts representing an informal lightweight ontology learnt by a set of documents on the topic “Renaissance Art”.
P(c_i | \{v_1, \cdots, v_{V_\mu}\}) = \frac{1}{Z_C} \prod_{s \in S_\mu} \rho_{is}   (3)
where Z_C = \sum_{C} \prod_{s \in S_\mu} \rho_{is} is a normalisation constant and Vμ is the number of words defining the concept, such a number depending on the parameter μ. In order to compute the expressions 2 and 3 we need to compute both ψij and ρis. In the following section we show how such graphical representations can be learnt from documents by making use of a probabilistic technique. 3.2 Learning Concepts and Semantic Relations through a Probabilistic Method A concept is lexically identified by a word of a vocabulary, and it can be represented by a set of connected words (see Fig. 1(b)). Formally, a word is an item of a vocabulary indexed by {1, · · · , V }, each word using unit-basis vectors that have a single component equal to one and all other components equal to zero. Thus, the vth word in the vocabulary is represented by a V-vector w such that wp = 1 and wq = 0 for p ≠ q. A document is a sequence of L words denoted by w = (w1 , w2 , · · · , wL ), where wn is the nth word in the sequence. A corpus is a collection of M documents denoted by D = {w1 , w2 , · · · , wM }. If we are considering an automatic extraction of knowledge from a document (or a set of documents), which in our case is represented by a graph of concepts, then it is obvious that we need to determine when a word denotes a concept and/or contributes to define a part of it. For this purpose, our method treats each word in the first instance as a possible concept and it calculates its degree of association with the remaining words, that is the computation of the probability of a concept P(ci | {v1 , · · · , vV }≠i). Note that each word of the vocabulary can be potentially a concept, and so i ∈ {1, · · · , V }. It would be sufficient to calculate the probability of each concept to determine which of them is best specified by a set of words, in other terms which of them is most likely. In this way, we can determine which words of the corpus represent concepts and then calculate their statistical dependencies, ψij, to finally obtain the graph of concepts. Note that since each concept is represented by a word, then the computation of ρis = P(ci|vs) = P(vi|vs), where the concept ci is lexically identified by vi, and ψij = P(ci, cj) = P(vi, vj), where the concepts ci and cj are lexically identified by vi and vj respectively. Those probabilities can be computed as a word association problem that, as we show next, can be solved by using the probabilistic topic model introduced in [19] and [2]. The original theory introduced in [19] mainly asserts a semantic representation in which the meanings of words are represented in terms of a set of probabilistic topics zn, where the conditional independence between words wn and the “bags of words”5 assumption have been made. This model is generative and it allows the solving of several problems, including the word association problem.
The “bags of words” assumption claims that a document can be considered as a feature vector where each element in the vector indicates the presence (or absence) of a word, so that the information on the position of that word within the document is completely lost.
Let us assume that we write P (z) for the distribution over topics z in a particular document and P (w|z) for the probability distribution over words w given topic z. Each word wn in a document is generated by first sampling a topic from the topic distribution and then choosing a word from the topic-word distribution. We write P (zn = k) as the probability that the kth topic was sampled for the nth word, which indicates which topics are important for a particular document. Moreover, we write P (wn |zn = k) as the probability of word wn under topic k, which indicates which words are important for which topic. The model specifies the following distribution for each word vi of the vocabulary within a corpus of documents: P (vi ) =
\sum_{d=1}^{D} P(w_n^d = v_i | w^d) \, P(w^d) = \sum_{d=1}^{D} \sum_{k=1}^{T} P(w_n^d = v_i | z_n^d = k, w^d) \, P(z_n^d = k | w^d) \, P(w^d)   (4)
where T is the number of topics, and n refers to the nth word within a document. Note that in each document we can have more than one index referring to the word vi; here we use n just to simplify the notation. In the topic model, the word association can be considered as a problem of prediction: given that a cue is presented, which new words might occur next in that context? The resulting conditional probability can be obtained by summing over all documents and topics as

P(v_i | v_j) = \sum_{d=1}^{D} P(w_n^d = v_i, w^d | w_{n+1}^d = v_j) \propto \sum_{d=1}^{D} \sum_{k=1}^{T} P(w_n^d = v_i | z = k, w^d) \, P(w_{n+1}^d = v_j | z = k, w^d) \, P(z = k | w^d)   (5)

where P(w_n^d = v_i | w_{n+1}^d = v_j, z = k, w^d) = P(w_n^d = v_i | z = k, w^d) due to the bags-of-words and exchangeability assumptions of the topic model [2], [19]. Several statistical techniques can be used for the unsupervised inference procedure over large collections of documents. We will use a smoothed version of the generative model introduced by Blei et al. [2], called latent Dirichlet allocation, which makes use of Gibbs sampling. In this model, the multinomial distribution representing the gist is drawn from a Dirichlet distribution, a standard probability distribution over multinomials [9]. The results obtained by performing the LDA algorithm on a set of documents are two matrices:
1. the words-topics matrix Φ: it contains the probability that a word vi is assigned to a topic k, P(w_n^d = vi | z_n^d = k, w^d);
2. the topics-documents matrix Θ: it contains the probability that a topic k is assigned to some word token within a document, P(z = k | w^d).
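As an illustration of how Eq. 5 can be evaluated from the LDA output, the sketch below computes an unnormalised word-association strength between two vocabulary words from the words-topics matrix Φ and the topics-documents matrix Θ. The array layout and names are assumptions made for the example, not the authors' actual implementation.

// Minimal sketch (assumed data layout): word-association strength from the LDA
// output, following the summation over documents and topics of Eq. 5.
public class WordAssociation {

    /**
     * phi[k][v]   approximates P(w = v | z = k)      (words-topics matrix)
     * theta[d][k] approximates P(z = k | document d) (topics-documents matrix)
     * Returns an unnormalised estimate proportional to the sum over d and k of
     * P(v_i | z = k) * P(v_j | z = k) * P(z = k | d).
     */
    public static double association(double[][] phi, double[][] theta, int vi, int vj) {
        int topics = phi.length;
        int documents = theta.length;
        double score = 0.0;
        for (int d = 0; d < documents; d++) {
            for (int k = 0; k < topics; k++) {
                score += phi[k][vi] * phi[k][vj] * theta[d][k];
            }
        }
        return score;
    }
}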
4 Learning Concepts and Learning Graph Relations Given a set of documents, the whole procedure to automatically extract a GCs is composed of two stages: one for the identification of concepts from the vocabulary (learning of words-concepts relations) and another for the computation of the relations between the concepts (graph learning). These stages involve the choice of parameters, which can be manually or automatically set. In this section we show which parameters must be optimised, while in the next section, we will explore how to develop a procedure to determine such parameters automatically by making use of a multi-objective optimisation procedure based on the well known Random Hill-Climbing algorithm [8]. Let us consider one document from the corpus (the procedure still holds if we choose more than one document). It will contain V words/concepts and for each of them, by using the Φ matrix, we can compute the Eq. 3, P (ci |{v1 , · · · , vV }=i ). By analysing the probability distribution of the concepts obtained we can identify a threshold and then filter out those which are less probable, which correspond, for the model, to words that do not represent concepts. Alternatively, we can select a specific number of concepts H out of a set of plausible ones. In this way we consider H as a variable, a parameter, which assumes a value within a set of plausible numbers, and the best value for H can be identified by performing a kind of optimisation procedure. Once a number of concepts has been chosen, there are two steps remaining. For each concept ci , one step consists in determining a threshold μi to select the number of concept-word relations from P (ci |{v1 , · · · , vV }=i ), ∀i. The other step consists in computing the P (O|τ ), Eq. 2, in other words the value τ such that the most probable ψij factors, representing concept-concept relations, are determined. Summing up, we have to find a value for the parameter H, which establishes the number of concepts, a value for τ and finally values for μi ,∀i ∈ [1, · · · , H]. Thus we have H + 2 parameters which modulate the shape of the ontology. If we let the parameters assume different values, we can observe a different ontology Ot extracted from the same set of documents, where t is representative of different parameter values. In this case, we need a criterion to choose which ontology is the best for our purpose, e.g., we could be interested in those ontologies which better represent each document of the feeding repository. 4.1 Multi-objective Optimisation Procedure As we have shown in the previous section, we can obtain an ontology Ot for each set of parameters, Λt = (H, τ, μ1 , · · · , μH )t . If we can define a function which measures the quality of the current ontology Ot then we can choose a set of parameters for which the quality is the best. A way of saying that an ontology is the best possible for that set of documents is to demonstrate that it produces the maximum score attainable for each of the documents when the same ontology is used as a knowledge base for querying in a set containing just those documents which have fed the ontology builder. For this purpose let us suppose we use a corpus of M documents denoted by D = {w1 , w2 , · · · , wM }, and let Ot be the t-th possible ontology built from that repository with the set of parameters Λt . Each ontology can be represented by the Lucene boolean syntax, as we have
shown in section 2.3, which alternatively corresponds to two vectors, one representing all the pairs qt = {q1 , · · · , qU }t, and one representing each relation factor, that is the Lucene boost, Bqt = {Bq1 , · · · , BqU }t. By performing a Lucene search query that uses the ontology Ot on the same repository D, we obtain a score for each document wi through Eq. 1, and then we have St = {S(qt , w1 ), · · · , S(qt , wM )}t, where each of them depends on the set Λt. To compute the best value of Λ we can maximise the score value for each document, which means that we are looking for the ontology which best describes each document. It should be noted that such an optimisation maximises at the same time all M elements of St. Alternatively, in order to reduce the number of the objectives to be optimised, we can at the same time maximise the mean value of the scores and minimise their standard deviation, which turns a multi-objective problem into a two-objective one. Additionally, we can reformulate the latter problem by means of a linear combination of its objectives, thus obtaining a single objective function, i.e., the Fitness (F), which depends on Λt:

F(\Lambda_t) = E_m[S(q_t, w_m)] - \sigma_m[S(q_t, w_m)]   (6)
where Em is the mean value of all elements of St and σm is the standard deviation. Summing up, we have

\Lambda^* = \arg\max_t \{ F(\Lambda_t) \}   (7)
The fitness function depends on H+2 parameters, hence the set of possible solutions can grow exponentially. To limit such a set we can reduce the number of parameters, for instance considering μi = μ, ∀i ∈ [1, · · · , H]. By adopting this strategy, we can reduce the number of parameters from H + 2 to 3, and such a reduction is invariant with respect to the number of concepts. The optimisation method we have chosen performs a search procedure through a zero-temperature Monte Carlo method, known in the Evolutionary Computation community as Random Mutation Hill-Climbing (RMHC) [8], which extends Random Search by generating new candidate solutions as variations of the best known solution. RMHC has been demonstrated to be effective in many optimisation problems under NP-hard, deceptive and neutral cost functions [8,6,18,7] and works as shown in Algorithm 1.

Algorithm 1. RMHC optimisation procedure.
  select a starting point by generating a random solution;
  set such a solution to the best-evaluated;
  while a pre-defined number of fitness evaluations has not been performed do
    generate a new candidate solution as a variation of the current best;
    if the mutation leads to an equal or higher fitness then
      set best-evaluated to the resulting solution;
    end if
  end while
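To make the search loop concrete, the sketch below implements a generic RMHC driver over the reduced parameter vector Λ = (H, τ, μ) that maximises the fitness of Eq. 6 (mean score minus standard deviation). The ontology construction and the Lucene-based scoring of Eq. 1 are abstracted behind an interface, and the mutation step sizes and parameter ranges are assumptions for the example rather than the values used by the authors.

// Minimal sketch (hypothetical interfaces): Random Mutation Hill-Climbing over the
// reduced parameter set Lambda = (H, tau, mu), maximising F = mean - std of the
// document scores (Eq. 6). Ontology building and scoring are hidden behind Evaluator.
import java.util.Random;

public class RmhcOptimiser {

    /** Candidate solution: number of concepts H and the thresholds tau and mu. */
    public static class Lambda {
        int h;            // e.g. H in [5, 15]
        double tau, mu;   // e.g. in [0, 1]
        Lambda(int h, double tau, double mu) { this.h = h; this.tau = tau; this.mu = mu; }
    }

    /** Builds the ontology for the given parameters and returns the scores S(q_t, w_m). */
    public interface Evaluator {
        double[] scores(Lambda lambda);
    }

    /** Fitness of Eq. 6: mean of the scores minus their standard deviation. */
    static double fitness(double[] s) {
        double mean = 0.0;
        for (double v : s) mean += v;
        mean /= s.length;
        double var = 0.0;
        for (double v : s) var += (v - mean) * (v - mean);
        return mean - Math.sqrt(var / s.length);
    }

    /** Zero-temperature Monte Carlo search: a mutation is kept if the fitness does not decrease. */
    public static Lambda optimise(Evaluator evaluator, Lambda start, int maxEvaluations, long seed) {
        Random rnd = new Random(seed);
        Lambda best = start;
        double bestFitness = fitness(evaluator.scores(best));
        for (int i = 0; i < maxEvaluations; i++) {
            Lambda candidate = mutate(best, rnd);
            double f = fitness(evaluator.scores(candidate));
            if (f >= bestFitness) {   // accept equal or higher fitness
                best = candidate;
                bestFitness = f;
            }
        }
        return best;
    }

    /** Random mutation of one parameter of the current best solution. */
    static Lambda mutate(Lambda current, Random rnd) {
        Lambda next = new Lambda(current.h, current.tau, current.mu);
        switch (rnd.nextInt(3)) {
            case 0: next.h = 5 + rnd.nextInt(11); break;                          // new H in [5, 15]
            case 1: next.tau = clamp(current.tau + 0.05 * rnd.nextGaussian()); break;
            default: next.mu = clamp(current.mu + 0.05 * rnd.nextGaussian()); break;
        }
        return next;
    }

    static double clamp(double v) { return Math.max(0.0, Math.min(1.0, v)); }
}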
5 Experimental Results The experiments in [5] were conducted in different contexts and, for each context, the authors asked different groups of human beings to assign judgments of relevance for the set of web pages collected by unifying the results obtained from both the proposed search engine and a classic one. Such an evaluation procedure generated a human rated (ranked) list of documents for each context undertaken, and the first 10 positions were considered as the best representative for each topic. To perform the experiments for this paper we have exploited the results obtained in [5], in particular we have considered 2 contexts:
1. Topic: Arte Rinascimentale (AR).
2. Topic: Storia dell’Opera Italiana (OPI).

Table 2. The results obtained for the contexts under testing. 2(a) Scores obtained for documents of AR at step t = 3 and t = 87127. 2(b) Scores obtained for documents of OPI at step t = 5 and t = 84652.

(a) Rank for AR
URLs                       t = 3    t = 87127
www.firenze-online.com     0,33     0,40
www.artistiinrete.it       0,28     0,40
www.arte-argomenti.org     0,25     0,34
www.arte.go.it             0,23     0,31
www.bilanciozero.net       0,22     0,41
www.visibilmente.it        0,22     0,38
digilander.libero.it       0,22     0,38
it.wikipedia.org           0,17     0,36
it.encarta.msn.com         0,16     0,34
www.salviani.it            0,06     0,33

(b) Rank for OPI
t = 5    t = 84652
0,22     0,50
0,19     0,47
0,25     0,44
0,19     0,40
0,06     0,35
0,13     0,34
0,35     0,33
0,10     0,26
0,14     0,22
0,11     0,17
Fig. 2. Mean value, standard deviation and fitness evolution for the topic AR, 2(a), and OPI, 2(b)
We have taken into account the most representative documents for each topic, which are the first 10 URLs of the human evaluated lists6. Once a repository D for each topic
Due to the fact that the documents were downloaded in December 2009, they are no longer available on the Web. Copies of both the repositories are maintained at the Natural Computation Laboratory. Please refer to the corresponding author for any further details.
has been chosen, we have carried out the RHMC optimisation procedure. As regards the parameters to be optimised, we have considered the following ranges of variation: H ∈ [5, 15] ⊆ N and μ, τ ∈ [0, 1] ⊆ R. To boost the searching, instead of a random initialisation of the solution, we have used a point fixed solution, which has been computed as in the following example. In Fig. 1(c) we report such an example obtained from a set of documents on Renaissance Art in Italian by performing the ontology builder with H = 5, τˆ = Eψ −σψ and μ ˆ = Eρ −5·σρ (where Eψ and σψ are the mean value and the standard deviation of ψij ∀i, j respectively, while Eρ and σρ being the mean value and the standard deviation of ρin ∀i, n respectively). For each context we have performed several runs of the RHMC algorithm with different pseudo-random generator initialisations. Moreover, for each run we have set a maximum number of evaluations equal to 100000. At the end of the optimisation, we have chosen the best run for each topic. In Fig. 2(a) and Fig. 2(b) the evolutions of the fitness, the mean value and standard deviation are shown for AR and OPI respectively. In Tables 2(a) and 2(b) the differences between the scores assigned to each document at the beginning and at the end of the run are shown (i.e., the best ones). Note that at the beginning the values across the documents are non-homogeneous, showing a low mean value and high standard deviation. At the end of the algorithm we can see that the standard deviation is lower and the mean value is higher. Moreover, the distance between the mean value and the standard deviation at the beginning of the search is lower than the same difference at the end of the algorithm. This confirms that the proposed method is able to maximise the fitness as a single-objective problem. Finally, it should be pointed out that the maximum values of the fitness are very similar in all the runs, thus confirming the robustness of our approach.
6 Conclusions In this paper, we have shown how a procedure to extract a kind of informal lightweight knowledge from a collection of documents can be fully automatised by making use of an optimisation procedure. The informal knowledge, that we have called an informal lightweight ontology, has been defined as a Graph of Concepts, which is composed of nodes (concepts) and the weighted edges between them (standing for the semantic relations between the concepts). Each concept has also been defined as a graph but this time it is a-directed. Such a graph is rooted in the concepts and formed of words which specify the concept itself. Each word is related to the root through weighted links. Such weights of both the graphs can be interpreted as probabilities and the probability of a concept and of a graph of concepts have been learnt through a probabilistic technique: for this purpose a smoothed version of the Latent Dirichlet allocation, better known as the topic model, has been employed. Given a set of documents, the whole procedure to automatically extract a GCs is composed of two stages: one for the identification of concepts from the vocabulary (learning of words-concepts relations) and another for the computation of the relations between the concepts (graph learning). These stages involve the choice of parameters, that can be set either manually or automatically. To automatically set such parameters we have defined a suitable fitness function and we have employed a Random Mutation Hill-Climbing to explore the solution space in order to evolve a near-optimal solution.
The experiments have been conducted in two contexts. The results obtained have confirmed that the proposed optimisation procedure is able to determine the optimal parameters at which the fitness function is maximised, thus demonstrating that our approach is effective. The fact that the mean value and standard deviation of the final scores, assigned by the optimisation procedure to each document, are maximised and minimised at the same time, implies that such a method can be used as a semantic clustering method. For instance, let us consider a repository and firstly compute, given the optimum ontology extracted by the same corpus, the distance between the score of a document A and the score of a document B. Then, let us compute the difference between A and another document C. If the first difference is lower than the second difference (in other terms the scores of the first example are more homogeneous), then it means that the first two documents are semantically closer than the other two. As a consequence, the score function can be used as a criterion to compute the homogeneity between documents, which can be considered as a prelude to a classification procedure. Concerning future work, we plan to investigate how the automatic extraction of a lightweight ontology can be used for document classification. Acknowledgements. The authors wish to thank Dr. Antonio Della Cioppa for his useful comments and suggestions about the RHMC optimisation procedure.
References 1. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Heidelberg (2006) 2. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022 (2003) 3. Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, Cambridge (2008) 4. Cimiano, P.: Ontology Learning and Population from Text: Algorithms, Evaluation and Applications. Springer, Heidelberg (2006) 5. Clarizia, F., Greco, L., Napoletano, P.: A new technique for identification of relevant web pages in informational queries results. In: Proceedings of the 12th International Conference on Enterprise Information Systems: Databases and Information Systems Integration, pp. 70–79 (2010) 6. De Falco, I., Della Cioppa, A., Maisto, D., Scafuri, U., Tarantino, E.: Extremal optimization dynamics in neutral landscapes: The royal road case. In: Artificial Evolution, pp. 1–12 (2009) 7. Duvivier, D., Preux, P., Talbi, E.-G.: Climbing up NP-hard hills. In: Ebeling, W., Rechenberg, I., Voigt, H.-M., Schwefel, H.-P. (eds.) PPSN 1996. LNCS, vol. 1141, pp. 574–583. Springer, Heidelberg (1996) 8. Forrest, S., Mitchell, M.: Relative building-block fitness and the building-block hypothesis. In: Whitley, D.L. (ed.) Foundations of Genetic Algorithms 2, pp. 109–126. Morgan Kaufmann, San Mateo (1993) 9. Gelman, A., Carlin, J.B., Stern, H.S., Rubin, D.B.: Bayesian data analysis. Chapman & Hall, New York (1995) 10. Giunchiglia, F., Marchese, M., Zaihrayeu, I.: Towards a theory of formal classification. In: Proceedings of the AAAI 2005 International Workshop Contexts and Ontologies: Theory, Practice and Applications. AAAI Press, Menlo Park (2005)
11. Gruber, T.R.: A translation approach to portable ontology specifications. Knowl. Acquis. 5, 199–220 (1993) 12. Guarino, N.: Formal Ontology in Information Systems. IOS Press, Cambridge (1998) 13. Hepp, M., de Bruijn, J.: GenTax: A generic methodology for deriving OWL and RDF-S ontologies from hierarchical classifications, thesauri, and inconsistent taxonomies. In: Franconi, E., Kifer, M., May, W. (eds.) ESWC 2007. LNCS, vol. 4519, pp. 129–144. Springer, Heidelberg (2007) 14. Hjørland, B.: Concept theory. Journal of the American Society for Information Science and Technology 60, 1519–1536 (2009) 15. Jaynes, E.T.: Probability Theory - The Logic of Science. Cambridge Press (2003) 16. Maedche, A.: Ontology Learning for the Semantic Web. Kluwer Academic Publishers, Dordrecht (2002) 17. Magnini, B., Serafini, L., Speranza, M.: Making explicit the hidden semantics of hierarchical classifications. In: AI*IA 2003: Advances in Artificial Intelligence. LNCS, pp. 436–448. Springer, Heidelberg (2003) 18. Mitchell, M., Holland, J.H.: When will a genetic algorithm outperform hill climbing? In: Proceedings of the 5th International Conference on Genetic Algorithms, p. 647. Morgan Kaufmann Publishers Inc., San Francisco (1993) 19. Griffiths, T.L., Steyvers, M., Tenenbaum, J.B.: Topics in semantic representation. Psychological Review 114(2), 211–244 (2007) 20. Uschold, M., Gruninger, M.: Ontologies and semantics for seamless connectivity. SIGMOD Rec., 58–64 (2004)
Automating the Variability Management, Customization and Deployment of Software Processes: A Model-Driven Approach Fellipe Araújo Aleixo1,2, Marília Aranha Freire1,2, Wanderson Câmara dos Santos1, and Uirá Kulesza1 1
Federal University of Rio Grande do Norte (UFRN) Natal, Rio Grande do Norte, Brazil 2 Federal Institute of Education, Science and Technology of Rio Grande do Norte (IFRN) Natal, Rio Grande do Norte, Brazil {wanderson,uira}@dimap.ufrn.br, {fellipe.aleixo,marilia.freire}@ifrn.edu.br
Abstract. This paper presents a model-driven and integrated approach to the variability management, customization and execution of software processes. Our approach is founded on the principles and techniques of software product lines and model-driven engineering. Model-driven engineering provides support for the specification of software processes and their transformation into workflow specifications. Software product line techniques allow the automatic variability management of process elements and fragments. Additionally, in our approach, workflow technologies enable the process execution in workflow engines. In order to evaluate the approach feasibility, we have implemented it using existing model-driven technologies. The software processes are specified using the Eclipse Process Framework (EPF). The automatic variability management of software processes has been implemented as an extension of an existing product derivation tool. Finally, the ATL and Acceleo transformation languages are adopted to transform EPF processes into jPDL workflow language specifications in order to enable the deployment and execution of software processes in the JBoss BPM workflow engine. Keywords: Software process, Software product lines, Model-driven development.
definition and development of software processes for the new projects. The adequate management and execution of software processes can bring a better quality and productivity to the produced software systems. In this context, automated tools supporting the definition, customization and deployment are increasingly necessary. Although there are already many existing tools to specify processes [1] [2], there is a strong need to develop tools, technologies and techniques that help: (i) the management of components and variations of such processes; and (ii) the automatic composition and derivation of these elements to generate a customized process for a project. Furthermore, we know that the definition of a software process is a complex activity that requires much experience and knowledge of many areas and disciplines of software engineering. Our main research question is thus related to: how a software organization can reuse existing software processes by rapidly and automatically allowing their customization for new projects? On the other hand, there is recent research work that proposes languages and approaches to address the modeling and execution of software processes. Many of them explore the adoption of model-driven engineering techniques to support the process modeling. There is a strong need to promote not only the processes modeling but also provides mechanisms to support their effective execution. Workflow systems are a well-known approach that allows the effective execution of processes. However, current research work has not addressed the development of approaches that allow the process definition, deployment and execution in an integrated way. In this paper, we propose an approach that supports: (i) the variability management of software processes; (ii) the automatic product derivation of customized specifications of software processes; and (iii) the automatic transformation of these customized software processes to workflow specifications, which can be deployed and executed in existing workflow engines. Our approach is founded on the principles and techniques of software product lines [3] and model-driven engineering [4]. In order to evaluate the approach feasibility, we have implemented it using several model-driven technologies. The software processes are specified using Eclipse Process Framework (EPF). The variability management and product derivation of software processes has been implemented as an extension of an existing product line tool, called GenArch [5]. Finally, ATL and Acceleo [6] transformation languages are adopted to transform EPF process to jPDL workflow language specifications in order to enable the deployment and execution of software processes in the JBoss BPM workflow engine. The remainder of this paper is organized as follows. Section 2 presents existing research work on software processes reuse by identifying several challenges in the variability management of software processes. Section 3 gives an overview of the main elements and functionalities of our approach. Section 4 describes the approach implementation using existing model-driven technologies. Section 5 compares our approach to existing related work. Finally, Section 6 presents the conclusions and points out future work directions.
2 Software Process Reuse Over the last years, several approaches have been proposed that explore the development of software product lines (SPLs) [3]. The main aim of these approaches is to
maximize reuse and minimize costs by promoting the identification and management of commonalities and variabilities (common and variable features) of software families. Software product line engineering promotes the effective reuse of software artifacts based on the organization of similar artifacts according to commonalities and variabilities [7]. A common and flexible architecture is designed to address the commonalities and variabilities of the SPL. Finally, a set of reusable assets is implemented following the SPL architecture. After the design and implementation of the SPL architecture and code assets, which is called domain engineering, new applications (products) can be easily derived by reusing and customizing the code assets developed for the SPL architecture. Currently, there are some existing tools, such as Gears [8], pure::variants [9] and GenArch [5], which automate the derivation of new applications/products from existing code assets. They facilitate the streamline selection, composition and configuration of code assets. In the software development process scenario, recent work has been developed to allow the reuse of process assets, in the same way that code assets can be reused. The Eclipse Process Framework [2] is one of these initiatives. It facilitates the definition of software processes using: (i) the Unified Method Architecture (UMA) metamodel; (ii) a supporting tool (EPF Composer [10]); and (iii) content (process asset) that can be used as the basis for a wide range of processes. The EPF Composer allows authoring, configuring and publishing methods. You can add, remove and change process elements according to your team and project needs. In other words, the EPF Composer allows software development processes be extended and customized in a simple way [11]. Rational Method Composer [1] is another existing tool that provides functionalities to the definition and specification of software processes. One important challenge in reuse of software process elements is to do a fast combination of these elements to generate a customized software process. Usually, this customization is necessary to meet specific demands (deadlines, resources, documentation, team experience). The right choices in the software process can guarantee that the team reaches the maximum of efficiency. The software process customization time is included in the overall time for the software development. The lost time at the process customization is lost in the overall software time to market. Although existing process definition tools, such as Eclipse Process Framework and Rational Method Composer, already provide some support to specify, define, and customize software processes, they do not support to the management of process variabilities and automatic customization of existing software processes. The customization of a software process in EPF, for example, is done by editing the process in EPF Composer [2]. In this edition process, the elements of the software process are added, removed and modified, and this is done individually element by element. As a result, this is a tedious and time-consuming process, which strongly influences the time and quality of the process customization. The main disadvantage of existing work is that there is no technique and associated tool that provide support to the automatic customization of software processes allowing the modeling and management of their existing variabilities. 
Another aspect that is not explored in existing research work is explicit support for the automatic deployment and execution of software processes. Current efforts to support the execution of software processes are disjoint from the efforts for the customization of software development processes, even though these efforts are closely related.
3 A Model-Driven Approach for Process Definition, Customization and Execution In this section, we present an overview of our approach for process definition, customization and execution. It is founded on the principles and techniques of software product lines and model-driven engineering. Figure 1 illustrates the main elements of our approach and their respective relationships. Next we briefly explain the activities of the proposed approach. The first stage of our approach is the software process modeling and definition (steps 1 and 2 in Figure 1). Existing tools such as EPF provide support to address it by using the UMA metamodel. After that, our approach concentrates on the variability management of software process elements. This second stage consists of the creation of a variability model (e.g. a feature model) that allows specifying the existing variabilities of a software process (steps 3 and 4). A product derivation tool can then be used to allow the selection of relevant features from an existing software process, thus enabling the automatic derivation of customized specifications of the software process addressing specific scenarios and projects (steps 5 and 6). Finally, our approach supports the automatic transformation of the software process specification to a workflow specification (steps 7 and 8) in order to make possible its deployment and execution in a workflow engine (steps 9 and 10). Through these transformations, the sequence of activities of the process is mapped to a workflow definition. In order to evaluate the feasibility of our approach, we have designed and implemented it using existing and available technologies. Figure 1 also provides an overview of the implementation of our approach. The process specification is supported by the EPF Composer using the UMA metamodel (step 2 in Figure 1). The variability management of the EPF specifications is addressed by the GenArch product derivation tool [12] [13] [14]. This tool was extended to explicitly indicate which variabilities in a feature model are related to different process fragments from an EPF specification (step 4). The tool uses this information to automatically derive customized versions of a software process (step 6). Finally, we have implemented model-to-model (M2M) transformations codified in ATL/QVT [15] to allow the translation of the EPF specification of an automatically customized process to jPDL model elements (step 7). This jPDL specification is then processed by a model-to-text (M2T) transformation implemented using the Acceleo language [6] to promote the generation of Java Server Faces (JSF) web forms from the jPDL workflow specification (step 8). These web forms can then be deployed and executed in the JBoss Business Process Management (jBPM) workflow engine. Section 4 describes our approach in action by detailing a customization example of a software process. Our approach brings several benefits when compared to other existing research work [16] [7] [17]. First, it promotes the variability management of existing software processes by allowing one to explicitly specify which process fragments (activities, guides, roles, tasks, etc.) represent variabilities (optional and alternative) to be chosen and customized when considering specific projects. Second, it allows automatically deriving, deploying and executing software processes in workflow engines by supporting the systematic transformation of process specifications to workflow specifications.
Fig. 1. Approach overview

Last but not least, the approach is flexible enough to allow the adoption of process and workflow specifications defined in different languages and notations, as well as the adoption of different tools for process definition, automatic derivation, deployment and execution.
4 Implementing the Model-Driven Approach

In this section, we present the approach implementation by exploring the techniques adopted to manage software process variabilities and to deploy software processes in workflow engines.
4.1 Managing Variabilities in Software Processes

Figure 2 presents a fragment of a case study developed in the context of research and development projects of a technical educational organization [18]. It illustrates three software development projects: (i) an integrated academic management information system, called SIGA; (ii) a professional and technological education information system, called SIEP; and (iii) an enterprise system development project, called PDSC. Each project used a customized version of the OpenUP process [2]. The detailed analysis of these OpenUP customizations allowed us to identify and model the commonalities and variabilities of this process family. Due to space restrictions, in this paper we only focus on the project management discipline.
Fig. 2. Fragment of the case study result (projects: SIGA, SIEP, PDSC)
Figure 2 presents the details of the plan project task of the project management discipline. Some steps of this task were performed in every project – the commonalities – such as: (i) establish a cohesive team; (ii) forecast project velocity and duration; (iii) outline project lifecycle; and (iv) plan deployment. Some steps may or may not be executed (optional features), such as: (i) evaluate risks; (ii) estimate project size; and (iii) establish costs and articulate value. Some steps involve specific artefacts, which may demand changing the original document templates provided by OpenUP (alternative features). Examples of such alternative templates are: (i) the risk list template – which can be a top-10 list or a full list; and (ii) the project plan template – which can be specified using the Redmine or MS-Project tools. Figure 3 shows the corresponding feature model for this fragment of the project management discipline.
The variability management of a software process depends on the notation used to represent it. One of the most cited notations is SPEM [19], an OMG initiative.
In our work, we have adopted an evolution of SPEM, called Unified Method Architecture (UMA) [20], which is supported by the Eclipse Process Framework (EPF) [20]. EPF was used to specify a software process line that models a family of processes that share common and variable features. The software process line maintains all the process elements that can be found in any process to be customized from it. It allows systematically reusing common process content elements and fragments of existing software processes.
Fig. 3. Feature model for the case study
The variability management of the software process line is supported by a product derivation tool. This tool enables us to relate variability models (e.g. feature models) to the process content elements. This is the same strategy adopted by existing product derivation tools to manage the variabilities of software product lines. In our approach, we have adapted an existing product derivation tool, called GenArch, to support the variability management of software process lines. The original implementation of GenArch provides three models: (i) the feature model, which represents the commonalities and variabilities; (ii) the architecture model, which represents all the code assets implemented for a software product line; and (iii) the configuration model, which defines the mapping between features and code assets in order to enable automatic product derivation. To enable the usage of GenArch in the software process line context, we replaced the architecture model with the EPF process model. It allows specifying how specific variabilities (optional and alternative features) from a feature model are
mapped to existing elements from a process specification. Figure 5(A) shows an example of the variability management of process lines for project management process activities. As we can see, the feature model is used to indicate the optional, alternative and mandatory features of an existing process line. The configuration model defines the mapping of these features to process elements. The complete configuration model is automatically produced from feature variability annotations that are inserted in the EPF process specification. Each file of the EPF specification that corresponds to an identified feature is edited, and an XML comment is inserted. This XML comment indicates the name and the type of the associated feature. Figure 4 shows an example of a feature annotation inside the Assess_Result activity from an EPF specification. As we can see, each annotation defines the name (Assess_Result), parent (tasks) and type (optional) of the feature that the related artifact represents.
Fig. 4. Feature annotation in an EPF specification
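As a rough illustration of the mechanism just described, an annotated EPF content file could look like the sketch below. The element and attribute names are hypothetical (the real EPF/UMA XMI schema is considerably more verbose); only the XML comment carrying the feature name, parent and type reflects the annotation strategy described in the text.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- GenArch feature annotation (illustrative syntax):
         name="Assess_Result" parent="tasks" type="optional" -->
    <contentElement name="Assess_Result">
      <presentation briefDescription="Assess the iteration results"/>
    </contentElement>

During process derivation, a file annotated in this way would be kept in the customized EPF specification only if the Assess_Result optional feature were selected in the feature model.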
The following process variabilities have been found in the process line case study that we have modeled and specified: (i) optional and alternative activities in process workflows; (ii) optional and alternative steps of process activities; (iii) optional and alternative specification templates for any specific tool or activity; and (iv) optional and alternative technology developer guides that provide principles and guidelines to adopt specific technologies (CASE tools, modelling and programming languages, API libraries, components and frameworks, and so on). In addition, we are currently exploring fine-grained variabilities (parameters, variables, text portions) that can occur as part of the descriptions of process activities and steps.
After specifying the mapping between the variabilities specified in the feature model and the process elements from an EPF specification, the GenArch tool can automatically derive customized versions of a software process line. This stage is similar to what is called product derivation [21] in software product line approaches. During the process derivation, the process engineer chooses the desired variabilities (optional and alternative features) in a feature model editor. Next, the GenArch tool processes the mappings specified in the configuration model to decide which process elements will remain in the final customized process according to the selected variabilities. The resolution of feature constraints and process component dependencies is also computed by the tool during this step of process customization.
Fig. 5. Approach implementation
Finally, after all this processing, the tool is responsible for producing the EPF specification that represents a customized process to be adopted by a specific project. In other words, after the feature selection, GenArch can be used to generate a new process that reflects the features selected in the feature model. Figure 5(B) illustrates two examples of feature selection (configuration1, configuration2) that are processed by the GenArch tool to produce different sets of project management activities for specific projects (SIGA, SIEP, and PDSC).
The strategy of annotating the XMI files representing software processes did not cover all the elements necessary for the complete variability management of the OpenUP process family. This happens because some features present in the OpenUP delivery process are represented and specified as small pieces of XML inside the EPF specification files, which makes it impossible to mark the complete XMI files as related to a specific process variability using GenArch annotations. To solve this problem, we adopted the fragment mechanism provided by the GenArch product derivation tool. A fragment is a piece of text in a code asset (source code, configuration file, process description file) that is related to specific variabilities. Each fragment is explicitly specified in the XML file, represented in the architecture model, and related to specific features in the configuration model. Each fragment is included in a specific process during the automatic derivation stage only if the process variabilities it is related to are explicitly selected. In our work, existing dependencies between software process elements were specified in EPF. The usage of fragments was required in this context because the dependencies between process elements in EPF specifications are spread and tangled across the different XML files that specify the complete process. Additional details about the implementation of the fragment mechanism can be found in [22] [18].

4.2 Deploying and Executing a Software Process in a Workflow Engine

Nowadays, organizations are investing in information technology to support their processes. This increasing need to control and improve processes has led to the concept of Business Process Management (BPM), which in essence is the combination of information technology resources with business management analysis, focused on improving business processes. Our approach allows a customized software process, automatically derived by GenArch, to be deployed and executed in the jBPM workflow engine. jBPM [23] is a JBoss framework that allows the creation of business solutions based on business processes using graphical notations and graph-oriented programming. It also provides a workflow engine. We use the jBPM engine to run and monitor software process activities, which were previously defined in the EPF process specification and customized by the GenArch tool.
In our approach, we have implemented transformations that automatically convert the EPF process into the jPDL workflow specification language. This language is used to specify business processes graphically in terms of tasks, wait states, timers, automated actions, and so on. This model-to-model transformation was implemented using the ATLAS Transformation Language (ATL) inside the Eclipse platform. ATL is a model transformation language aligned with QVT (Query/View/Transformation) [15], the OMG's standard language for expressing queries, views and transformations on MOF models.
Figure 5(C) shows how a customized EPF specification, produced as a result of the variability management of the process line, can be automatically translated into a jPDL workflow specification. It can be observed that some activities (e.g., "plan project" and "request change") are present in both the textual and the graphical jPDL specification.

Table 1. Mapping between UMA and jPDL elements (UMA elements: Activity; Workorder; Activities with more than one predecessor; Activities with more than one successor; Activity without predecessor; Activity without successor; Task; Step – each mapped to a corresponding jPDL construct)
Table 1 presents the mapping of the UMA process elements to jPDL workflow elements implemented by our QVT transformation. The elements generated in the jPDL model are needed to generate the jPDL specification files, which are then installed in the workflow engine to support workflow execution. The generated jPDL model enables the definition of a jPDL workflow specification and the creation of Java Server Faces form implementations to monitor the process flow. This monitoring functionality is responsible for storing information about the tasks and/or decisions taken during process execution.
In order to generate a process definition archive, in the jPDL schema, and the related JSF forms for the jPDL workflow specification, we implemented a model-to-text transformation using Acceleo [6], a code generation tool that allows transforming models into code. We also generated the "forms.xml" file, which is an XML file that matches each specific task node to a JSF form. All of these files were generated in a jPDL Eclipse project. Through simple configuration, this project can be deployed in the jBPM workflow engine. Figure 6 presents a fragment of the jPDL process definition file, which is essential to the workflow deployment and execution in the workflow engine. We can see that each workflow element represents an element of the process specification, and they are organized in the same order specified in the process specification. This process specification file, the JSF pages and the forms.xml configuration file define the complete software process to be deployed and executed in the workflow engine.
After the deployment of the process workflow in the jBPM engine, the user can request the start of a new instance of the process. Figure 5(D) shows the result of the deployment of the process previously customized and generated by the GenArch tool. When starting the execution of a new instance of the process, the user can visualize the current state of the specific process, which presents details of the activity that has to be done. After the execution of each activity, the user notifies the workflow engine, which requests the user to enter some information about the activity in a specific JSF form. All the information is stored in a specific database, related to the process instance.
Fig. 6. Fragment of the generated jPDL process definition file
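To give an idea of the kind of artifact the model-to-text transformation produces, the sketch below shows a minimal, hypothetical jPDL 3.x process definition in the spirit of the fragment shown in Fig. 6. The node names "plan project" and "request change" come from the example discussed above; the structure and namespace follow the standard jPDL schema, but the exact output produced by the Acceleo templates may differ.

    <process-definition xmlns="urn:jbpm.org:jpdl-3.2" name="project_management">
      <start-state name="start">
        <transition to="plan project"/>
      </start-state>
      <task-node name="plan project">
        <task name="plan project"/>
        <transition to="request change"/>
      </task-node>
      <task-node name="request change">
        <task name="request change"/>
        <transition to="end"/>
      </task-node>
      <end-state name="end"/>
    </process-definition>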
Finally, the workflow engine indicates that a new activity is now required. All these steps are repeated for each activity until the end of the process, when the end state of the workflow is reached.
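The generated forms.xml file mentioned above is what ties each task node of the deployed process definition to its JSF form. A minimal sketch of such a file is shown below; the task and form names are invented for illustration, and the exact schema expected by the jBPM console may differ from this sketch.

    <forms>
      <form task="plan project" form="plan_project.xhtml"/>
      <form task="request change" form="request_change.xhtml"/>
    </forms>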
5 Related Work

In this section, we present recent research work that is related to process reuse or that proposes the adoption of software product line techniques to enable the automatic management, reuse and customization of software processes. Additionally, we also point out existing research work that proposes the deployment and execution of process workflows. We can observe that existing research work does not propose automated and integrated approaches that provide support for the variability management, customization, deployment and execution of software processes.
Rombach [7] presents the first studies to describe the term Software Process Line. His proposal suggests the organization of families of similar processes. It has been applied in small domains and points out the feasibility of applying this approach in more general contexts. However, his work does not define any approach or tools to effectively promote the reuse of software processes.
Xu et al. [17] present a definition of a standardized representation and retrieval of process components. The focus is on: (i) the organization of the specific components of a process and their representation; and (ii) the retrieval of process definitions based on the reuse of existing components. The main drawback of their approach is that it requires high expertise for the representation and retrieval of components.
Barreto et al. [16] propose an approach for the componentization of legacy software processes. Their strategy aims to facilitate the achievement of the results expected by maturity models. This work states that making processes reusable is a costly task, because many different situations must be foreseen and addressed by components and features. The work is restricted to the definition of reusable process fragments, and it does not propose any automation for the effective reuse and variability management.
Bendraou et al. [24] propose the creation of a model for the implementation of processes, called UML4SPM. This model is based on UML, and the process implementation is done using a meta-programming language, called Kermeta, that allows its execution without any compilation. The main drawback of this work is that it does not provide an environment for the deployment of process workflows that would allow their execution and monitoring.
Bendraou et al. [25] present an approach based on model-driven engineering, which includes a mapping between two languages, UML4SPM and WS-BPEL. Each language operates in a different perspective: the definition of software processes and process execution, respectively. The rationale for this choice was that both languages have their strengths, both in the modeling and in the proposed implementation. The transformation between process models and workflows was defined by a program written directly in Java. Our work has a strong relationship with this proposal, because we also propose an approach for process definition and execution. One of the main differences is that our work promotes: (i) process definition using abstractions from the UMA metamodel; and (ii) process execution using a workflow engine, such as jBPM. We believe that the existing design of our approach allows the process definition and execution technologies to evolve independently. For example, we can migrate our approach to work with other existing workflow engines. Another difference between our work and the one presented by Bendraou et al. [25] is that we provide transformations that allow the generation of working web forms that are installed and run on a workflow system.
Maciel et al. [26] define a model-based approach that allows the modeling of software processes. This approach is based on Model-Driven Architecture (MDA) technology and adopts a series of standard technologies (OMG SPEM 2.0, UML 2.0, MDA). The main focus of this work is specifying the elements of MDA processes. However, the approach does not address process variability management and does not involve a complete process transformation for execution in workflow systems.
6 Conclusions and Future Work

In this paper, we presented a model-driven approach for managing and customizing software process variabilities. Our approach also provides support for the deployment and execution of the customized process in a workflow engine. The approach has been implemented and validated using existing model-driven and software product line technologies. The main benefits of our approach are: (i) the variability management of software processes, which directly contributes to productivity improvement when producing customized software processes for related projects; and (ii) the integration and automation of process definition, customization, deployment and execution. Additionally, our approach has been designed in a flexible way that allows
its easy adaptation to deal with new technologies. For example, new process or workflow specification languages can be adopted to specify processes or workflows, respectively. Additionally, our approach can be implemented using different model-driven technologies.
During the development and validation of the proposed approach with an initial case study, we found and observed important technical issues and lessons learned that are relevant to the refinement of our research work. Next, we present and discuss some of them.
Process Mapping in Workflow Specifications. Our approach defines an explicit mapping between process models and workflow specifications through the model-to-model and model-to-text transformations. Different elements and abstractions of EPF process models were directly mapped to representations in workflow specifications, at the level of both the workflow description and the implementation of web forms, which could then be installed to run the process in the jBPM workflow engine. Different relevant information can be generated within the web forms, in accordance with the needs of the process engineer. Although it already supports the mapping and implementation of various kinds of information (date/time, start and end execution states, current status, upload buttons or links to generated artifacts), our approach needs to be further refined to allow the process engineer to interact with the transformation process in order to choose parameters and specific configuration options for implementing their process.
Workflow Code Integration with Software Engineering Tools. Our approach currently allows the generation of different JSF web forms that can be installed and executed directly in jBPM. Information concerning the execution of the process is entered through such forms and automatically persisted by jBPM. Currently, the approach is being refined to allow integration with other external systems in order to obtain relevant information to track process execution. One of the integrations we are currently exploring obtains information directly from configuration management and version control systems about artifacts being submitted to the repository. This allows the workflow process to obtain updated information regarding the execution of processes. The jBPM engine offers the possibility of extending the workflow code to make external calls (via web services) to other systems.
Management of Variabilities in Software Processes. Our model-driven approach for process variability management, based on the manual annotation of the process assets (XMI files), has succeeded in modularizing the process variabilities and achieving its intended goals in our case study. The creation of fragments was another mechanism that helped us to deal with the management of fine-grained variabilities in existing XMI files. Despite the benefits brought by both the annotation and the fragment mechanisms to process variability management, we noticed that it is essential to complement them with visual support focused on the use of techniques and composition mechanisms of model-driven engineering. We are currently exploring, in a new implementation of the approach, the adoption of different model-driven strategies to support a better management of the dependencies between different process elements (activities, tasks, artifacts, guides). In our case study, we observed that the many dependencies between process elements can make their specification using the annotation and fragment mechanisms seriously difficult. Those mechanisms provide adequate low-level persistence support for variability management and process derivation,
but they are not an adequate mechanism to be used directly by the process engineer. Our proposal is to complement them with higher-level domain-specific modeling languages that contribute to the specification of the process variabilities. We also intend to compare and analyze the complementarities of these different mechanisms.
As future work, we also intend to apply and evaluate our approach in more extensive and complex software process customization scenarios. We are currently refining the approach to apply it in the industrial scenario of a company that defines and reuses its processes using the Rational Unified Process (RUP) framework. Additional details about the approach and its implementation can be found in [18].
Acknowledgements. This work was partially supported by the Brazilian Research Council (CNPq) under grants No. 313064/2009-1, No. 552010/2009-0, and No. 480978/2008-5.
References
1. IBM: IBM - RUP (2010), http://www-01.ibm.com/software/awdtools/rmc
2. EPF Project: Eclipse Process Framework Project (EPF) (2009), http://www.eclipse.org/epf/
3. Pohl, K., Böckle, G., van der Linden, F.: Software Product Line Engineering: Foundations, Principles and Techniques. Springer, Heidelberg (2005)
4. Kleppe, A.G., Warmer, J.B., Bast, W.: MDA Explained: The Model Driven Architecture: Practice and Promise. Addison-Wesley, Reading (2003)
5. GenArch Plugin: A Model-Based Product Line Derivation Tool (2009), http://www.teccomm.les.inf.puc-rio.br/genarch/
6. OBEO: Acceleo (2009), http://www.acceleo.org/pages/home/en
7. Rombach, D.: Integrated Software Process and Product Lines. In: Li, M., Boehm, B., Osterweil, L.J. (eds.) SPW 2005. LNCS, vol. 3840, pp. 83-90. Springer, Heidelberg (2006)
8. BigLever Software: Gears - Software Product Lines: Pragmatic Innovation from BigLever Software (2009), http://www.biglever.com
9. pure-systems: pure::variants (2009), http://www.pure-systems.com
10. Eclipse Foundation: Eclipse Process Framework (EPF) Composer 1.0 Architecture Overview (2010), http://www.eclipse.org/epf/composer_architecture/
11. Haumer, P.: Eclipse Process Framework Composer, Part 1: Key Concepts (2007), http://www.eclipse.org/epf/general/EPFComposerOverviewPart1.pdf
12. Cirilo, E., Kulesza, U., Lucena, C.: A Product Derivation Tool Based on Model-Driven Techniques and Annotations. Journal of Universal Computer Science 14 (2008)
13. Cirilo, E., Kulesza, U., Coelho, R., Lucena, C.J.P., von Staa, A.: Integrating Component and Product Lines. In: ICSR (2008)
14. Cirilo, E., Kulesza, U., Lucena, C.J.P.: Automatic Derivation of Spring-OSGi Based Web Enterprise Applications. In: ICEIS 2009 (2009)
15. OMG: Query/View/Transformation (QVT) Specification, Version 1.0 (2009), http://www.omg.org/spec/QVT/1.0/
16. Barreto, A.S., Murta, L.G.P., Rocha, A.R.: Componentizando Processos Legados de Software Visando a Reutilização de Processos [Componentizing Legacy Software Processes Aiming at Process Reuse] (2009)
17. Xu, R.-Z., Tao, H., Dong-Sheng, C., Yun-Jiao, X., Le-Qiu, Q.: Reuse-Oriented Process Component Representation and Retrieval. In: The Fifth International Conference on Computer and Information Technology. IEEE, Los Alamitos (2005)
18. Aleixo, F.A., Freire, M.A., Santos, W.C., Kulesza, U.: Software Process Lines (2010), http://softwareprocesslines.blogspot.com/
19. OMG: Software & Systems Process Engineering Metamodel Specification (SPEM) (2010), http://www.omg.org/spec/SPEM/2.0/
20. Eclipse Foundation: Eclipse Process Framework (2009), http://www.eclipse.org/epf/
21. Clements, P.: Software Product Lines: Practices and Patterns. Addison-Wesley, Boston (2002)
22. Aleixo, F., Freire, M., Santos, W., Kulesza, U.: A Model-Driven Approach to Managing and Customizing Software Process Variabilities. In: Proceedings of the XXIV Brazilian Symposium on Software Engineering (SBES 2010), Salvador, Brazil, pp. 50-69 (2010)
23. JBoss (Red Hat): jBPM (2009), http://labs.jboss.com/jbossjbpm/
24. Bendraou, R., Sadovykh, A., Gervais, M.-P., Blanc, X.: Software Process Modeling and Execution: The UML4SPM to WS-BPEL Approach. In: 33rd EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA) (2007)
25. Bendraou, R., Jézéquel, J.-M., Fleurey, F.: Combining Aspect and Model-Driven Engineering Approaches for Software Process Modeling and Execution. In: International Software Process Workshop, Vancouver, Canada, pp. 148-160 (2009)
26. Maciel, R.S.P., da Silva, B.C., Magalhães, A.P.F., Rosa, N.S.: An Integrated Approach for Model Driven Process Modeling and Enactment. In: XXIII Brazilian Symposium on Software Engineering (2009)
A Formalization Proposal of Timed BPMN for Compositional Verification of Business Processes

Luis E. Mendoza Morales 1, Manuel I. Capel Tuñón 2, and María A. Pérez 1

1 Processes and Systems Department, Simón Bolívar University, Venezuela
{lmendoza,movalles}@usb.ve, http://www.lisi.usb.ve
2 Software Engineering Department, University of Granada, Spain
[email protected], http://lsi.ugr.es/~sc
Abstract. The Business Process Modelling Notation (BPMN) is currently used by companies as the standard Business Process (BP) modeling language. In this work, we define a timed semantics of BPMN in terms of the Communicating Sequential Processes + Time (CSP+T) process calculus in order to detail the behaviour of processes within a fixed time span. By formally specifying the response times of activities, their temporal constraints, and the temporal constraints on communications and task collaboration, we are able to specify and develop the Business Process Task Model (BPTM) of a target BP. We also demonstrate how our proposal can be integrated into the Formal Compositional Verification Approach (FCVA) to allow the use of state-of-the-art MC tools to automatically verify BPTMs. Finally, we examine the application of the proposal to a BPTM verification related to the Customer Relationship Management (CRM) enterprise business. Keywords: Compositional verification, Formal specification, Transformation rules, Task model, Model-Checking.
1 Introduction

Within a short period of time, the Business Process Modelling Notation (BPMN) has come to be supported by a variety of Business Process Modelling (BPM) tools [1]. Nevertheless, BPMN [1] provides no formal semantics [2], but instead uses only a graphical notation and informal language. The informal semantics of BPMN syntactical constructions may lead to a number of misunderstandings of the correct semantics associated with the execution of Business Processes (BPs). To avoid this ambiguity, the definition of a formal semantics of BPMN is required before any behavioural specification and verification of BPs can be performed. Thus, existing verification tools (e.g., SMV, Design/CPN, Uppaal, Kronos, HyTech, FDR2, etc.) can be used during the BPM activity to carry out the verification and validation of BPs, thereby helping to improve their quality. In order to undertake a formalization of BPMN, we propose a formal semantics of BPMN endowed with timed characteristics. Accordingly, we have improved the
semantics proposed in [2] by incorporating the Communicating Sequential Processes + Time (CSP+T) operators, which allow the definition of a timed semantics of BPMN entities. This includes the possibility of using state-of-the-art Model-Checking (MC) tools to check temporal constraints between separate event occurrences. In our proposal, we transform/interpret an original BPMN model into an equivalent executable formal model called the Business Process Task Model (BPTM). Behavioural aspects and temporal constraints of BPMN models are formally specified and verified using the CSP+T formal specification language. The Formal Compositional Verification Approach (FCVA) [3,4], applicable to BPTMs, can then be used to model-check the correctness of any BPTM so as to determine the satisfaction of temporal BP properties.
Some approaches for providing BPMN with a formal semantics have been proposed in the last few years. The first approach to give a formal semantics to a fundamental set of BPMN was defined in [5] by mapping BPMN into Petri Nets (PN). Two automata-based translation algorithms were presented in [6] as part of a framework to analyze and verify properties of BPMN diagrams translated into the Business Process Execution Language (BPEL) format. These algorithms carry out the transformation from BPEL into guarded automata, and then from the latter into Promela. To the best of our knowledge, the only other attempt to define a comprehensive formal semantics of BPMN is [2], which uses CSP as the target formal model to establish a relative-time semantics of the execution of BPs. Using CSP yields the possibility of checking the correctness of the obtained specifications with the FDR2 MC tool. A characteristic of BPMN is that it shares several features with BPEL, but according to [5] the types of verification problems for BPMN and BPEL are different. In contrast to the aforementioned proposals, we take into account the temporal constraints inherent to timed BPMN entities, which actually restrict the execution of BPs. Our proposal is based on the analysis of timed BPMN elements in order to ensure that temporal properties are correctly modelled. We show a way to obtain formal specifications (i.e., a BPTM) from semi-formal notations widely used in BPM (e.g., BPMN). This is part of a verification approach (FCVA) that allows the automatic verification of a substantial subset of BPTMs by taking advantage of the strengths of the CSP process calculus.
The remainder of this paper is structured as follows. In the next section we present an introduction to CSP+T. We then give a brief description of BPMN and describe the time semantics proposed for some BPMN notational entities using CSP+T. Next, we provide an overview of our FCVA for BPTM verification. Finally, we apply our proposal to a BPM related to the Customer Relationship Management (CRM) business. In the last section we discuss our conclusions and future work.
2 CSP+T

CSP+T [7] is a formal specification language for describing timed deterministic processes. It extends Communicating Sequential Processes (CSP) in order to allow the description of complex event timings from within a single sequential process, which is of use in the behavioural specification of concurrent systems. CSP+T is a superset of CSP, the major difference from the latter being that the traces of events are now pairs (e, t)
denoted as t.e, where t is the global absolute time at which event e is observed. The operators included in CSP+T related to timing and enabling intervals are: (a) the special process instantiation event, denoted by the star (★); (b) the time capture operator, associated with the time-stamp function v_e = s(e), which allows the occurrence time of an event e (marker event) to be stored in a variable v (marker variable) when it occurs; and (c) the event-enabling interval I(T, t1).e, which represents timed refinements of the untimed system behaviour and facilitates the specification and proof of temporal system properties [7]. CSP-based MC tools take a process (representing the system implementation) and automatically check whether the process fulfils the system specification.
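As a small illustrative term (the timing values below are chosen arbitrarily and do not come from the case study), the three operators can be combined as follows, writing ↣ for the time capture operator:

    P = ★ ↣ v → I(4, v + 2).a → SKIP

Here the occurrence time of the instantiation event ★ is stored in the marker variable v, and the event a is then only enabled within the interval [v + 2, v + 6], i.e., between 2 and 6 time units after instantiation.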
3 BPMN to CSP+T Transformation Rules

The main goal of BPMN is to provide a notation that is readily understandable by all business users. BPMN specifies a single diagram, called the Business Process Diagram (BPD), which represents a scenario of a business model. According to [1], a BPD specification must be both (1) easy to use and understand and (2) expressive enough to model very complex BPs. Additionally, BPD models are usually built to be run with the appropriate transformation rules and business execution languages. Representing a BP flow simply consists of depicting the events that occur to start the BP, the activities and tasks that follow as a consequence and, finally, the outcome of the BP flow. Business decisions and flow branching are modelled using gateways. A pool typically represents an organization or business entity. Lanes typically represent a department, a business worker within the organization, or other less specific concepts such as functions, applications, and systems. During BPM, pools can be further partitioned into lane business entities. Thus both pools and lanes represent business process participants [1].
Our BPMN formalization proposal takes as its starting point the semantics for the BPMN analysis entities given in [2], combined with CSP+T operators. The time capture operator (↣) and the event-enabling interval I(T, t).a (or [t, t + T].a) of CSP+T are therefore used to give a syntactical interpretation to response times. Thereby, we can define the maximum times by which processes must have finished the execution of a task in order to meet the temporal constraints required by business participants in a BP. In this way, we allow the time span of each business process in the model to be controlled. Consequently, a more precise and complete semantics is obtained than the one in the local and global BPD diagrams that represent business collaboration.
The analysis of timed BPMN notational entities is shown in Fig. 1. We define a direct map from the limits of activity entities into the maximum (ran.max) and minimum (ran.min) duration times, respectively. These are established as part of the attributes of any activity. Syntactically, according to BPMN, we denote the times at which the invocation events εx occur as tx, and the minimum and maximum duration ranges of bpmnSx activities as Sx.ran.min and Sx.ran.max, respectively. The start-time and intermediate-time events are denoted stimeTime and itimeTime, respectively, where Time is their timer parameter (1). We call ENT the set of all the types of BPMN modelling entities defined in [2].
(1) Sx.ran.min, Sx.ran.max, Time ∈ N* (i.e., natural numbers without zero).
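As a small numeric illustration of the duration ranges and enabling intervals just introduced (the numbers are invented; the same pattern appears in the transformation rules given below): if an activity bpmnS1 is invoked at time v_S1 = 10, with S1.ran.min = 3 and S1.ran.max = 8, then the invocation event of its successor is enabled by

    I(S1.ran.max - S1.ran.min, v_S1 + S1.ran.min) = I(5, 13)

that is, it may only occur in the interval [13, 18], i.e., between v_S1 + 3 and v_S1 + 8.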
Fig. 1. Timing graphical analysis of some BPMN notational elements
3.1 CSP+T Structured Operational Semantics

In order to facilitate the implementation and further automation of the transformation rules, we define them according to the Structural Operational Semantics (SOS) introduced by Plotkin in 1981 [8]. SOS is usually used to define the semantics of formal specification languages based on process calculi and of specific programming languages. SOS is compositional, because it allows the semantics of complex process terms to be defined from simpler ones. Each transformation rule follows the pattern

    (event/communication/execution step)   premises (conditions)
                                           ---------------------
                                                 conclusion

and can be understood as a transformation between two syntactical terms that occurs as a consequence of a communication between concurrent processes, or of an execution step or event occurrence in a sequential process. Thus, each rule defines the premises of the BPMN entity to be transformed and the conditions that must be satisfied before transforming the referred entity into the syntactical CSP+T process term indicated in the conclusion of the rule.
In Fig. 1 (a) the start event of BPMN is (graphically) shown, which represents the necessary instantiation of a BP object prior to the start of its execution. In CSP+T, its specification is performed by means of the instantiation event ★. When this event arises, its occurrence instant (s(★)) is stored in the v marker variable.
P (start) =( v → SKIP P (start)) (end → SKIP ) v = s(); SKIP P (start)end → SKIP
start; s(); S1 ∈ EN T \{start}
Let the activity bpmnS1 be the one that precedes activity bpmnS2, according to the flow shown in Fig. 1 (b). According to the BPMN semantics that we are proposing, the start of the bpmnS2 execution (i.e., the occurrence of event S2 ) depends on the ending of activity bpmnS1, which must occur within the time span of activity bpmnS1, given by the range S1.ran.min to S1.ran.max. In its turn, the measurement of ranges S1.ran.min and S1.ran.max depends on the occurrence of event S1 . Thus, we must ensure that the event S2 will occur on time, i.e., within the interval A = [S1.ran.min, S1.ran.max] related to the occurrence time tS1 of S1 , stored in vS1 , at which the bpmnS1 was invoked (i.e., the occurrence of event S1 ). In CSP+T the process term that specifies the expected behaviour is obtained by the following transformation rule: P (S1) =(S1 vS1 → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S2 → SKIP P (S1))(end → SKIP ) bpmnS1; s(S1 ); 1. S1 occurrence) S1 ∈ EN T \{start} vS1 = s(S1 ); SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S2 → SKIP P (S1)(end → SKIP )
2. S2 occurrence)
I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S2 → SKIP P (S1)(end → SKIP ) s(S2 ) ∈ A; S2 ∈ EN T \{start} s(S2 ); SKIP P (S1)(end → SKIP )
The BPMN timer start (i.e., stimeT ime) event establishes that the modelling entity S1 BPMN, which precedes the sequence flow, must begin T time units after the occurrence of the instantiation event. Then, according to the schema shown in Fig. 1 (c), the CSP+T process terms that specify these behaviours are obtained by applying the following transformation rule: 1. occurrence)
P (stime) =( v → SKIP I(T ime, v ) → SKIP S1 → SKIP P (stime))(end → SKIP ) ( stimeT ime; s() ) v = s(); SKIP I(T ime, v ) → SKIP S1 → SKIP P (stime)(end → SKIP )
I(T ime, v ) → SKIP S1 → SKIP P (stime)(end → SKIP ) ( s(τ ) < v + T ime ) s(τ ); SKIP S1 → SKIP P (stime)(end → SKIP ) → SKIP P (stime)(end → SKIP ) s(S1 ) = v +T ime; occurrence) S1 s(S1 ); SKIP P (stime)(end → SKIP ) S1 ∈ EN T \{start}
2. I(T ime, v ) timeout)
3. S1
The timed intermediate event (i.e., itimeT ime) specifies the delay in the BPMN modelling entity invocation which precedes the sequence flow. Then, according to the schema shown in Fig. 1 (d), the S2 BPMN modelling entity must begin T time units after the occurrence of it event. The CSP+T process term that specifies this behaviour is obtained by the transformation rule: 1. it occurrence)
P (itime) =(it vit → SKIP I(T ime, vit ) → SKIP S2 → SKIP P (itime))(end → SKIP ) ( itimeT ime; s(it ) ) vit = s(it ); SKIP I(T ime, vit ) → SKIP S2 → SKIP P (itime)(end → SKIP )
2. I(T ime, vit ) timeout)
3. S2 occurrence)
I(T ime, vit ) → SKIP S2 → SKIP P (itime)(end → SKIP ) ( s(τ ) < vit + T ime ) s(τ ); SKIP S2 → SKIP P (itime)(end → SKIP )
S2 → SKIP P (itime)(end → SKIP ) s(S2 ) = vit + T ime; s(S2 ); SKIP P (itime)(end → SKIP ) S2 ∈ ENT \{start}
According to Fig. 1 (e), the execution of the activity bpmnS1 can be interrupted (i.e., the occurrence of the event exc ) at any time after its inception (i.e., the occurrence of the event S1 ) and until its total duration is complete(i.e., within the [vS1 , S1.ran.max] time interval). The transformation rule for the specification of a behaviour that includes a timed exception flow consists of two alternative behaviours. The first is given by the transformation rule for a typical BPMN activity (see the rule for a bpmnS1 activity) and the second describes the exception flow in accordance with the following structure: P (S1) =(S1 vS1 → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S2 → (SKIP I(T ime, vS1 ) → exc → SKIP S3 → SKIP ) P (S1))(end → SKIP ) bpmnS1; 1. S1 occurrence) s(S1 ) vS1 = s(S1 ); SKIP I(S1.ran.max − S1.ran.min, v + S1.ran.min). S1 S2 → (SKIP I(T ime, vS1 ) → SKIP exc → SKIP S3 → SKIP ) P (S1)(end → SKIP )
Finally, in the case of message flows, represented in Fig. 1(f), the activity bpmnS1 sends the message m1, which is received by the activity bpmnS2. The activity bpmnS2 then responds by sending the message m2. This will also only be possible within the time interval represented by the shaded area in Fig. 1 (f), which corresponds to the intersection of the time intervals [vS1 , S1.ran.min] and [vS2 , S2.ran.min]. The transformation rule for process terms that include collaboration between two participants P ool1 and P ool2 is structured according to the following (M I = min{S1.ran.min, S2.ran.min}, M A = max{vS1 , vS2 }, and B = [S2.ran.min, S2.ran.max]):
1.
S1 occurrence
AND
S2 occurrence
P (S1) =(S1 vS1 → SKIP I(M I, M A).m1 !x → SKIP I(M I, M A).m2 ?y → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S3 → SKIP P (S1))(end.1 → SKIP ) P (S2) =(S2 vS2 → SKIP I(M I, M A).m1 ?x → SKIP I(M I, M A).m2 !y → SKIP I(S2.ran.max − S2.ran.min, vS2 + S2.ran.min).S4 → SKIP P (S2))(end.2 → SKIP ) vS1 = s(S1 ); SKIP I(M I, M A).m1 !x → SKIP I(M I, M A).m2 ?y → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S3 → SKIP P (S1)(end.1 → SKIP )
⎛
bpmnS1;
⎜ s(S1 ) ⎜ AND ⎜ ⎝ bpmnS2; s(S2 )
AND
vS2 = s(S2 ); SKIP I(M I, M A).m1 ?x → SKIP I(M I, M A).m2 !y → SKIP I(S2.ran.max − S2.ran.min, vS2 + S2.ran.min).S4 → SKIP P (S2)(end.2 → SKIP )
I(M I, M A).m1 !x → SKIP I(M I, M A).m2 ?y → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S3 → SKIP P (S1)(end.1 → SKIP )
AND
I(M I, M A).m1 ?x → SKIP I(M I, M A).m2 !y → SKIP I(S2.ran.max − S2.ran.min, vS2 + S2.ran.min).S4 → SKIP P (S2)(end.2 → SKIP ) 2. m1 occurrence) (s(m1 ) ∈ [M A, M I]) s(m1 ); SKIP I(M I, M A).m2 ?y → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S3 → SKIP P (S1)(end.1 → SKIP )
AND
s(m1 ); SKIP I(M I, M A).m2 !y → SKIP I(S2.ran.max − S2.ran.min, vS2 + S2.ran.min).S4 → SKIP P (S2)(end.2 → SKIP )
⎞ ⎟ ⎟ ⎟ ⎠
I(M I, M A).m2 ?y → SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S3 → SKIP P (S1)(end.1 → SKIP )
AND
I(M I, M A).m2 !y → SKIP I(S2.ran.max − S2.ran.min, vS2 + S2.ran.min).S4 → SKIP P (S2)(end.2 → SKIP ) 3. m2 occurrence) (s(m2 ) ∈ [M A, M I]) s(m2 ); SKIP I(S1.ran.max − S1.ran.min, vS1 + S1.ran.min).S3 → SKIP P (S1)(end.1 → SKIP )
AND ⎛ s( S3 ) ∈ A; I(S2.ran.max − S2.ran.min, vS2 + S2.ran.min).S4 ⎜ S3 ∈ EN T \{start} → SKIP P (S2))(end.2 → SKIP ) ⎜ AND ⎝ s(S3 ); SKIP P (S1)(end.1 → SKIP )
AND
s(S4 ); SKIP P (S2)(end.2 → SKIP )
s(S4 ) ∈ B; S4 ∈ EN T \{start}
⎞ ⎟ ⎟ ⎠
4 FCVA Instantiation for BPTM Verification

By applying our transformation rules we can transform an original BPMN model into an equivalent executable BPTM, i.e., behavioural aspects and temporal constraints of BPMN models are now formally specified using constructs of the CSP+T formal specification language. As a result, we obtain a set of detailed CSP+T process terms, to which the BPTM conforms, and which completely describe the temporal behaviour of the BP represented by the BPD. We can then check the correctness of the BPTM by using an MC tool with respect to formally specified properties of the BPTM, derived from the business rules and goals usually given by business experts. Fig. 2 gives a graphical summary of our approach, showing the integration of MC concepts with our timed semantics proposal for BPMN and the phases to be followed in order to carry out the complete verification of a BPTM.
BPTM Modelling. Firstly, to obtain the complete behavioural description of the BPTM interpreted into CSP+T syntactical terms, we apply the transformation rules defined in Section 3. Thus, the complete description of the BPTM behaviour, modelled by the CSP+T process term T(BPD), is interpreted as the parallel composition of a set of CSP+T process terms T(Pooli) (i.e., T(BPD) = ∥ i:1..n T(Pooli)).
BPTM Behaviour Specification. Secondly, the requirements and temporal constraints that a given BPTM must fulfil are specified in Clocked Computation Tree Logic (CCTL) (2) [9]. In this way, we define the properties to be checked on the BPTM behaviour. Afterwards, these properties are described by the CSP+T process terms T(φi), T(ψi), T(¬δ). As a result, we obtain a set of detailed CSP+T process terms that specify and deal with the behavioural aspects and temporal constraints by which a BPTM must abide in order to be of use. Thus, the BPMN notational elements involved in the BPTM realization
(2) CCTL is a temporal logic extending Computation Tree Logic (CTL) with quantitative bounded temporal operators. See [9] for more details.
Fig. 2. Our verification proposal
now receive a temporal interpretation and representation. Since we express the system properties in the same specification language as the system model, another benefit derived from our proposal is that we can perform MC in the same specification language without the need to transform either the property or the BPTM any further. BPTM Verification. Once the BPTM has been obtained, we can proceed to verify it according to the rules of CSP–based process calculus. By using CSP–based MC tools we can model check the local BPs corresponding to the Pools within the BPD against the set of process terms which represent the properties (i.e., the expected behaviour) that the BPTM must accomplish. Finally, through the application of the Theorem 1 (BPTM compositional verification) (defined below), we obtain the complete verification of the BPTM behaviour (i.e., T (BP D) (φ ∧ ψ ∧ ¬δ)) that corresponds to the global BP BPD, according to relation (1). Theorem 1 (BPTM Compositional Verification3). Let the global BP BPD be structured into several business participants P ooli working in parallel, BP D = i:1..n P ooli . For a set of process terms T (P ooli ) describing the behaviour participants
of business properties φi , invariants ψi , and deadlock δ, with i:1..n Σi = ∅, i:1..n Ωi = P ooli , ∅, and i:1..n L(T (P ooli )) = ∅, the following condition holds: T (BP D) (φ ∧ ψ ∧ ¬δ) ⇔ T (P ooli ) (φi ∧ ψi ) ∧ ¬δ, (1) i:1..n
i:1..n
where T (BP D) = i:1..n T (P ooli ). Proof. We assume that the behaviour of a BP participant P ooli can be described by the T BA(P ooli ), and the BPTM behaviour by the T BA(BP D), respectively4. Moreover, we can associate a process term T (P ooli ) and T (BP D) of some process calculus (i.e., CSP, CCS, ACP, etc.) that implements the behaviour of the participant P ooli and the BPTM BP D, respectively, so that: 3
4
Σi , Ωi , and L(T (P ooli )), represents the set of input and output signals, and labelling, respectively, of the process T (P ooli ). We need to use Timed B¨uchi Automaton (TBA) to carry out the demonstration in order to abstract the definition details of the concurrency operator () from their semantic definition in each particular process calculus notation.
T (P ooli ) |= (φi ∧ ψi ) ∧ ¬δ ⇒ T BA(P ooli ) |= (φi ∧ ψi ) ∧ ¬δ, and T (BP D) |= φ ∧ ψ ∧ ¬δ ⇒ T BA(BP D) |= φ ∧ ψ ∧ ¬δ . By the process calculus parallel composition operator, the following assertion must be satisfied: T (BP D) = T (P ooli ) . i:1..n
If the T BA(P ooli ) fulfills,
(1) i:1..n Σi = ∅ and i:1..n Ωi = ∅, and (2) i:1..n L(T BA(P ooli )) = ∅, we can affirm that T BA(P ooli ) are “composable” and we can write: T BA(P ooli ) |= (φi ∧ ψi ) ∧ ¬δ i:1..n
i:1..n
Given that thesatisfaction assertion is closed with respect to the conjunction operator, we conclude i:1..n φi ⇒ φ, i:1..n ψi ⇒ ψ, and therefore: T (BP D) |= φ ∧ ψ ∧ ¬δ ⇔ T (P ooli ) |= (φi ∧ ψi ) ∧ ¬δ . (2)
i:1..n
i:1..n
The sufficient condition, i.e., φ ⇒ i:1..n φi , ψ ⇒ i:1..n ψi , can be proved by considering that even in the case where all BP participants P ooli presented a different behaviour, φ and ψ could be defined as the conjunction of every BP participant’s properties (φi , ψi ).
As a final remark, our proposal can be adapted to other BPM languages and standards by defining the corresponding transformation rules that translate the BPTM properties and models into CSP+T formal language constructs. See [4] for the instantiation of our BPTM verification approach to a BPTM derived from BPs modelled with BPM Unified Modelling Language (UML) stereotypes.
5 Example of Application

In order to demonstrate the applicability of our proposal, it was applied to an enterprise BPM project related to the CRM business. We show an example of the application of the timed semantics proposed for BPMN, which focuses on the verification of a CRM BP. We chose to work with the Product/Service Sell BP, due to its importance to the CRM strategy. The information required to allow a formal reasoning of the CRM participant collaboration to be carried out is displayed in the Product/Service Sell BPD shown in Fig. 3, which allows a Company to perform the activities associated with selling a Product/Service requested by a Customer. As shown in Fig. 3, the BP requires close collaboration between the participants in order to achieve its execution, which implies synchronizing the activities involved in message flows between participants.
Fig. 3. BPD of the Product/service Sell BP
5.1 BPTM Definition and Description To obtain the specification of the Product/Service Sell BPD in CSP+T, according to the transformation rules presented in section 3, we define the sets CU and CO for indexing the processes mapped to the modelling entities of Customer (i.e., Cus) and Company (i.e., Com) participants (see Fig. 3), indicated below: CU ={start.1, cu s1, cu s2, cu s3, cu s4, cu s5, cu s6, xgate.1, end.1, abort.1} CO ={start.2, co s1, co s2, co s21, co s3, co s4, co s5, co s6, co s7, co s8, agate.1, agate.2, end.2, abort.2} Cus =let X =i : (αY \{f in.1, abt.1}) • (i → Xf in.1 → SKIP abt.1 → ST OP ) Y =(i : CU • αP (i) ◦ P (i)) within(Y |[αY ]|X)\{|init.Cus|} Com = let Z =j : (αR\{f in.2, abt.2}) • (j → Zf in.2 → SKIP abt.2 → ST OP ) R =(j : CO • αP (j) ◦ P (j)) within(R|[αR]|Z)\{|init.Com|}
where for each i ∈ CU and j ∈ CO, processes P (i) and P (j), respectively, are defined next. We use n ∈ N to denote the number of Product/Service information request (cu s2) Activity instances. We will only present some of the processes that make up the Cus and Com, mainly to illustrate the application of the proposed semantics. P (start.1) = (t0. → init.Cus.cu s1 → SKIP )f in.1 → SKIP P (cu s2) = let A(n) = n > 0 & (init.Cus.cu s2 → SKIP starts.Cus.cu s2 → SKIP msg.cu s2!x : {in, last} → SKIP msg.cu s2.out → SKIP init.Cus.xgate.1 → SKIP A(n − 1))init.Cus.xgate.1 → SKIP X(n) = (init.Cus.cu s2 → SKIP init.Cus.xgate.1 → SKIP ) (n > 1 & (init.Cus.cu s2 → (msg.cu s2.in → X(n − 1) msg.cu s2.last → init.Cus.xgate.1 → SKIP )) n = 1 & (init.Cus.cu s2 → msg.cu s2.last → init.Cus.xgate.1 → SKIP ) n = N & msg.cu s2.end → init.Cus.xgate.1 → SKIP ) within((A(n)|[SynSet]|X(n)) P (cu s2))f in.1 → SKIP SynSet = {msg.cu s2.in, msg.cu s2.last, init.Cus.cu s2, init.Cus.xgate.1} P (cancel) = (init.Cus.cancel → SKIP msg.cancel!x : {can} → SKIP msg.cancel.out → SKIP init.Cus.abort.1 → SKIP P (cancel))f in.1 → SKIP P (abort.1) = (init.Cus.abort.1 → SKIP abt.1 → ST OP )f in.1 → SKIP
L.E. Mendoza Morales, M.I. Capel Tu˜no´ n, and M.A. P´erez P (co s2) = (init.Com.co s2 vs2 → SKIP msg.co s2!x : {in, last} → SKIP msg.co s2.out → SKIP starts.Com.co s2 → SKIP (SubCom|[{end.3}]|end.3 → I(86400 − 64800, vs2 + 64800).init.Com.co s3 → SKIP ) |[{init.Com.co s3}]| I(86400 − 64800, vs2 + 64800).init.Com.co s3 → SKIP ) P (co s2)) f in.2 → SKIP P (end.2) = init.Com.end.2 → SKIP f in.2 → SKIP
Finally, the collaboration between the participants Customer and Company is the parallel composition of processes Cus and Com, as it is denoted by the P SS CSP+T process term, to which the BPTM of the Product/Service Sell BP to be verified conforms: P SS = (Cus|[αCusαCom]|Com)\{|msg|}
Therefore, the set of processes previously described (Cus, Com, PSS) conform to the BPTM of the Product/Service Sell BP expressed in CSP+T. In this sense, this BPTM is the one proposed for verification with respect to the properties specified in CCTL below. 5.2 Properties Definition In order to demonstrate the application of our proposal, we will work with the following property, which concerns the obligation of receiving and obtaining the Product/Service delivery confirmation, once the Customer has initiated the communication with the Company. As we proceed with the verification of the BPTM behaviour (denoted above as P SS), starting from the sub–processes that structure it (i.e., Cus and Com), and applying our compositional verification approach, then we must define the properties that each participant in the BP must fulfil. This shows the expected execution sequence of BPMN notational elements when they execute the partial processes for which they are responsible. The participants must execute all their activities in the workflow in order for the global process (BP) to function. In order to compositionally verify the behaviour of the entire P SS, we must verify the partial properties that concern the sub–processes Com and Cus. φCom =AG[a,b] (Start.2 → A[co s1 U[a+1,b−8] (co s2 ∧ A[cu s2 U[a+2,b−7] (co s3 ∧ A[co s3 U[a+3,b−6] (agate.1 ∧ A[agate.1 U[a+4,b−5] ({co s5 ∨ co s6} ∧ A[{co s5 ∨ co s6} U[a+6,b−3] (agate.2 ∧ A[agate.2 U[a+7,b−2] (co s7 ∧ A[co s7 U[a+8,b−1] (co s8 ∧ A[co s8 U[a+9,b] End.2])])])])])])])]) φCus =AG[a,b] (Start.1 → A[cu s1 U[a+1,b−5] (cu s2 ∧ A[cu s2 U[a+2,b−4] (xgate.1 ∧ A[xgate.1 U[a+3,b−3] (cu s4 ∧ A[cu s4 U[a+4,b−2] (cu s5 ∧ A[cu s5 U[a+5,b−1] (cu s6 ∧ A[cu s6 U[a+6,b] End.1])])])])])])
According to the CSP-based process calculus, the expected behaviour must be expressed in terms of the event sequence observed as a result of a BPTM run. We must therefore interpret the previous property according to the expected sequence of events that the Product/Service Sell BP must show in order to be verified. The operational interpretation of the CCTL formulas, previously specified according to the process calculus
CSP+T, is given by the processes T(φCom) and T(φCus) presented below, which describe the expected behaviour of the participants that realize the BPTM.

T(φCom) = t0. → T(Start.2)
T(Start.2) = I((b − 9) − a, a).init.Com.co s1 → T(co s1)
T(co s1) = I((b − 8) − (a + 1), a + 1).init.Com.co s2 → T(co s2)
T(co s2) = I((b − 7) − (a + 2), a + 2).init.Com.co s3 → T(co s3)
T(co s3) = I((b − 6) − (a + 3), a + 3).init.Com.agate.1 → T(agate.1)
T(agate.1) = (I((b − 5) − (a + 4), a + 4).init.Com.co s5 → T(co s5))  (I((b − 5) − (a + 4), a + 4).init.Com.co s6 → T(co s6))
T(co s5) = (I((b − 4) − (a + 5), a + 5).init.Com.co s6 → T(co s6))  (I((b − 3) − (a + 6), a + 6).init.Com.agate.2 → T(agate.2))
T(co s6) = (I((b − 4) − (a + 5), a + 5).init.Com.co s5 → T(co s5))  (I((b − 3) − (a + 6), a + 6).init.Com.agate.2 → T(agate.2))
T(agate.2) = I((b − 2) − (a + 7), a + 7).init.Com.co s7 → T(co s7)
T(co s7) = I((b − 1) − (a + 8), a + 8).init.Com.co s8 → T(co s8)
T(co s8) = I(b − (a + 9), a + 9).init.Com.end.2 → T(End.2)
T(End.2) = SKIP  T(φCom)

T(φCus) = t0. → T(Start.1)
T(Start.1) = I((b − 6) − a, a).init.Cus.cu s1 → T(cu s1)
T(cu s1) = I((b − 5) − (a + 1), a + 1).init.Cus.cu s2 → T(cu s2)
T(cu s2) = I((b − 4) − (a + 2), a + 2).init.Cus.xgate.1 → T(xgate.1)
T(xgate.1) = I((b − 3) − (a + 3), a + 3).init.Cus.cu s4 → T(cu s4)
T(cu s4) = I((b − 2) − (a + 4), a + 4).init.Cus.cu s5 → T(cu s5)
T(cu s5) = I((b − 1) − (a + 5), a + 5).init.Cus.cu s6 → T(cu s6)
T(cu s6) = I(b − (a + 6), a + 6).init.Cus.end.1 → T(End.1)
T(End.1) = SKIP  T(φCus)
5.3 Verifying the Collaboration

Once the set of CSP+T process terms that represent the BPTM, as well as the properties it has to fulfil, have been obtained, we can start the verification of the BPTM. According to our approach, we must verify that the processes representing the behaviour of the participants in the BPTM (i.e., Cus and Com) fulfil the properties specified in section 5.2. Then, according to the semantic domain to which the CSP calculus belongs, it can be ascertained that the following refinement relations are fulfilled:

T(φCus) ⊑T Cus,  T(φCom) ⊑T Com,  T(φCus) ⊑F Cus,  T(φCom) ⊑F Com    (3)
To verify the above relations, we work within the semantic model of CSP without temporal operators since, as pointed out in [10], the untimed safety and liveness properties of a timed system should be verifiable in the untimed model and should later be used in the timed analysis. Furthermore, this allows us to use the FDR2 MC tool to carry out the verification of the processes that represent the participants in the BP example model. Accordingly, we present the CSP process terms UT(φCom) and UT(φCus), which correspond to the expected untimed behaviour of the untimed processes UT(Com) and UT(Cus), respectively, of the participants Company and Customer:
UT(φCom) = UT(Start.2)
UT(Start.2) = init.Com.co s1 → UT(co s1)
UT(co s1) = init.Com.co s2 → UT(co s2)
UT(co s2) = init.Com.co s3 → UT(co s3)
UT(co s3) = init.Com.agate.1 → UT(agate.1)
UT(agate.1) = (init.Com.co s5 → UT(co s5))  (init.Com.co s6 → UT(co s6))
UT(co s5) = (init.Com.co s6 → UT(co s6))  (init.Com.agate.2 → UT(agate.2))
UT(co s6) = (init.Com.co s5 → UT(co s5))  (init.Com.agate.2 → UT(agate.2))
UT(agate.2) = init.Com.co s7 → UT(co s7)
UT(co s7) = init.Com.co s8 → UT(co s8)
UT(co s8) = init.Com.end.2 → UT(End.2)
UT(End.2) = SKIP  UT(φCom)

UT(φCus) = UT(Start.1)
UT(Start.1) = init.Cus.cu s1 → UT(cu s1)
UT(cu s1) = init.Cus.cu s2 → UT(cu s2)
UT(cu s2) = init.Cus.xgate.1 → UT(xgate.1)
UT(xgate.1) = init.Cus.cu s4 → UT(cu s4)
UT(cu s4) = init.Cus.cu s5 → UT(cu s5)
UT(cu s5) = init.Cus.cu s6 → UT(cu s6)
UT(cu s6) = init.Cus.end.1 → UT(End.1)
UT(End.1) = SKIP  UT(φCus)
According to the timewise refinement concept [10], the description of an untimed process puts constraints on the ordering and ultimate availability of events, and allows all timed behaviours that are consistent with its description. In this sense, we can write the following relations:

T(φCus) ⊑T Cus ⇒ UT(φCus) ⊑T UT(Cus),
T(φCom) ⊑T Com ⇒ UT(φCom) ⊑T UT(Com),
T(φCus) ⊑F Cus ⇒ UT(φCus) ⊑F UT(Cus),
T(φCom) ⊑F Com ⇒ UT(φCom) ⊑F UT(Com),
which establish that the verification of the untimed CSP terms is a necessary condition for the verification of the timed CSP+T terms. This allows us to check the timed component behaviour on the basis of the event sequences admitted by the untimed CSP model. The event sequences that do not correspond to the correct order of events, resulting from the aggregation of the timing constraints of the timed CSP+T model, are excluded from the analysis. Thus, the behaviour of the participants Customer and Company, specified in CSP above, is verified with respect to the semantic domains of traces and failures, which allow demonstrations of safety and liveness properties to be conducted. Consequently, we consider that behavioural verification of the constituent participants of the BPTM can be performed using the FDR2 MC tool, since we are working with a process algebra based on CSP. Local verification of each untimed participant model, COMPANY (UT(Com)) and CUSTOMER (UT(Cus)), is shown in the FDR2 screenshot in Fig. 4 for the BPTM Product/Service Sell of the BP example. Thus, the complete BPTM satisfies the untimed expected behaviour of COMP (UT(Com)) and
Fig. 4. FDR2 screenshot
CUST (UT(Cus)), respectively (the “check ok” marks can be seen in rows one and two). We can therefore conclude that the behaviour of the Cus and Com process terms is correct, i.e., all the timed behaviour of the CSP+T process terms in the example model is consistent with their description. Thus, the relations in (3) are true. According to assertion (1) (see Theorem 1), to prove the correctness of the BPTM of the Product/Service Sell BP with respect to its expected behaviour, it must be demonstrated that:

PSS |= φPSS ⇔ (Cus |[αCus αCom]| Com) \ {|msg|} |= φCus ∧ φCom.
We have previously verified with FDR2 that: Cus |= φCus and Com |= φCom .
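To give a flavour of how such checks are posed to FDR2, the following fragment sketches, in machine-readable CSP (CSPM), the form an expected-behaviour term and the corresponding refinement assertions take. The channel and process names are illustrative placeholders and do not reproduce the authors' actual script; only the assertion syntax ([T= for traces, [F= for failures) is standard FDR2 usage.

-- Illustrative CSPM sketch (placeholder events; not the script used in the case study)
channel init_Cus_cu_s1, init_Cus_cu_s2, init_Cus_xgate_1, init_Cus_end_1

-- Expected untimed behaviour of the Customer participant (abridged, illustrative)
UT_phiCus = init_Cus_cu_s1 -> init_Cus_cu_s2 -> init_Cus_xgate_1 -> init_Cus_end_1 -> UT_phiCus

-- Stand-in for the untimed CSP term derived from the Customer's BPTM
UT_Cus = init_Cus_cu_s1 -> init_Cus_cu_s2 -> init_Cus_xgate_1 -> init_Cus_end_1 -> UT_Cus

-- Trace refinement check: every trace of UT_Cus is allowed by UT_phiCus (safety)
assert UT_phiCus [T= UT_Cus
-- Failures refinement check: used for the liveness part of the property
assert UT_phiCus [F= UT_Cus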
We must now determine whether the local BPs Cus and Com are “composable”. Thus, we must verify that they fulfil the following two conditions: 1. The input signals (ΣCus and ΣCom) and the output signals (ΩCus and ΩCom) of both local BPs are disjoint, which can be seen below: ΣCus ∩ ΣCom = ∅
2. The labelling sets of both components, L(Cus) and L(Com), are disjoint, which can also be verified as follows:

L(Cus) ∩ L(Com) = ∅
L(Cus) = {start.1, cu s1, cu s2, cu s3, cu s4, cu s5, cu s6, xgate.1, end.1, abort.1}
L(Com) = {start.2, co s1, co s2, co s21, co s3, co s4, co s5, co s6, co s7, co s8, agate.1, agate.2, end.2, abort.2}
(6)
Having verified that assertions (4), (5), and (6) are true, we conclude that Cus and Com are “composable” by Theorem 1 (see section 4). Given that the following satisfaction relation holds:

(Cus |[αCus αCom]| Com) \ {|msg|} |= φCus ∧ φCom

and because the following identities hold:

PSS = (Cus |[αCus αCom]| Com) \ {|msg|}  and  φPSS = φCus ∧ φCom,

we can conclude that the entire process (PSS) satisfies the conjunction of the properties specified above:

PSS |= φPSS
Finally, we have obtained the verification of a BPTM corresponding to the Product/Service Sell BP from its verified local BPs, Customer and Company.
6 Conclusions

In this paper we have presented a timed semantics of BPMN defined in terms of the CSP+T formal specification language, which extends the BPMN elements with timing constraints in order to detail the behaviour that they represent. We have also shown how BPTM verification can be carried out from independently verified local BPs, and how the transformation rules are integrated into the proposed FVCA. We have shown the usefulness of our transformation rules for specifying and developing the BPTM of a real-life business process related to the CRM enterprise business. It has thus been demonstrated that adding a formal specification of the response times of activities, of the temporal constraints referring to communication and collaboration, and of the valid time span to capture exception flows can be a valid path towards the automatic checking of BP models. As a consequence, the complete BPTM developed from its core participants has been proved correct by means of the formal language CSP+T and the use of the FDR2 MC tool, which allows local verification of CSP+T syntactic terms. Future and ongoing work will focus on the refinement of the transformation rules, their automation through the development of the BPMNtoCSPT tool, and the application of our proposal to the verification of other BPTMs derived from real-life BPs.
References
1. OMG: Business Process Modeling Notation – version 1.2. Object Management Group, Massachusetts, USA (2009)
2. Wong, P., Gibbons, J.: A relative timed semantics for BPMN. Electron. Notes Theor. Comput. Sci. 229(2), 59–75 (2009)
3. Mendoza, L., Capel, M., Pérez, M., Benghazi, K.: Compositional Model-Checking Verification of Critical Systems. In: Filipe, J., Cordeiro, J. (eds.) Enterprise Information Systems. LNBIP, vol. 19, pp. 213–225. Springer, Heidelberg (2009)
4. Capel, M., Mendoza, L., Benghazi, K.: Automatic verification of business process integrity. Int. J. Simulation and Process Modelling 4(3/4), 167–182 (2008)
5. Dijkman, R., Dumas, M., Ouyang, C.: Semantics and analysis of business process models in BPMN. Inf. Softw. Technol. 50(12), 1281–1294 (2008)
6. Fu, X., Bultan, T., Su, J.: Analysis of interacting BPEL web services. In: WWW 2004: Proceedings of the 13th International Conference on World Wide Web (2004)
7. Žic, J.: Time-constrained buffer specifications in CSP+T and Timed CSP. ACM Transactions on Programming Languages and Systems 16(6), 1661–1674 (1994)
8. Plotkin, G.: The origins of structural operational semantics. Journal of Logic and Algebraic Programming 60-61, 3–15 (2004)
9. Ruf, J., Kropf, T.: Symbolic model checking for a discrete clocked temporal logic with intervals. In: Proceedings of the IFIP WG 10.5 International Conference on Correct Hardware Design and Verification Methods (1997)
10. Schneider, S.: Concurrent and Real-Time Systems – The CSP Approach. John Wiley & Sons, Ltd., Chichester (2000)
From Coding to Automatic Generation of Legends in Visual Analytics Guillaume Artignan and Mountaz Hascoët Univ. Montpellier II, LIRMM, UMR 5506 du CNRS 161, rue Ada 34392 Montpellier Cedex, France {artignan,mountaz}@lirmm.fr
Abstract. The description of the process that transforms non graphic data (astronomical, biological, or financial data for example) into interactive graphical representations of this data has been at the core of information visualization research over the past decades. That process often referred to as coding, can be supported and controlled at different levels and by different types of users. Different levels bring different types of opportunity. In order to make it possible for all users to benefit from the pros of different levels approaches, we propose a toolkit named STOOG that addresses the needs of different types of users. STOOG brings a four-fold contribution: (1) a style sheet language for the description of the transformation process at stake in information visualization, (2) an application building automatically interactive visualization of data based on the style-sheet description, (3) a user interface devoted to the conception of style sheets and then (4) the generation of basic legends. This paper presents STOOG basic concepts and mechanisms and provides a case study to illustrate the benefits of using STOOG for the interactive and visual exploration of information. Keywords: Graph visualization, Style sheets, Simple clustered graphs, Multivariate graphs, Legend generation.
view displayed on the computer screen. We call coding function the description of the transformations that produce graphical representations of non-graphic data; a view is built by taking as input datasets, user interactions and coding functions. The rationale for these coding functions has its origins in the seminal work of Bertin [2] on the semiology of graphics. General-purpose toolkits for information visualization such as Prefuse [14] make coding functions key elements for developers eager to build interactive visualizations of information. Other approaches for building visualizations, such as interpreted languages like DOT [11], handle coding as the basis for building views.
Fig. 1. A simple sample of a multivariate graph. Each element in the graph is characterised by attributes. For instance, the node A has two attributes: the attribute named “type” and the attribute named “IP”. The graph is composed of nodes representing users or websites and edges representing either visits or references.
Most approaches come with different characterizations of the coding functions at stake. By building on previous work, our aim in this paper is to provide a powerful and flexible way of describing coding functions for the information visualization of multivariate clustered graphs, both at the level of the developer and at the level of the end-user. Our Approach. We propose a system named STOOG with a four-fold contribution: (1) a style-sheet based language for the explicit description of coding functions, (2) an application that automatically builds interactive and graphical views from STOOG style sheets and raw data, (3) a user interface aimed at end-users to support the conception and generation of style sheets and (4) the possibility to generate legends according to the description given in the style sheet. The combination of the first three contributions is important in order to bridge the gap between two usually complementary yet separate approaches: on the one hand the toolkit approach devoted to developers, and on the other hand the application approach devoted to end-users. Bridging the gap between these two approaches gains flexibility and expressivity for the end-user and at the same time saves effort and development time for the developers.
In this regard, our approach compares to the approach of the Tableau Software [13] project (http://www.tableausoftware.com/). The main differences between their approach and our contribution lie in the fact that we consider multi-level graphs as the basis for representing data and that we propose to make the style-sheet language that describes the coding functions both explicit and open. Hence, STOOG presents several strengths over existing work. Firstly, the high-level language makes semiotic analysis possible thanks to the manipulation of abstract concepts. Secondly, STOOG interprets the description language and builds an interactive visualization of any dataset accordingly. Thirdly, the user interface makes dynamic customization of existing representations possible and easy. The style-sheet based language of STOOG enables the manipulation by the user of high-level concepts such as graphical representations, graphical structures or properties. Four mechanisms are proposed in this language: (1) the matching mechanism, which makes possible the association of one or more representations to a set of data elements, (2) the coding mechanism, corresponding to the association of data attributes and visual variables, (3) the cascading mechanism, implementing the inheritance of properties and structures, and (4) the interaction mechanism, defining the representation of data during the interaction. These mechanisms were meant to be coherent with other approaches where form is explicitly described independently of content, such as the HTML/CSS or XML/XSLT approaches, but STOOG is different and more general because the data it handles is not limited to HTML/XML documents. Indeed, STOOG handles any raw data set. Another analogy with STOOG style sheets can be found in SVG style sheets, but here again SVG style sheets are limited to SVG documents. They do not handle the coding functions for any set of multivariate and clustered data, as is the case with STOOG. Implementation. STOOG is implemented in Java and supports the generation of views from raw data and coding functions. It further supports the rendering of and interaction with these views. Moreover, if useful, the STOOG style-sheet language can be extended by a developer to account for more specific types of coding. STOOG is available for download on the web and directly reusable in any Java application. Outline. This paper is divided into six sections. We first present related work. Secondly, we define the STOOG style-sheet language, its concepts and the mechanisms that handle these concepts. Thirdly, we present STOOG through a use case. Fourthly, we expose the user interface devoted to the conception and generation of style sheets for end-users. Fifthly, we present the means for generating basic legends according to the description given in the style sheet. We then conclude and discuss future work.
2 Related Work

Over the past years, several approaches have demonstrated the importance of providing adapted tools for data visualization. We detail previous work on tools for data visualization and situate our contribution in the domain.
Visualization Tools. Jeffrey Heer has shown in [14] the interest of providing toolkits for the interactive visualization of information. One of the first toolkits he proposed, called Prefuse, transforms raw data into abstract data and, thanks to actions, further transforms abstract data into visual items. The visual items are finally drawn by renderers. The toolkit is used by many visualization applications that build on top of Prefuse for visualizing data in various domains: graph community visualization [18], lexical visualization [9], cartographic visualization [19], collaborative visualization [15], etc. Guess [1] is another system devoted to graph visualization. Guess highlights the need for interfaces for customizing rendering by visual attributes. The graph rendering is determined by the user through queries written in Gython, an extension of the Jython system (a Java interpreter for the Python language). The tool supports the generation of charts, the computation of convex hulls, and the selection of data elements based on data-value or topology criteria. In GraphViz [11] the authors present the DOT language, which supports the generation of views. Graphs are first described in a file using the DOT language; the file is then interpreted by GraphViz, which produces the rendering and layout. The DOT language is used mainly for its simplicity. The Protovis toolkit [6] builds upon lessons learned from Prefuse and proposes visualization by composing simple graphical marks. Protovis is implemented in JavaScript with rendering in HTML, SVG and Flash. The Protovis toolkit constitutes an excellent way to produce aesthetic charts for websites. In [17] ShowMe is described as a set of user interface commands for the automatic generation of presentations. Presentations are further integrated into Tableau Software. Views are specified in an algebraic language, VizQL. In [7] the authors propose an automatic technique for the visualization of heterogeneous data; they are more precisely interested in matching data attributes to visualization attributes, and they use the RDF format. In [20] the GSS (Graph Style Sheet) language is presented for semantic web data visualization. The system offers visualization of RDF data as a directed labeled graph. The author introduces the idea of using style sheets for graph visualization. Cascading style sheets [23] are used for the presentation of HTML documents but are also used in languages such as Flex [16] or in formats such as SVG [10]. In [5] the ILOG Discovery tool is proposed. The tool supports the description of interactive charts using style sheets and is based on the model defined in [4]. The model is declarative and defines a fixed dataflow architecture. Legends for Data Visualization. In cartography, the usage of legends cannot be ignored; a legend is needed to interpret the mapping between visual variables and data attributes. Recently, interactive legends were presented in [24] to support filtering and selection of data subsets by directly manipulating the legends. As demonstrated in [24], the use of interactive legends tends to speed up the process of visual analysis compared with alternative approaches using standard widgets. Interactive legends are already used in some data visualizations. Our Contribution. In this section we situate our contribution amongst the previously detailed contributions. Over the past decades several toolkits such as Prefuse or Guess [14,1,6] have provided very thoughtful ways of describing coding functions.
Most toolkits provide flexible and powerful ways of describing coding but their usage is
devoted to developers not end-users. Therefore to suit the needs of end-users, applications have to be developed for different application contexts and users. Even if these new toolkits help a lot, this is still very time consuming for developers and frustrating for end-users. Indeed, our own experiments with end-users show that the key object of interest for many end-users is the coding functions. However, lots of visualization applications provide less control over these functions to the end-users than what is possible at the toolkit level. Therefore our contribution is to build on previous work to provide both end-users and developers with ways of understanding and expressing coding functions. STOOG can be used to generate views pluggable in other applications or web sites. The view is initialized by two parameters: graph data and a style sheet. The aim of the style sheet is to describe the coding functions. Style-sheet concepts are easy to understand and use for both end-users and programmers. STOOG style sheets can be created and changed on the fly by end-users. STOOG can either be used as a STOOG standalone application by end-users or be integrated in other applications by developers. The style-sheet language definition of STOOG can also be extended by developers. Contrary to toolkits where coding is implemented in the code, like in Prefuse [14], the style-sheet approach makes it possible to handle different encoding without recompiling applications. We also propose a representation based on the composition of graphical representations or shapes. In the system proposed in [1], only one shape can be associated to each data element. In [6,4] the systems proposed are dedicated to data visualization and more precisely to charts. Contrary to [10], we propose dynamic links between data attribute and visual variables. We also use a mechanism selecting a subset of data elements to associate with a graphical representation. The approach by Pietriga on graph style sheets (GSS) [20] can be considered as very similar to our approach. However, there are several differences that justify our contribution. We firstly propose to account for interaction in coding function rather than static coding. Secondly, we are not limited to a set of predefined shapes, STOOG supports the composition of shapes. Thirdly, the data managed in [20] are RDF databases which can be represented by a simple labeled graph. We also consider more general models of clustered graph and multivariate graphs. Fourthly, our proposed toolkit is pluggable in a web browser or an existing application. Lastly, we have implemented the cascading mechanism proposed in CSS for our style sheets. Moreover, we have improved the mechanism by adding cascading of graphical representations.
3 Style-Sheet Language

In [2], Bertin outlines six visual variables: shape, size, value, grain, color and orientation. Similarly, our style-sheet language makes the definition of visual variables possible thanks to concepts such as graphical representations, graphical structures and attributes. In this section, we outline the four core mechanisms that we found useful for STOOG style sheets: matching, coding, cascading and interaction. These mechanisms enable the association between data and visual variables. Basic Principles. A style sheet describes graphical representations. Each representation is associated to a class of data elements. In the precise case of graphs, data
[Fig. 2 listing (excerpt): graphical representations such as Website -> Node with a URL structure, Website_hover -> Node with a TextBox, and Visitor -> Node with a TextBox, a Polygon and an Oval, followed by the association and interaction rules.]
Fig. 2. A sample style sheet. The style sheet is divided into three parts and must be read from left to right and from top to bottom. The “graphical representations” section is composed of seven representations, numbered R1 to R7. The “associations” section contains six association rules, while the “interaction” section contains only two.
elements are Nodes, Edges or Clusters. An important difference between our approach and other approaches like [1] is the possibility of composing data element representations with graphical structures. Graphical structures are defined by attributes. Each attribute is associated to a list of values.
Fig. 3. A sample graph transformed using a style sheet. The visualization is obtained by applying the style sheet of Fig. 2 to the graph of Fig. 1.
In order to illustrate our discussion, we present a style-sheet sample (Fig. 2) applied to the graph of Fig. 1 and the associated generated visualization (Fig. 3). Fig. 1 presents visits of websites by internet users. We decide to represent internet users and websites by labeled nodes, and visits and references by labeled edges. Fig. 3 presents the result after the application of the style sheet. Internet users are represented by a schematic person. Websites are represented by thumbnails. Visits are represented by blue links, whose thickness is proportional to the time spent by a given user on a given website. References are represented by green links. Internet users in pink have an IP address beginning with ‘184.188’. The style sheet is composed of a list of graphical representations. Each graphical representation is composed of graphical structures, themselves composed of attributes. Graphical representations are numbered R1 to R7 in Fig. 2. The representation R3, named visitor, is composed of three graphical structures: a text box, a polygon and an oval. The text box displays the IP address, the polygon depicts the visitor's body, and the oval depicts the visitor's head. In this style-sheet language, edges can be linked to any structure. When an edge representation is created, we must specify the name of the source port and the name of the target port for binding the edge representation; the name is given by the anchor-name property. This section has shown how to define graphical representations. We are now interested in the four proposed mechanisms: the coding, matching, cascading and interaction mechanisms. Coding Mechanism. The coding mechanism consists in defining how data attributes are represented by visual attributes. Each visual attribute value is described by an expression. An expression can be:
• a constant,
• a data attribute,
• a binary operation whose two operands are themselves expressions,
• an arithmetic function parameterized by expressions.
A small illustrative sketch of such expression trees is given after Fig. 4.
Fig. 4. (a) The simplified grammar for arithmetic expression. (b) An example of accepted expression and the associated abstract syntax tree.
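To make the coding mechanism more concrete, the following sketch shows one possible in-memory form of the expression grammar of Fig. 4 and its evaluation against a data element. The class and method names are ours, chosen for illustration only; they are not STOOG's actual API.

// Minimal sketch of an expression tree for the coding mechanism (illustrative, not STOOG's API).
import java.util.Map;
import java.util.function.DoubleBinaryOperator;

interface Expression {
    // Evaluates the expression against the attributes of one data element.
    double eval(Map<String, Double> dataAttributes);
}

class Constant implements Expression {
    private final double value;
    Constant(double value) { this.value = value; }
    public double eval(Map<String, Double> dataAttributes) { return value; }
}

class DataAttribute implements Expression {
    private final String name;  // e.g. "time" on a "visit" edge
    DataAttribute(String name) { this.name = name; }
    public double eval(Map<String, Double> dataAttributes) {
        return dataAttributes.getOrDefault(name, 0.0);
    }
}

class BinaryOperation implements Expression {
    private final Expression left, right;
    private final DoubleBinaryOperator op;  // +, -, *, / ...
    BinaryOperation(Expression left, Expression right, DoubleBinaryOperator op) {
        this.left = left; this.right = right; this.op = op;
    }
    public double eval(Map<String, Double> dataAttributes) {
        return op.applyAsDouble(left.eval(dataAttributes), right.eval(dataAttributes));
    }
}

// Example: an edge thickness proportional to the time spent, e.g. 1 + time / 60.
class CodingExample {
    public static void main(String[] args) {
        Expression thickness = new BinaryOperation(
                new Constant(1),
                new BinaryOperation(new DataAttribute("time"), new Constant(60), (a, b) -> a / b),
                (a, b) -> a + b);
        System.out.println(thickness.eval(Map.of("time", 300.0)));  // prints 6.0
    }
}

This mirrors the abstract syntax tree of Fig. 4(b): leaves are constants or data attributes, and inner nodes are operations or functions.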
Matching Mechanism. The matching mechanism consists in associating a set of data elements (i.e. nodes, edges or clusters) to a set of representations. The associated elements are selected thanks to Boolean expressions. If a Boolean expression is satisfied, we associate a given representation to a set of selected data elements. A Boolean expression can be: • a constant (True or False), • an operator (Not, And, Or) with for each operand a Boolean expression, • a comparison (<=, <, >, >=, <>) composed of two compared expressions cf. Fig 4, • an operator testing the existence of an attribute given as a parameter.
Fig. 5. (a) The simplified grammar for Boolean expression. (b) An example of accepted Boolean expression and the associated abstract syntax tree.
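Similarly, the matching mechanism can be pictured as evaluating a Boolean predicate over each data element and, when it holds, associating the element with a named representation. The sketch below is illustrative only; the names and types are ours, not STOOG's.

// Illustrative sketch of the matching mechanism (hypothetical names, not STOOG's API).
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

class MatchingRule {
    final String representationName;                  // e.g. "Visitor"
    final Predicate<Map<String, String>> condition;   // Boolean expression over data attributes
    MatchingRule(String representationName, Predicate<Map<String, String>> condition) {
        this.representationName = representationName;
        this.condition = condition;
    }
}

class Matcher {
    // Returns the names of all representations associated with one data element;
    // several matches are possible, which is what triggers cascading later on.
    static List<String> match(Map<String, String> element, List<MatchingRule> rules) {
        List<String> result = new ArrayList<>();
        for (MatchingRule rule : rules) {
            if (rule.condition.test(element)) result.add(rule.representationName);
        }
        return result;
    }

    public static void main(String[] args) {
        List<MatchingRule> rules = List.of(
            new MatchingRule("Visitor", e -> "user".equals(e.get("type"))),
            new MatchingRule("Visitor_pink", e -> e.getOrDefault("IP", "").startsWith("184.188")));
        Map<String, String> nodeA = Map.of("type", "user", "IP", "184.188.10.3");
        System.out.println(match(nodeA, rules));  // [Visitor, Visitor_pink]
    }
}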
Cascading Mechanism. Cascaded representations are processed when a data element is associated to several representations. The cascading algorithm takes an ordered set of representations in parameter and produces a final representation. The cascading of two representations A and B is done by producing a new representation C. The representation C is made of all structures in A and B. As with other CSS-like languages, a problem might arise from incompatible definitions occurring in the distinct representations of A and B. It is solved similarly: two structures are
considered equal if the two structures have the same name. When two structures are equal in the representations A and B, the result in C is a structure of the same name whose definition accounts for both the A and B definitions. If these definitions are contradictory, the latest definition is kept in C. The cascading of a set of N representations is done by successively cascading one representation after another. Interaction Mechanism. The interaction mechanism corresponds to the modification of the representation during the interaction. For each possible interaction we select data elements thanks to Boolean expressions, and we assign a new representation to these previously selected data elements. Some interactions are not fully compatible with the cascading mechanism as explained in the previous section, so we use the notion of dynamic cascading. Dynamic cascading does not replace the previous representation by a new one but cascades with it. In the precise case of an interaction such as selection, we sometimes want to add a color filter in order to outline the selection. Dynamic cascading makes this modification possible without knowing the previous representation in advance. This kind of cascading is implemented only for the selection of elements. Dynamic cascading has proven to be very useful and might be extended to other situations. Therefore, we plan to support it in a more general way in future versions of the system; we would then add a keyword to make the desired type of cascading explicit.
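Returning to the cascading mechanism described above, the following sketch shows one way to merge an ordered list of representations: structures are matched by name, and for equally named structures the later attribute definitions override the earlier ones. Class and method names are illustrative, not STOOG's implementation.

// Illustrative sketch of cascading (hypothetical types, not STOOG's implementation).
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class Structure {
    final String name;                                    // structures are equal if their names are equal
    final Map<String, String> attributes = new LinkedHashMap<>();
    Structure(String name) { this.name = name; }
}

class Representation {
    final Map<String, Structure> structuresByName = new LinkedHashMap<>();
    void add(Structure s) { structuresByName.put(s.name, s); }
}

class Cascading {
    // Cascades an ordered set of representations into a single one.
    static Representation cascade(List<Representation> ordered) {
        Representation result = new Representation();
        for (Representation r : ordered) {                    // later representations win
            for (Structure s : r.structuresByName.values()) {
                Structure target = result.structuresByName
                        .computeIfAbsent(s.name, Structure::new);
                target.attributes.putAll(s.attributes);       // contradictory values: latest kept
            }
        }
        return result;
    }
}

Applying cascade to the representations selected for a data element reproduces the behaviour described above: the union of structures, with name clashes resolved in favour of the most recently applied representation.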
4 Graph Viewer

In this section, we present the viewer, which is the part of STOOG responsible for rendering. The viewer is implemented in Java; hence it can be plugged into any web browser supporting applets. This section is divided into three parts. Firstly, we present the properties of the viewer. Secondly, we expose the simplest interactions. Lastly, through a use case, we present advanced interactions. Properties. The viewer is made of three parts: the parser, which transforms style sheets into a syntax tree; the interpreter, which transforms this syntax tree into the structured model; and the renderer, which displays elements on the screen. The parsing process is implemented using the SableCC tool, a compiler-compiler taking a grammar as a parameter and generating the Java parser for this grammar. This parser is used to create the abstract syntax tree of our style sheets. The interpreter takes as parameters the graph structure underlying the data to display and the abstract syntax tree generated from the graph style sheet. It transforms the abstract syntax tree into a graphical representation model and associates each data element with a graphical representation. The renderer generates the interactive visualization on the screen according to the model. It is in charge of drawing the graph, modifying the representations during user interaction, and displaying animations such as layout algorithms or zoom. The most important aspect of our style-sheet graph viewer is its extensibility. The tool supports the dynamic management of structures. Structures are
Fig. 6. A sample of visualization using STOOG. Documents highlighted in blue, pink and green are in different sets of selection. “Bubbles” represent communities. Links between documents, appearing on mouse hover, represent similarity between documents.
Fig. 7. The Node Representation Editor makes the creation of templates of representations possible
imported dynamically using Java introspection. Each structure must determine the attributes available and its own rendering. The arithmetic and Boolean functions available in expressions are dynamic and can be extended by specifying the classes containing the functions. A developer can therefore define new kinds of structures. For instance, we can imagine structures using the internet, such as the URL structure (Fig. 2), which creates a thumbnail of a website.
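The dynamic import of structures mentioned above relies on standard Java introspection; a minimal sketch of this idea is shown below (the interface and class names are placeholders, not STOOG's actual types).

// Minimal sketch of loading a structure class by name via Java reflection
// (hypothetical interface and class names).
interface GraphicalStructure {
    void render();  // each structure determines its own rendering
}

class StructureLoader {
    static GraphicalStructure load(String className) throws Exception {
        Class<?> clazz = Class.forName(className);          // e.g. "com.example.stoog.URLStructure"
        return clazz.asSubclass(GraphicalStructure.class)
                    .getDeclaredConstructor()
                    .newInstance();                         // requires a public no-arg constructor
    }
}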
Simple Usage. A simple way to use our viewer is through its applet functionality. Indeed, the toolkit is pluggable into a web browser as an applet configured by two parameters: the graph to display and the graph style sheet to use. Other optional parameters, such as the layout, are available. During execution, both the style sheet and the graph can be changed on the fly using drag and drop. Advanced Usages. We present in this part the use of STOOG through a project. This project aims to study user classification of documents. Each user creates his own database of web documents. The documents are related through a similarity function that takes two documents as parameters and gives a score between 0 and 1. The score is near zero if the two documents are semantically distant and close to one if they are similar. We want to visualize and cluster these documents. The result is visible in Fig. 6. The STOOG tool is integrated as a Java component. The documents are represented by thumbnails. Clusters are represented by bubble shapes around documents. Links between documents are visible on mouse over. Documents drawn in blue, red and green are in three different sets of selection. Menus have been added on the left side and on the right side to offer functionalities for selection, layout, manual clustering and automated clustering. The clusters can be created either manually, using the STOOG selection, or automatically, using a Java implementation of the MCL algorithm [22]. Some interactions are not implementable using the graph style sheet, such as the display of incident edges on mouse hover. For this kind of interaction it is possible to define the representation in the graph style sheet with an interaction name that is not predefined. The developer must then simply specify the name of the modification of the representation and of the data element to modify during the interaction.
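Returning to the simple applet usage described above, the viewer essentially needs to read its two inputs from the applet parameters; a rough sketch is given below. The parameter names and the viewer component are hypothetical, only java.applet.Applet.getParameter is the standard API.

// Rough sketch of embedding the viewer as an applet (hypothetical parameter and class names).
import java.applet.Applet;

public class ViewerApplet extends Applet {
    @Override
    public void init() {
        String graphUrl = getParameter("graph");            // the graph to display
        String styleSheetUrl = getParameter("stylesheet");  // the graph style sheet to use
        String layout = getParameter("layout");             // optional parameter, e.g. a layout name
        // A hypothetical viewer component would be created and added here:
        // add(new StoogViewer(graphUrl, styleSheetUrl, layout));
    }
}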
5 Style-Sheet Editor Our style sheet editor is divided in two parts: a creator and a graph style sheet editor. The creator makes the creation of representations on-the-shelf possible. The graph style sheet editor gives a list of representations possible and enables the association between data and visual attributes. Representation on-the-shelf. The creator (Fig. 7) proposes the construction of representations on-the-shelf. It is divided in three sections: (A) the list of structures, (B) the draw panel, (C) the list of attributes. The list of structures is defined using the import button. The import button triggers input dialog asking for a structure class name. The structure is imported thanks to the Java introspection mechanism. The list of structures is stored for further uses. The draw panel makes the positioning of structures possible. It offers a preview of the representation. The list of attributes shows all available attributes for each structure placed on the draw panel. For each attribute the user can specify predefined values, the visibility of the attribute (private or not), the visible name and the input method. For instance the attribute fill_color of the instance of polygon is public (i.e. it will appear in the graph
style sheet editor), the visible name is ‘Color Body’. The color is typed using a color chooser. When the button “ok” is pressed the representation is available on-the-shelf. Graph style sheet editor. The graph style sheet editor Fig. 8 is divided in four parts: (D) data, (E) representations, (F) rendering and (G) rules. The data section makes the specification of a dataset possible. The data is considered as an instance nevertheless enables functionalities as auto-completion of data attributes. The representation section exposes a set of available representations for graph style sheets. One click triggers the association of the representation to the selected rule and the adding of a tab in the rendering section. The rule section exposes a set of rules created by the user. The rule is defined by a name, a condition of application named expression, an application mode and a kind of data element. The rendering section presents the visual variables associated to the representation. In the example in Fig. 8 the user has created a rule named “visitor”. The rule is applied on the visitor node. The rendering associated to the rule is a visitor representation (Fig. 7). The generated form concatenates the fields of all structures in the graphical representation. Only public attributes are visible, with the input method and the name chosen during the conception.
Fig. 8. The Graph Style Sheet Editor makes the creation of the style sheet possible
6 Legend Generation

In information visualization, the interpretation of visual attributes is not always explicit. Legends are intended to make these associations explicit and easy to interpret. We want to use the description given in the style sheet in order to generate legends. We considered a fully automatic approach for legend generation but finally favored a semi-automatic approach because we found that it best suits user needs. Semi-automatic Legend Generation. The legend generation is achieved by first automatically extracting all graphical representations defined in the style sheet and
represented in the view. These representations are sorted by data attributes. Figure 9 presents a list of collected representations classified by attributes. The resulting list can be seen as an extensive legend of all existing mappings between data elements and graphical representations. The user can further filter only the mappings relevant to his purpose in order to reduce the number of items in the final legend. By selecting only the interesting representations or structures in the list and by dragging and dropping them from the list to the view, the user builds the final legend. The classification of representations by data attributes in the list facilitates the selection of graphical representations according to certain values of data attributes. In some cases, structures that are part of graphical representations convey important information such as properties or status, and therefore the treatment of graphical structures is similar to that of graphical representations: the user can select these graphical structures and drag and drop them into the visualization to append them to the legend.
Fig. 9. Legend generation: the frame exposes a list of representations. The representations can be filtered by attribute values. Each “ComboBox” contains the different values of the attribute in the data set.
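The first, automatic step of this legend generation — collecting the representations present in the view and classifying them by data attribute — can be pictured as a simple grouping pass, as sketched below with illustrative types (not STOOG's own).

// Illustrative sketch of classifying the displayed representations by data attribute.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class LegendEntry {
    final String attribute;        // e.g. "type"
    final String value;            // e.g. "user"
    final String representation;   // e.g. "Visitor"
    LegendEntry(String attribute, String value, String representation) {
        this.attribute = attribute; this.value = value; this.representation = representation;
    }
}

class LegendBuilder {
    // Groups the collected mappings by attribute name, as in the list of Fig. 9;
    // the user then filters and drags the relevant entries into the final legend.
    static Map<String, List<LegendEntry>> groupByAttribute(List<LegendEntry> collected) {
        Map<String, List<LegendEntry>> grouped = new LinkedHashMap<>();
        for (LegendEntry e : collected) {
            grouped.computeIfAbsent(e.attribute, k -> new ArrayList<>()).add(e);
        }
        return grouped;
    }
}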
7 Conclusions and Perspectives

We have introduced a new language for the definition of style sheets which proposes useful concepts and four mechanisms to handle them: matching, coding, cascading and interaction. STOOG provides and integrates (1) an extensible language for style-sheet definition, (2) a standalone application and an API supporting the rendering of, and interaction with, visualizations resulting from encoding raw data according to STOOG style sheets, (3) a user interface to facilitate the creation and reuse of style sheets and, lastly, (4) the generation of basic legends. We believe that by using STOOG, both end-users and developers will save time and effort in their attempts to visually explore large amounts of information. In the future we plan to extend our language to include more complex interactions. We are also interested in extending our toolkit to process multi-level clustered graphs. We intend to explore the space of legend representations in order to improve legend generation with, for instance, interactive legends.
References 1. Adar, E.: GUESS: a language and interface for graph exploration. In: CHI 2006, pp. 791– 800. ACM, New York (2006) 2. Bertin, J.: La graphique et le traitement graphique de l’information. La graphique de Communication, 22 (1977) 3. Bertin, J.: Semiology of Graphics. University of Wisconsin Press (1983) 4. Baudel, T.: Canonical Representation of Data-Linear Visualization Algorithms and its Applications. ILOG Research Report (2003), http://techreports.ilog.com 5. Baudel, T.: Browsing through an information visualization design space. In: CHI 2004, pp. 765–766. ACM, New York (2004) 6. Bostock, M., Heer, J.: Protovis: A Graphical Toolkit for Visualization. In: TVCG 2009, vol. 15(6), pp. 1121–1128 (2009) 7. Cammarano, M., Dong, X., Chan, B., Klingner, J., Talbot, J., Halevey, A., Hanrahan, P.: Visualization of Heterogeneous Data. In: TVGC 2007 (2007) 8. Card, S., Mackinlay, J., Shneiderman, B.: Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco (1999) 9. Collins, C.: DocuBurst: Document Content Visualization Using Language Structure. In: Infovis 2006, Baltimore (2006) 10. Eisenberg, J.D.: SVG Essentials. 1. O’Reilly & Associates, Inc., Sebastopol (2002) 11. Gansner, E.R., North, S.C.: An open graph visualization system and its applications to software engineering. Softw. Pract. Exper. 30(11), 1203–1233 (2000) 12. Haeberli, P.E.: ConMan: a visual programming language for interactive graphics. SIGGRAPH Comput. Graph. 22(4), 103–111 (1988) 13. Hanrahan, P., Stolte, C., Mackinlay, J.: Tableau Software, Visual Analysis for Everyone (January 2007), http://www.tableausoftware.com/ 14. Heer, J., Card, S.K., Landay, J.A.: prefuse: a toolkit for interactive information visualization. In: CHI 2005, pp. 421–430. ACM, New York (2005) 15. Heer, J., Viégas, F.B., Wattenberg, M.: Voyagers and voyeurs: supporting asynchronous collaborative information visualization. In: CHI 2007, pp. 1029–1038. ACM, New York (2007) 16. Kazoun, C., Lott, J.: Programming Flex 2: the Comprehensive Guide to Creating Rich Media Applications with Adobe Flex. O’Reilly Media, Inc., Sebastopol (2007) 17. Mackinlay, J., Hanrahan, P., Stolte, C.: Show Me: Automatic Presentation for Visual Analysis. In: TVGC 2007, vol. 13(6), pp. 1137–1144 (2007) 18. Perer, A., Shneiderman, B.: Balancing Systematic and Flexible Exploration of Social Networks. In: TVGC 2006, vol. 12(5), pp. 693–700 (2006) 19. Phan, D., Xiao, L., Yeh, R., Hanrahan, P., Winograd, T.: Flow Map Layout. In: Infovis 2005, Washington, DC, p. 29 (2005) 20. Pietriga, E.: Semantic web data visualization with graph style sheets. In: SoftVis 2006, pp. 177–178. ACM, New York (2006) 21. SableCC Project, http://sablecc.org/wiki 22. van Dongen, S.: Graph Clustering by Flow Simulation. PhD thesis, University of Utrecht (May 2000) 23. W3C, Cascading Style Sheets (April 2006), http://www.w3.org/Style/CSS/ 24. Henry Riche, N., Lee, B., Plaisant, C.: Understanding Interactive Legends: a Comparative Evaluation with Standard Widgets. TVGC
Part IV
Software Agents and Internet Computing
Improving QoS Monitoring Based on the Aspect-Orientated Paradigm Mario Freitas da Silva1, Itana Maria de Souza Gimenes1, Marcelo Fantinato2 Maria Beatriz Felgar de Toledo3, and Alessandro Fabricio Garcia4 1
Departamento de Informática, Universidade Estadual de Maringá Av. Colombo, 5.790 – 87020-900, Maringá, PR, Brazil 2 Sistemas de Informação, Universidade de São Paulo Rua Arlindo Béttio, 1.000 – 03828-000, São Paulo, Brazil 3 Instituto de Computação, Universidade Estadual de Campinas Av. Albert Einstein, 1.251 – 13083-970, Campinas, SP, Brazil 4 Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro R. Marquês de São Vicente, 225 – 22451-900, Rio de Janeiro, RJ, Brazil [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract. Contract monitoring is a complex activity which requires code instrumentation and many additional functions to be implemented both in the client and server sides. This paper proposes an approach to simplify QoS monitoring based on the aspect-orientated paradigm. The objective of this paradigm is to increase modularity by allowing the separated expression of cross-cutting concerns and the automatic reunification of them into a whole system. We consider that an e-contract consists of collaborative partners, a business process specified in AO4BPEL and QoS attributes defined with WS-Agreement. Monitoring concerns are encapsulated into aspects to be executed when specific process points are reached. Differently from other approaches, the proposed solution requires no instrumentation, uses Web services standards, and provides an integrated infrastructure for dealing with contract establishment and monitoring. Moreover, a Business Process Management Execution Environment is designed to automatically support the interaction between customer, provider and monitor organizations. Keywords: Electronic contracts, Web services, Quality of Service, Aspects.
In the SOC paradigm, electronic services (e-services) are part of distributed applications that enable fast software development and cost reduction. They are autonomous and platform independent software that can be either atomic or composed from lower granularity e-services. The Web services technology, a realization of SOC, allows the publication, search and discovery of e-service according to internet standards (WSDL, SOAP and UDDI). This technology has imposed additional challenges to the BPM scope. Inter-organizational cooperation requires proper regulation to ensure the Quality of Service (QoS) exchanged between collaborative partners [3]. Electronic contracts (e-contracts) are means to represent an agreement between organizations in such a way that they can be monitored throughout the business process execution. E-contracts include information about parties involved in a business process, activities to be executed as e-services and respective QoS and constraints. An e-contract life cycle includes a negotiation phase that defines contract parameters and an execution phase. In the latter, the inter-organizational cooperation must be monitored to ensure contractual clauses. A reliable inter-organizational cooperation demands e-contract monitoring. Contract conditions, such as those related to QoS attributes, have to be constantly and accurately supervised, for the benefit of every party involved in the business process. E-contract monitoring involves supervising the performance of the provided services and collecting relevant auditing information so that provider’s commitments can be assessed and proper penalties can be applied [4]. Monitoring is not a trivial issue as it requires process instrumentation either to register service performance or to take corrective actions. The aspect-oriented (AO) approach [5] can be used to separate orthogonal concerns in the BPM domain [6], [7]. Concerns such as monitoring can be encapsulated into aspects and executed when well-defined point cuts are reached in the process. Nevertheless, conventional aspect mechanisms cannot be directly applied to contract monitoring. This paper presents an AO approach to monitor QoS attributes and levels, named Aspect-Monitor. An e-contract comprises collaborative partners, an interorganizational business process specified in AO4BPEL and QoS attributes of services in WS-Agreement. Monitoring aspects are designed to deal with Web services and their related QoS attributes and levels. Thus, aspects are not only applied to the process specification but also to enhance process execution environment. This paper is organized as follows. Section 2 presents basic concepts in e-contract specification and monitoring. Section 3 discusses aspect orientation and its application in the BPM domain. The proposed approach for e-contract monitoring is presented in Section 4, including the BPM Execution Environment. Section 5 shows an example of application. The following sections present lessons learned and related work. Section 8 concludes the paper.
2 Electronic Contracts and Monitoring

Considering the wide adoption of the SOC paradigm and inter-organizational cooperation, e-contracts are required to express the rules governing business processes. E-contracts and contract monitoring are discussed in more detail in the following subsections.
2.1 Electronic Contracts

A contract is an agreement between two or more parties interested in establishing a mutual relationship on business or legal obligations. An e-contract is an electronic document used to represent an agreement between partners running business using the internet [8]. Although e-contracts differ regarding their size, content and complexity, there are usually common elements within the same domain. The canonical elements are [9]: parties, which represent the organizations involved in a business process; activities, which represent e-services to be executed; and contractual clauses, which describe constraints on the execution of activities. Contractual clauses can represent three types of constraints: obligations, which describe what parties should do; permissions, which describe what parties are allowed to do; and prohibitions, which describe what parties should not do [10]. Obligations may include QoS clauses, which define attributes related to non-functional properties of e-services. Examples are availability, integrity, reliability, security, performance and reply time [11], [12], [13]. Different languages might be used to represent e-contract sections. The most recent ones are related to Web standards. In this context, e-contracts are called Web services e-contracts (WS-contracts). Our approach focuses on WS-BPEL and WS-Agreement. WS-BPEL has been widely used for business process specification [14]. WS-Agreement, based on WSLA [11], has been used for QoS specification.

2.2 Contract Monitoring

Monitoring mechanisms vary a lot with respect to their goals, which include: run-time analysis of system correctness, fault diagnosis and repair, dynamic adaptation to changes in the environment, improvement of resource allocation, measurement of key performance indicators, and behavioral properties such as QoS parameters. They also vary in terms of performance, the location where they run, and the information they can capture. Examples of monitoring mechanisms are:
• Instrumentation, requiring monitoring code to be included in the client or provider code;
• Intermediaries that intercept messages between clients and service providers;
• Sniffers that listen to messages exchanged between clients and providers;
• Probes that periodically check service providers for information.
Contract monitoring requires events and information produced by executing business processes. This includes checking whether the QoS attributes defined in an e-contract are being obeyed during process execution. Passive capabilities, such as raising alerts when a certain attribute does not meet the specified levels, are not enough; corrective actions must also be provided for running processes, and contract renegotiation might even be necessary. Thus, monitoring goals in the BPM domain include not only the run-time measurement of QoS attributes and KPIs (Key Performance Indicators) but also dynamic adaptation to variation in parameters and the application of actions specified in contracts.
According to Benbernou et al. [15], new approaches developed for monitoring service-based applications should come up with holistic and comprehensive methodologies that:
1. Integrate various techniques at all functional service-based application layers;
2. Provide a way to target all the relevant application aspects and information;
3. Define rich and well-structured modeling and specification languages capable of representing these features;
4. Allow for modeling, identifying, and propagating dependencies and effects of monitored events and information across various functional layers and aspects in order to enable fault diagnosis.
Monitoring can be carried out by the consumer and provider organizations, or else by a third party. Current approaches to e-contract specification used for monitoring suffer from an excess of instrumentation in the process, many extra functions to be implemented in the client and server, and a lack of available tools. Our approach aims at reducing the complexity of the e-contract monitoring mechanism by applying the fundamental AO concepts. Moreover, it seeks to meet the set of requirements defined by Benbernou et al. [15] and presented above.
3 Aspect-Orientation Paradigm

Systems are usually decomposed into parts (modules) to reduce their complexity. Preferably, the different parts of the system dedicated to satisfying particular concerns should be isolated from each other, so that each concern may be better understood. The Structured and Object-Oriented (OO) paradigms contributed to the separation of concerns in systems development, gradually reducing the coupling between the different parts of a system. In the Structured paradigm, the separation of concerns is carried out based only on the different functionalities, splitting the system into a set of functions; in the OO paradigm, the separation is based on data as well, splitting the system into a set of classes (methods and attributes).
Fig. 1. Object-oriented and Aspect-oriented
However, after this primary decomposition, there may be some concerns still distributed over various parts of the system, the so-called crosscutting concerns [16] – pieces of code related to each other but spread over several classes. A common example of a crosscutting concern in systems development is logging. The aspect-oriented paradigm aims at encapsulating the crosscutting concerns in isolated parts of the system, besides those traditional concerns already isolated in functions and classes. The parts isolating the crosscutting concerns are called aspects. Fig. 1 presents a comparison between object-oriented and aspect-oriented concepts. As in other paradigms, aspect-oriented software development involves two steps: decomposing the system into parts that are neither interlaced nor scattered, and recomposing these parts to obtain the desired system. However, whereas in other paradigms the recomposition is carried out by procedure or method calls, in aspect orientation there is no explicit call between aspects and the other parts. Instead, each aspect specifies how it must react to events occurring elsewhere. The main benefits of the aspect-oriented paradigm are:
• Less responsibility in each part: the parts of the system dealing with the business logic are not contaminated with pieces of code that deal with peripheral concerns;
• Better modularization: since the different parts in aspect orientation do not directly interact with each other, an even lower level of coupling is obtained;
• Facilitated evolution: new aspects can be easily added without the need to change existing code;
• Increased reuse: since there is no mixing of concern code, the possibilities for reusing parts between different systems are increased.

3.1 Basic Aspect Elements

To develop programs in an aspect-oriented programming language, specific compilers must be used. For example, the AJC compiler [17] converts a program written in AspectJ into Java bytecode, which can be executed by any Java Virtual Machine (JVM). Aspect-oriented programming languages, such as AspectJ, have the following additional elements for aspect manipulation:
• Aspect: a module that encapsulates a crosscutting concern. An aspect is composed of pointcuts, advices and inter-type declarations;
• Join point: a point in the runtime execution of the system where aspects inject behavior through an advice body. Examples of join points are: method call, method execution, exception handler execution, and field read and write access;
• Pointcut: a construction (inside an aspect) that selects a set of join points. It is an expression defining when an aspect must react to events occurring at a join point. General-purpose aspect-oriented languages define pointcut designators, which select specific types of join points;
• Advice body: code (inside an aspect) that is executed when a join point triggers a pointcut expression. When a join point is reached, the advice can be executed before the join point, around it, or after it;
• Inter-type declaration: a static declaration (inside an aspect) that modifies a module structure. For example, an attribute or a method can be added to an existing class, or a class can be pushed down the class hierarchy.
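As a small, self-contained illustration of these elements (written in AspectJ and compiled with AJC, as mentioned above), the aspect below encapsulates a logging concern: the pointcut selects the execution join points of the methods of a hypothetical service class, and the advice runs before and after them. The package and class names are ours, for illustration only.

// Illustrative AspectJ aspect: pointcut + before/after advice (hypothetical target class).
package example;

public aspect LoggingAspect {

    // Pointcut: selects the execution join points of the public methods
    // of a hypothetical class whose name matches example..OrderService.
    pointcut serviceOperation():
        execution(public * example..OrderService.*(..));

    // Advice executed before the selected join points.
    before(): serviceOperation() {
        System.out.println("Entering " + thisJoinPoint.getSignature());
    }

    // Advice executed after the selected join points (whether they return or throw).
    after(): serviceOperation() {
        System.out.println("Leaving " + thisJoinPoint.getSignature());
    }
}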
3.2 Aspects in BPM Domain
In the BPM domain, AO4BPEL [6] is an aspect-oriented extension of WS-BPEL that introduces aspect-oriented mechanisms to support service composition. AO4BPEL targets two existing problems in composition languages: i) specification modularity, allowing the separation of concerns such as access control, authentication and audit from the business model; and ii) dynamic service composition. A specification in AO4BPEL includes two main parts: a business process in WS-BPEL and its aspects.
Aspects comprise PartnerLinks, Variables and PointcutAndAdvice, as illustrated in Fig. 2. PartnerLinks identify partners providing Web services; Variables are used in the Advice part; and PointcutAndAdvice includes Pointcuts, described in XPath, and Advices, described in WS-BPEL. A Pointcut defines the point in the process where an aspect should act. Advices are executed when a Pointcut matches. Our approach improves the application of the AO paradigm to the business process domain as it complements AO4BPEL by associating its variables with QoS specified in WS-Agreement.
4 The Aspect-Monitor Approach
Aspect-Monitor proposes monitoring aspects as an instance of the AO4BPEL aspects. In addition to being related to the process, the new aspects are also associated with QoS attributes and levels specified in WS-Agreement. They are created to support the monitoring of the services to which they refer. This monitoring is part of a wider BPM approach [3], including: business process negotiation, reuse, contract establishment, a dynamic execution environment and the WS-contract metamodel. In this paper, we focus on the introduction of aspect concepts and the mechanisms provided for supervising QoS attributes; aspect orientation was not covered in the authors' previously published work. The next sections describe the BPM execution environment, the introduction of aspects into the e-contract metamodel and the monitoring mechanism.
4.1 The BPM Execution Environment
The negotiation, establishment and execution of a business process and the monitoring of its Web services require inter-organizational cooperation and dynamic exchange of e-services. Fig. 3 presents the elements involved in this execution environment, comprising four parties: the Customer Organization, the Provider Organization, the Monitor Organization and the Negotiator Organization.
Fig. 3. The BPM execution environment (figure labels: Customer Organization, Provider Organization, SOC System, Own Web Services, Subcontracted Web Services, WS-Contract Execution, Monitor Organization, BPEL Server)
The Customer Organization has the most complex structure, including: i) a structure for WS-Contract Definition, which supports WS-contract negotiation and establishment; and ii) a structure for WS-Contract Execution, which supports the execution of business processes specified in AO4BPEL. A SOC System is necessary in the customer organization if its own Web services are required as part of the processes. In the Provider Organization, the SOC System controls the Web services subcontracted by the consumer. In the Negotiator Organization, the WS-Contract Negotiation uses a set of protocols responsible for negotiating, or renegotiating, contracts for providers and consumers.
The Monitor Organization implements the Aspect-Monitor mechanisms. This third party has a WS-Contract Monitor structure to control the execution of the process, and hence of the composed Web services, using a set of monitor Web services and the QoS terms contained in the WS-contract as a reference. These monitor Web services are invoked by the monitoring aspects executed in the Customer Organization whenever a join point is reached.
During the execution of a business process, the customer organization invokes Web services which can be local or subcontracted. For each invoked Web service, a set of Monitoring Aspects is triggered. The aspects are related to the QoS attributes defined for the Web service. A monitoring aspect triggered during a process execution invokes the corresponding monitor Web services, which follow up the respective service executions to ensure that the contracted QoS levels are satisfied. If they are not satisfied, actions can be undertaken according to the contractual clauses. Those actions may be process cancellation, penalty application and contract renegotiation. If renegotiation is necessary, the monitor Web services involve the Negotiator Organization through the WS-Contract Negotiation component, which is responsible for it. A prototype for the WS-Contract Monitor structure has been developed to support the execution of Web services monitoring according to the Aspect-Monitor approach.
4.2 WS-Contract Metamodel
The WS-contract metamodel represents the rules used to create WS-contracts and supervise them based on aspects (Fig. 4). It includes concepts related to: i) Web services, described by the WSDL language; ii) business processes involving Web services, described by the business part of the AO4BPEL language; iii) QoS, described by the WS-Agreement language; and iv) monitoring aspects, described by the aspect part of the AO4BPEL language. A WS-contract is composed of four sections: WSDL definitions, BPEL Business Process, WS-Agreement, and Monitoring Aspects, as follows:
i) WSDL definitions section: contains the primary elements Message Types, Partner Link Types, Port Types and Operations – the last two describe the Web services. These elements are used to form the elements of the next sections;
ii) BPEL Business Process section: describes the business process that composes Web services. It consists of Variables, Partner Links and Activities (both Interaction Activities and Structured Activities);
iii) WS-Agreement section: describes the QoS attributes and levels regarding the Web services. Attributes and levels are described in terms of Service Properties (including Variables) and Guarantee Terms (including Service Scope and Service Level Objectives). The WS-Agreement Name, Context and Service Description Terms elements are not included in this metamodel since similar sections already represent this information in the BPEL Business Process section;
iv) Monitoring Aspects section: contains the aspects related to the Web services monitoring. Each aspect is described in terms of Monitoring Partner Link, Monitoring Variable, Join Point, Pointcut and Advice.
Details about the first three sections are presented by Fantinato et al. [18]. The fourth section is an extension of the metamodel defined for the Aspect-Monitor.
The objectives of the Aspect elements, as well as their relationships with the other WS-contract elements, are as follows:
• Monitoring aspect: a module encapsulating a monitoring crosscutting concern. It is composed of a monitoring partner link, a monitoring variable, a pointcut and an advice. There is a unique monitoring aspect for each guarantee variable and hence for each service level objective, regarding the WS-Agreement service properties and guarantee terms.
• Monitoring partner link: represents the partners involved in the services monitoring, namely a monitoring organization, a consumer organization and a provider organization.
• Monitoring variable: represents variables, related to the process, that are used in the advice element.
• Join point: represents a Web service whose execution must trigger a monitoring aspect, thereby injecting behavior at the join point through its advice body.
• Pointcut: represents the Operation which identifies the Join Point – i.e. the Web service – to which the monitoring aspect must react. In this approach, the monitoring aspects have only one join point for each pointcut.
• Advice: contains a business process, specified in AO4BPEL, which must be executed when the join point (Web service) triggers a pointcut. This process is used to invoke monitoring Web services. When a join point is reached, the advice can be executed before it, after it, or around it.
Fig. 4. WS-contract metamodel (figure: a WS-contract composed of WSDL, AO4BPEL, WS-Agreement and Aspect sections; elements include message type, port type, operation, partner link type, process variable, process partner link, interaction and structured activities, service property, guarantee variable, guarantee term, service scope, service level objective, monitoring aspect, monitoring partner link, monitoring variable, pointcut, join point and advice)
5 Application Example
This section presents a practical experiment, undertaken on a pseudo-scenario, to evaluate the application of the Aspect-Monitor approach and identify its benefits and drawbacks. The experiment context involves a travel agency which uses Web services from partner organizations such as airlines, car rental companies and hotel reservation companies. Each party in this collaborative business process provides a set of Web services to be used by the other parties. The travel agency system offers Web services to its customers and therefore requires Web services from its partners. A WS-contract is established to regulate the business agreement. An airline company can provide a series of Web services to a travel agency, such as: timetable queries, flight booking, ticket purchase, itinerary changes and seat selection. On the other hand, a travel agency can also provide some Web services to the airline company, such as: customer notification on flight changes and advertisement of special offers.
As there is not, as yet, an AO4BPEL server available, a WS-BPEL server – ActiveBPEL [19] – was used instead; thus, a workaround solution was used in the execution environment to execute the aspect-related tasks. Accordingly, the business process specified in WS-BPEL (using the ActiveBPEL Designer) had to be instrumented by manually inserting, at the pointcuts, the process excerpts corresponding to the advices to be executed. Thus, the business process receives an instrumentation that simulates the use of aspects with the WS-BPEL technology, executed here by the ActiveBPEL Engine.
Fig. 5, Fig. 6, Fig. 7 and Fig. 8 present different excerpts of the WS-contract in which the relationships between Web services, business process, QoS attributes and levels, and monitoring aspects can be observed. In Fig. 5, the Web service provided by the airline company (identified by the operation flight-bookingOP) is defined.
In Fig. 6, the invocation of the Web service is specified in WS-BPEL, identified by the invoke command on the operation flight-bookingOP. In Fig. 7, a QoS attribute and its level are defined for the Web service, identified respectively by reply-timeVAR and the value "less than 60 seconds".
In Fig. 8, the monitoring aspect related to this Web service and QoS attribute is specified; when the Web service whose operation is flight-bookingOP is invoked, the respective monitoring aspect is triggered so that it can invoke a third-party monitor Web service. The following elements are present: the pointcut named book-flight-invoke is defined for the portType flight-servicesPT and the operation flight-bookingOP (Fig. 5), which means that this operation is the join point of this monitoring aspect; and the advice, of "around" type, whose action flow is used to invoke the third-party monitoring Web service, identified by the portType flight-services-monitorPT and the operation flight-booking-monitorOP. Since this monitoring aspect is related to reply-timeVAR, its content is copied to a temporary variable so that it can be forwarded to the third-party monitoring Web service when it is invoked. Other monitoring aspects for flight-bookingOP regarding other QoS attributes can exist, as well as other ones for reply-timeVAR regarding other Web service operations.
<partners>...
...
<pointcut name="book-flight-invoke" contextCollection="true">
    //process//invoke[@portType="flight-servicesPT" @operation="flight-bookingOP"]
...
...
...
Fig. 8. Excerpt from Aspect definitions
The monitoring aspects, producing the Aspect definitions section, are generated semi-automatically based on the information from the other WS-contract sections: WSDL, WS-BPEL and WS-Agreement.
6 Lessons Learned
This section presents the lessons learned from the development and exercising of the aspect-based approach to WS-contract monitoring. In the performed analysis, the proposed approach was compared mainly to the pure WSLA, WS-BPEL and WS-Agreement approaches. Moreover, the requirements presented by Benbernou et al. [15] for new approaches to e-contract monitoring were also taken into account.
• Use of standard languages: no new language was defined to compose this approach. Standard languages, or already existing extensions, are used, which together are capable of representing the information related to contract monitoring, as required by item 3 of the requirements set proposed by Benbernou et al. [15].
• No need for instrumentation: the proposed approach allows the customer organization to define its business process without adding extra code for monitoring. This means that, according to the AO paradigm, the main concern (i.e. the business process) is not crosscut by the secondary concern (i.e. the contract monitoring).
Other monitoring approaches do not perform such a separation, requiring instrumentation of the business process with monitoring functions.
• Simplified computational support: the proposed approach simplifies the computational support necessary to carry out e-contract monitoring when compared to other existing approaches with the same objective. No extra technology needs to be implemented to support the monitoring functions. The proposed BPM execution environment aims to support the whole contract life-cycle, including monitoring, in a single support environment.
• Integrated solution: the proposed approach presents an integrated solution in which all the main parts of a WS-contract are treated together: business process, QoS attributes and levels, and monitoring activities. Other monitoring approaches commonly deal with isolated Web services and do not provide a solution for composed Web services. This point is more closely related to requirements 1 and 2 of the set proposed by Benbernou et al. [15], since the proposed approach provides broad integration at different levels – including different techniques and information.
7 Related Work
Several works are related to e-contract monitoring in a general way, but do not involve AO; among them we can highlight Cremona [20], which was used as the architectural basis for the development of the Aspect-Monitor approach. Cremona is an architecture for the creation, management and monitoring of service-level agreements represented as WS-Agreement documents. Its monitoring module is not only used to observe and detect contract violations, but also to predict future violations and to trigger corrective actions in advance.
There are also some aspect-based works related to e-contract monitoring, but none of them applies AO within a broad and complete BPM approach while specifically using the WS-Agreement language, as presented in this paper. Some of these works are described below.
Singh et al. [21] proposed a methodology based on aspects to specify functional and non-functional characteristics of Web services. The Web services are described in an extended WSDL named AO-WSDL and published in an extended registry mechanism named AO-UDDI.
Tomaz et al. [22] introduced AO concepts to add non-functional requirements to Web services. They focus on run-time adaptation of the non-functional characteristics of a composite Web service by modifying the non-functional characteristics of its component Web services. They also propose a language for representing non-functional properties.
Ortiz & Leymann [23] claim that, although the encapsulation of non-functional service properties using aspects is present in other works, the aspect implementation must be combined with WS-Policy expressing the non-functional properties. This proposal keeps policies separated from the functional behavior of the service and its main description.
Narendra et al. [24] proposed run-time adaptation of the non-functional properties of a composite Web service by modifying the non-functional properties of its component Web services. They use AO technology for specifying and relating non-functional
properties of the Web services as aspects, at both the component and composition levels. WS-Policy and WSLA are used in the negotiation between Web services.
Bianculli & Ghezzi [25] proposed an approach in which the behavior of a service is specified in an algebraic language and can be checked at run-time. A BPEL engine is extended, using AspectJ, with three components: an interceptor, a specification registry and a monitor. The monitor is a wrapper for the evaluation of algebraic specifications; it keeps a machine-readable description of the state of the process instances, which is updated when new information from the interceptor is received. The symbolic state is then evaluated and compared against the expected one and, in case of a mismatch, specific actions may be performed.
8 Conclusions
This paper presented an infrastructure to define, execute and monitor WS-contracts. In the proposed approach, a WS-contract comprises collaborative partners, a business process specified in AO4BPEL and QoS attributes of services defined with WS-Agreement. AO technology is used to separate concerns such as monitoring into aspects. Those aspects are executed when the defined join points are reached in the process. Compared to other approaches, the proposed solution requires no instrumentation, uses Web services standards and provides an integrated infrastructure for dealing with contract establishment and monitoring.
Future work includes: i) studying and incorporating other AO properties to enrich the proposed approach; ii) concluding the development of the WS-Contract Monitor structure of the BPM execution environment; and iii) carrying out a broader experiment with the presented proposal to evaluate its benefits and limitations.
Acknowledgements. This work was supported by The State of São Paulo Research Foundation (FAPESP) and The National Council for Scientific and Technological Development (CNPq), Brazil.
References 1. Weske, M.: Business Process Management: Concepts, Languages, Architectures. Springer, Berlin (2007) 2. Papazoglou, M.P., Traverso, P., Dustdar, S., Leymann, F.: Service-oriented Computing: A Research Roadmap. International Journal of Cooperative Information Systems 17(2), 223– 225 (2008) 3. Fantinato, M., Gimenes, I.M.S., Toledo, M.B.F.: Product Line in the Business Process Management Domain. In: Kang, K.C., Sugumaran, V., Park, S. (eds.) Applied Software Product Line Engineering, pp. 497–530. Boca Raton, Auerbach (2009) 4. Napagao, S.A., et al.: Contract Based Electronic Business Systems State of the Art. Technical Report (Project Deliverable), University Politècnica de Catalunya (2007) 5. Filman, R., Elrad, T., Clarke, S., Aksit, M.: Aspect-Oriented Software Development. Addison-Wesley, Reading (2005) 6. Charfi, A., Mezini, M.: Aspect-Oriented Web Service Composition with AO4BPEL. In: Zhang, L.-J. (ed.) ECOWS 2004. LNCS, vol. 3250, pp. 168–182. Springer, Heidelberg (2004)
7. Braem, M., et al.: Isolating Process-level Concerns Using Padus. In: Dustdar, S., Fiadeiro, J.L., Sheth, A.P. (eds.) BPM 2006. LNCS, vol. 4102, pp. 113–128. Springer, Heidelberg (2006) 8. Erl, T., et al.: Web Service Contract Design and Versioning for SOA. Prentice-Hall, Englewood Cliffs (2008) 9. Grefen, P.W.P.J., Aberer, K., Ludwig, H., Hoffner, Y.: CrossFlow: Cross-organizational Workflow Management for Service Outsourcing in Dynamic Virtual Enterprises. IEEE Data Engineering Bulletin 24(1), 52–57 (2001) 10. Marjanovic, O., Milosevic, Z.: Towards Formal Modeling of E-contracts. In: 5th International Enterprise Distributed Object Computing Conference, pp. 59–68. IEEE Press, New York (2001) 11. Keller, A., Ludwig, H.: The WSLA Framework: Specifying and Monitoring Service Level Agreements for Web Services. Journal of Network and Systems Management 11(1), 57–81 (2003) 12. Menascé, D.A.: QoS issues in Web Services. IEEE Internet Computing 6(6), 72–75 (2002) 13. Sahai, A., Machiraju, V., Sayal, M., Moorsel, A.P.A., Casati, F.: Automated SLA Monitoring for Web Services. In: Feridun, M., Kropf, P.G., Babin, G. (eds.) DSOM 2002. LNCS, vol. 2506, pp. 28–41. Springer, Heidelberg (2002) 14. Barreto, C., et al.: OASIS Web Services Business Process Execution Language (WSBPEL) TC (2007), http://docs.oasis-open.org/wsbpel/2.0/Primer/ wsbpel-v2.0-Primer.pdf 15. Benbernou, S., et al.: State of the Art Report, Gap Analysis of Knowledge on Principles, Techniques and Methodologies for Monitoring and Adaptation of SBAs. Technical Report (Project Deliverable), Université Claude Bernard Lyon, France (2008) 16. Kiczales, G., et al.: Aspect-oriented Programming. In: Liu, Y., Auletta, V. (eds.) ECOOP 1997. LNCS, vol. 1241, pp. 220–242. Springer, Heidelberg (1997) 17. The AspectJ Project, http://www.eclipse.org/aspectj 18. Fantinato, M., Toledo, M.B.F., Gimenes, I.M.S.: WS-contract Establishment with QoS: An Approach Based on Feature Modeling. International Journal of Cooperative Information Systems 17(3), 373–407 (2008) 19. ActiveVOS – BPMS from Active Endpoints, http://www.activevos.com 20. Ludwig, H., Dan, A., Kearney, R.: Cremona: An Architecture and Library for Creation and Monitoring of WS-Agreements. In: 2nd International Conference on Service-Oriented Computing, pp. 65–74. ACM Press, New York (2004) 21. Singh, S., Grundy, J.C., Hosking, J.G.: Developing .NET Web Service-based Applications with Aspect-oriented Component Engineering. In: 5th Australasian Workshop on Software and Systems Architecures (2004) 22. Tomaz, R.F., Hmida, M.B., Monfort, V.: Concrete Solutions for Web Services Adaptability Using Policies and Aspects. International Journal of Cooperative Information Systems 15(3), 415–438 (2006) 23. Ortiz, G., Leymann, F.: Combining WS-Policy and Aspect-oriented Programming. In: 2nd Advanced International Conference on Telecommunications and International Conference on Internet and Web Applications and Services, p. 143. IEEE Press, New York (2006) 24. Narendra, N.C., Ponnalagu, K., Krishnamurthy, J., Ramkumar, R.: Run-time Adaptation of Non-functional Properties of Composite Web Services Using Aspect-oriented Programming. In: Krämer, B.J., Lin, K.-J., Narasimhan, P. (eds.) ICSOC 2007. LNCS, vol. 4749, pp. 546–557. Springer, Heidelberg (2007) 25. Bianculli, D., Ghezzi, C.: Monitoring Conversational Web Services. In: 2nd International Workshop on Service Oriented Software Engineering, pp. 15–21. ACM Press, New York (2007)
Directed Retrieval and Extraction of High-Quality Product Specifications
Maximilian Walther, Ludwig Hähne, Daniel Schuster, and Alexander Schill
Technische Universität Dresden, Faculty of Computer Science, Institute of Systems Architecture, Helmholtzstr. 10, 01062 Dresden, Germany, [email protected]
Abstract. In recent years, a large number of algorithms have been presented for extracting information from semi-structured sources like HTML pages. Some of them already focus on product information and are adopted, e.g., in online platforms. However, most of those algorithms do not specifically target technical product specifications and never take the localization of such specifications into account. This work focuses on automating the whole process of retrieving and extracting product specifications. It achieves high data quality by directing the source retrieval to producer pages, where product specifications are extracted in an unsupervised manner. The resulting specifications are of high relevance to consumers since they enable effective product comparisons. The success of the developed algorithms is proven by a federated information system called Fedseeko.
Keywords: document retrieval, information extraction, federated search.
1 Introduction
Today, customers as well as retailers and service companies use the Web for gathering detailed product information. As this information is distributed over different websites and presented in heterogeneous formats, this process is both time-consuming and error-prone. There are already a number of secondary sources bundling product information, like online shops (e.g., amazon.com), product review sites (e.g., dpreview.com) or shopping portals (e.g., ciao.de). The information in individual online shops is restricted to sold products only and often presented in an error-prone or incomplete state. The information on product review sites is collected and verified manually and thus of higher quality, but restricted to a specific product domain; dpreview.com, for example, covers only digital cameras. Shopping portals rely on information gathered from online shops, thus again only offering incomplete and error-prone information. For gathering product information from online shops, these systems are generally able to query available Web services, extract the information from websites using web scraping technologies or receive the offers directly through feed-like mechanisms.
Gathering product information first-hand from producers is more reliable, but requires a lot more manual work, as this data is not offered in a standardized way by the producers. The operators of the shopping portals or other product
Fig. 1. Example product page (left) and product detail page (right) - source: canon.co.uk
information systems have to locate the producer's website, find the page presenting the product of interest, pinpoint the product information and extract it. As this process evidently requires a lot of man hours, information providers tend to either specialize in specific product domains or reduce the presented information to very general details all products have in common, such as a product name, a producer name, a picture, prices, etc. From the consumer's point of view, product specification data provided by producer websites (see example in Figure 1) is the most important product information, as it creates a general view on the product of interest and makes it comparable with related products. Easing the automatic retrieval of such information would yield a great advantage for product information systems. To reach this goal, the following conditions have to be met:
(Req1) The system has to retrieve the producer's product detail page while only being supplied with a product name and its producer's name. If multiple description pages with different templates exist for the same product, the page with the specification data is to be selected.
(Req2) The system has to be able to extract information when supplied with only a few similar product detail pages or even a single one.
(Req3) Different page templates for one manufacturer have to be handled by the extraction process, e.g., in case of different product categories or families.
2 Related Work
As shown in the introduction, the presented work is located in the area of product information retrieval putting a special focus on product document retrieval (DR) and information extraction (IE), more precisely, product specification extraction. Several systems dealing with similar problems were developed in related research works. Considering the product information domain, these systems mostly
handle vendor information provided by online malls or third-party information in the shape of user reviews. Systems for gathering vendor information either access online malls using Web Services or web scraping wrappers and rank resulting product lists by federated ranking mechanisms. Detailed information on such systems, including a feasible approach for federated ranking, can be found in [1]. Wong and Lam[2] present algorithms for feature mining especially applicable to extracting product information from vendor sites. Their evaluation proves the algorithms' feasibility in comparison to other systems. Concerning third-party information like user reviews, TextRunner[3] offers a facts-based search engine using the principles of Open Information Extraction. The sources treated by TextRunner are not limited to product reviews. Red Opal[4] offers effective product search mechanisms based on the evaluation of a product review database. Reviews are examined concerning special product features, thus enabling the system to provide the consumer with a set of products expected to satisfy their needs concerning a chosen feature. As mentioned above, the presented systems do not focus on product information provided by producers. In fact, such information is of particular interest for the consumer as producers offer complete, correct and up-to-date information.
In the field of information extraction, many research results have been published as well. These may be divided into supervised, semi-supervised and unsupervised approaches. The approach of learning extraction rules from labeled training documents is referred to as supervised IE. Rapier[5] is a supervised extraction system that uses a relational learning algorithm to extract information from job postings. It initializes the system with specific rules to extract the labeled data and successively replaces those with more general rules. Syntactic and semantic information is incorporated using a part-of-speech tagger. Other supervised IE systems are SRV[6], WIEN[7], SoftMealy[8], STALKER[9] and DEByE[10]. Labeling training data in advance is a labor-intensive process limiting the scope of the IE system. Instead of requiring labelled data, semi-supervised IE systems extract potentially interesting data and let the user decide what shall be extracted. IEPAD[11] is a semi-supervised system. Apart from extraction target selection, such systems are very similar to unsupervised IE systems.
Automatic or unsupervised IE systems extract data from unlabeled training documents. The core concept behind all unsupervised IE systems is to identify repetitive patterns in the input data and extract data items embodied in the recurrent pattern. Unsupervised IE systems can be subdivided into record-level extraction systems and page-level extraction systems. The former assume that multiple data records of the same type are available and rendered into one page by a common template, while the latter extract data from multiple pages sharing the same page-wide template. Evidently, record-level extraction systems can only operate on documents containing multiple data records and require means to identify the data regions describing the individual data records. The latter problem can be tackled with string or tree alignment techniques. Examples for such systems are DEPTA[12]
and NET[13]. DEPTA stands for Data Extraction based on Partial Tree Alignment and is an unsupervised IE system. It extracts data records from list pages (e.g., Amazon search result lists) with an algorithm called MDR, taking advantage of the tree structure of the HTML page. MDR was first presented by Liu et al.[14]. The design of MDR is based on two observations about data records. The first observation states that similar objects are likely located in a contiguous region and formatted with almost identical or at least similar HTML tags. The second observation is that similar data records are built by sub-trees of a common parent node. Unfortunately, multi-record IE systems like DEPTA are not well-suited for our extraction problem, as product detail pages are rarely multi-record pages and typically describe only a single product. Page-level extraction systems can treat the whole input page as a data region from which the data record shall be extracted. However, multiple pages for induction of extraction wrappers need to be fetched in advance. Thus, the problem of collecting training data is shifted into the DR domain and is rarely addressed by IE researchers. Examples for page-level extraction systems are RoadRunner[15] and ExAlg[16]. RoadRunner is an unsupervised web IE system that compares multiple pages and generates union-free regular expressions based on the identified similarities and differences. RoadRunner initializes the wrapper with a random page of the input set and matches the remaining pages using an algorithm called ACME matching. The wrapper is generalized for every encountered mismatch. Text string mismatches are interpreted as data fields, tag mismatches are treated as indicators of optional items and iterators. ExAlg is an IE system for automatically deducing the template from a set of template-generated pages. It has a hierarchically structured data model and supports optional elements and disjunctions. A web page is modeled as a list of tokens in which a token might either be an HTML tag or a word from a text node. ExAlg builds equivalence classes of the tokens found in the input documents. Based on these sets of tokens, the underlying template is deduced. The drawback of these page-level IE systems relating to our extraction problem is the large number of training data to induce extraction rules. ExAlg draws upon the target attributes’ occurrence characteristic which can hardly be derived from only two training pages, thus not meeting (Req2). Furthermore, the presented approaches do not take the problem of document retrieval into account and hence do not fulfil (Req1). Additionally, the support for multiple page templates (Req3) is not tackled yet. The conditions stated above are essential for a successful employment of such algorithms in federated consumer product information systems. In the following, we present our approach building upon some of the ideas presented here, extending them to fully fit (Req2) as well as finding new methods to tackle document retrieval (Req1) and multiple page templates (Req3).
3 Document Retrieval
The retrieval component’s task is to supply the information extraction algorithm with a genuine product specification page. We use web search services such as Google, Bing and Yahoo for this purpose.
The document set to consider is the total set of publicly available websites W. Let the product whose specification page is to be found be pi. Thus, all websites presenting information about this product can be subsumed as W(pi). Since only specification pages are of interest, these websites are defined by WS(pi). Specification pages may be distributed all over the Web, being offered by arbitrary sources. However, product manufacturers are considered to be the most trustable sources concerning their own products. All websites provided by the manufacturer producing pi can be summarized as W(m(pi)). Hence, the document to be found is one of the websites in W(m(pi)) ∩ WS(pi). In the majority of cases, only one producer specification page exists per product, therefore |W(m(pi)) ∩ WS(pi)| = 1. If so, this page is simply denoted wi. The formula shows that the DR component's task consists in determining the set of producer websites W(m(pi)) for the producer of pi, filtering out the set of pages presenting information about pi and finally detecting wi, or choosing one of the found product specification pages. Thus, the retrieval is laid out as a two-step process. In a first step, the producer page is located and, in a second step, the product specification page is searched for, restricting the requests to the producer domain.
3.1 Producer Page Retrieval
A producer site comprising W(m(pi)) is searched for by querying the mentioned web search services with the producer's name, e.g., "Siemens Home Appliances". The results returned by all search engines are ordered using Borda ranking[17]. In Borda ranking, every participant announces an ordered list of preferred candidates. If there are n candidates, the top-ranked candidate of each voter receives n points and each lower-ranked candidate receives a decremented score. To be able to search on the producer's site, the producer domain is extracted. It includes the top-level domain of the host and one further level. For example, from the URL http://www.gigabyte.com.tw/ the domain name gigabyte.com.tw is extracted. If the product page cannot be retrieved on-site, the algorithm falls back to the next producer site candidate from the phrase search.
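As an illustration, the following is a minimal Java sketch of a Borda-style combination of several search-engine result lists. The class and method names, and the engine results in the usage example, are invented for illustration and are not part of the paper.

import java.util.*;

// Minimal sketch of Borda-style rank aggregation over search-engine result lists.
public class BordaRanking {

    public static List<String> combine(List<List<String>> rankings) {
        Map<String, Integer> scores = new HashMap<>();
        for (List<String> ranking : rankings) {
            int n = ranking.size();
            for (int i = 0; i < n; i++) {
                // Top-ranked candidate gets n points, each lower rank one point less.
                scores.merge(ranking.get(i), n - i, Integer::sum);
            }
        }
        List<String> combined = new ArrayList<>(scores.keySet());
        combined.sort((a, b) -> scores.get(b) - scores.get(a));
        return combined;
    }

    public static void main(String[] args) {
        // Invented example result lists from two engines.
        List<String> engineA = List.of("siemens.com", "siemens-home.com", "wikipedia.org");
        List<String> engineB = List.of("siemens-home.com", "siemens.com");
        System.out.println(combine(List.of(engineA, engineB)));
    }
}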
3.2 Product Detail Page Retrieval
For locating the actual product page, that is, building the intersection of W(m(pi)) and WS(pi), different web search services are again queried, this time using the product's name as the query and restricting the search space to the retrieved producer domain. First, the result sets of the individual search engines are combined using Borda ranking to form an initial candidate list (see Figure 2). To cover the case where a product page is discovered but the specification data is contained in a separate page, each page from the candidate list is scanned for product detail page links. Each link's text is compared with characteristic product detail page link patterns. The target of the best matching link is added to the extended candidate list. The prospective specification page
Fig. 2. Scoring the product page candidates
inherits the Borda score of the linking page if it is not among the existing search results. Additionally, a specification score is assigned to this candidate. Subsequently, each result from the extended candidate list is rated with a URI score, a title score and a content score. For the URI score, the URIs of the candidates are scanned for characteristic terms associated with positive or negative meanings in the context of searching for product specification data. For example, the terms "product" or "specification" in a URL might indicate that the candidate is indeed a product specification page. Conversely, terms like "forum", "news", "press" or "review" might signify an irrelevant page in this context and entail a negative score. Furthermore, the URL is scanned for substrings of the product name. Accordingly, a URI score is given to each candidate. In the next step, the titles of the web pages are matched with the product identifier. The rationale behind this concept is to favor pages associated with the proper product, in contrast to specification pages associated with similar products, which might otherwise receive a high score. Depending on the percentage of matching terms, a title score is calculated for every candidate. In a last step, the document contents are scanned for customary attribute key phrases. For this purpose, possibly available attribute keys from former extractions and their occurrence counts are retrieved. The set of text nodes contained in the page is matched with these phrases to calculate the candidates' content scores. All computed scores are combined. The candidate with the highest score is returned as the alleged product page wi. An example can be seen in Table 1.
Table 1. Example scores for product specification page candidates
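A simplified Java sketch of the URI scoring heuristic described above is given below. The positive and negative term lists and the unit weights are assumptions for illustration; they are not the terms or weights used in the paper.

import java.util.List;
import java.util.Locale;

// Minimal sketch of the URI score heuristic; term lists and weights are illustrative.
public class UriScorer {

    private static final List<String> POSITIVE = List.of("product", "specification", "spec", "detail");
    private static final List<String> NEGATIVE = List.of("forum", "news", "press", "review");

    public static int uriScore(String url, String productName) {
        String u = url.toLowerCase(Locale.ROOT);
        int score = 0;
        for (String term : POSITIVE) {
            if (u.contains(term)) score += 1;   // characteristic positive terms
        }
        for (String term : NEGATIVE) {
            if (u.contains(term)) score -= 1;   // terms indicating irrelevant pages
        }
        // Reward URLs containing substrings of the product name.
        for (String part : productName.toLowerCase(Locale.ROOT).split("\\s+")) {
            if (!part.isEmpty() && u.contains(part)) score += 1;
        }
        return score;
    }
}

Title and content scores can be computed analogously (fraction of matching title terms, number of known attribute key phrases found in the page's text nodes) and summed with the URI score to rank the candidates.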
4 Information Extraction
The information extraction component is designed to extract key-value pairs of product information from product detail pages on the producers' sites. Thus, the following algorithm takes a product detail page as input and retrieves the product's specifications from this page. As keys and values in one product detail page share similar XPaths, we try to find those XPaths and create an extraction wrapper out of them. An overview of the procedure is given in Figure 3. In a first step, the given product specification page is fetched and the DOM tree is created. Then, extraction wrappers already residing in the system can be applied. A wrapper consists of an attribute XPath and a relative key XPath. To extract the attributes, the wrapper retrieves the node set via the attribute XPath from the DOM representation of the input document. The key node is located by the relative key XPath. Subsequently, the key node is removed from the attribute node and the remaining text in the node is presumed to be the value component. If the extraction fails, the wrapper is not valid and a new one has to be induced.
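The following Java sketch illustrates how such a wrapper (an attribute XPath plus a relative key XPath) could be applied to a parsed DOM document. It is a simplification of the step described above: instead of removing the key node itself, it strips the key text from the attribute node's text. The class name and any concrete XPath expressions are assumptions, not the ones induced by the system.

import javax.xml.xpath.*;
import org.w3c.dom.*;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of applying an extraction wrapper to a DOM document.
public class WrapperApplier {

    public static Map<String, String> extract(Document doc, String attributeXPath, String keyXPath)
            throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList attributeNodes = (NodeList) xpath.evaluate(attributeXPath, doc, XPathConstants.NODESET);

        Map<String, String> specs = new LinkedHashMap<>();
        for (int i = 0; i < attributeNodes.getLength(); i++) {
            Node attributeNode = attributeNodes.item(i);
            // Locate the key node relative to the attribute node.
            Node keyNode = (Node) xpath.evaluate(keyXPath, attributeNode, XPathConstants.NODE);
            if (keyNode == null) continue;                 // wrapper does not match this node
            String key = keyNode.getTextContent().trim();
            String all = attributeNode.getTextContent().trim();
            // Strip the key text (approximation of removing the key node); the rest is the value.
            String value = all.replaceFirst("^" + java.util.regex.Pattern.quote(key), "").trim();
            if (!key.isEmpty() && !value.isEmpty()) specs.put(key, value);
        }
        return specs;
    }
}

For instance, a hypothetical wrapper with attribute XPath "//table/tr" and relative key XPath "td[1]" would treat each table row as one attribute and its first cell as the key.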
4.1 Wrapper Induction
Fig. 3. General information extraction procedure
As depicted in Figure 3, different wrapper induction algorithms, namely a supervised and two unsupervised algorithms, are executed. The supervised algorithm (Induce by Example) is applicable when provided with key examples (e.g., "Optical Zoom" for the digital camera domain) that directly give a hint on the product attributes to be extracted from the product detail page. The different algorithms are shown in Figure 4.
Fig. 4. Wrapper induction algorithms
Independent of the chosen algorithm, the first step comprises the creation of phrase clusters that might contain the product details to be extracted. The phrases are all text nodes of the website's DOM tree. Only unique phrases are considered; all recurrent phrases are discarded during clustering. The clustering is based on the phrases' generalized occurrence paths and enclosing tag signatures. The former is defined as an XPath query that unambiguously selects a node set containing only the respective nodes and that is stripped of all indices. For example, the generalized occurrence path of "//table/tr[4]/td[1]" is "//table/tr[]/td[]". The latter consists of the enclosing tag element including its attributes. As can be seen in Figure 5, phrases not occurring in all input documents are discarded in case multiple documents are considered. Thus, a phrase cluster contains all text nodes residing on the same level of the DOM tree and having an identical enclosing tag. Obviously, the attributes must not all reside in the very same tag if they are to be syntactically distinguishable by the algorithm. Otherwise, approaches like ExAlg operating on the token level would have to be adopted.
After all text nodes have been assigned to a cluster, a score for each cluster is computed using a rating function. Different rating functions are used depending on the wrapper induction algorithm. The different induction variants and the employed rating functions are discussed below. Subsequently, an XPath query is derived from the path of all nodes in the best-rated cluster and a wrapper object is created. The XPath is converted to a generalized occurrence path and split on the last eliminated index into an attribute XPath and a relative key XPath.
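The clustering key described above could be derived as in the following Java sketch: the generalized occurrence path (indices stripped) combined with the enclosing tag signature. Method and class names are illustrative assumptions.

// Minimal sketch of deriving the clustering key of a text node.
public class ClusterKey {

    // "/html/body/table/tr[4]/td[1]" -> "/html/body/table/tr[]/td[]"
    public static String generalize(String occurrencePath) {
        return occurrencePath.replaceAll("\\[\\d+\\]", "[]");
    }

    // Tag signature: the enclosing element name plus its attributes.
    public static String tagSignature(org.w3c.dom.Element enclosing) {
        StringBuilder sig = new StringBuilder(enclosing.getTagName());
        org.w3c.dom.NamedNodeMap attrs = enclosing.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            org.w3c.dom.Node a = attrs.item(i);
            sig.append(' ').append(a.getNodeName()).append("=\"").append(a.getNodeValue()).append('"');
        }
        return sig.toString();
    }

    // Two text nodes fall into the same cluster if they share this key.
    public static String clusterKey(String occurrencePath, org.w3c.dom.Element enclosing) {
        return generalize(occurrencePath) + " | " + tagSignature(enclosing);
    }
}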
Fig. 5. Clustering text nodes from multiple documents
If the wrapper is able to extract any attributes using these XPaths, the wrapper is returned. Otherwise, the next cluster with a lower score is considered.
Induction with Example Phrase. The wrapper induction process can be facilitated by specifying a key phrase of one product attribute as an example. When such an example is provided, no additional training document is crawled and the cluster rating just looks for the example phrase in the available clusters (left side of Figure 4).
Induction with Domain Knowledge. If no example key phrase is given, but extracted and confirmed product data is already stored in the system, all key phrases of this known product data are matched with each element in the cluster and a hit is recorded in case of success. The cluster score is simply modelled as the number of hits (central part of Figure 4); a minimal sketch of this rating follows below.
Induction with 2nd Product Page. If neither an example key phrase nor domain knowledge is available, the wrapper induction relies on a training approach. Related product pages are retrieved to provide a training set. Phrases not occurring in all training documents are discarded and the clusters are rated based on their size, scaled by the fraction of non-discarded nodes. The scaling is intended to prevent mixing up key and value components. The latter may happen when individual attributes have value tuples which build a larger cluster than the key components of the respective attributes (right side of Figure 4). How to retrieve related product pages is described in Section 4.2.
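The announced sketch of the domain-knowledge rating function is shown below in Java: the score of a phrase cluster is simply the number of its phrases matching already known attribute key phrases. The data structures are assumptions made for illustration.

import java.util.Collection;
import java.util.Set;

// Minimal sketch of the domain-knowledge rating of a phrase cluster.
public class ClusterRating {

    public static int rateByDomainKnowledge(Collection<String> clusterPhrases, Set<String> knownKeyPhrases) {
        int hits = 0;
        for (String phrase : clusterPhrases) {
            if (knownKeyPhrases.contains(phrase.trim())) {
                hits++;    // one hit per known key phrase found in the cluster
            }
        }
        return hits;
    }
}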
4.2 Related Product Page Retrieval
If no domain knowledge is available for the information extraction component to identify relevant data on a given product page, the generator of the IE wrapper requires at least one other page sharing the same template to detect recurrent patterns. Thus, similar pages are crawled starting from the product detail page
and selecting a page with a similar URL, content and structure. This approach is feasible, as it often takes no more than two clicks to navigate from a product detail page to another one of a similar product. Additionally, similar URLs are more likely to reference template-sharing pages, e.g., "/product.html?name=D60" and "/product.html?name=S550". This is due to the routing mechanisms in template-based web application frameworks. The crawler starts at the product detail page and recursively extracts all referenced URLs until a given recursion depth is reached. In practice, a depth of two showed suitable results. The URLs are then sorted by similarity to the original product URL and provided to the IE component. The URL similarity is modelled as the weighted Levenshtein distance of the individual URL components. E.g., differences in the host name have a larger impact on the final score than differences in the URL's query part.
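Such a weighted URL similarity could be sketched as follows in Java: the Levenshtein distance is computed per URL component, and the components are weighted so that host differences matter more than query differences. The concrete weights are assumptions, not values taken from the paper.

import java.net.URI;

// Minimal sketch of the weighted URL distance; lower values mean more similar URLs.
public class UrlSimilarity {

    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // Weights (3.0, 1.5, 0.5) are illustrative: host > path > query.
    public static double weightedDistance(String url1, String url2) {
        URI u1 = URI.create(url1), u2 = URI.create(url2);
        double dist = 0;
        dist += 3.0 * levenshtein(nz(u1.getHost()), nz(u2.getHost()));
        dist += 1.5 * levenshtein(nz(u1.getPath()), nz(u2.getPath()));
        dist += 0.5 * levenshtein(nz(u1.getQuery()), nz(u2.getQuery()));
        return dist;
    }

    private static String nz(String s) { return s == null ? "" : s; }
}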
4.3 Text Node Splitting
In practice, product attributes are often stored in a single text node, with a colon separating the key and value items, e.g., "Key: Value". Therefore, text nodes are split along the alleged separator and only the first part is stored in the cluster. If such joint phrases are predominant in a cluster, the wrapper stores the occurrence path of the cluster as the attribute path without specifying a key path. In these cases, the wrapper performs the extraction of the key and value terms from the joint attribute phrases. Whether a cluster features joint attribute phrases is determined based on the fraction of phrases with an alleged separator: either most phrases in the cluster contain a separator or known key phrases are found featuring a separator. For this reason, individual phrases are voted for in the respective rating functions.
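The splitting of a joint "Key: Value" phrase could look like the following Java sketch; the colon as separator follows the example above, while the class and method names and the sample value are illustrative assumptions.

import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Minimal sketch of splitting a joint attribute phrase on the alleged separator.
public class PhraseSplitter {

    // Splits e.g. "Optical Zoom: 3x" into ("Optical Zoom", "3x"); returns null if no separator is found.
    public static Map.Entry<String, String> split(String phrase) {
        int idx = phrase.indexOf(':');
        if (idx <= 0 || idx == phrase.length() - 1) {
            return null;                       // no alleged separator, keep the phrase as-is
        }
        String key = phrase.substring(0, idx).trim();
        String value = phrase.substring(idx + 1).trim();
        return new SimpleEntry<>(key, value);
    }
}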
4.4 Wrapper Selection
As stated above, product pages residing on the same producer site often share the same template. Still the algorithm should be able to handle more than one template per manufacturer, e.g., considering different product categories. When extracting information from an arbitrary product page, it needs to be decided which wrapper will extract information from the page or, in case none is applicable, whether a new one is to be generated. It is feasible to let all existing wrapper objects of the current domain extract information out of a given product page. Improper extraction rules either yield no data at all or might mine bogus data. Therefore, the returned data is matched with existing data previously extracted by the same wrapper. The assumption is that a certain template encodes similar content. For example, the pages of a producer’s product line share one template, while pages from another product line are encoded with a different template. This way, every wrapper of a producer is assigned a score whose role is twofold. On the one hand, the score is used to select the proper wrapper object. On the other hand, a minimal threshold ensures that a new wrapper is generated in case no eligible extraction rules exist yet. The threshold is based on the amount of prior knowledge available for a certain wrapper.
5 Evaluation
To evaluate the effectiveness of the approach, the described algorithms were implemented in Fedseeko[18], a system for federated consumer product information search. The presence of domain knowledge is denoted by the D superscript in the charts. The gold standard used for testing consisted of 100 products from 40 different manufacturers and 10 diverse application domains. These products were picked to get a broad coverage of different product categories. For the tests concerning domain knowledge, 262 key phrases, gathered from representative products for each of the 10 domains, were inserted into the database.
5.1 Product Page Retrieval
For evaluating the retrieval component, the gold standard consisted of the proper domain and page URL of each product and its producer. The automatic retrieval results were matched with the prestored locations. If the locator was able to identify the proper product page URL, the retrieval was filed as a success. Due to URL aliases and localized versions of product pages, non-matching URLs were checked again manually to identify false negatives. The results are illustrated in Figure 6. With a success rate of over 90%, the producer site identification is quite robust. However, failures to find the producer site occur when a producer has distinct sites for different product groups. For instance, the home appliance product line of Siemens is not featured on the primary Siemens site "siemens.com" but offloaded to another domain, namely "siemens-home.com". Another frequently encountered source of error is localized producer sites. These might list diverse sets of products or use different product identifiers. However, traces of product names are often found on other sites of the producer, e.g., in the news section or in a support forum. Better retrieval results could be accomplished by searching for the product page on multiple producer candidates in parallel and combining the results from all pages. It is worth stressing the point that only the retrieval of the actual product detail page was filed as a success. In the majority of the failure cases, an overview page associated with the proper product was returned. In other cases, an index or comparison page listing multiple products was identified. Other failures can be attributed to the retrieval of wrong products' pages or to ineligible content like product reviews or news entries.
Fig. 6. Evaluation of page retrieval
On the right side of Figure 6 it can be seen that incorporating domain knowledge (reference attribute keys) increases the retrieval performance. At the same time, there is a slightly greater chance of retrieving a wrong product's page because the domain knowledge embodied in the accumulated key phrases is generic. Overall, the retrieval component showed very good performance concerning producer sites, while having some deficiencies in the field of product detail page retrieval. This is especially adverse, as the information extraction relies on the retrieval of a correct product detail page. However, the usage of domain knowledge ameliorates the situation and thus makes the algorithm quite suitable. Domain knowledge is already gathered automatically in the system. Still, optimizations in this field might yield even better results for the product page retrieval and will thus be part of future work.
5.2 Related Product Page Retrieval
The goal of the web crawler is to identify a page generated from the same template as the reference page. During the tests, this succeeded in 69% of the cases. Half of the failure cases can be attributed to the fact that different views associated with the same product often have a high page and URL similarity. However, if there are few common attributes and only slight deviations in the product-specific parts of the page, it is more likely that another view of the original product will be taken for a related product page. Another problem occurs in case template-sharing pages are not reachable within the designed link distance. Though the recursion limit could be increased, the execution time easily rises tenfold with every followed link level. A recursion depth of two was found to be a suitable trade-off. In order to make the employed related page retrieval algorithm comparable, a component for autonomously locating random product pages of the same producer has been implemented. However, one has to take product page detection failures into account. The considerable chance that the found page was built from a different template has to be regarded as well. Moreover, the other product page might feature a completely distinct set of key phrases. It was observed that such a system performed rather poorly in comparison to the crawler-based approach.
5.3 Information Extraction
Assessing the extraction performance is slightly more complicated than evaluating the page retrieval performance, as many attributes are associated with every product. Each of the three presented extraction algorithms was confronted with the task of extracting product attributes from the product pages. A reference attribute was manually retrieved for each product and matched with the extracted data. Whenever the reference attribute was contained within the set of extracted attributes, it indicated that (1) the proper data region had been selected, (2) the proper granularity level was chosen in a nested template and (3) the value could be mapped to its associated key phrase. Obviously, it does not indicate that the extracted data was complete or correct. Therefore, the first and last attributes were also recorded for reference and the number of extracted records was checked.
Fig. 7. Evaluation of retrieval and extraction components
Fig. 8. Correctness and completeness of extraction results
As Figure 7 reveals, wrapper induction using crawled documents works for approximately half of the test products. However, significant extraction performance improvements were gained with the availability of domain knowledge. Unfortunately, some cases cannot be handled even when an example is provided. This applies to about one in ten products in the test data. A successful extraction implies that at least some product attributes were correctly extracted. More detailed results are given in Figure 8. The per-product results can be classified into different success and failure categories, based on the correctness and completeness of the extracted data. A perfect result indicates that the extraction results are correct and complete. In other words, all available attributes were extracted and no false positives were in the result set. The second category includes attribute sets which are complete but contain additional incorrect attributes. Finally, if some of the attributes were not extracted, the data set is filed as incomplete.
The failures can be categorized into cases where no attributes were extracted at all and those where bogus attributes were mined. The former is less hazardous because it requires no guidance to mark these cases as failures. In the latter cases, however, the extracted data must be rejected manually. Using automatic extraction with existing domain knowledge, 85% of the extracted product attributes were correct and 10% were bogus data. On average, 23 of 27 available product attributes were correctly extracted and one false positive was mined. Overall, the information extraction component showed feasible results. Assuming that the algorithms are included in an information platform used by consumers, it is expected that users provide extraction hints to the system in a wiki-like form. After some running time and the intensive collection of domain knowledge, the extraction success should increase even further, making the employment of crawling-based information extraction necessary in only very few cases.
6 Conclusions
In this paper we presented algorithms for locating and extracting product information from websites while only being supplied with a product name and its producer's name. While the retrieval algorithm was developed from scratch, the extraction algorithm extends previous works presented in Section 2, especially leveraging the special characteristics of product detail pages. The evaluation showed the feasibility of the approaches. Both the retrieval and the extraction component generated better results when supplied with domain knowledge used for bootstrapping. Thus, future research will focus on improving the system's learning component to automatically create extensive domain knowledge at runtime. Currently, additional algorithms are being developed for mapping the extracted specification keys to a central terminology and converting the corresponding values to standard formats. Thus, product comparisons would be enabled at runtime. Evaluations will examine the success of these algorithms. Another direction of future research is the automatic extension of the used product specification terminology, which is represented by an ontology. This would improve the mapping algorithm's evaluation results significantly. The consolidated integration of this paper's algorithms, as well as the described future extensions, into a federated consumer product information system would enable users to create an all-embracing view on products of interest and compare those products effectively, while only requiring a fraction of today's effort for gathering product information from the information providers. In the same manner, the approach may be integrated into enterprise product information systems as well as online shopping systems, easing and accelerating the process of implementing product specifications.
Using XML Schema Subtraction to Compress Electronic Payment Messages Stefan Böttcher, Rita Hartel, and Christian Messinger University of Paderborn, Computer Science, Fürstenallee 11, 33102 Paderborn, Germany {stb,rst,michri}@uni-paderborn.de
Abstract. SEPA (Single Euro Payments Area) is an XML-based standard that defines the format for electronic payments between the member states of the European Union. Besides the advantages that come with an XML-based format, XML data involves one major disadvantage when storing and transferring large amounts of data: the storage overhead caused by the verbose structure of XML data. Therefore, we propose a compressed format for SEPA data that helps to overcome this problem. We propose to apply XML Schema Subtraction (XSDS) to the SEPA messages, such that all information that is already defined by the SEPA schema can be removed from the SEPA messages. This compressed format allows executing navigation and updates directly on the compressed data, i.e., without prior decompression. The compression reduces the data size down to 11% of the original message size on average. In addition, queries can be evaluated directly on the compressed data with a speed that is comparable to that of ADSL2. Keywords: SEPA XML message compression, SEPA data exchange, efficient query processing on compressed SEPA data.
XML structure of SEPA messages into a data format that is 9 times smaller than the original message size. Furthermore, XSDS allows processing the compressed messages in a similar way as the original messages, e.g., by evaluating XPath queries, without prior decompression. Thus, using XSDS-compressed SEPA messages instead of original SEPA messages as the internal format within a bank institute allows, on the one hand, saving storage costs while archiving the data and, on the other hand, reducing the amount of data to be processed. Furthermore, query processing on compressed data is possible within competitive runtime.
1.3 Paper Organization
The remainder of this paper is organized as follows. Section 2 describes the basic concept of XSDS, i.e., how schema information can be removed from an XML document. Section 3 gives an overview of how the compressed data can be processed directly, i.e., without prior decompression. Section 4 evaluates the compression ratio of XSDS and query processing on XSDS-compressed data based on SEPA data. Section 5 compares XSDS to related work. Finally, Section 6 summarizes our contributions.
2 The Concept
2.1 The Basic Idea
SEPA is a standard that defines the format of electronic payments within the member states of the EU. Each electronic payment is processed and stored in the form of an XML document, the format of which is defined by a set of XML schemata (XSD) by SEPA. Some parts of each SEPA file are strictly determined by the SEPA standard, e.g., that each payment message starts with the tag <Document> or that the first two child nodes of the <GrpHdr> element (group header) are the elements <MsgId> (message ID) and <CreDtTm> (date and time of message creation). Other parts are variable and vary from document to document (e.g., whether the Debtor (<Dbtr>) has a postal address (<PstlAdr>) or not). The main compression principle of XML schema subtraction (XSDS) is the following. XSDS removes all information that is strictly defined by the XML schema from a given XML document, and, in the compressed format, XSDS encodes only those parts of the XML document that can vary according to the XML schema. The compression principle of XSDS is similar to the compression principles of XCQ [19] and DTD subtraction [7], both of which remove information provided by a DTD from a given XML document. However, in contrast to these approaches, XSDS removes information given by an arbitrary XML schema, which is significantly more complex than just considering DTDs. The current paper reports on XSDS, but focuses on the advantages of applying XSDS to SEPA as an application standard which is significant for financial transactions in the EU member states.
2.2 This Paper's Example
As the whole SEPA standard is too extensive to be discussed within this paper, we only take a detailed look at a small excerpt. Each payment document contains (amongst
others) a Debtor. The information on the Debtor is stored as an element with label <Dbtr> that contains a name (label <Nm>) and zero or one ID (label <Id>) followed by zero or one postal address (label <PstlAdr>). The ID contains either a private ID (label <PrvtId>) or an organization ID (label <OrgId>). The postal address consists of a city and zero to two address lines (label <AdrLine>). Figure 1 shows a graphical visualization of the <Dbtr> element and its definition.
Fig. 1. Rule graph of an excerpt of the SEPA schema
2.3 Building the Rule Graph
As the first step of the compression process, the given XML schema is transformed into a rule graph (cf., e.g., the rule graph shown in Fig. 1). During the compression, the rule graph is traversed, the XML data is consumed and the compressed data is generated. The rule graph consists of 12 different kinds of node types, each of which corresponds to an XML schema operator:
- Element node: an element node represents an element definition in the XML schema. It has the name of the element as label.
- Attribute node: an attribute node represents an attribute definition in the XML schema. It has the identifier ‘@’ plus the name of the attribute as label and a CDATA node as single child.
- Attribute use node: an attribute use node represents the specified attribute use of an attribute. It has either ‘optional’ or ‘required’ or ‘prohibited’ as label and it has an attribute node as single child.
- Choice node: a choice node represents the choice operator in XML schema. It has the symbol ‘|’ as label.
- Sequence node: a sequence node represents the sequence operator in XML schema. It has the symbol ‘,’ as label.
- Repetition node: a repetition node represents the non-default values of the minOccurs and maxOccurs attributes given for an element E in the XML schema. It has ‘x..y’ as label, where x is the value of the minOccurs attribute and y is the value of the maxOccurs attribute. It has the node that corresponds to E as single child.
- ALL node: an all node represents the ALL operator in XML schema. It has the symbol ‘Σ’ as label.
- CDATA node: a CDATA node represents character data. It has the symbol ‘#CDATA’ as label. It has a list node or a union node or an enumeration node or a basetype node as child node.
- List node: a list node specifies character data that is of the form of a space-separated list. The list node has the symbol ‘{LIST}’ as label and has a basetype node as child node.
- Union node: a union node allows the character data to be of different data types. It has the symbol ‘{UNION}’ as label and basetype nodes as child nodes.
- Enumeration node: an enumeration node specifies that the character data is one item of a given enumeration. It has the keyword ‘{ENUMERATION}’ plus the list of values of the enumeration as label. It has a basetype node as child node.
- Basetype node: the basetype node specifies the data type of the character data. It has ‘[TYPE]’ as label, where TYPE is the name of the data type (e.g., STRING or INT).
2.4 Removing Schema Information from the Document Structure in XSDS
Each compression step assumes that we consider one current position in the XML document and the corresponding position within the rule graph at a time. Depending on the node type of the current node within the rule graph, we do the following with the input XML document that shall be compressed:
- Element node: We consume the XML start tag at the current position of the XML document. If the XML node name differs from the label of the current node in the rule graph, the XML document is not valid. We continue to compress with the first child node of the current node in the rule graph. When all children have been processed, we consume the XML end tag.
- Attribute node: We consume the attribute with the name specified by this node’s label from the current XML element. We continue to compress with the first child node of this rule graph node.
- Attribute use node: If this node has the label ‘required’ or ‘prohibited’, nothing has to be done. If its label is ‘optional’, we check whether the attribute is given for the current element in the XML document or not. If there is an attribute with the given name, we store one bit ‘1’ in the compressed data, else we store one bit ‘0’. We continue to compress with the first child node of this rule graph node.
- Choice node: We check which of the alternatives given by the rule graph of the schema is present at the current position within the XML document and store the index of the position of the chosen alternative among the child nodes. We continue to compress with the node that represents the chosen alternative. (This requires log(n) bits if there are n possible alternatives.) Union nodes and enumeration nodes are encoded similarly to choice nodes.
- Sequence node: As the sequence requires all its child nodes to occur exactly once, we do not have to store anything for a child node, nor do we have to consume anything. We continue to compress with the first child node of this sequence node.
- Repetition node: We determine the number n of repetitions and store n-x in the compressed data, where x is the value given for minOccurs. We continue the
compression with the first child of this node and process its single child node n times. (If the number of children per node is, e.g., limited by 2^32 (MAXINT), this requires 1 to 5 bytes per repetition node, depending on the concrete number of repetitions.) List nodes are encoded similarly to repetition nodes.
- ALL node: We encode the order of child elements of the current XML element as found in the XML document. We process the child nodes in the order in which they occur at the current position within the XML document.
- CDATA node: We consume the character data and compress it (cf. Section 2.6).
When applying the compression by removing schema information from the SEPA excerpt shown in Figure 1, we do the following for the variant parts of a given XML document fragment matching this schema excerpt. We store a single bit for the repetition nodes with label ‘0..1’ stating whether or not there is an <Id> element and whether or not there is a <PstlAdr> element found at the current XML document position. We store a single bit for the choice node with label ‘|’ stating whether there is a <PrvtId> or an <OrgId> element found at the current XML document position. Finally, we store two bits for the repetition node with label ‘0..2’ stating whether there are 0, 1, or 2 <AdrLine> elements. The remaining parts of an XML fragment for this SEPA excerpt are fixed by the schema excerpt. This includes all element names found in the XML fragment. For example, not only the element name of the fragment’s root (<Dbtr>) is fixed, but also the element name of the first child (<Nm>) of the <Dbtr> element is fixed. Furthermore, it is not necessary to include the element names for optional parts like <Id> or <PstlAdr> when the option has been chosen, as the element name is fixed by the XSD. Similarly, for repetitions, e.g., occurrences of the <AdrLine> element, it is sufficient to store the number of repetitions in the compressed data format. The element name for each repeated element is fixed. Finally, the <PrvtId> element must occur whenever the first alternative of the choice is taken. Therefore, we need (at most) 5 bits for storing the structure of each possible XML fragment matching the structure of a <Dbtr> element in the given SEPA excerpt. Requiring only 5 bits is optimal, as there exist 24 different ‘shapes’ of the <Dbtr> element and its descendants.
2.5 Determining the Path through the Rule Graph
As all content models of XML – and thus XML schema – have to be deterministic, it is always sufficient to read the next XML event in order to decide, e.g., for a choice node, which one is the chosen alternative. Nevertheless, for some cases it is not so easy to decide which is the right path to take through the rule graph. Consider for example a rule ((b | c*), d) that says that the element content either starts with a b-node or with any number of c-nodes, followed by a d-node. If the next event to be read is either a b-start-tag or a c-start-tag, the chosen alternative is easy to determine. But the next event might even be a d-start-tag. Although there is no element node with label d among the descendants of the choice node, we have to determine which of the two alternatives had been chosen. In this case, we might have taken the second alternative with 0 repetitions of the repetition node.
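Before the path-determination rules are given, the 5-bit encoding worked out above for the <Dbtr> excerpt can be made concrete. The following Python sketch is purely illustrative and is not the authors' implementation: it hard-codes the rule graph of Fig. 1 instead of deriving it from the XSD, assumes the bit order described in Section 2.4, and uses an invented, simplified fragment.

# Illustrative sketch (not the authors' code): encode the variable parts of a
# <Dbtr> fragment from Fig. 1 as structure bits, in the bit order of Sect. 2.4.
import xml.etree.ElementTree as ET

sample = """<Dbtr>
  <Nm>Some Debtor Ltd.</Nm>
  <Id><OrgId>EXAMPLE-ORG-ID</OrgId></Id>
  <PstlAdr>
    <AdrLine>Sample Street 1</AdrLine>
    <AdrLine>12345 Sample City</AdrLine>
  </PstlAdr>
</Dbtr>"""

def encode_dbtr_structure(xml_text):
    dbtr = ET.fromstring(xml_text)
    bits = []
    id_elem = dbtr.find("Id")
    pstl = dbtr.find("PstlAdr")
    bits.append(1 if id_elem is not None else 0)          # repetition 0..1: is <Id> present?
    bits.append(1 if pstl is not None else 0)             # repetition 0..1: is <PstlAdr> present?
    if id_elem is not None:                                # choice: <PrvtId> (0) or <OrgId> (1)
        bits.append(0 if id_elem.find("PrvtId") is not None else 1)
    if pstl is not None:                                   # repetition 0..2: number of <AdrLine>s (2 bits)
        n = len(pstl.findall("AdrLine"))
        bits.extend([(n >> 1) & 1, n & 1])
    return bits   # element names and <Nm> are fixed by the schema and need not be stored

print(encode_dbtr_structure(sample))   # [1, 1, 1, 1, 0] -> at most 5 bits per <Dbtr> fragment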
In order to decide for all- and choice-nodes with which alternative to continue, and to decide for repetition nodes whether or not one more repetition follows, we calculate for each child node of the current node within the rule graph whether it must be applied and whether it can be applied. A rule graph node must be applied if it corresponds to the next XML event, and it can be applied if it does not contradict the next XML event. If we find a child node of a choice that must be applied, this is the next node with which to continue the compression; if not, we continue with the first node that can be applied. In order to encode an ALL node, we first continue with all of its child nodes that must be applied and then add those which can be applied. For a repetition node, there is one more repetition if its child node must be applied. The functions must and can are defined for the different node types as follows:
- Element node: An element node must be applied if the next XML event is a startElement event that has the same label as the element node. An element node never has the state of can be applied.
- Attribute node: An attribute node must be applied if the next XML event contains an attribute with the given label. An attribute node never has the state of can be applied.
- Attribute use node: An attribute use node must be applied if it does not have the label ‘prohibited’. It can never be applied.
- Choice node: A choice node must be applied if one of its child nodes must be applied. It can be applied if one of its child nodes can be applied.
- Sequence node: A sequence node must be applied if there is a position n such that the first n-1 child nodes of the sequence node can be applied and the nth child node must be applied. A sequence node can be applied if all of its child nodes can be applied.
- Repetition node: A repetition node must be applied if its child node must be applied. A repetition node can be applied if the value of minOccurs is 0 or if its child node can be applied.
- ALL node: An ALL node must be applied if there exists a child node that must be applied. An ALL node can be applied if all of its child nodes can be applied.
If we return to our example rule ((b | c*), d) and apply the must-function to the choice node in the rule graph and to the current XML event with label ‘d’, we get the result that neither of its child nodes must be applied, but that the repetition node above c can be applied because of the value minOccurs=0. Therefore, we decide to encode the second alternative and continue the compression with the repetition node.
2.6 Compressing the Textual Data
Besides the structure, a SEPA document contains textual data. Whereas large parts of the structure are defined by the schema, less information is given on the textual data. Nevertheless, compression of textual data and query evaluation on compressed data can be improved by grouping together textual data that is included by the same parent elements. For these purposes, for each parent element of textual data, a single container is provided that stores the textual data in document order. Storing the textual data in different containers provides two advantages:
- When processing the SEPA documents, different queries have to be evaluated, e.g., whether the payment creditor is on an embargo list. In many cases, this can be answered by simply searching in a few containers.
- As each container contains data of the same domain (e.g., names, zip codes, ...), compressing each container separately from the other containers yields a stronger compression ratio than compressing all the textual data of one document together.
XSDS mainly distinguishes between three different types of textual data: String data, Integer data, and data enumerations that only allow a value from a given enumeration of possible values (e.g., the address type of a postal address of an invoicee can be one value of the following list: ADDR, PBOX, HOME, BIZZ, MLTO, DLVY). In our implementation, each container that contains String data is compressed via the generic text compressor gzip. Each entry of an Integer container is stored via a variable-length Integer encoding that stores 1 to 5 bytes per Integer value depending on the concrete value. Finally, each value stored within an enumeration container is stored analogously to a choice within the structure: if an enumeration container allows n different values, each value is represented by an encoding of size log(n) bits that defines the position of the current value within the enumeration.
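As a sketch of how such containers might be handled, the following illustration groups text values by their parent element and compresses each group separately. The variable-length integer encoding shown is one common scheme (7 payload bits per byte) that matches the 1-to-5-byte range stated above, but the paper does not prescribe this exact encoding, and the element names in the example are assumptions.

# Illustrative sketch of per-container text compression (not the authors' implementation).
import gzip
from collections import defaultdict

def encode_varint(value):
    """Variable-length integer: 1 to 5 bytes for values up to 2^32 - 1 (assumed scheme)."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append(byte | (0x80 if value else 0x00))   # high bit marks a continuation byte
        if not value:
            return bytes(out)

def compress_containers(values_by_parent, integer_parents=()):
    """values_by_parent: {parent element name: [text values in document order]}."""
    compressed = {}
    for parent, values in values_by_parent.items():
        if parent in integer_parents:
            compressed[parent] = b"".join(encode_varint(int(v)) for v in values)
        else:
            # values of the same domain (names, cities, ...) compress well together
            compressed[parent] = gzip.compress("\n".join(values).encode("utf-8"))
    return compressed

containers = defaultdict(list)
containers["Nm"] += ["Some Debtor Ltd.", "Another Debtor GmbH"]   # assumed example containers
containers["NbOfTxs"] += ["2"]
print({k: len(v) for k, v in compress_containers(containers, integer_parents={"NbOfTxs"}).items()})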
3 Query Processing In contrast to other compressors like XMill [17], gzip or bzip2 that are mainly being used for archiving, XSDS is able to evaluate queries on the compressed data directly, i.e., without prior decompression. This makes XSDS not only useful for archiving data, but also for compressing data that is still processed or exchanged among partners. For example, for a bank institute, this means that the bank institute compresses SEPA data that it receives, if the data is not already compressed. Then, the bank institute can process the compressed SEPA data and archive it without any need of decompression or recompression between multiple processing steps. Only if the bank institute sends the data to a customer or another institute that requires uncompressed SEPA data as input, a decompression into the uncompressed SEPA format might be needed. For query evaluation on compressed SEPA data, we use the looking forward approach [20] followed by a query rewriting system that reduces queries to using only the axes child, descendant-or-self, following-sibling and a self axis using filters [6]. Additionally, we have implemented a generic query processing engine that further reduces XPath queries to queries on the basic axes first-child, next-sibling, and parent, and the operations getXMLNodeType and getLabel on the data compressed by our XSDS compression approach. In order to determine first-child, next-sibling or label of a current context node, we simultaneously parse through the XML schema and the compressed document. Similar to keeping track of the current context node which describes the actual parsing position in the XML document, simultaneously parsing the rule graph keeps track of a current XML schema node which describes the actual parsing position in the XML schema. Whenever the XML schema allows for variant parts, as there is a repetition, a choice or an ALL element in the schema, the concrete choice selected for this variant part is looked up in the compressed data.
Concerning the textual data, only those containers have to be decompressed that contain data that is addressed by the query: either data that is needed for evaluating a predicate filter or data that is part of the output result. Whenever the compressor of a data container allows partial decompression of the compressed data and value comparisons directly in the compressed data, no needless decompression of textual data is performed at all. Only those text values that are required for query processing are decompressed and read.
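A minimal sketch of this selective decompression follows, under the assumption that each container is stored under the path of its parent element; this naming convention is not specified in the paper.

# Illustrative sketch: decompress only the text containers a query actually touches.
import gzip

def values_for_query(compressed_containers, needed_paths):
    """compressed_containers: {parent path: gzip-compressed values};
    needed_paths: container paths referenced by the query (filters or output)."""
    result = {}
    for path in needed_paths:
        blob = compressed_containers.get(path)
        if blob is None:
            continue                                   # no such container in this document
        result[path] = gzip.decompress(blob).decode("utf-8").split("\n")
    return result                                      # all other containers stay compressed

# e.g. for the query //Dbtr/Nm only the container holding debtor names is opened:
# names = values_for_query(containers, {"Dbtr/Nm"})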
4 Evaluations 4.1 Compression Ratio In order to evaluate our approach, we have collected 9 SEPA example files provided by different bank institutes from the internet. We have compared our approach to 3 different compression approaches: First, gzip – a generic text compressor based on LZ77 and Huffman, second XMill [17] – a non-queryable XML compressor, and third, bzip2 – a generic text compressor based on Burrows-Wheeler Transform [9], Move-to-Front and Huffman.
Fig. 2. Compression ratio reached for SEPA documents
The results of our evaluation are shown in Figure 2. Although all other tested compressors do not allow query evaluation on the compressed data, i.e., they require a prior decompression when processing the data, our approach additionally outperforms them in terms of the reached compression ratio, i.e. the size of compressed document divided by the size of original document. While bzip2 reaches compression ratios from 8% to 51% (37% on average), gzip reaches compression ratios from 10% to 49% (30% on average) and XMill reaches
compression ratios of 10% to 54% (32% on average), XSDS reaches compression ratios of 5% to 15% (11% on average). In other words, on average XSDS compresses 3 times stronger than each of the other evaluated compressors. To the best of our knowledge, XSDS is the compressor that reaches the strongest compression on SEPA documents.
4.2 Query Performance
In order to test the query performance, we have generated a set of example SEPA documents with the same structure but increasing size (while the smallest document, D12, has a size of 17 kB and contains 2 SEPA messages, the largest document, D1, has a size of 193 MB and contains 25,000 SEPA messages).
Fig. 3. Throughput reached for document D2
We have evaluated the following set of queries that ask for debtor names, the currency of the payment, a complete SEPA message, or the amounts of the payments on the documents:
Q1 = /sepade/Msg/Document/pain.001.001.02/PmtInf/Dbtr/Nm
Q2 = //Dbtr/Nm
Q3 = //InstdAmt[@Ccy]
Q4 = /sepade/Msg
Q5 = //Amt
Figure 3 shows the results of our evaluation on document D2 (77 MB, 10,000 SEPA messages). For this document, our query evaluation on the compressed document reaches an average throughput rate equivalent to 26 Megabits/s with respect to the uncompressed original SEPA document. In other words, our query evaluation is faster than ADSL2+, currently the fastest ADSL standard, which reaches download rates of 24 Mbit/s.
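The reported throughput can be read as the size of the original document divided by the query evaluation time. A small sketch of this calculation: the roughly 24 seconds of evaluation time used below is an inference from the 77 MB size of D2 and the 26 Mbit/s figure, not a number reported in the paper.

# Illustrative sketch: throughput relative to the uncompressed original document.
def throughput_mbit_per_s(original_size_bytes, query_time_seconds):
    return (original_size_bytes * 8) / (query_time_seconds * 1_000_000)

# D2 is 77 MB; a throughput of about 26 Mbit/s corresponds to roughly 24 s per query:
print(throughput_mbit_per_s(77 * 1024 * 1024, 24))   # about 26.9, above ADSL2+ (24 Mbit/s)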
Fig. 4. Query scalability
In order to test the scalability of the query evaluation on the XSDS-compressed data, we have evaluated the queries Q1 to Q5 on the documents D1 to D12. As shown in Figure 4, query evaluation on XSDS-compressed data scales excellently, as the throughput rate even increases with the original file size up to about 100 MB. One reason for this increase is that the compression ratio gets stronger the bigger the files are, and thus the data volume to be processed during query evaluation decreases in relation to the size of the original file. For an original file size of more than 100 MB, the average query evaluation throughput decreases, at least for query Q4 (which requires a total decompression of all zipped text containers). The reason for this final decrease lies in the size of the text containers: the bigger the zipped text containers get, the less efficiently they can be accessed. Because of that, we propose to split the text containers into several text containers as soon as a certain threshold (e.g., 10,000 SEPA messages) is exceeded. As each SEPA message represents a single payment transaction, no queries have to be evaluated that span several SEPA messages. Therefore, splitting a text container when its size exceeds a given threshold will not lead to any restriction.
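A minimal sketch of such threshold-based splitting follows; the grouping per SEPA message and the 10,000-message threshold are taken from the text, everything else is an assumed illustration.

# Illustrative sketch: split a text container into chunks of at most `threshold` SEPA messages,
# so that no single zipped container grows too large to be accessed efficiently.
def split_container(values_per_message, threshold=10_000):
    """values_per_message: list with one list of text values per SEPA message."""
    chunks = []
    for start in range(0, len(values_per_message), threshold):
        chunk = values_per_message[start:start + threshold]
        chunks.append([v for message in chunk for v in message])   # flatten, keep document order
    return chunks   # each chunk is compressed and queried independently

print(len(split_container([["a"]] * 25_000)))   # 3 chunks for 25,000 messages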
5 Related Work There exist several approaches to XML structure compression, which can be mainly divided into three categories: encoding-based compressors, grammar-based compressors, and schema-based compressors. Many compressors do not generate compressed data that supports evaluating queries, i.e., any query processing on the compressed data needs prior decompression. The encoding-based compressors allow for a faster compression speed than the other ones, as only local data has to be considered in the compression as opposed to considering different sub-trees as in grammar-based compressors.
The XMill algorithm [17] is an example of the first category. The structure is compressed, by assigning each tag name a unique and short ID. Each end-tag is encoded by the symbol ‘/’. This approach does not allow querying the compressed data. XGrind [22], XPRESS [18] and XQueC [2] are extensions of the XMill approach. Each of these approaches compresses the tag information using dictionaries and Huffman-encoding [16] and replaces the end-tags by either a ‘/’symbol or by parentheses. All three approaches allow querying the compressed data. However, as all of them result in a weaker compression than XMill, XSDS compresses stronger than all of them. The encoding-based compression approaches [3], [12], and [15] use tokenization. [12] replaces each attribute and element name by a token, where each token is defined when it is being used for the first time. [3] and [15] use tokenization as well, but they enrich the data by additional information that allows for a fast navigation (e.g., number of children, pointer to next-sibling, existence of content and attributes). All three approaches use a reserved byte for encoding the end-tag of an element. They all allow querying the compressed data. The encoding-based compression approach in [24] defines a succinct representation of XML that stores the start-tags in form of tokens and the end-tag in form of a special token (e.g. ‘)’). They enrich their compressed XML representation by some additional index data that allows a more efficient query evaluation. This approach allows querying of compressed data. XQzip [13] and the approaches presented in [1] and [8] belong to grammar-based compression. They compress the data structure of an XML document by combining identical sub-trees. Afterwards, the data nodes are attached to the leaf nodes, i.e., one leaf node may point to several data nodes. The data is compressed by an arbitrary compression approach. These approaches allow querying compressed data. An extension of [8] and [13] is the BPLEX algorithm [10]. This approach does not only combine identical sub-trees, but recognizes similar patterns within the XML, and therefore allows a higher degree of compression. It allows querying of compressed data. Schema-based compression comprises such approaches as XCQ [19], XAUST [21], Xenia [23] and DTD subtraction [7]. They subtract the given schema information from the structural information. Instead of a complete XML structure stream or tree, they only generate and output information not already contained in the schema information (e.g., the chosen alternative for a choice-operator or the number of repetitions for a *-operator within the DTD). These approaches are queryable and applicable to XML streams, but they can only be used if schema information is available. XSDS follows the same basic idea to delete information which is redundant because of a given schema. In contrast to XCQ, XAUST and DTD subtraction that can only remove schema information given by a DTD, XSDS works on XML schema which is significantly more complex than DTDs. Furthermore, XSDS uses a counting schema for repetitions that compresses stronger than e.g. the ones used in XCQ or Xenia. XSDS can be applied not only to SEPA documents, but to all XML documents for which an XML schema is given, for example to MX office documents (c.f. [5]). The approach in [14] does not belong to any of the three categories. 
It is based on Burrows-Wheeler Transform [9], i.e., the XML data is rearranged in such a way that compression techniques such as gzip achieve higher compression ratios. This
approach allows querying the compressed data only if it is enriched with additional index information. In comparison to all other approaches, XSDS is the only approach that combines the following advantageous properties: XSDS removes XML data nodes that are fixed by the given XML schema, it encodes choices, repetitions, and ALL-groups in an efficient manner, and it allows for efficient query processing on the compressed XML data. To the best of our knowledge, no other XML compression technique combines such a compression performance for SEPA data with such query processing speed on compressed data.
6 Conclusions We have presented XSDS (XML schema subtraction) – an XML compressor that performs especially well for electronic payment data in SEPA format. XSDS removes all the data that can be inferred from the given schema information of the XML document. Thereby, XSDS provides two major advantages: First, XSDS generates a strongly compressed document representation which may save costs and energy by saving bandwidth for data transfer, and by saving main memory required to process data, and by saving secondary storage needed to archive compressed XML data. Second, XSDS supports fast query evaluation on the compressed document without prior decompression. Our experiments have shown that XSDS compresses SEPA messages down to a size of 11% of the original SEPA document size on average, which outperforms the other compressors, i.e. gzip, XMill and bzip2, by a factor of 3. Furthermore, query evaluation directly on the compressed SEPA data is not only possible, but in our experiments, query processing reaches throughput rates that are higher than those of ADSL2+. Therefore, we consider the XSDS compression technique to be highly beneficial in all SEPA applications for which the data volume is a bottleneck.
References 1. Adiego, J., Navarro, G., de la Fuente, P.: Lempel-Ziv Compression of Structured Text. In: Data Compression Conference 2004 (2004) 2. Arion, Bonifati, A., Manolescu, I., Pugliese, A.: XQueC: A Query-Conscious Compressed XML Database. ACM Transactions on Internet Technology (to appear) 3. Bayardo, R.J., Gruhl, D., Josifovski, V., Myllymaki, J.: An evaluation of binary xml encoding optimizations for fast stream based XML processing. In: Proc. of the 13th International Conference on World Wide Web (2004) 4. Böttcher, S., Hartel, R., Messinger, C.: Queryable SEPA Message Compression by XML Schema Subtraction. In: Filipe, J., Cordeiro, J. (eds.) ICEIS 2010. LNBIP, vol. 73, pp. 451–463. Springer, Heidelberg (2011) 5. Böttcher, S., Hartel, R., Messinger, C.: Searchable Compression of Office Documents by XML Schema Subtraction. In: Sixth International XML Database Symposium, XSym 2010, Singapore (September 2010)
6. Böttcher, S., Steinmetz, R.: Evaluating XPath Queries on XML Data Streams. In: British National Conference on Databases (BNCOD 2007), Glasgow, Great Britain (July 2007) 7. Böttcher, S., Steinmetz, R., Klein, N.: XML Index Compression by DTD Subtraction. In: International Conference on Enterprise Information Systems, ICEIS 2007 (2007) 8. Buneman, P., Grohe, M., Koch, C.: Path Queries on Compressed XML. In: VLDB (2003) 9. Burrows, M., Wheeler, D.: A block sorting loss-less data compression algorithm. Technical Report 124, Digital Equipment Corporation (1994) 10. Busatto, G., Lohrey, M., Maneth, S.: Efficient memory representation of XML documents. In: Bierman, G., Koch, C. (eds.) DBPL 2005. LNCS, vol. 3774, pp. 199–216. Springer, Heidelberg (2005) 11. Candan, K.S., Hsiung, W.-P., Chen, S., Tatemura, J., Agrawal, D.: AFilter: Adaptable XML Filtering with Prefix-Caching and Suffix-Clustering. In: VLDB (2006) 12. Cheney, J.: Compressing XML with multiplexed hierarchical models. In: Proceedings of the 2001 IEEE Data Compression Conference (DCC 2001) (2001) 13. Cheng, J., Ng, W.: XQzip: Querying compressed XML using structural indexing. In: Hwang, J., Christodoulakis, S., Plexousakis, D., Christophides, V., Koubarakis, M., Böhm, K. (eds.) EDBT 2004. LNCS, vol. 2992, pp. 219–236. Springer, Heidelberg (2004) 14. Ferragina, P., Luccio, F., Manzini, G., Muthukrishnan, S.: Compressing and Searching XML Data Via Two Zips. In: Proceedings of the Fifteenth International World Wide Web Conference (2006) 15. Girardot, M., Sundaresan, N.: Millau: An Encoding Format for Efficient Representation and Exchange of XML over the Web. In: Proceedings of the 9th International WWW Conference (2000) 16. Huffman, D.A.: A method for the construction of minimum-redundancy codes. In: Proc. of the I.R.E. (1952) 17. Liefke, H., Suciu, D.: XMill: An Efficient Compressor for XML Data. In: Proc. of ACM SIGMOD (2000) 18. Min, J.K., Park, M.J., Chung, C.W.: XPRESS: A Queriable Compression for XML Data. In: Proceedings of SIGMOD (2003) 19. Ng, W., Lam, W.Y., Wood, P.T., Levene, M.: XCQ: A queriable XML compression system. Knowledge and Information Systems (2006) 20. Olteanu, D., Meuss, H., Furche, T., Bry, F.: XPath: Looking forward. In: Chaudhri, A.B., Unland, R., Djeraba, C., Lindner, W. (eds.) EDBT 2002. LNCS, vol. 2490, pp. 109–127. Springer, Heidelberg (2002) 21. Subramanian, H., Shankar, P.: Compressing XML documents using recursive finite state automata. In: Farré, J., Litovsky, I., Schmitz, S. (eds.) CIAA 2005. LNCS, vol. 3845, pp. 282–293. Springer, Heidelberg (2006) 22. Tolani, P.M., Hartisa, J.R.: XGRIND: A query-friendly XML compressor. In: Proc. ICDE (2002) 23. Werner, C., Buschmann, C., Brandt, Y., Fischer, S.: Compressing SOAP Messages by using Pushdown Automata. In: ICWS 2006 (2006) 24. Zhang, N., Kacholia, V., Özsu, M.T.: A Succinct Physical Storage Scheme for Efficient Evaluation of Path Queries in XML. In: ICDE (2004) 25. Ziv, J., Lempel, A.: A Universal Algorithm for Sequential Data Compression. IEEE Transactions on Information Theory 23(3), 337–343 (1977)
Enhancing the Selection of Web Sources: A Reputation Based Approach Donato Barbagallo, Cinzia Cappiello, Chiara Francalanci, and Maristella Matera Politecnico di Milano, Dipartimento di Elettronica e Informazione Via Ponzio 34/5, Milano, Italy {barbagallo,cappiell,francala,matera}@elet.polimi.it
Abstract. The large amount of available Web data sources is an important opportunity for Web users and also for various data-intensive Web applications. Nevertheless, the selection of the most relevant data sources and thus of high quality information is still a challenging issue. This paper proposes an approach for data source selection that is based on the notion of reputation of the data sources. The data quality literature defines reputation as a multidimensional quality attribute that measures the trustworthiness and importance of an information source. This paper introduces a set of metrics able to measure the reputation of a Web source by considering its authority, its relevance in a given context, and the quality of the content. These variables have been empirically assessed for the top 20 sources identified by Google as a response to 100 queries in the tourism domain. In particular, Google’s ranking has been compared with the ranking obtained by means of a multi-dimensional source reputation index. Results show that the assessment of reputation represents a tangible aid to the selection of information sources and to identification of reliable data. Keywords: Web reputation, Web search.
several alternative approaches to ranking aimed at increasing the satisfaction of users in different contexts. A large body of literature follows the semantic Web approach and proposes ranking algorithms taking advantage of semantic abilities and metadata, such as tags, domain knowledge, ontologies, and corpora [21]. Recently, collaborative approaches propose innovative ranking algorithms based on a variety of user-provided evaluations [22]. More consolidated approaches focus on QoS and adjust authority-based rankings with runtime response time information [10]. This paper explores the possibility of adjusting the ranking provided by search engines by assessing the reputation of Web information sources. The data quality literature defines reputation as a dimension of information quality that measures the trustworthiness and importance of an information source [7]. Reputation is recognized as a multi-dimensional quality attribute. The variables that affect the overall reputation of an information source are related to the institutional clout of the source, to the relevance of the source in a given context, and to the general quality of the source's information content. At the current state of the art, the literature lacks evidence demonstrating the importance of the concept of reputation in improving the ranking provided by search engines. It also lacks an operationalization of the concept of reputation for the assessment of Web information sources. This paper aims at filling these literature gaps. The next section discusses our operationalization of the concept of reputation applied to Web information sources. Section 3 describes our experiment and Section 4 reports our main research results. Section 5 contextualizes our contributions in the field of reputation assessment. Conclusions are finally drawn in Section 6.
2 Operationalization of the Concept of Reputation Our operationalization of reputation draws from the data quality literature. In particular, we start from the classification of reputation dimensions provided by [7]. The paper explains how accuracy, completeness, and time represent the fundamental data quality dimensions in most contexts. Interpretability, authority, and dependability represent additional dimensions that should be considered when assessing reputation, especially for semi-structured and non structured sources of information. In this paper, we focus on Web information sources and, specifically, on blogs and forums. This choice is related to the general research framework in which this paper is positioned, which focuses on sentiment analysis, i.e. on the automated evaluation of people’s opinions based on user-provided information (comments, posts, responses, social interactions). For this purpose, blogs and forums represent a primary source of information. We have identified four aspects of blogs and forums that should be evaluated to assess their reputation: • Traffic: overall volume of information produced and exchanged in a given time frame. • Breadth of contributions: overall range of issues on which the source can provide information. • Relevance: degree of specialization of the source in a given domain (e.g. tourism). • Liveliness: responsiveness to new issues or events.
Table 1 summarizes the reputation metrics that we have identified for the variables above (table columns) along different data quality dimensions drawn from [7] (table rows). The source of each metric is reported in parentheses, where "crawling" means either manual inspection or automated crawling depending on the site. Please note that some of the metrics are provided by Alexa (www.alexa.com), a well-known service publishing traffic metrics for a number of Internet sites. Also note that not all data quality dimensions apply to all variables (not applicable, N/A in Table 1).
Table 1. Reputation Metrics
Accuracy. Traffic: N/A. Breadth of contributions: average number of comments to selected post (crawling). Relevance: centrality, i.e., number of covered topics (crawling). Liveliness: N/A.
Completeness. Traffic: N/A. Breadth of contributions: number of open discussions (crawling). Relevance: number of open discussions compared to largest Web blog/forum (crawling). Liveliness: number of comments per user (crawling).
Time. Traffic: traffic rank (www.alexa.com). Breadth of contributions: age of source (crawling). Relevance: N/A. Liveliness: average number of new opened discussions per day (www.alexa.com).
Interpretability. Traffic: N/A. Breadth of contributions: average number of distinct tags per post (crawling). Relevance: N/A. Liveliness: N/A.
Authority. Traffic: daily visitors (www.alexa.com); daily page views (www.alexa.com); average time spent on site (www.alexa.com). Breadth of contributions: N/A. Relevance: number of inbound links (www.alexa.com); number of feed subscriptions (Feedburner tool). Liveliness: number of daily page views per daily visitor (www.alexa.com).
Dependability. Traffic: bounce rate (www.alexa.com). Breadth of contributions: number of comments per discussion (crawling). Relevance: N/A. Liveliness: average number of comments per discussion per day (crawling).
The metric labelled “number of open discussions compared to largest Web blog/forum” has been calculated based on the following benchmarks. Technorati (www.technorati.com) reports that the blog with the highest number of daily visitors is Huffingtonpost (a blog of blogs), with an average 4.80 million visitors per day. Alexa reports that the forum with the highest number of daily visitors is Topix, with an average 2.05 million visitors per day. As a general observation, our choice of metrics has been driven by feasibility considerations. In particular, Table 1 includes only quantitative and measurable metrics.
3 Research Design and Data Sample We have performed 100 queries with Google in the tourism domain. This domain choice is related to the importance of tourism in Web search activities. It has been
estimated that more than 60% of Web users perform searches related to tourism and travel (see www.bing.com/travel). Referring to a specific domain helps design the set of queries according to a domain-specific search model. In this research, we refer to the Anholt-GfK Roper Nations Brand Index [2]. This index defines six fundamental dimensions of a destination's brand along which the basic decision-making variables of potential tourists should be identified: presence, place, pre-requisites, people, pulse, and potential. We have identified ten decision-making variables along these dimensions:
1. Weather and environment.
2. Transportation.
3. Low fares and tickets.
4. Fashion and shopping.
5. Food and drinks.
6. Arts and culture.
7. Events and sport.
8. Life and entertainment.
9. Night and music.
10. Services and schools.
Our choice of decision-making variables is discussed in [6]. The discussion of the decision-making model is outside the scope of this paper; however, the design of our set of queries according to a decision-making model helps us understand the impact of our findings. In particular, we can assess the usefulness of the reputation concept in the identification of important information sources for all decision-making variables, or, alternatively, only for specific variables. If, on the contrary, queries were generic, it would be more difficult to understand the consequence of missing high-reputation sources of information. We have defined 10 queries for each decision-making variable. The 10 queries are derived from the 5 basic queries described in Table 2 by adding "London" and "New York" to all queries (a sketch of the query generation is given after Table 2). To limit Google's results to blogs and forums, all queries are in the form: <"tag" [London or New York] [blog or forum]>. Figure 1 reports the Google results for a sample query about cinemas in London.
Table 2. Basic queries
Weather and environment: level of pollution, congestion charge, sustainable tourism, weather, air quality
Transportation: underground, rail, airport, traffic jam, street
Low fares and tickets: low-cost flights, cost of living, discounts and reductions, student fare, tickets
Fashion and shopping: discount shopping, fashion, department store, second hand, vintage
Food and drinks: pub, wine, beer, pizza, good cooking
Arts and culture: museums, monuments, parks, festivals, art
Events and sport: sport, tennis courts, city marathon, NBA, football
Life and entertainment: cinema, restaurants, clubs&bars, theaters, theme parks
Night and music: nightlife, music, theaters, party, jazz
Services and schools: public transports, accommodation, university, utilities, healthcare
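The 100 queries can be generated mechanically from Table 2. The sketch below assumes a query phrasing of the form '"<tag>" <city> blog OR forum', loosely following the sample query of Section 5 ("monuments in milan blog OR forum"); the exact phrasing used in the experiments is not fully specified.

# Illustrative sketch: derive the 100 tourism queries from the 10 decision-making
# variables x 5 tags x 2 cities. The exact query phrasing is an assumption.
BASIC_TAGS = {
    "Weather and environment": ["level of pollution", "congestion charge",
                                "sustainable tourism", "weather", "air quality"],
    "Transportation": ["underground", "rail", "airport", "traffic jam", "street"],
    # ... remaining eight decision-making variables as listed in Table 2 ...
}
CITIES = ["London", "New York"]

def build_queries(tags_by_variable, cities):
    return [f'"{tag}" {city} blog OR forum'
            for tags in tags_by_variable.values()
            for tag in tags
            for city in cities]

queries = build_queries(BASIC_TAGS, CITIES)
print(len(queries))   # 20 here; 100 with all ten variables of Table 2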
For all queries, we have considered the top 20 results according to Google's ranking. Then, we have re-ranked the results according to all metrics in Table 1. The distance between Google's ranking and the ranking obtained according to each reputation metric has been calculated by means of Kendall tau [14]. Kendall tau (Kτ) has the following properties:
• It ranges between -1 and 1.
• It is equal to 1 when two rankings are identical.
• It is equal to -1 when two rankings are opposite.
Formally, Kendall tau is defined as follows:

K_\tau = \frac{n_c - n_d}{\frac{1}{2}\, n\,(n-1)}

where n represents the number of ranked items, n_c represents the number of concordant pairs (i.e., pairs of items that appear in the same relative order in both rankings), and n_d represents the number of discordant pairs. By comparing Google's ranking with the reputation-based rankings we can:
1. Understand the impact of the reputation variables on search results.
2. Understand whether different reputation variables provide similar results and, hence, whether it seems reasonable to define an aggregate reputation index.
We have complemented the quantitative analyses based on Kendall tau with a number of qualitative inspections of results and manual verifications in order to triangulate results.
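A sketch of this comparison for two top-20 result lists follows. The counting of concordant and discordant pairs implements the formula above; the rescaling to [0, 1] reflects the normalization mentioned in Section 4, where a linear rescaling is assumed since the exact normalization is not stated.

# Illustrative sketch: Kendall tau between Google's ranking and a reputation-based ranking.
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """rank_a, rank_b: the same items (e.g. URLs) in two different orders."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        agree = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y])
        if agree > 0:
            concordant += 1
        elif agree < 0:
            discordant += 1
    n = len(rank_a)
    return (concordant - discordant) / (0.5 * n * (n - 1))

def normalized_tau(rank_a, rank_b):
    return (kendall_tau(rank_a, rank_b) + 1) / 2       # assumed linear rescaling to [0, 1]

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))          # 1.0 (identical rankings)
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))          # -1.0 (opposite rankings)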
Fig. 1. Sample query results
These complementary analyses have allowed us to understand the practical impact of deltas between rankings.
4 Empirical Results
As discussed in the previous section, our experiments have been based on the top 20 results according to Google's ranking for the 100 queries created considering all the tags listed in Table 2, both for London and New York. For all the Web sites retrieved through Google, we calculated the metrics in Table 1 and re-ranked the results according to the performed assessment. We thus obtained more than 1000 re-ranked items to compare with the official Google ranking by means of the Kτ index. The computation of the average of the Kτ values for each metric allowed us to assess the impact of each metric on the Google ranking definition. In fact, the similarity values reported in Table 3 can be interpreted as the degree to which each reputation metric is implicitly considered in Google's PageRank algorithm. Note that the Kτ values have been normalized in the [0, 1] interval. A first result of our experiments is the evidence that the PageRank algorithm is actually only partially based on the observation of inbound links. In fact, as can be noted in Table 3, the Kτ index associated with this metric reveals a dissimilarity between the Google ranking and the ranking exclusively based on inbound links. Furthermore, results also show that the Authority metrics provide rankings with a higher similarity than the ones generated on the basis of the Dependability and Completeness metrics. This is due to the fact that the PageRank algorithm mainly analyzes the frequency with which users access the Web site and, thus, it tends to promote the Web sites characterized by numerous users' accesses (e.g., page views). The similarity with the Google ranking then decreases when the metrics start to deal with the analysis of the actual use of the Web site contents (e.g., average number of comments, new discussions, etc.). This is due to the generality of Google, which on the one hand is advantageous but, on the other hand, does not focus on the quality of information provided by Web sites. The lack of dependability and completeness metrics often leads to misjudgments of forums and blogs, where contents play a major role.
Table 3. Similarity between our ranking based on reputation metrics and the Google ranking (evaluated for the metrics: daily visitors; bounce rate; number of open discussions compared to largest Web blog/forum; average number of comments per discussion per day; number of comments per discussion; traffic rank; number of inbound links; daily page views; average time spent on site; average number of new opened discussions per day)
Besides the similarity coefficients, the ranking comparison has been further refined by considering the distance between the positions associated with the same link in two different rankings. Again, considering all the metric-driven rankings, we have calculated (i) the average distance, (ii) the variance, and (iii) the percentage of the coincident links inside a ranking. Table 4 shows the results of this analysis. The average distance is in general about 4, which is noteworthy if we consider that only the first 20 positions have been considered in both rankings. The variance values especially highlight that in some cases the distance is particularly high. This is also confirmed by the results shown in the last two columns of Table 4, where the details about the number of sites with a score difference greater than 10 and greater than 5 are given. As can be noted, the percentage of cases in which the difference is greater than 5 is at least 35%.
Table 4. Analysis of the score differences (per metric: average distance; variance; coincident links (%); distance >= 10 (%); distance >= 5 (%))
Daily visitors: 3.9213; 7.6337; 7.874; 2.62; 38.40
Bounce rate: 4.10590; 7.5874; 7.2386; 2.75; 41.81
Number of open discussions compared to largest Web blog/forum: 3.9567; 7.7077; 6.9554; 3.01; 37.61
Average number of comments per discussion per day: 3.9685; 8.23; 7.6016; 3.80; 39.32
Number of comments per discussion: 3.8344; 7.521; 8.812; 2.75; 35.91
Traffic rank: 3.8427; 7.3033; 7.4705; 2.88; 36.57
Number of inbound links: 3.7296; 7.3072; 8.3113; 2.23; 35.78
Daily page views: 3.9895; 7.5242; 7.6115; 2.62; 40.10
Average time spent on site: 3.9507; 7.6656; 7.723; 2.49; 38.14
Average number of new opened discussions per day: 3.9093; 7.5773; 7.6215; 2.36; 40.10
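The score-difference analysis of Table 4 can be reproduced with a few lines. The sketch below assumes that both rankings contain the same 20 links and measures position differences on that common set; how sites missing from one of the two lists were treated is not specified in the paper.

# Illustrative sketch: distance statistics between Google's ranking and a metric-driven
# re-ranking of the same top-20 result list (cf. Table 4).
from statistics import mean, pvariance

def distance_stats(google_rank, metric_rank):
    pos_m = {link: i for i, link in enumerate(metric_rank)}
    diffs = [abs(i - pos_m[link]) for i, link in enumerate(google_rank)]
    return {
        "average distance": mean(diffs),
        "variance": pvariance(diffs),
        "coincident links (%)": 100 * sum(d == 0 for d in diffs) / len(diffs),
        "distance >= 10 (%)": 100 * sum(d >= 10 for d in diffs) / len(diffs),
        "distance >= 5 (%)": 100 * sum(d >= 5 for d in diffs) / len(diffs),
    }

# Example with two permutations of the same 20 links:
google = list(range(20))
by_daily_visitors = google[::-1]                      # a deliberately extreme re-ranking
print(distance_stats(google, by_daily_visitors))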
users who keep the content up-to-date; (iii) time construct which is an index of users’ interest, since it collects measures of the time spent on the Web site. Then, constructs for further analysis have been obtained through an average of each identified component in order to proceed with regressions. Table 6 reports the results of a linear regression that measures the interaction between each construct and the Google ranking variable, named Google_rank. The relation between traffic and Google_rank is significant (p = 0.036) and positive, meaning that traffic is a good predictor of Google positioning. The interaction between participation and Google_rank is supported at 90% significance level (p = 0.058) and the coefficient has a negative sign. Finally, time and Google_rank are negatively related and the relation is strongly significant (p < 0.001), so the better the results in such an indicator, the worse it is on a Google search. These analyses confirm that the PageRank algorithm is directly related to traffic and inbound links, privileging mere number of contacts rather than the actual interest of the users and the quality of such interactions. Indeed, the inverse relations between Google_rank and time and participation give some evidence of the fact that highly participated Web sites can be even penalized in a Google search or, at least, not rewarded. To understand this result let us consider the practical example of companies’ institutional Web sites. These Web sites are often equipped with a forum or a blog which is usually highly monitored by moderators or editorial units to avoid spam or attacks to the company reputation. It is easy to observe that this kind of Web sites are always well positioned, usually on top, and are also the most visited since they are the gate to the company and related products and services. Nevertheless, they are not always the most interesting or truthful sources of information, because negative Table 5. Principal Component Analysis Variable
Construct | Variable | Standardized Regression Weights | p-value | Variance Extracted | Composite Reliability
Traffic | Traffic rank | 0.873 | <0.001 | 0.937 | 0.944
Traffic | Daily visitors | 0.992 | <0.001 | |
Traffic | Daily page views | 0.980 | <0.001 | |
Traffic | Number of inbound links | 0.852 | <0.001 | |
Participation | Number of open discussions compared to largest Web blog/forum | 0.988 | <0.001 | 0.758 | 0.867
Participation | Average number of new opened discussions per day | 0.482 | <0.001 | |
Participation | Number of comments per discussion | 0.634 | <0.001 | |
Participation | Average number of comments per discussion per day | 0.903 | <0.001 | |
Time | Average time spent on site | 0.957 | <0.001 | 0.852 | 0.886
Time | Bounce rate | 0.747 | <0.001 | |
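The two acceptance criteria mentioned above can be made explicit. The following sketch is our own illustration, not the authors' code: it applies the standard composite reliability and average variance extracted formulas of Fornell and Larcker [13] and Hair et al. [16] to the standardized loadings of the traffic construct; the values reported in Table 5 come from the authors' SEM run and need not coincide exactly with these textbook formulas.

```python
def composite_reliability(loadings):
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

traffic_loadings = [0.873, 0.992, 0.980, 0.852]             # standardized weights, Table 5
print(composite_reliability(traffic_loadings) > 0.70)       # threshold suggested by [4][13]
print(average_variance_extracted(traffic_loadings) > 0.50)  # threshold suggested by [16]
```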
Table 6. Linear regression analysis
Table 7. Values of aggregated metrics for TripAdvisor and AboutMilan

Metric | TripAdvisor | AboutMilan
Traffic | 0.99 | 0.13
Participation | 0.99 | 0.04
Time | 0.79 | 0.47
Overall | 0.93 | 0.21
In this case, an independent forum or blog could be a good information source for reviews, but such sources are not usually highly ranked by Google unless they have high traffic.
5 Reputation-Driven Web Data Selection: An Example

Web 2.0 technologies allow people to express their opinions and distribute them through several means (e.g., forums, blog posts, social networks), thus increasing the amount of information on the Web. In this particular field, the reputation of a Web source can support users in the evaluation of the reliability and relevance of its content for sentiment analyses. Sentiment analysis provides indicators that summarize the opinions contained in Web sources on a specific matter. Nowadays, sentiment analysis techniques are largely used by enterprises for monitoring customers' appreciation of their products and services, and even to understand their brands' reputation [25]. Considering this scenario, in order to show the significance of reputation variation when dealing with the selection of the most relevant information sources, we have collected data to evaluate two different Web sites that are returned by Google when searching for entertainment in Milan: TripAdvisor and AboutMilan. A query on monuments in Milan ("monuments in milan blog OR forum") performed on Google returns AboutMilan as the first result and TripAdvisor as the eleventh. First of all, we have evaluated the reputation of the two sources by using the metrics presented in Section 2. Table 7 shows the empirical aggregated values of traffic, participation, and time, and the overall global score for each source, i.e., an aggregate reputation index obtained as a simple average of the other metrics. For the sake of simplicity, we will not deal with the problem of differently weighting the constructs of the overall ranking metric, but it is clear that a decision maker who is more interested in participation than in traffic can decide to set different weights according to his/her preferences. Table 7 shows that the Google positioning of the two sources does not reflect the reputation values obtained with our framework. Indeed, TripAdvisor is one of the highest ranked Web sites in Alexa (it was ranked 275th by the Alexa Traffic Rank on September 17, 2010), being one of the most
common and detailed Web sites for tourism. Second, we have evaluated the sentiment of the opinions contained in the two sources. The input data of sentiment analyses are fragments of text providing comments on the subject of interest. Sentiment is evaluated on individual fragments first and then aggregated into overall indices, usually calculated as simple mean values. In this way, we have obtained Sent_TA and Sent_AM as the two aggregate sentiment indices for TripAdvisor and AboutMilan, respectively. In common industrial tools, the aggregate sentiment index is calculated as the mean value across the different sources, according to the following expression:

Sentiment_std = (Sent_TA + Sent_AM) / 2

Considering the reputation of the two sources, it is instead possible to define the sentiment index as the following weighted mean:

Sentiment_ranked = (w_TA * Sent_TA + w_AM * Sent_AM) / (w_TA + w_AM)

where w_TA and w_AM are the two aggregated reputation indices for TripAdvisor and AboutMilan. We have extracted a set of 10 adjacent posts over a two-day time period from each of the two considered sources and we have measured sentiment on a 1-to-5 scale: Sent_TA is equal to 3, i.e., neutral, and Sent_AM is equal to 4.1, i.e., positive. By using the sentiment evaluation formulas defined above, we have obtained the following values:

• Sentiment_std = 3.6, which would be rounded to 4;
• Sentiment_ranked = 3.2, which would be rounded to 3.

These results show how taking reputation into account can change the aggregate sentiment significantly. Thus, the problem of weighting the reputation of different sources when assessing sentiment becomes crucial for decision-making. The specific weighting function attributing a varying importance to the different constructs in Table 5 is still an open issue.
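For concreteness, the two aggregation rules can be evaluated directly with the figures given above and the overall reputation scores of Table 7; this is plain arithmetic, not the authors' tooling:

```python
sent_ta, sent_am = 3.0, 4.1     # measured sentiment of the TripAdvisor and AboutMilan posts
w_ta, w_am = 0.93, 0.21         # overall reputation indices from Table 7

sentiment_std = (sent_ta + sent_am) / 2
sentiment_ranked = (w_ta * sent_ta + w_am * sent_am) / (w_ta + w_am)

print(sentiment_std)               # 3.55, reported as 3.6 in the text
print(round(sentiment_ranked, 2))  # 3.2, as reported in the text
```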
6 Contributions to the Field of Reputation Assessment

The analysis described in this paper originates from the need to determine the influence of reputation on the selection of relevant and reliable sources for the analysis of interesting entities. Some work has already been devoted to the trust of Web resources [1], focusing on content and making a distinction between content trust and entity trust. Trustworthiness on the Web is also identified with popularity: this equation led to the success of the PageRank algorithm [9], even though popularity does not necessarily convey dependable information, since highly ranked Web pages could be spammed. To overcome this issue, newer algorithms are based on hub and authority mechanisms from the field of Social Network Analysis (SNA) [20]. Especially when considering services such as forums, in our approach we assume that it is important to evaluate even a single contribution: SNA can be used to evaluate each author's trustworthiness [26]. The selection of sources providing dependable information has rarely been based on methods for assessing both software and data quality. However, the concept of reputation is the result of the assessment of several properties of information sources, including accuracy, completeness, timeliness, dependability, and consistency [7]. The data quality literature provides a consolidated body of research
on the quality dimensions of information, their qualitative and quantitative assessment, and their improvement [3]. Trust-related quality dimensions, and in particular reputation, are however still an open issue [14]. In [24], the authors propose an architecture that evaluates the reputation of the different sources owned by the companies involved in a cooperative process on the basis of the quality of the information that they exchange. In our approach, reputation refers to each individual information source and represents a) an a-priori assessment of the reputation of the information source based on the source's authority in a given field and b) an assessment of the source's ability to offer relevant answers to user queries, based on historical data on the source collected by the broker as part of its service. This approach is original in that it defines reputation as a context- and time-dependent characteristic of information sources and leverages the ability to keep a track record of each source's reputation over time. The reputation of a source and, more generally, the quality of the data it provides, can be the discriminating factor for the selection of a source when multiple sources are able to offer the same data set.
7 Final Discussion

This paper has presented the results of an analysis that we have conducted to identify the relevance of data quality and reputation metrics to search rankings. Results show that different rankings occur when such metrics are taken into account and, more specifically, that in the absence of reputation metrics some items can be misjudged. The primary goal of our experiment was not to identify shortcomings in the ranking strategies of current search engines; rather, we aimed at showing how the assessment of reputation can improve the selection of information sources. Our assumption is that the reputation-based classification of information sources and the assessment of the quality of their information can help Web users to select the most authoritative sources. This is especially relevant in the context of market monitoring, where Web users retrieve and access Web resources to get an idea about a topic of key interest, but also to make some kind of choice or decision. The experiment described in this paper is situated within a larger project, INTEREST (INnovaTivE solutions for REputation based self-Service environments), which aims at promoting reputation as a key driver for the selection of dependable information sources ([5][6]). INTEREST focuses on the definition of technologies and methodologies to facilitate the creation of dashboards through which users can easily integrate dependable services for information access and analysis. The selection of services is based on data quality and reputation. Thanks to mashup technologies [27], the selected services can then be flexibly composed to construct a personal analysis environment. With respect to traditional dashboards, characterized by a rigid structure, INTEREST introduces: i) the possibility to adopt sources scouted from the Web and assessed with respect to their quality and reputation; ii) the possibility to quickly and easily create situational views [8] over interesting information, by mashing up selected dependable services. Our current efforts are devoted to refining the method for reputation assessment, for example by introducing term clustering to improve the analysis, and by defining a global reputation index resulting from the aggregation of the reputation metrics
proposed in this paper. We are conducting an extensive validation of our method for reputation assessment, which is based on the analysis of a huge collection of content crawled from well-known blogs and forums (e.g., Twitter). We are also conducting studies with samples of users to verify whether the reputation-based rankings of blogs and forums, as derived from our reputation metrics, are in line with the quality of these information sources as perceived by users. Our future work is directed toward the creation of the INTEREST platform, in which the fusion of reputation analysis and mashup technologies can provide an effective environment for information composition and analysis.
References 1. Artz, D., Gil, Y.: A survey of trust in computer science and the Semantic Web. J. Web Sem. 5(2), 58–71 (2007) 2. Anholt, S.: Competitive Identity: The New Brand Management for Nations, Cities and Regions. Palgrave Macmillan, Basingstoke (2009) 3. Atzeni, P., Merialdo, P., Sindoni, G.: Web site evaluation: Methodology and case study. In: Arisawa, H., Kambayashi, Y., Kumar, V., Mayr, H.C., Hunt, I. (eds.) ER Workshops 2001. LNCS, vol. 2465, p. 253. Springer, Heidelberg (2002) 4. Bagozzi, R.P., Yi, Y.: On the evaluation of structural equation models. Journal of the Academy of Marketing Science 16(1), 74–94 (1988) 5. Barbagallo, D., Cappiello, C., Francalanci, C., Matera, M.: Reputation Based Self-Service Environments. In: ComposableWeb 2009: International Workshop on Lightweight Integration on the Web, San Sebastian, Spain, pp. 12–17 (2009) 6. Barbagallo, D., Cappiello, C., Francalanci, C., Matera, M.: A Reputation-based DSS: the INTEREST Approach. In: ENTER: International Conference on Information Technology and Travel&Tourism (February 2010) 7. Batini, C., Cappiello, C., Francalanci, C., Maurino, A.: Methodologies for data quality assessment and improvement. ACM Computing Surveys 41(3) (2009) 8. Balasubramaniam, S., Lewis, G.A., Simanta, S., Smith, D.B.: 2008. Situated Software: Concepts, Motivation, Technology, and the Future. IEEE Software, 50–55 (NovemberDecember 2008) 9. Brin, S., Page, L.: The Anatomy of a Large-Scale Hypertextual Web Search Engine. Computer Networks 30(1-7), 107–117 (1998) 10. Chen, X., Ding, C.: QoS Based Ranking for Web Search. In: Proc. of International Conference on Web Intelligence and Intelligent Agent Technology, pp. 747–750 (2008) 11. Chen, K., Zhang, Y., Zheng, Z., Zha, H., Sun, G.: Adapting ranking functions to user preference. In: Data Engineering Workshop, ICDEW, pp. 580–587 (2008) 12. DeStefano, D., LeFevre, J.A.: Cognitive load in hypertext reading: A review. Computers in Human Behavior 23(3), 1616–1641 (2007) 13. Fornell, C., Larcker, D.F.: Evaluating structural equation models with unobservable variables and measurement errors: Algebra and statistics. Journal of Marketing Research 18(3), 383–388 (1981) 14. Gackowski, Z.: Redefining information quality: the operations management approach. In: Eleventh International Conference on Information Quality (ICIQ 2006), Boston, MA, USA, pp. 399–419 (2006) 15. Gupta, S., Jindal, A.: Contrast of link based web ranking techniques. In: International Symposium on Biometrics and Security Technologies (ISBAST), pp. 1–6 (2008)
16. Hair, J., Anderson, R., Tatham, R., Black, W.: Multivariate data analysis, 5th edn. Prentice Hall, Upper Saddle River (1998) 17. Jaccard, J., Choi, K.W.: LISREL approaches to interaction effects in multiple regression. Sage Publications, Thousand Oaks (1996) 18. Jiang, S., Zilles, S., Holte, R.: Empirical Analysis of the Rank Distribution of Relevant Documents in Web Search. In: International Conference on Web Intelligence and Intelligent Agent Technology, pp. 208–213 (2008) 19. Kendall, M.G., Babington Smith, B.: Randomness and Random Sampling Numbers. Journal of the Royal Statistical Society 101(1), 147–166 (1938) 20. Kleinberg, J.M.: Hubs, authorities, and communities. ACM Comput. Surv. 31(4es), 5 (1999) 21. Lamberti, F., Sanna, A., Demartini, C.: A Relation-Based Page Rank Algorithm for Semantic Web Search Engines. IEEE Transactions on Knowledge and Data Engineering 21(1), 123–136 (2009) 22. Louta, M., Anagnostopoulos, I., Michalas, A.: Efficient internet search engine service provisioning exploiting a collaborative web result ranking mechanism. In: IEEE International Conference on Systems, Man and Cybernetics, pp. 1477–1482 (2008) 23. Mare, R.D.: Social background and school continuation decisions. Journal of, the American Statistical Association 75, 295–305 (1980) 24. Mecella, M., Scannapieco, M., Virgillito, A., Baldoni, R., Catarci, T., Batini, C.: The DaQuinCIS Broker: Querying Data and Their Quality in Cooperative Information Systems. J. Data Semantics 1, 208–232 (2003) 25. Pang, B., Lee, L.: Opinion Mining and Sentiment Analysis. Found. Trends Inf. Retr. 2(12), 1–135 (2008) 26. Skopik, F., Truong, H.L., Dustdar, S.: Trust and Reputation Mining in Professional Virtual Communities. In: Gaedke, M., Grossniklaus, M., Díaz, O. (eds.) ICWE 2009. LNCS, vol. 5648, pp. 76–90. Springer, Heidelberg (2009) 27. Yu, J., Benatallah, B., Saint-Paul, R., Casati, F., Daniel, F., Matera, M.: A framework for rapid integration of presentation components. In: International Conference on the World Wide Web, pp. 923–932 (2007) 28. Yu, J., Benatallah, B., Casati, F., Daniel, F., Matera, M., Saint-Paul, R.: Mixup: A development and runtime environment for integration at the presentation layer. In: Baresi, L., Fraternali, P., Houben, G.-J. (eds.) ICWE 2007. LNCS, vol. 4607, pp. 479–484. Springer, Heidelberg (2007)
Simulation Management for Agent-Based Distributed Systems Ante Vilenica and Winfried Lamersdorf Distributed Systems and Information Systems Department of Informatics, University of Hamburg Vogt-Koelln-Strasse 30, 22527 Hamburg, Germany {vilenica,lamersd}@informatik.uni-hamburg.de
Abstract. The development of open distributed applications with autonomous components is increasingly challenged by rising complexity caused, e.g., by heterogeneous software and hardware modules as well as varying interactions between such components. For such systems, simulation provides a well-established way to study the effects of different system configurations on system behavior and, in particular, on performance. Therefore, simulation is of particular interest for autonomous distributed applications, i.e. for those that exhibit emergent phenomena and where simulation is required, e.g., for validation purposes within the development process. Prominent application domains targeted in this work include the areas of multi-agent and self-organizing systems. For easing and speeding up simulations of system characteristics when developing such systems, this work presents tools and a powerful software framework for automating the corresponding simulation management. Its basis is a declarative language that is used to describe the setting to be simulated as well as its evaluation. In addition, a software framework is provided that processes this description automatically, thereby relieving the developer from manually managing the execution, observation, optimization and evaluation of the simulation. In summary, this leads to much easier and sounder system simulations and, thus, to developing better autonomous distributed systems and applications. Keywords: Simulation, Multi-agent and Self-organizing systems, System automatization and optimization, Development of complex distributed systems, Declarative description languages.
1 Introduction

The development of distributed applications is more and more challenged by a rising complexity of computer systems. This complexity is mostly caused by the usage of heterogeneous software and hardware modules as well as constantly varying and increasing numbers of interactions between system components. At the same time, classical approaches to designing and implementing software systems, which are mostly based on top-down design and central control, reveal their inability to cope with this complexity. This is even more obvious when it comes to non-functional requirements, such as scalability, robustness and failure tolerance. Moreover, traditional approaches
mostly cannot sufficiently support autonomic management functions, which are in turn a prerequisite for a modern and efficient runtime management of applications. From this perspective, this work argues that the paradigm of agents and multi-agent systems (MAS) is well suited to model and build complex distributed systems. Due to their inherent distribution and separation of functionality as well as their ability to act autonomously, software agents provide a well-suited paradigm to build applications that consist of many single entities that cooperate in order to achieve a certain goal (on a global level). Moreover, this work advocates equipping MAS with the concept of self-organization (SO) [20]. Self-organizing systems are characterized by the capability to maintain and adapt their structure, even despite disturbances and without explicit central control. They are based on local strategies (micro-level) which lead to robust structures on the system level (macro-perspective). The concept of SO itself has been successfully applied for millions of years in nature but has also proven its applicability in computer science more recently, as shown, e.g., in [14]. However, despite the fact that MAS and SO provide promising paradigms to model sophisticated complex distributed systems, their purposeful development is still challenged by their inherent dynamics. Moreover, the dynamics of self-organizing systems sometimes leads to emergent phenomena [20]. Therefore, it is a real challenge for application developers to equip the vast number of parameters that local components, i.e. agents, may have with appropriate values in order to ensure an appropriate behavior of the (MAS) application. Addressing this challenge, the authors of [7] have proposed a simulation-based development process that regards simulation as an integral part of application development and validates parameter settings in order to understand the behavior of the system. Simulation itself has a long tradition in science and is sometimes called "the third way of doing science" [13, page 95] besides observation and experimentation [12]. Simulation is used for many reasons. Mostly, it is used to gather information about systems that are not accessible due to undesired perturbations. Further, it is applied when the behavior of the system has a time scale that is too small or too large to be observed [3]. More specifically, multi-agent based simulation (MABS) [6] has progressively replaced simulation techniques like object-oriented simulation [22] or micro-simulation [15] in many areas. MABS is of particular interest for areas that benefit from the opportunity to model and simulate different types of individuals at the same time. In the last ten years MABS has been successfully used in physics [19], chemistry [17], sociology [16], ecology [11] and economics [18]. In economics, MABS has been used, for instance, to gather information and knowledge about stock markets, self-organizing markets, trade networks, the management of supply chains, and consumer behavior. A prominent example of the link between MABS and economics is the Eurace project (http://www.eurace.org/), which aims at simulating the whole European economic space [10]. Here, information taken from MABS can be used to reduce costs, optimize the efficiency of plants, or reduce the risk of failure of a planned investment.
The work presented in this paper aims at supporting the development of complex distributed systems and proposes an approach that incorporates the paradigms of
multi-agent systems and self-organization. More specifically, it presents an approach that makes MABS easier to conduct by managing them automatically. To this end, a declarative description language for simulations is presented in connection with a powerful software framework that is capable of computing and managing simulations automatically. This approach enables developers to run different simulations with different settings without the need to manually adjust the simulation after each run. Simulations can be conducted as a background process or even overnight. This significantly reduces both the time and the attention developers need to run simulations. Furthermore, the related tools can also be used to search for local maxima (minima) or a global maximum (minimum), since they also contain an optimization component. With all these capabilities, the proposed framework contributes to improving the (simulation-based) development process of complex distributed systems. It offers a convenient approach to automatically study the behavior of an application with respect to certain "what if" questions. The paper is structured as follows: the next section discusses related work, whereas Section 3 introduces the declarative simulation language. Section 4 describes in detail the architecture of the framework that enables the automated execution of simulations, and Section 5 presents a case study from the application area of economics which uses the simulation description language and the framework. Section 6 concludes the paper with an outlook on future work.
2 Related Work

Most related work is linked to the interplay between simulation and optimization performed by MAS. Existing approaches use simulation runs in order to optimize the performance of an application with respect to some utility function. These approaches usually use the phrase simulation-based optimization to characterize their aim and basically share the infrastructure depicted in Figure 1. Such approaches often focus on different optimization algorithms and on implementations that use these mechanisms, and they frequently aim to solve problems of a certain application domain. Whereas [9] has investigated in general how different optimization techniques are used in the field of simulation, [2,1] present a software tool that has been used to solve different real-world problems. These approaches use a mathematical model as input and therefore do not take advantage of the possibilities offered by simulation models that are based on agents [13]. Also, they are designed as closed systems that cannot be extended by users in order to customize the software according to their needs. [5] have presented an approach that goes one step further than those described above and is therefore not limited to different optimization techniques alone. This work describes an infrastructure that targets the evaluation of the properties of dynamic systems, in which optimization is only one aspect among others like visualization or persistence. Unfortunately, the work of Brueckner et al. lacks a description of a simulation modeling language as well as certain details of the infrastructure, which is mostly described at an abstract level. Therefore, it is difficult to assess this work with respect to reusability and extensibility.
Fig. 1. Simulation-based optimization: an optimization / parameter sweeping component provides the input of the simulation and consumes its output
In conclusion, existing work does not combine a declarative simulation description language with an open framework for automatically conducting simulations that is capable of more than just optimization. Nevertheless, existing optimization techniques and tools can easily be integrated into the open framework presented in this paper.
3 Using a Declarative Language to Define Simulation Settings

This section presents a declarative simulation language that is designed to describe all information required for the automated execution and evaluation of a simulation.

3.1 DSDL: Declarative Simulation Description Language

The aim of the declarative simulation description language (DSDL) is to offer application developers an easy-to-use but still powerful and flexible language to describe MABS that can be conducted automatically.
Fig. 2. Most important components of DSDL: run configuration, simulation configuration, data observer, persistence, visualization, optimization and parameter sweeping, and distribution, specified as XML Schema and divided into mandatory and optional components
Therefore, the benefit of automatically executing the simulation and evaluating the results has to be much higher than the effort needed to model the simulation with the description language. Figure 2 depicts the most important aspects of the simulation language on an abstract level, whereas Figure 3 shows a code listing using DSDL to define an automated simulation. DSDL itself has been defined using XML Schema. It is therefore based on a widespread standard that is supported by various software tools, which in turn enable a convenient way to create such files. The next two subsections explain the elements of DSDL in detail; the elements are subdivided into mandatory and optional components.

3.2 Mandatory Components

The mandatory components of DSDL (cf. Figure 2) target the specification of information that is essential for an automated simulation management.

Run Configuration. First of all, DSDL introduces the concept of ensembles and single experiments. A simulation consists of i ensembles that in turn consist of j single experiments, with i, j ∈ {1, ..., n}. One ensemble represents one possible setting of parameters, which is simulated j times. Such a hierarchy is especially needed for simulations that have non-deterministic behavior: in order to get a significant result, several single experiments have to be conducted. The settings for the ensembles and single experiments are made in the run configuration part of DSDL. This part also handles two other important aspects: the termination condition for the experiments as well as the start time of the simulation. Termination conditions can be either a time expression, i.e. the duration an experiment should run, or a semantic expression, i.e. a criterion that has to become true in order to terminate the experiment. Semantic expressions can be formulated using the Java Condition Language (JCL). This language roughly supports expressions that can be formulated within an "if" statement in Java. The other aspect, i.e. the start time, works as a timer for the simulation. It allows postponing the start of the simulation by specifying either a relative or an absolute time. Such an option is especially interesting for simulations that need many resources and that should therefore be conducted at a time when the computer is mostly idle.

Data Observer. Data observers allow specifying those elements of the simulation that should be observed and that ultimately help to determine the fitness of a certain parameter setting. Such observed elements are mostly certain agent types or objects in the environment. Data observers may also contain filters that determine whether all elements of a type should be observed or only certain ones. Therefore, the observation can be customized to the needs of every simulation setting. It can also be defined whether the observation should pull data from the observed elements periodically, i.e. every n seconds, or on change, i.e. when a certain trigger denoting an interesting event is activated. Each observed event has an id, a value and a timestamp. These raw data can then be further processed by other components, especially the optimization and parameter sweeping part as well as the visualization unit.
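As an illustration only (the identifiers below are ours and not DSDL elements), the ensemble/experiment hierarchy and the two kinds of termination conditions can be sketched as follows:

```python
import time

def run_single_experiment(params, max_seconds=None, stop_condition=None):
    state = {"step": 0, "params": dict(params)}
    events = []                                   # observed (id, value, timestamp) tuples
    start = time.time()
    while True:
        state["step"] += 1                        # stands in for one simulation step
        events.append(("step", state["step"], time.time()))
        if stop_condition is not None and stop_condition(state):
            break                                 # semantic termination (JCL-like predicate)
        if max_seconds is not None and time.time() - start >= max_seconds:
            break                                 # time-based termination
    return events

ensembles = [{"agents": n} for n in (1, 2, 3)]    # i ensembles, one parameter setting each
results = [[run_single_experiment(p, stop_condition=lambda s: s["step"] >= 1000)
            for _ in range(5)]                    # j single experiments per ensemble
           for p in ensembles]
```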
Fig. 3. Exemplary simulation description using DSDL
Persistence. The persistence part of DSDL allows specifying the way the observed events are handled after a simulation experiment has been conducted. Storing the observed data in a persistent manner is a prerequisite for a deep analysis of a simulation setting. It allows analyzing the impact of certain application settings, i.e. to conduct and answer ”what if” questions. Usually, databases are used to manage the persistence of
data, and therefore the persistence component of DSDL allows specifying the required options to store the observed events. At the same time, there are situations that do not necessarily require the usage of a database. In these cases, persistence is achieved by storing simulation results in XML or CSV files. Furthermore, it can also be defined whether all observed data should be stored or only aggregated data, as well as whether intermediate results should be persisted. The first option, i.e. storing all data, requires more space but should be used if a re-processing of a conducted simulation experiment is contemplated.

3.3 Optional Components

The optional components of DSDL (cf. Figure 2) extend the core components of the automated simulation management with helpful functions.

Optimization and Parameter Sweeping. The optimization of a MAS application with respect to a given utility function is the main motivation for automatically executing simulations. Various methods and algorithms can be applied while sweeping through a parameter space in order to find local optima (minima) or a global optimum (minimum). Generally, algorithms for optimization can be divided into two subclasses: deterministic and probabilistic approaches. The first ones "are most often used if a clear relation between the characteristics of the possible solutions and their utility for a given problem exists" [23, page 22]. In contrast, the latter ones are more feasible if "the relation between a solution candidate and its 'fitness' are not so obvious or too complicated, or the dimensionality of the search space is very high" [23, page 22]. Additionally, the same author has proposed a taxonomy of global optimization algorithms that shows the different approaches and techniques applied in this area. DSDL supports the use of any optimization algorithm due to its black-box approach: it relies on the Jadex XML data binding framework, which allows specifying the optimization library that will be loaded at runtime to perform the optimization. The developer specifies the needed input parameter settings in DSDL, and those parameters are passed to the referenced optimization library, which in turn returns the output parameter settings. Moreover, DSDL supports parameter sweeping without the use of dedicated optimization methods. Here, it is inspired by the specification of batch parameters used in Repast Simphony. It allows defining a parameter range and the step size that should be applied in order to sweep through this range. Also, a simple list of parameters can be specified that should be iterated through. Moreover, parameter sweeps can also be nested, i.e. sweeps through the ranges of different parameters can easily be combined.

Visualization. The visualization section of DSDL is an optional component. It does not have to be specified in order to run a simulation, but it can help to understand the results by providing not only the pure facts but also a visual representation. This component is closely related to the events received from the data observer. It can take
these events and compute different (statistical) functions that support the evaluation of a certain parameter setting. The visualization component can also take events from different observers and visualize them within the same chart. Different types of charts like pie, area, line, bar and histogram charts are supported.

Distribution. In order to speed up the simulation process, DSDL contains an optional component that handles distribution. This component offers the possibility to manage the delegation of the execution of single simulation experiments. It contains a discovery part that is capable of finding resources that can conduct experiments according to the simulation model defined using DSDL. Application developers can specify whether they want to use this distribution component in order to parallelize the conduct of the simulation study and which strategies and criteria should be applied for the selection of resources. Now that the features of DSDL have been presented, the next section introduces the architecture of a system that is capable of automatically processing simulations specified in this language.
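Before moving on, the nested sweeping idea described above can be sketched in a framework-independent way; this is not DSDL syntax, and the parameter names are borrowed from the case study in Section 5 purely for illustration:

```python
# Nested parameter sweeping: every combination of the given ranges and value lists
# becomes one ensemble to be simulated.
from itertools import product

def sweep(**dimensions):
    """dimensions: parameter name -> iterable of values (a range or an explicit list)."""
    names = list(dimensions)
    for values in product(*(dimensions[name] for name in names)):
        yield dict(zip(names, values))

settings = list(sweep(sentries=range(1, 11), carriers=[1], producers=[1]))
print(len(settings), settings[0])   # 10 ensembles; first: {'sentries': 1, 'carriers': 1, 'producers': 1}
```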
4 System Architecture of the Framework
The basic idea of this framework is to relieve application developers from further work once the simulation has been modeled using DSDL. The simulation is automatically managed by the framework, and the developer can continue working on other things until the simulation terminates. Figure 4 depicts the architecture of the framework that is capable of automatically executing simulation experiments. It shows that the framework itself is composed as a MAS, and therefore the management component takes advantage of agent characteristics such as autonomy or the ability to act proactively. Furthermore, Figure 4 reveals that the framework consists mainly of two types of agents that handle the simulation. The master simulation agent encapsulates the main functionality. It processes the declarative simulation description (cf. Section 3) that contains all information and parameters needed to perform the simulation experiments, and it returns the results of the simulation to the developer. The master simulation agent does not perform the single simulation experiments itself; instead, it delegates them to a client simulation agent that is in charge of performing a single experiment. The results of the single experiment are then returned to the master. Client and master agent are loosely coupled and communicate via message-based, i.e. asynchronous, communication. Basically, the master simulation agent consists of four components: a simulation run manager, a visualization component, an optimization component and a distribution component. The simulation run manager is the most important component. It is instantiated on agent creation, and it handles the progress of the whole simulation by delegating tasks to other subcomponents and agents. Figure 5 shows the workflow of the master simulation agent. After the agent has parsed the simulation description, it creates a task for the execution of the first ensemble. The experiments of this ensemble are performed by the client simulation agents.
Fig. 4. System architecture of the framework for automatically managing simulations: the master simulation agent (simulation run manager, optimization, visualization, distribution and persistence components) processes the declarative simulation description, sends experiment descriptions to client simulation agents, receives their results, and produces a report file; each client simulation agent observes the execution of, collects data from, and terminates a single simulation experiment
Once the master agent has received these results, it evaluates the observed data according to the simulation description. Furthermore, the agent persists the raw data results in a database in order to make sure that they can later be reprocessed in a different manner than described in the description. This approach ensures that the single simulation results can easily be analyzed after the simulation has terminated. As soon as the master simulation agent has received all the results of the experiments of an ensemble, it has to compute the target function of the simulation. This function may be defined as a simple parameter sweep with predefined values or it may be a utility function. Depending on the result of the evaluation of the target function, the master simulation agent either terminates the simulation or starts to prepare the execution of a new ensemble with new parameters. In the case of simple parameter sweeping, those parameters are predefined in the simulation description. In the case of a utility function, the master simulation agent creates a task for the optimization component. This component is then in charge of computing the new values for the parameters. Finally, the master simulation agent creates a new ensemble and again delegates the execution of the single experiments to the client agents. The master simulation agent thus uses basically four components to perform the simulation, and the simulation run manager manages their execution. Due to the clear separation of functionality and clear interfaces, components can easily be interchanged. This is especially important for the optimization component and the visualization component.
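The control loop just described can be condensed into a short, framework-independent sketch; this is ours, not the Jadex-based implementation, and in the actual framework the inner calls correspond to asynchronous messages exchanged with the client simulation agents:

```python
# Master-agent control loop: run j experiments per ensemble, persist and evaluate the
# results, then either stop or let the optimizer / parameter sweep propose new parameters.
def manage_simulation(initial_params, run_experiment, evaluate, propose_next,
                      experiments_per_ensemble=5, max_ensembles=50):
    params, history = initial_params, []
    for _ in range(max_ensembles):
        results = [run_experiment(params) for _ in range(experiments_per_ensemble)]
        history.append((params, results))       # stands in for persisting the raw data
        score = evaluate(results)               # target / utility function of the ensemble
        params = propose_next(params, score)    # optimization component or predefined sweep
        if params is None:                      # target reached or sweep exhausted
            break
    return history

# Tiny demo with stand-in functions:
demo = manage_simulation(
    {"sentries": 1},
    run_experiment=lambda p: p["sentries"] * 10,
    evaluate=lambda rs: sum(rs) / len(rs),
    propose_next=lambda p, s: {"sentries": p["sentries"] + 1} if p["sentries"] < 3 else None,
)
```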
Fig. 5. Workflow of the master simulation agent: init master agent and parse XML file, start ensemble of experiments, evaluate results of ensemble, persist, compare with target function, then either finish or do optimization / parameter sweeping and init an ensemble with new parameters
Since there are many approaches towards optimization (cf. Section 3.3), it is crucial to have an approach that supports the use of different mechanisms and libraries. Therefore, existing libraries like the Java Genetic Algorithms Package (JGAP) can be used within the framework to compute the parameters for a new ensemble. Also, the visualization of simulation results can be customized according to the requirements of a certain application domain. Results may be visualized on the local computer that is running the simulation by using existing libraries like JFreeChart. Alternatively, the results may be accessed remotely, using a web server and browser to visualize them. Furthermore, the type of result visualization can be distinguished not only spatially but also temporally, i.e. online or offline visualization. Whereas online visualization means access to data of currently running simulation experiments, offline visualization denotes the processing of data from simulation runs that have terminated. The framework presented here supports both approaches at the same time. Finally, the master simulation agent contains a distribution component. The aim of this component is twofold: to speed up the simulation process and to use the computational power of several nodes in a computer network. By accessing shared computational resources across several nodes, the simulation framework is even able to run massive agent-based simulations [24] which cannot be performed on a single computer. On the other hand, the simulation framework can use the nodes to parallelize, i.e. to speed up, the simulation by running the single experiments of an ensemble at the same time on different computers.
The framework, described in this section, has been implemented using the Jadex Agent Framework [4]. The next section will describe a case study that has been done using this framework implementation.
5 Case Study: Mining Ore Resources

This section presents a case study called mining ore resources that was conducted in order to prove the applicability of the framework introduced in the previous section. First, the setting of the case study is presented; second, the processing of the case study is described as well as the evaluation of the results.

5.1 The Setting

The setting of the case study is inspired by an example presented in [8], and the basic implementation of this example was taken from the Jadex examples library. The case study deals with a MAS that was implemented in order to perform and coordinate the mining of ore resources on an unexplored field. As this setting was initially introduced in connection with NASA (National Aeronautics and Space Administration, http://www.nasa.gov/home/index.html), it is sometimes also called marsworld. Nevertheless, this case study is representative of a whole class of problems from economics. Basically it targets the question: how many (different) resources do I need in order to accomplish a goal in time and on budget? This is a classic optimization problem that targets a trade-off often found in economics: time vs. money/costs. Hence, the case study conducted targets the mining of ore resources (cf. Figure 6). It consists of three types of components (i.e. agents): sentries, producers and carriers. These three components cooperate in order to achieve the goal, i.e. to find ore resources, inspect whether they can be exploited, and finally bring the ore to the home base. At the beginning, all components explore the environment in order to find ore resources. If an agent finds a resource, it reports the position to the sentry. The sentry is in charge of inspecting this resource in order to determine its capacity. If it can be exploited, the sentry sends a message to the producer, which is in charge of producing as much ore as the capacity permits. As soon as the producer has finished its job, it informs the carrier. Then, the carrier brings the ore to the home base. The agents have accomplished their goal when all available ore in the environment has been brought to the home base.

5.2 Simulation and Evaluation

The scenario described before has been taken to prove the benefit of DSDL and the framework for automated simulation. In fact, the framework has been taken in order to answer an important question that arises from the description of the scenario: What is the optimal number of sentries, producers and carriers for a given environment? As every component costs a certain amount of money, it is obvious that the utility function has to take into account the acquisition and operation costs of the agents.
Fig. 6. Screenshot depicting the scenario of the case study "mining ore resources": the home base, unexplored and explored ore resources, a sentry exploring the environment, a producer producing ore, and a carrier bringing ore to the home base
Therefore, the framework has been used to investigate the relation between the number of operating agents and the time they need to complete the task. Table 1 and Figure 7 show the results from the simulation, which focuses on the effect that an agent type has on the time needed to accomplish the goal, i.e. to collect all ore found in the environment. For each of the three agent types, the number was iterated from one to ten while the number of each of the two other agent types was kept at one. The results show that sentries have the highest impact, reducing the mean time to accomplish the goal to 54 sec., whereas producers reach only 78 sec. Furthermore, it can be seen that using more than four carriers or more than six producers does not significantly influence the results, whereas the simulation results for the sentries decrease continuously up to nine sentries. Two things can be concluded from these results. First, an insufficient number of sentries may cause a bottleneck, since sentries provide functionality that is of essential importance for the studied application. Second, each simulation row, i.e. iterating one agent type from one to ten, reveals the point up to which a trade-off between time, i.e. efficiency, and money, i.e. the costs for the agents, holds. The simulation therefore helps to make a good decision for the setting of the application, i.e. to choose appropriate values for the number of agents. Still, in the end it depends on the costs of an agent type whether an additional agent, which increases the costs but speeds up the task, pays off. The case study has proven the applicability of the framework as well as of DSDL. It shows the benefit of an automated simulation management that allows application developers to easily run experiments with different parameter settings without the need to manually edit, observe or evaluate the simulation runs. All this work is performed by the framework.
Table 1. Evaluating the influence of increasing the number of one agent type from 1 to 10 while keeping the others at 1 on the efficiency to accomplish the goal of the case study. Each setting was simulated fifty times.

Number of Iterated Agent Types | Time: Mean Value (sec.) for Iterating Sentries | Time: Mean Value (sec.) for Iterating Carriers | Time: Mean Value (sec.) for Iterating Producers
1 | 127 | 127 | 127
2 | 100 | 98 | 103
3 | 87 | 90 | 98
4 | 70 | 75 | 86
5 | 66 | 73 | 85
6 | 61 | 74 | 81
7 | 60 | 73 | 79
8 | 57 | 71 | 78
9 | 53 | 72 | 78
10 | 54 | 72 | 83
(Line chart: time in seconds versus the number of the respective agent type iterated from 1 to 10, with one curve each for iterating sentries, carriers and producers; results for respectively 50 simulation experiments.)
Fig. 7. Depicting the relationship between increasing the number of one agent type from 1 to 10 while keeping the others at 1 and the efficiency to accomplish the goal of the case study. Each setting was simulated fifty times.
6 Conclusions and Future Work The work described in this paper addresses improvements of the development process of complex and dynamic applications. It advocates that simulation is a viable approach to gather information about the dynamic behavior of such systems. In order to
support that, an approach for automating the management of simulations was presented which relieves application developers from setting up and managing such simulations manually. The approach consists of a declarative simulation description language (DSDL) for defining all important aspects of the simulation. It includes components that define which data has to be observed, how data can be visualized and persisted, etc. Moreover, DSDL can be used to define a sweep through a parameter space, and this sweeping can be linked with an optimization mechanism that supports and speeds up the parameter sweeping process. Additionally, this work has presented the architecture and implementation of a software framework that is capable of automatically performing simulations that have been modeled using DSDL. This framework relieves the developer of manually starting simulations with certain parameter settings and evaluating them. It supports the whole life cycle of a simulation with built-in components that can easily be extended and customized towards the needs of a certain application domain. As a proof of concept, a case study has been conducted that shows the benefit of automatically managing simulations. The case study further demonstrates how DSDL and the framework can be used to easily optimize the setting of an application with respect to a certain utility function. Although this work mainly targets multi-agent based simulation, the approach presented can also be adopted and extended for other simulation techniques. Future work will aim at further extending the functionality of the framework. It is currently planned to add a component that supports the validation of system dynamics by offering the possibility to define hypotheses that target the causal structure of the system, as introduced by [21]. To this end, DSDL shall be extended to allow specifying hypotheses, and the simulation framework will be able to automatically validate these hypotheses while performing the simulations. Also, it is planned to build up a library that contains the results of all system validations that have been conducted. The task of this library is to gather information about the qualitative behavior of systems that exhibit a certain causal structure. It will thereby be able to predict the behavior of a system by analyzing its causal structure and comparing it with the results from the library. Acknowledgements. The authors would like to thank the Deutsche Forschungsgemeinschaft (DFG) for supporting this work through a research project on "Self-organization based on decentralized co-ordination in distributed systems" (SodekoVS). Furthermore, we would like to thank Lars Braubach and Alexander Pokahr from the Distributed Systems and Information Systems (VSIS) group at Hamburg University, as well as Wolfgang Renz and Jan Sudeikat from the Multimedia Systems Laboratory (MMLab) at Hamburg University of Applied Sciences, for continued inspiring discussions and fruitful joint work.
References 1. April, J., Better, M., Glover, F., Kelly, J.: New advances and applications for marrying simulation and optimization. In: WSC 2004: Proceedings of the 36th Conference on Winter Simulation Conference, Washington, D.C, pp. 80–86 (2004)
2. April, J., Glover, F., Kelly, J., Laguna, M.: Simulation-based optimization: practical introduction to simulation optimization. In: WSC 2003: Proceedings of the 35th Conference on Winter Simulation Conference, New Orleans, Louisiana, pp. 71–78 (2003) 3. Banks, J. (ed.): Handbook of Simulation. Principles, Methodology, Advances, Applications, and Practice. Wiley, Chichester (1998) 4. Braubach, L., Pokahr, A., Lamersdorf, W.: Jadex: A bdi agent system combining middleware and reasoning. In: Unland, R., Calisti, M., Klusch, M. (eds.) Software Agent-Based Applications, Platforms and Development Kits, pp. 143–168. Birkhaeuser-Verlag, Basel (2005) 5. Brueckner, S., Van Dyke Parunak, H.: Resource-aware exploration of the emergent dynamics of simulated systems. In: AAMAS 2003: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 781–788. ACM, New York (2003) 6. Drogoul, A., Vanbergue, D., Meurisse, T.: Multi-agent based simulation: Where are the agents? In: Sichman, J.S., Bousquet, F., Davidsson, P. (eds.) MABS 2002. LNCS (LNAI), vol. 2581, pp. 1–15. Springer, Heidelberg (2003) 7. Edmonds, B., Bryson, J.: The insufficiency of formal design methods - the necessity of an experimental approach for the understanding and control of complex mas. In: AAMAS 2004: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 938–945. IEEE Computer Society, Washington, DC (2004) 8. Ferber, J.: Les syst´emes multi-agents. Vers une intelligence collective. InterEditions, Paris (1995) 9. Fu, M.: Feature article: Optimization for simulation: Theory vs. practice. INFORMS: Journal on Computing 14(3), 192–215 (2002) 10. Groetker, R.: Europa reloaded. Technology Review (2), 52–57 (2009) 11. Huberman, B., Glance, N.: Evolutionary games and computer simulations. Proceedings of the National Academy of Sciences of the United States of America 90(16), 7716–7718 (1993) 12. Kelly, K.: The third culture. Science 279(5353), 992–993 (1998) 13. Macal, C., North, M.: Agent-based modeling and simulation: desktop abms. In: WSC 2007: Proceedings of the 39th Conference on Winter Simulation, pp. 95–106. IEEE Press, Piscataway (2007) 14. Mamei, M., Menezes, R., Tolksdorf, R., Zambonelli, F.: Case studies for self-organization in computer science. Journal of Systems Architecture 52(8), 443–460 (2006) 15. Orcutt, G.: A new type of socio-economic system. The Review of Economics and Statistics 39(2), 116–123 (1957) 16. Pietrula, M., Carley, K., Gasser, L.: Simulating Organizations. M.I.T. Press, Cambridge (1998) 17. Resnick, M.: Turtles, Termites and Traffic Jams. M.I.T. Press, Cambridge (1995) 18. Said, L., Bouron, T., Drogoul, A.: Agent-based interaction analysis of consumer behavior. In: AAMAS 2002: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 184–190. ACM, New York (2002) 19. Schweitzer, F., Zimmermann, J.: Communication and self-organization in complex systems: A basic approach. In: Fischer, M., Froehlich, J. (eds.) Knowledge, Complexity and Innovation Systems, pp. 275–296. Springer, Heidelberg (2001) 20. Serugendo, G., Gleizes, M., Karageorgos, A.: Self-organisation and emergence in mas: An overview. Informatica (Slovenia) 30(1), 45–54 (2006)
21. Sterman, J.: Business Dynamics - Systems Thinking and Modeling for a Complex World. McGraw-Hill, New York (2000) 22. Troitzsch, K.: Social simulation – origins, prospects, purposes. In: Conte, R., Hegselmann, R., Terna, P. (eds.) Simulating Social Phenomena. Lecture Notes in Economics and Mathematical System, vol. 456, pp. 41–54. Springer, Heidelberg (1997) 23. Weise, T.: Global optimization algorithms - theory and application. E-Book (2008), http://www.it-weise.de/projects/book.pdf 24. Yamamoto, G., Tai, H., Mizuta, H.: A platform for massive agent-based simulation and its evaluation. In: AAMAS 2007: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1–3. ACM, New York (2007)
Part V
Human-Computer Interaction
Developing Analytical GIS Applications with GEO-SPADE: Three Success Case Studies
Slava Kisilevich (1), Daniel Keim (1), Amit Lasry (2), Leon Bam (2), and Lior Rokach (3)
(1) Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
(2) Department of Information System Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
(3) The Deutsche Telekom Laboratories, Department of Information System Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
{slaks,keim}@dbvis.inf.uni-konstanz.de, {lasrya,bam}@bgu.ac.il, [email protected]
Abstract. Geographic Information Systems (GIS) handle a wide range of problems that involve spatially distributed data from fields as diverse as housing, healthcare, transportation, cartography, criminology, and many others. The selection of the right GIS is determined by the task the user wishes to perform. Some considerations may include the availability of analytical functions, performance, ease of use, interoperability, extensibility, etc. Nowadays, as positioning technology proliferates and affects almost all aspects of our lives, GIS tools have become more and more complex as researchers seek to address and implement new spatial tasks and techniques. Due to the rapid growth of GIS in recent years, many systems provide APIs for extending their software for specific needs. But learning a complex set of API functions is not feasible in scenarios that demand rapid prototyping, as in academic research, or when the tool is supposed to be used by non-professionals. In this paper, we present three case studies that were performed on top of GEO-SPADE, a .NET-based framework built on Google Earth that we developed. Unlike typical APIs, which may often contain more than a hundred or even a thousand functions from different logical layers, the GEO-SPADE framework provides only a minimal set of functions for plug-in development, networking, and visualization using the Google Earth engine. The extended functionality is provided by the developer and depends on her knowledge of the underlying language. At the same time, ease of use is achieved through the functionality of Google Earth. The three presented case studies showcase the applicability of the Google Earth-based framework in different domains with different analytical tasks (spatial analysis, spatial exploration, and spatial decision support).
1 Introduction Geographic Information Systems (GIS) handle a wide range of problems that involve spatially distributed data in fields such as housing, healthcare, transportation, cartography, criminology, and many others. Selecting the right GIS is driven by the task the user wishes to perform [1]. Underlying considerations may include the availability of analytical functions, performance, ease of use, interoperability, and extensibility. Nowadays, with the
proliferation of positioning technology and the rapid growth of spatio-temporal data, together with the composite structure of non-spatial data, GIS tools have become more and more complex as researchers seek to address and implement new spatial analysis tasks and techniques. Although many contemporary GIS provide APIs for extending their software for specific needs, this requires learning a complex set of proprietary API functions, often comprising hundreds or thousands of functions from different logical layers, which is not feasible, particularly in scenarios that demand rapid prototyping, such as academic research or resource-constrained projects, or when the tool is supposed to be used by non-professionals. Usually, the analyst (or researcher) requires fast implementation of tasks, and the decision to extend the GIS application when some functionality is not available is rarely favorable. Consequently, to achieve the desired results, the geo-processing task is split between multiple GIS applications, each of which partly provides the required functionality. The issue of proprietary APIs and interoperability is addressed by the Open Geospatial Consortium (OGC, http://www.opengeospatial.org), which encourages developers to use standard interfaces and specifications when implementing geo-processing. The need for distributed geo-processing systems was addressed in recent publications [2,3,4,5]. Such systems split the workload between client and server: the clients are often Web applications accessible by multiple users, while the server performs all the required computations and delivers the results to the client using Web service technologies. Such architectures have several advantages over older, closed solutions: (1) there is no need to install the application locally; (2) many users can access the application using a Web browser; (3) the computation is performed on the server side, which relieves the user's computer from time-consuming operations; and (4) there is no need to store the data locally (the data can be accessed directly by the service and hosted by the service provider). A scalable, distributed architecture that conforms to open standards may solve the problem of interoperability and service reusability. However, the client side still has to be developed according to specific tasks defined in advance. Since any changes to the design or extensions of the capabilities have to be performed by the body responsible for maintaining the Web application, there is a need for a platform that supports easy extensibility with minimum knowledge of the underlying API. Such a platform should support many of the general visual data exploration approaches, such as brushing, focusing, multiple views, and linking. It should also support (geo)visualization approaches such as direct depiction, visualization of abstract data summaries, and extraction and visualization of computationally extracted patterns. Finally, it should be capable of integrating different geo-related tasks to allow the analyst to quickly generate and test her hypotheses [6]. Of the platforms that combine ease of development and visualization, Google Earth has become popular among researchers dealing with spatial data. Google Earth has attracted interest ever since it first appeared on the market [7].
Although debates continue on whether Google Earth is a true GIS and what advantages it brings to scientists [8,9,10,11], the number of publications in which Google Earth is used for visualization and exploration of geo-related results shows the great interest among researchers in this tool. There are several factors that have made Google Earth
so successful among geo-browsers: it is freely available; it offers fast response and real-time exploration of spatial data; it uses satellite imagery; and it is easy to use, as is Keyhole Markup Language (KML), a human-readable, XML-based format for geographic visualization. Other attractive factors include Google Earth's support for basic visualization techniques like zooming, panning and tilting, and the availability of many layers of information provided by Google and by user communities. In addition, Google Earth includes a built-in HTML browser to display textual information and allows dynamic content to be streamed from a server in response to changes in the visible frame. Recent publications demonstrated that Google Earth can be effectively used for data exploration in various areas such as weather monitoring [12], spatio-temporal data exploration [13], data mining [14], insurance [15], crisis management [16], spatial OLAP [17], and geospatial health [18]. Despite its capabilities, Google Earth is still used as a "secondary" tool for data visualization because of its lack of geo-analytical functionality. Developers who want to integrate Google Earth in their applications usually overcome this geo-analytical deficiency by embedding Google Earth into a Web application using its JavaScript API. "Missing" features can then be implemented using a Network Link element, which is part of KML, to point to the URL of the underlying server and request a specific operation. Although the Network Link feature is used for creating dynamic content, it allows data to flow in only one direction, from the server to Google Earth, in response to changes in the visual boundaries. Clearly, this feature does not allow more complex operations like passing parameters interactively, running a data mining algorithm on the server, and returning the results for a particular setup of objects being visualized by Google Earth. Other mechanisms for performing such operations are required. Recently, it became possible to embed Google Earth in desktop applications using the COM interface of the Google Earth Web browser plugin. This makes it possible to write custom applications around Google Earth in high-level languages like C# and to completely control the logic of the application, thus bypassing the limitations of the standalone version of Google Earth. This possibility, on the one hand, and the constraints of current GIS systems, on the other, encouraged us to develop a prototype desktop system called GEO-SPADE (GEO SPAtiotemporal Data Exploration) [19]. The architecture of GEO-SPADE, presented in [19], is based on a thin-client paradigm and introduces pluggable components and a Service-Oriented Architecture (SOA). The framework acts as a bridge between the Google Earth engine and user-defined functionality: the user-provided code forms a front-end loaded by the framework, while the back-end is completely separated from the client side (GEO-SPADE) and communicates with the front-end by means of Web services. Unlike typical APIs, which may contain hundreds or thousands of functions from different logical layers, the GEO-SPADE framework provides only a minimal set of functions for plug-in development, networking, and visualization using the Google Earth engine. The minimalistic set of API functions and the text-based protocol (KML) for geographical visualization allow for
rapid prototyping of different tasks, while the ease of use inherent to Google Earth allows non-professionals to utilize GEO-SPADE. To highlight its applicability to different analytical tasks, we selected three real-life case studies that were successfully performed with GEO-SPADE; the results of two of them have been presented at international conferences. The case studies showcase the applicability of the Google Earth-based framework in various domains with different analytical tasks. The first case study shows how GEO-SPADE is used for analyzing tourist activity [20,21]. In the second case study, GEO-SPADE serves as a spatial exploration tool for searching for geo-tagged photos using different search criteria [22]. In the third case study, GEO-SPADE operates as a spatial decision support tool for predicting hotel prices.
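The plug-in API itself is not listed in this paper, so the following sketch is only a hypothetical illustration, written in C#, of what a minimal contract in the spirit of GEO-SPADE could look like: a plug-in reacting to view changes and a host exposing the networking and KML visualization services described above. None of these names belong to the actual framework.

using System;

// Hypothetical sketch of a minimal plug-in contract in the spirit of GEO-SPADE.
// None of these type or member names are taken from the actual framework.
public interface IGeoPlugin
{
    string Name { get; }

    // Called once when the framework loads the plug-in.
    void Initialize(IGeoHost host);

    // Called whenever the visible map boundaries change.
    void OnViewChanged(double north, double south, double east, double west);
}

// The small set of services the host is assumed to expose to a plug-in:
// networking towards the back-end and KML-based visualization.
public interface IGeoHost
{
    // Sends a request to a back-end Web service and returns its raw response.
    string CallService(Uri serviceUri, string request);

    // Hands a KML fragment to the embedded Google Earth engine for display.
    void ShowKml(string kml);
}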
2 Related Work In this section we discuss the applicability of Google Earth in different disciplines and domains, as a stand-alone or Web-based tool, and the extensibility of Google Earth's functionality. Google Earth has been used to show weather-related information in [12] by producing snapshot updates of the current weather every 120 seconds and utilizing the KML file format to read these updates into Google Earth. GeoSphereSearch, a context-aware geographic Web search, integrated a Web-based Google Earth to visualize the results of a query [23]. [13] demonstrated how visual exploration of mobile directory service log files with spatial, temporal and attribute components can be performed using Google Earth in combination with other open source technologies like MySQL, PHP and LandSerf; it also discusses several aspects, such as visual encodings, color and overplotting, and shows how Google Earth handles these features. [15] presented the feasibility of exploring catastrophic events and potential loss of information, outlining the tasks that must be performed. The methods with which Google Earth can handle such tasks include mapping, filtering by attribute, space and time, spatial aggregation, and the creation of new views. Bringing geo-processing capabilities and Google Earth together is addressed by [16] to support crisis management scenarios. For this task, the "Google Earth Dashboard" was proposed; it combines several technologies: Adobe Flex, Web Map Service (WMS), Web Feature Service (WFS) and Web Processing Service (WPS) services, with Google Earth as the cornerstone. The potential of geo-browsers like Google Earth in analyzing geospatial health sciences is discussed in [18]. The applicability of Google Earth to Online Analytical Processing (OLAP) is demonstrated in [17,24]. A desktop-based exploratory spatial association mining tool integrated with Google Earth for visualization was proposed in [14]. The tool consisted of two visualization views: the first was based on Google Earth for viewing and exploring the related phenomena; the second was developed in-house using Java 3D technology to interact with the association rules produced by the mining algorithm. Integrating WPS-based services into Google Earth is addressed by [5]. Their approach consists of two parts. In the first part, the WPS client, built on top of uDig (a desktop, Java-based GIS system), exports configured processes into a KML file. In the
second part, the KML file is loaded into Google Earth, which triggers WPS processes using the Network Link element. The advantage of the WPS approach is that the exported processes can be consumed by the vast community of people using applications that handle KML formats such as Google Earth or Google Maps. This advantage was addressed in recent publications [12,15]. However, the proposed approach has several disadvantages. Although the approach has only two parts, they are not autonomous and require manual implementation (configuration of the processes using WPS client, creation and export of KML and dissemination of the results). Another aspect is lack of interaction and control over the geo-process exported into KML. As soon as KML is loaded into the application with Network Link, the process will be triggered and the results returned. In general, the client side, which usually involves data visualization and manipulation, still remains more heterogeneous than the server side, since the way the data is presented cannot be enforced by standard specifications and interfaces. Here the choice of an appropriate software is driven by the needs of an expert. As was already mentioned, dozens of GIS applications are available, capable of solving specific spatial processing tasks and of supporting geo-visualization. When these applications are incapable of dealing with the problem, custom solutions and architectures are proposed [25,14,26]. However, the current trend in data visualization and exploration is to use mass-market applications such as Google Earth for visualization, combined with open source technologies for data processing and storage [13,16,17,24].
3 Case Studies In this section, we present three case studies in which GEO-SPADE is used as an analytical, exploratory and decision support tool. In the first case study, presented in Section 3.1, GEO-SPADE is used as an analytical tool for analyzing tourist activity using geo-tagged photos. The method was presented in [20] and used as an example of the tool's capabilities in [19]; we extend the case study by discussing the details of data interchange and the technologies used in the analysis. The second case study (Section 3.2) discusses the integration of GEO-SPADE in a region exploration scenario using geo-tagged photos sorted according to various criteria. The third case study (Section 3.3) presents an ongoing project in which GEO-SPADE acts as a decision support tool for predicting hotel prices. Before moving to a specific case study, we would like to draw the reader's attention to Table 1. It shows the technologies and tools that were used in each case study. It can be seen that KML is shared between all case studies because it is the format for geographical visualization used by Google Earth. The Java API for RESTful Web Services is also used in the three case studies because Java is one of the most popular general-purpose languages and the language of choice of many Web programmers. However, Windows Communication Foundation (WCF) was used in the first case study along with Java to simplify complex data interchanges between the client side (GEO-SPADE), written in C#, and the server side. Specifically, class serialization was used to transmit complex data structures and rebuild the native objects.
Table 1. Server-side technologies and tools

Tourist activity (Section 3.1): communication with the Java API for RESTful Web Services and Windows Communication Foundation; PostGIS database; data interchange through KML and class serialization; additional tools JFreeChart, Teiresias [27] and DBSCAN [29].
Region exploration (Section 3.2): communication with the Java API for RESTful Web Services; MySQL database; data interchange through KML and a user-defined response format.
Hotels (Section 3.3): communication with the Java API for RESTful Web Services; Microsoft SQL Server 2008 database; data interchange through KML; additional tools Weka 3.6 API [28] and DBSCAN [29].
Likewise, different databases were used. PostGIS was used in the first case study because of its support for spatial queries and its free availability. MySQL was used in the second case study because of its ease of maintenance. We used Microsoft SQL Server 2008 in the third case study to simulate as closely as possible the production environment of the service provider. Thus, the loosely coupled architecture (described in [19]) gives a developer the freedom to choose whatever technology is appropriate to accomplish a specific task. 3.1 Analysis of Tourist Activity The analysis of people's activity has become a major topic in different domains such as transportation management and tourism [30,31]. Access to the vast amount of geo-tagged photos taken by people from all over the world and provided by photo-sharing sites like Flickr or Panoramio has enabled the analysis of tourist activity at large scales, raised new research questions, and fostered the development of new analytical methods and applications. In our recent work [20,21], we defined a number of tasks for analyzing attractive areas, people's behavior and movement, and proposed a number of analytical methods. We outlined six major steps for completing the task of finding travel sequences [20] in an arbitrary region of the world. These steps actively involve the analyst in the process of finding sequence patterns: the analyst reviews the results of every step, changes the parameters, and activates new steps using the results of the previous step. In the first step, presented in Figure 1, the analyst selects the desired area. The visible frame constitutes the boundaries of a region near Funchal, Madeira. The analyst can check the photographic activity in the selected region [21]. Figure 2 shows a time-series graph of the number of people who took photos in the selected area from January 2008 to September 2009, aggregated by month. The server receives the boundary of the region and queries the database, counting the number of people per month. We used JFreeChart to generate the image of the graph. The image is stored in a temporary directory on the server, while the KML that is sent to the client includes the URL of the graph. An example of this KML is presented in Listing 1.1.
<Placemark>
  <name>Funchal (regional events)</name>
  <description><![CDATA[<table>
    <tr><td>south bound: 32.6445708362354</td></tr>
    <tr><td>north bound: 32.6594099577288</td></tr>
    <tr><td>west bound: -16.922443384542</td></tr>
    <tr><td>east bound: -16.891952135182</td></tr>
    <tr><td>lower time bound: 2008-01-01 00:00:00</td></tr>
    <tr><td>upper time bound: 2009-08-31 00:00:00</td></tr>
    <tr><td><img src="..."/></td></tr>
  </table>]]></description>
</Placemark>
Listing 1.1. Server generated KML that shows photographic activity in the region. The activity graph is stored on the server.
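The back-end that produces Listing 1.1 is implemented as a Java RESTful service; purely for illustration, and written here in C#, the language of the GEO-SPADE client, such a response could be assembled along the following lines. The helper name, the table layout and the chart URL parameter are assumptions, not the actual server code.

using System.Globalization;
using System.Text;
using System.Xml.Linq;

// Illustrative assembly of a Placemark like the one in Listing 1.1.
// The real service is a Java RESTful Web service; the structure below is an assumption.
public static class ActivityKmlBuilder
{
    public static string Build(string regionName,
                               double south, double north, double west, double east,
                               string lowerTimeBound, string upperTimeBound,
                               string chartUrl)
    {
        var html = new StringBuilder();
        html.Append("<table>");
        html.AppendFormat(CultureInfo.InvariantCulture,
            "<tr><td>south bound: {0}</td></tr><tr><td>north bound: {1}</td></tr>", south, north);
        html.AppendFormat(CultureInfo.InvariantCulture,
            "<tr><td>west bound: {0}</td></tr><tr><td>east bound: {1}</td></tr>", west, east);
        html.AppendFormat("<tr><td>lower time bound: {0}</td></tr><tr><td>upper time bound: {1}</td></tr>",
            lowerTimeBound, upperTimeBound);
        // The temporal chart generated on the server is referenced by URL only.
        html.AppendFormat("<tr><td><img src=\"{0}\"/></td></tr>", chartUrl);
        html.Append("</table>");

        var placemark = new XElement("Placemark",
            new XElement("name", regionName),
            new XElement("description", new XCData(html.ToString())));
        return placemark.ToString();
    }
}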
The process of finding travel sequences is divided into two parts [20]. In the first part, every photo location is matched against a database of points of interest (we used the Wikipedia database) and the closest POI is assigned to the photo. This creates clusters in which every photo is assigned to an existing point of interest (POI). In the second part, the remaining unassigned photos are clustered using DBSCAN [29], a density-based clustering algorithm. The results of this part are presented in Figure 3. The left part of the figure shows clusters in which photos were assigned to known POIs (assigned clusters in our terms), while the right part of the figure presents clusters of photos that were not assigned to a POI (unassigned clusters). Listing 1.2 shows part of the KML generated by the server that describes an unassigned cluster, including its symbolic name that begins with "[uc]", its identifier number (id), information on the number of photos and the number of people in the cluster, and the cluster boundaries. Next, the analyst visually inspects the created clusters and performs the clustering again if needed (for example, when the size of the clusters should be changed). She may remove clusters that are irrelevant or unimportant using either the statistical information of the cluster (number of photos and people in a cluster) or her background knowledge of the area. Figure 4 summarizes this step by presenting a view in which the analyst can select unassigned clusters and artificially create a new POI identification for every selected cluster, either by giving a symbolic identifier or some meaningful name based on her knowledge of the area. Since Google Earth may contain different objects in a view, the system iterates over the objects and selects only those whose names begin with the "[uc]" identifier (see Listing 1.2). The identifiers of the unassigned clusters that the analyst decides to add to the list of clusters that are important for the analysis are sent to the server, which, in turn, queries the database, matches the received identifiers to the ids stored in the database, adds these clusters to the list of assigned clusters, and assigns them symbolic names.
<Placemark>
  <name>[uc] unassigned cluster</name>
  <description><![CDATA[id: 23 owners: 29 photos: 32]]></description>
  <Polygon>
    <outerBoundaryIs>
      <LinearRing>
        <coordinates>
          -16.926498,32.639085,0 -16.928043,32.640748,0 -16.925811,32.642844,0
          -16.922807,32.643494,0 -16.922003,32.642939,0 -16.920404,32.640675,0
          -16.921896,32.639835,0 -16.926498,32.639085,0
        </coordinates>
      </LinearRing>
    </outerBoundaryIs>
  </Polygon>
</Placemark>
Listing 1.2. Server generated KML that describes information about unassigned clusters
In the final step, the analyst generates sequence patterns from the clusters that were assigned to existing and artificial POIs. Figure 5 shows a form in which the analyst can select parameters such as the database properties and the length of the generated patterns. We used the Teiresias algorithm [27] for generating sequence patterns. Since the information about the generated sequences is not used directly for visualization, the server does not send the data in the KML format. Instead, we used the class serialization mechanism available in WCF to transfer the complex data structure to the client. The data includes the frequency of the sequence, the identifiers of the clusters in the sequence, the names of the points of interest that constitute the travel sequence, and the coordinates and centroids of every cluster in the sequence. The inspection of the travel sequences can be performed without referring to the exact location of the clusters that are part of a sequence if the names of the clusters belong to existing POIs. However, when the sequence contains a cluster with a generated symbolic name, the analyst has to see the position of the cluster on the map. This is demonstrated by the following case. The most frequent pattern generated is Santa Lucia ⇒ newpoi-3. While Santa Lucia is a parish in the district of Funchal and may be known to the domain expert, newpoi-3 is a name assigned by the system to an area for which the Wikipedia database does not contain a known POI. Clearly, this area should be located to give the analyst a hint about the sequence pattern. Sequences can be highlighted by clicking on the sequence pattern. Placemarks are added to the centers of the areas that are part of the selected sequence, and the numbers assigned to them indicate the relative order of each area in the sequence.
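The transfer classes used with the WCF class serialization mentioned above are not reproduced in the paper; purely as an illustration, a data contract for a mined travel sequence could be declared as follows. The class and member names are assumptions, not the project's actual types.

using System.Runtime.Serialization;

// Illustrative WCF data contract for a mined travel sequence; the classes used in the
// actual project are not published, so these names are assumptions.
[DataContract]
public class TravelSequence
{
    [DataMember] public int Frequency { get; set; }          // how often the sequence occurs
    [DataMember] public int[] ClusterIds { get; set; }        // identifiers of the clusters, in order
    [DataMember] public string[] PoiNames { get; set; }       // POI (or symbolic) name of every cluster
    [DataMember] public GeoPoint[] Centroids { get; set; }    // centroid of every cluster in the sequence
}

[DataContract]
public class GeoPoint
{
    [DataMember] public double Latitude { get; set; }
    [DataMember] public double Longitude { get; set; }
}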
Fig. 1. Selection of the area of interest
Fig. 2. Monthly photographic activity
In the case of Santa Lucia ⇒ newpoi-3, Santa Lucia will be highlighted with the number one, while newpoi-3 receives the number two (Figure 6). 3.2 Region Exploration Using Geo-tagged Photos Photo-sharing web sites like Panoramio or Flickr provide users with the means to explore a specific area by searching for photos taken in that area using different filtering criteria: keywords, recently uploaded photos, or interesting photos defined by the number of comments written or by the number of times a photo was viewed.
Fig. 3. Multiple views of regions assigned to some existing POI (left) and regions where no POI was found (right)
Fig. 4. Selection of unassigned regions and creation of artificial POIs
In [22], we presented a method for analyzing photo comments in terms of the user opinions and sentiments that are present in them.
Fig. 5. Sequence patterns
Fig. 6. Combined view of obtained sequence patterns and the map. The regions are highlighted by clicking on the sequences.
When the text parts that describe opinions (attitudes towards the quality of a photo) and/or sentiments (attitudes towards the objects depicted in a photo, or mood expressions) are located in the text, the strength of the opinion and/or sentiment can be calculated and mapped to a continuous numerical scale. This allows searching for photos according to opinion or sentiment scores.
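The concrete scoring functions are given in [22] and are not reproduced here; the fragment below only sketches, under assumed data structures, how per-comment scores might be aggregated into a photo-level score on a continuous scale and then used as a search criterion. All types, the averaging rule and the thresholds are illustrative.

using System.Collections.Generic;
using System.Linq;

// Illustrative only: aggregates per-comment sentiment scores into a photo score
// and filters/sorts photos by that score. The aggregation in [22] may differ.
public record ScoredPhoto(long PhotoId, double Latitude, double Longitude,
                          IReadOnlyList<double> CommentSentimentScores)
{
    // A simple aggregate used purely for this sketch.
    public double SentimentScore =>
        CommentSentimentScores.Count == 0 ? 0.0 : CommentSentimentScores.Average();
}

public static class PhotoSearch
{
    public static IEnumerable<ScoredPhoto> TopBySentiment(
        IEnumerable<ScoredPhoto> photosInView, int topN, double minScore = double.MinValue)
    {
        return photosInView
            .Where(p => p.SentimentScore >= minScore)
            .OrderByDescending(p => p.SentimentScore)
            .Take(topN);
    }
}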
We demonstrate how GEO-SPADE can be used to perform the task of area exploration using opinion and sentiment scores, and we show some technical details of the implementation. The exploration begins by navigating to an area of interest. The control panel shown in Figure 7 is the main control panel for filtering photos according to different criteria (including opinion and sentiment scores). When the focus is on the control panel, the current visual boundaries of the map view are sent to the server. The server connects to the database and fetches the information about all photos found in the area. The response structure consists of key-value pairs separated by ";". An example of the response string is the following: service=statistics;photos=501;minOpinion=-0.70;maxOpinion=21.30;.... The first key-value parameter denotes the type of the response, so that the response handler can delegate the response to the appropriate service handler. The service handler extracts the key-value pairs using regular expressions. This allows extraction of the known parameters without breaking the code even if new parameters are added to the response string without updating the client side. The general statistics are displayed on the control panel, which allows the user to see how many photos were found in the region, what the minimum and maximum sentiment and opinion scores of these photos are, and other relevant information. The next step is to retrieve the photos by applying one of the available filters (opinion, sentiment, etc.), by restricting the number of photos on the map and on the control panel, and by selecting the photos that were taken in a specific time period. The server's response is similar to the one described above. However, it contains two parts. The first part is intended for parsing by the control panel and includes the information for the N selected photos to be displayed in the control panel, sorted according to the selected filtering criteria (opinion, sentiment). The second part is a KML string that is delegated to the Google Earth engine. Figure 8 shows an example of the KML response, which supports two representation styles. The left side of Figure 8 shows the map view of a region of Warsaw, Poland, with thumbnails of images taken in that area filtered by sentiment scores. The right side of Figure 8 shows the same map view, but instead of image thumbnails, the sentiment scores are mapped to colors. Both representations allow the user to quickly explore the area, either by observing the image thumbnails or by navigating to a place where higher scores are given to images, which can be seen by looking at the color of the circles. Both ways make it possible to see the image itself and to retrieve more information about it (opinion or sentiment scores, and comments) by clicking on the thumbnail. 3.3 Hotel Price Prediction HoteLeverage is an ongoing project initiated by Travel Global Systems Inc. (TGS, http://www.travelholdings.com/brands tgs.html) and carried out in collaboration between TGS, Ben-Gurion University of the Negev (Israel) and Konstanz University (Germany). Serving as a Travel Service Provider (TSP) and a hotel broker between Product Suppliers (PSs) and consumers, TGS provides solutions to travel agencies (B2B) as well as to individual consumers.
Fig. 7. The control panel for retrieving images according to one of the sorting criteria. The top-N retrieved images are also presented in the control panel. Centering the map view around the top-N photos is implemented by double-clicking the photo.
Fig. 8. Two photo representation styles: image thumbnails (left) and color coding according to the image scores of the selected criteria (right)
Accordingly, various PSs offer TGS attractive room prices, which TGS is committed to promoting by means of its Web sites and services. However, the same PS can provide other travel groups that offer services similar to those of TGS a lower price for the same room.
Fig. 9. The control panel for model creation. The panel includes the list of hotels that should be included in the model; clustering algorithms that are applied to hotels and business centers; various components that will be part of the model; filters and the algorithm to be used for model creation.
Fig. 10. The map view with hotels in the area of Barcelona, Spain. The requested and predicted prices for the selected hotel are shown along with other relevant information about the hotel. The red triangle indicates that the proposed price is higher than the predicted one.
This situation can lead to a considerable loss of revenue. However, searching for the prices of the rival companies is unfeasible due to the high volatility of the search space (hotel prices differ daily, and there is a vast number of PSs and TSPs). Therefore, it was decided to develop a decision support system that can identify hotels within TGS's own database that possess the same characteristics (price changes, closeness to transportation or points of interest, amenities in the room, etc.) and predict the objective price of the hotel in question. The comparison of the offered and predicted prices can give TGS the "leverage" to negotiate a lower price with a product supplier. The project consists of three milestones. By the first milestone, Ben-Gurion University students had developed and evaluated data mining techniques for price prediction. By the second milestone, the technique was integrated into GEO-SPADE, while by the third milestone the system will be integrated into the TGS infrastructure. Figure 9 demonstrates one of the control panels available to the domain expert. The domain expert selects the region of interest. The list of hotels located in the region is drawn from the database and presented in the control panel. By default, all the hotels are selected for processing; however, the domain expert can remove hotels from the list. Next, the domain expert selects the appropriate algorithm for clustering hotels and business locations. This clustering is used for estimating the closeness of a hotel to other hotels and to relevant businesses. Then, the domain expert selects the components that should be included in the model (closeness to points of interest, hotel amenities, hotel attributes, prices, etc.). If needed, filters are selected (hotels with a specific number of rooms, hotel category and a specific period). Finally, she selects the algorithm for building the prediction model and the name of the model for future reference. Figure 10 shows the map view with an example of the predicted price for a hotel in Barcelona, Spain. It shows the requested and predicted prices. The red triangle indicates that the proposed price is higher than the predicted one. In addition, the view shows other relevant information about the hotel and the statistics calculated from the model, such as the distance to the nearest beach or to the main street.
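The prediction models themselves are trained on the server side (for example with the Weka API listed in Table 1); the sketch below only illustrates the client-facing decision-support step conveyed by the red triangle in Figure 10, namely comparing an offered price with a model prediction. The interface, the tolerance band and all names are assumptions, not part of the HoteLeverage implementation.

using System.Collections.Generic;

// Illustrative decision-support step: compare the offered price with the predicted one.
// The prediction itself is delegated to a server-side model; all names are assumptions.
public interface IPriceModel
{
    // Returns the predicted nightly price for a hotel described by a feature vector.
    double Predict(IReadOnlyDictionary<string, double> hotelFeatures);
}

public enum PriceFlag { BelowPrediction, NearPrediction, AbovePrediction }

public static class PriceLeverage
{
    // The 5% tolerance band is an assumption made only for this sketch.
    public static PriceFlag Flag(IPriceModel model,
                                 IReadOnlyDictionary<string, double> hotelFeatures,
                                 double offeredPrice,
                                 double tolerance = 0.05)
    {
        double predicted = model.Predict(hotelFeatures);
        if (offeredPrice > predicted * (1 + tolerance)) return PriceFlag.AbovePrediction;  // "red triangle" case
        if (offeredPrice < predicted * (1 - tolerance)) return PriceFlag.BelowPrediction;
        return PriceFlag.NearPrediction;
    }
}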
4 Conclusions and Future Work This paper extends [19] by focusing on real-life scenarios of spatial analysis, exploration and decision support. Specifically, we presented three case studies in which GEO-SPADE is successfully evaluated on problems such as analysis of tourist activity, geo-tagged photo search and hotel price prediction. This is achieved by the architecture that is based on a thin client paradigm, pluggable components and SOA-based architecture in which the core of the functionality is developed separately in any programming language and run on the server, while the results of the computation are transferred using Web services. The results that are targeted for direct visualization are communicated in the KML format and delivered to the Google Earth engine. The results that are required for further processing are communicated to the client-side plug-in component in any format suitable for the developer. In future research we will concentrate on enhancing the capabilities of the framework and extending the use of the framework by working on various GEO-related problems.
We will also integrate several mapping technologies into GEO-SPADE, such as 2D maps from different providers, while preserving KML as the primary protocol for data interchange and visualization. Acknowledgements. This work was partially funded by the German Research Society (DFG) under grant GK-1042 (Research Training Group "Explorative Analysis and Visualization of Large Information Spaces"), and by the Priority Program (SPP) 1335 ("Visual Spatio-temporal Pattern Analysis of Movement and Event Data").
References 1. Peters, D.: Building a GIS: System Architecture Design Strategies for Managers. ESRI Press (2008) 2. Friis-Christensen, A., Ostlander, N., Lutz, M., Bernard, L.: Designing service architectures for distributed geoprocessing: Challenges and future directions. Transactions in GIS 11(6), 799 (2007) 3. Diaz, L., Granel, C., Gould, M., Olaya, V.: An open service network for geospatial data processing. In: An Open Service Network for Geospatial Data Processing: Free and Open Source Software for Geospatial (FOSS4G) Conference (2008) 4. Schaeffer, B., Foerster, T.: A client for distributed geo-processing and workflow design. Location Based Services Journal 2(3), 194–210 (2008) 5. Foerster, T., Schaeffer, B., Brauner, J., Jirka, S., Muenster, G.: Integrating ogc web processing services into geospatial mass-market applications. In: International Conference on Advanced Geographic Information Systems & Web Services, pp. 99–103 (2009) 6. Andrienko, G., Andrienko, N., Dykes, J., Fabrikant, S., Wachowicz, M.: Geovisualization of dynamics, movement and change: key issues and developing approaches in visualization research. Information Visualization 7(3), 173–180 (2008) 7. Grossner, K.E.: Is google earth, “digital earth?” - defining a vision. University Consortium of Geographic Information Science, Summer Assembly, Vancouver, WA (2006) 8. Patterson, T.C.: Google Earth as a (not just) geography education tool. Journal of Geography 106(4), 145–152 (2007) 9. Goodchild, M.: The use cases of digital earth. International Journal of Digital Earth 1(1), 31–42 (2008) 10. Sheppard, S., Cizek, P.: The ethics of Google Earth: Crossing thresholds from spatial data to landscape visualisation. Journal of Environmental Management 90(6), 2102–2117 (2009) 11. Farman, J.: Mapping the digital empire: Google Earth and the process of postmodern cartography. New Media & Society 12(6), 869–888 (2010) 12. Smith, T., Lakshmanan, V.: Utilizing google earth as a gis platform for weather applications. In: 22nd International Conference on Interactive Information Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, January 29-February 2 (2006) 13. Wood, J., Dykes, J., Slingsby, A., Clarke, K.: Interactive visual exploration of a large spatiotemporal dataset: Reflections on a geovisualization mashup. IEEE Transactions on Visualization and Computer Graphics 13(6), 1176 (2007) 14. Compieta, P., Di Martino, S., Bertolotto, M., Ferrucci, F., Kechadi, T.: Exploratory spatiotemporal data mining and visualization. Journal of Visual Languages and Computing 18(3), 255–279 (2007) 15. Slingsby, A., Dykes, J., Wood, J., Foote, M., Blom, M.: The visual exploration of insurance data in google earth. In: Proceedings of Geographical Information Systems Research UK (GISRUK), pp. 24–32 (2008)
16. Pezanowski, S., Tomaszewski, B., MacEachren, A.: An open geospatial standards-enabled google earth application to support crisis management. Geomatics Solutions for Disaster Management, 225–238 (2007) 17. Di Martino, S., Bimonte, S., Bertolotto, M., Ferrucci, F.: Integrating google earth within olap tools for multidimensional exploration and analysis of spatial data. In: Proceedings of the 11th International Conference on Enterprise Information Systems, pp. 940–951 (2009) 18. Stensgaard, A., Saarnak, C., Utzinger, J., Vounatsou, P., Simoonga, C., Mushinge, G., Rahbek, C., Møhlenberg, F., Kristensen, T.: Virtual globes and geospatial health: the potential of new tools in the management and control of vector-borne diseases. Geospatial Health 3(2), 127–141 (2009) 19. Kisilevich, S., Keim, D., Rokach, L.: Geo-spade: A generic google earth-based framework for analyzing and exploring spatio-temporal data. In: 12th International Conference on Enterprise Information Systems, pp. 13–20 (2010) 20. Kisilevich, S., Keim, D., Rokach, L.: A novel approach to mining travel sequences using collections of geo-tagged photos. In: The 13th AGILE International Conference on Geographic Information Science, pp. 163–182 (2010) 21. Kisilevich, S., Krstajic, M., Keim, D., Andrienko, N., Andrienko, G.: Event-based analysis of people’s activities and behavior using flickr and panoramio geo-tagged photo collections. In: The 14th International Conference Information Visualization, pp. 289–296 (2010) 22. Kisilevich, S., Rohrdantz, C., Keim, D.: “beautiful picture of an ugly place”. exploring photo collections using opinion and sentiment analysis of user comments. In: Computational Linguistics & Applications, CLA 2010 (2010) 23. Graupmann, J., Schenkel, R.: GeoSphere-Search: Context-Aware Geographic Web Search. In: 3rd Workshop on Geographic Information Retrieval (2006) 24. Ferraz, V.R.T., Santos, M.T.P.: Globeolap: Improving the geospatial realism in multidimensional analysis environment. In: 12th International Conference on Enterprise Information Systems, pp. 99–107 (2010) 25. Wachowicz, M., Ying, X., Ligtenberg, A., Ur, W.: Land use change explorer: A tool for geographic knowledge discovery. In: Anseling, L., Rey, S.J. (eds.) New Tools for Spatial Data Analysis, Proceedings of the CSISS Specialist Meeting (2002) 26. Lundblad, P., Eurenius, O., Heldring, T.: Interactive visualization of weather and ship data. In: Proceedings of the 13th International Conference Information Visualisation, Washington, DC, USA, pp. 379–386. IEEE Computer Society, Los Alamitos (2009) 27. Rigoutsos, I., Floratos, A.: Combinatorial pattern discovery in biological sequences: the TEIRESIAS algorithm. Bioinformatics-Oxford 14(1), 55–67 (1998) 28. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.: The WEKA data mining software: An update. ACM SIGKDD Explorations Newsletter 11(1), 10–18 (2009) 29. Ester, M., Kriegel, H., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proc. KDD, vol. 96, pp. 226–231 (1996) 30. Girardin, F., Fiore, F.D., Ratti, C., Blat, J.: Leveraging explicitly disclosed location information to understand tourist dynamics: a case study. Jouranl of Location Based Services 2(1), 41–56 (2008) 31. Andrienko, G., Andrienko, N., Bak, P., Kisilevich, S., Keim, D.: Analysis of communitycontributed space-and time-referenced data (example of panoramio photos). 
In: GIS 2009: Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 540–541 (2009)
BioStories: Dynamic Multimedia Environments Based on Real-Time Audience Emotion Assessment

Vasco Vinhas1,2, Eugénio Oliveira1,2, and Luís Paulo Reis1,2

1 FEUP – Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias s/n, Porto, Portugal
2 LIACC – Laboratório de Inteligência Artificial e Ciência de Computadores, Rua do Campo Alegre 823, Porto, Portugal
{vvm,eco,lpreis}@fe.up.pt
Abstract. BioStories is the outcome of a four-year research project focused on uniting affective and ubiquitous computing with the real-time generation of context-aware multimedia environments. Its initial premise was the possibility of performing real-time automatic emotion assessment through the online monitoring of biometric channels and of using this information to design, on the fly, dynamic multimedia storylines that are emotionally adapted, so that end users unconsciously determine the story graph. The emotion assessment process was based on the dynamic fusion of biometric channels such as EEG, GSR, respiration rate and volume, skin temperature and heart rate, on top of Russell's circumplex model of affect. BioStories' broad scope also allowed for some spin-off projects, namely mouse control through EMG, which resulted in a tested technology for alternative/inclusive interfaces. Exhaustive experiments showed a success rate of 86% for emotion assessment, IC95%(p)≈(0.81, 0.90), in a dynamic tridimensional virtual environment with a user immersiveness score of 4.3 out of 5. The success of the proposed approach allows us to envision its application in several domains such as virtual entertainment, videogames and cinema, as well as direct marketing, digital TV and domotic appliances. Keywords: Emotion assessment, Biometric channels, Dynamic interfaces, Multimedia content.
with more disseminated, powerful and cheaper wireless communication facilities constituted the cornerstone for context-aware computing possibilities. As traditional multimedia distribution, such as television, has been suffering from extreme content standardization and low levels of significant user interaction, we believe there is an important breakthrough opportunity in uniting real-time emotion assessment based on biometric information with dynamic multimedia storylines, so that interaction and decisions can be performed at a subconscious level, thus providing greater immersiveness by combining affective and ubiquitous computing towards new interaction paradigms. It is in this context that BioStories has arisen as a prototype proposition for generating and distributing highly dynamic multimedia content, not confined to but with a special focus on immersive tridimensional environments, in which storylines are based on and determined by the user's online emotional states, assessed in real time by means of minimally invasive biometric channel monitoring and fusion. As the set of biometric sources is also intended to be as flexible as possible, so that the system can be used in diverse contexts, it encompasses distinct sources, namely electroencephalography, galvanic skin response, respiration rate and volume, skin temperature and heart rate. In order to cope with the need to perform continuous and smooth emotional assessment, the representation underneath the classification is a variation of Russell's bidimensional circumplex model of affect. Due to the broad spectrum of the global project, there were several spin-off opportunities, of which mouse control through electromyography is selected for further depiction as an alternative, inclusive and complementary user interface proposition. Considering the BioStories main track, its latest version achieved a success rate of eighty-six percent, IC95%(p)≈(0.81, 0.90), for emotion assessment with elevated content immersiveness levels, allowing us to forecast the application of the technology to several domains such as the extension of traditional interfaces, domotic environments, virtual entertainment and the cinema industry, as well as digital television, direct marketing and psychiatric procedures. This document is structured as follows: in the next section, a global state-of-the-art study is presented, covering emotion assessment, dynamic multimedia contents, hardware solutions and global integrating projects; in Section 3, the broad BioStories work is described in detail by depicting its global architecture, referring to the spin-off projects and defining the emotion assessment process as well as the multimedia content generation in a highly immersive tridimensional environment. In the following section, the experimental results are presented both for the mouse control through EMG and for the main BioStories project; finally, the last section is devoted to critical analysis, conclusion extraction and the identification of future work areas by means of the recognition of application domains.
2 Related Projects Ekman's emotion model [4] contemplated six main universal classes of affect, but in a discrete fashion: anger, joy, fear, surprise, disgust and sadness. However, Russell's proposal of a circumplex model based on a bidimensional arousal/valence plane allows for a continuous, analog emotional state mapping [13]. The introduction of a
third dimension, dominance, also proposed by Russell, is still debated: although generally accepted, it lacks biometric evidence for automatic emotion assessment purposes [14]. The use of biosignals to perform automatic emotional assessment has been conducted with success using different channels, induction methods and classification approaches. Emotion induction proposals range from image presentation and stage actor representation to personal story recall or film and multimedia projection [11],[12],[17]. Concerning the set of biometric channels elected, there is also a wide variety of active research lines; although some studies focus their attention on multichannel EEG and fMRI with promising results, these approaches are still believed to be either extremely intrusive and/or not convenient in real situations [15]. Most recent approaches are based on distinct combinations of simple hardware solutions for monitoring several biometric channels, namely GSR, skin temperature, respiration parameters, heart rate and blood pressure [3],[11],[16], each with distinct emotion classification methodologies, but there is great acceptance of and near unanimity around the continuous model of affect as well as the use of biosignals. Although the common denominator of almost all research projects in this field is automatic emotional assessment, for the main principle of the proposed work special attention must be paid both to online real-time classification and to its integration with external systems, particularly regarding human-computer interfaces. Several efforts have been conducted towards human emotion disclosure in social networks and chat systems. Despite the theoretical success of these proposals, they have faced some usability resistance when used in direct human-human communication [5]. However, this limitation disappears when emotion assessment is integrated into pure human-machine interfaces, as exemplified by the efforts to generate emotionally contextualized audio [6] and images [2]. This integration research track has continuously been pushed further in order to mix affective with ubiquitous computing towards new interaction paradigms that are emotionally context-aware, thus enabling extreme interface dynamism to cope with distinct user moods and emotional responses. These proposals have envisioned and developed systems that integrate real-time emotion assessment based on biosignal processing into diverse domains such as everyday computer interfaces [3] or the optimization of professional car drivers' environments [10]. This broad scope allows for a plausible perspective on the application of such systems in several distinct human-computer interaction domains. It is in this vibrant and lively research context [18] that the current proposition stands, intending to be a valid contribution towards increasingly dynamic and emotionally aware human-computer interaction, but not limited to traditional interfaces, as the main goal is to mold and provide, in real time, dynamic multimedia content that fits the audience's emotional needs. In order to pursue this aspiration, and based on the state-of-the-art summary described above, Russell's bidimensional model of affect is reused and adapted; GSR, skin temperature and respiration volume and rate sensors are used as biosignals to perform real-time emotion assessment through fuzzy sensorial fusion, and the outcome is integrated into a flexible, modular multimedia content generation and distribution framework.
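As a minimal illustration of the adopted representation, the following sketch maps a normalized (valence, arousal) pair onto the four quadrants of the circumplex plane. The informal quadrant labels are the usual readings of the model, not categories defined by the authors, and the clamping to [-1, 1] anticipates the normalization described in the next section.

using System;

// Minimal sketch: map a normalized (valence, arousal) point to a circumplex quadrant.
// Valence and arousal are assumed to be already normalized to [-1, 1].
public readonly struct EmotionalState
{
    public EmotionalState(double valence, double arousal)
    {
        Valence = Math.Clamp(valence, -1.0, 1.0);
        Arousal = Math.Clamp(arousal, -1.0, 1.0);
    }

    public double Valence { get; }   // displeasure (-1) .. pleasure (+1)
    public double Arousal { get; }   // low (-1) .. high (+1) excitement

    public string Quadrant =>
        (Valence >= 0, Arousal >= 0) switch
        {
            (true, true)   => "pleasure / high arousal (e.g., excitement)",
            (true, false)  => "pleasure / low arousal (e.g., calm)",
            (false, true)  => "displeasure / high arousal (e.g., distress)",
            _              => "displeasure / low arousal (e.g., boredom)",
        };
}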
3 BioStories Description The key design principles and requirements are identified and listed below:

- Complete architecture component modularity, to ensure full freedom for parallel development and the possibility of path reversal, which is always critical in exploratory courses of action;
- Maximization of the flexibility of the biometric channel composition, so that diverse combinations can be experimented with, guaranteeing several emotional assessment scenarios and promoting independence from third-party suppliers;
- Possibility of geographic and logical distribution of the stakeholders: as the project contemplates a strong experimental component, it was found imperative that subjects, coordinators and researchers have the chance of being apart, allowing controlled environments;
- Real-time signal acquisition and processing, complemented with data storage for posterior analysis, to allow online emotion assessment and multimedia distribution while also granting offline critical processing;
- Maximization of third-party software independence, to assure that, although some hardware solutions have to be adopted, the whole architecture is not limited or constrained by external entities.

The state-of-the-art revision that was conducted, especially the one dating from the beginning of the research, showed the absence of such a framework ready for adoption [16]. As a natural consequence, the decision was to develop the envisioned platform following the enunciated principles, which resulted in the design presented in Figure 1 [17]. The framework can be described by following the information flow in a cycle. First, a given multimedia content is presented to the end user and several biometric signals are collected from one or more pieces of equipment. In Figure 1, as an example, three distinct channels are shown, each with a different data communication interface, to illustrate the framework's versatility. For every channel, a software driver has been developed to ensure independence, and the collected data is made available through a TCP/IP network from which different client applications can connect and access and process the information online. This approach copes with the physical and logical independence and distribution of the stakeholders. At this end of the architecture, there is a client for each biometric channel responsible both for pre-processing activities and for enabling real-time monitoring. Their outputs are the input of the processing and analysis backbone, which is responsible for data storage and emotion assessment and which, combined with the emotional policy, chooses from the multimedia catalogue or generates the next multimedia content to be provided to the end user. The process of performing online emotion assessment and its conjugation with the emotional policy in order to influence the storyline is depicted in the subsection devoted to BioStories, where the maintenance of the architectural structure across all developed prototypes, including the spin-off projects, is also visible. The initial stage of the research was unsurprisingly characterized by the definition of the already detailed framework and by early biometric equipment acquisition. Within this scope, EEG and heart rate acquisition hardware solutions were purchased, namely Neurobit Lite™ and Oxicard®.
While the first ensured high portability and usability levels, infrared data transmission and API disclosure for driver development, the second was characterized by reduced dimensions, a high perfusion degree and also API disclosure, thus enabling independent software design.
Fig. 1. BioStories Global Framework Architecture Diagram
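A hedged sketch of the driver-to-network step of the architecture in Figure 1 is given below: each biometric channel is published over TCP/IP so that any monitoring client can connect and consume its samples. The class name, the text-line wire format and the sampling loop are assumptions, not the project's actual protocol.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Illustrative only: publishes samples of one biometric channel as text lines over TCP
// so that client applications can connect and read them in real time.
public sealed class ChannelPublisher : IDisposable
{
    private readonly TcpListener _listener;
    private readonly string _channelName;

    public ChannelPublisher(string channelName, int port)
    {
        _channelName = channelName;
        _listener = new TcpListener(IPAddress.Any, port);
        _listener.Start();
    }

    // Blocks until one client connects, then streams samples to it.
    public void ServeOneClient(Func<double> readSample, TimeSpan samplingInterval)
    {
        using TcpClient client = _listener.AcceptTcpClient();
        using NetworkStream stream = client.GetStream();
        while (client.Connected)
        {
            double value = readSample();   // provided by the hardware driver
            string line = $"{_channelName};{DateTime.UtcNow:O};{value}\n";
            byte[] payload = Encoding.UTF8.GetBytes(line);
            stream.Write(payload, 0, payload.Length);
            System.Threading.Thread.Sleep(samplingInterval);
        }
    }

    public void Dispose() => _listener.Stop();
}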
During the preliminary experiments conducted with the EEG equipment, due to an unforeseen error in the active electrode positioning protocol, EMG signals started to be registered because of the electrode's location in the user's temporal zone, between the eye and the ear. As the signal pattern was so distinct and clear, the decision was taken to spin off this opportunity as an alternative interaction research track based on intentional eye closure detection [1]. The basilar project principle was to provide a simple yet effective method for wink detection and to map it into external actions, in such a way that it could be faced either as an inclusive human-machine interface for disabled people or as an alternative interaction mechanism extending traditional interfaces. Another important premise was the need to keep the assessment algorithm straightforward enough to be computed in embedded systems without serious processing effort. This design resulted in the definition of an algorithm based on the establishment of two threshold parameters: peak and duration. The first parameter represents the minimum signal amplitude above which an intentional eye closure action might be present; only values above this limit are considered to be potential winks. The duration parameter refers to the minimum time span that the signal must persist above the peak limit to complete the action detection process and, therefore, assign wink recognition.
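The peak/duration rule just described can be sketched as follows; the concrete threshold values and the sample format are placeholders, since the published description does not fix them.

using System;

// Sketch of the peak/duration wink detector described above.
// Threshold values are placeholders; the paper does not publish concrete numbers.
public sealed class WinkDetector
{
    private readonly double _peakThreshold;    // minimum EMG amplitude for a candidate wink
    private readonly TimeSpan _minDuration;    // minimum time the signal must stay above it
    private DateTime? _aboveSince;

    public WinkDetector(double peakThreshold, TimeSpan minDuration)
    {
        _peakThreshold = peakThreshold;
        _minDuration = minDuration;
    }

    // Feed one EMG sample at a time; returns true exactly when a wink is recognized.
    public bool Process(double amplitude, DateTime timestamp)
    {
        if (amplitude < _peakThreshold)
        {
            _aboveSince = null;        // signal dropped below the peak limit: reset
            return false;
        }

        _aboveSince ??= timestamp;     // first sample above the limit starts the timer
        if (timestamp - _aboveSince.Value < _minDuration)
            return false;

        _aboveSince = null;            // report once and wait for the next candidate
        return true;
    }
}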
Fig. 2. Framework Architecture Instantiated to the BioStories 3D Immersive Environment Prototype
Once the signal processing assessment procedure is conducted and wink recognition is performed online, there is the need to trigger the corresponding external action, which is the purpose of the interface. This has been achieved by emulating click operations and drag mode activation. These actions were tested using the computer card game Microsoft Hearts, whose experimental results are reserved for the appropriate point in the following section. Following the successful approach, the detection methodology has been extended to contemplate both differential and conjunction eye analysis, thus enabling the distinction of left from right winks as well as of single from both-eye intentional closure. This has been performed by setting two extra pairs of peak/duration parameters, necessarily lower for the non-dominant eye and higher for both eyes. Although standard parameter values are suitable for most users, there is the possibility to tweak and fine-tune these in order to maximize classification success rates. Further considerations referring to this research line are left to the results and critical analysis sections, as the main BioStories track, namely the emotion assessment and online multimedia generation, is depicted in the following subsection. The main track research project, designated BioStories, has known several prototype versions. The results achieved in one approach were analyzed and the extracted conclusions were incorporated in the next version, until the final approach, now the subject of detail. Having this in mind, and undertaking just a swift contextualization, the initial approach was based on EEG signal analysis complemented with GSR data.
It was found that subject gender, together with high-frequency EEG, constituted a key factor for emotion assessment [7],[8]. These findings were further explored in the first automatic emotion assessment attempt, based on data pre-processing techniques together with offline cluster analysis [9]. As enhancing the collected EEG data would have required acquiring multichannel equipment, with both invasiveness and financial impact, it was chosen to use GSR, respiration and skin temperature sensors to perform real-time emotion assessment. In order to close the information loop, the IAPS library was used for multimedia content supply [17]. This prototype version was further improved by replacing the still images with dynamic multimedia content and by refining the emotion assessment methodology [18]. The described evolution has now taken the final step with the latest BioStories prototype, which enhances both the emotion assessment and the multimedia immersiveness levels. Starting the project description with the framework architecture instantiation, it is completely based on and adapted from the original one, as illustrated in Figure 2. The main differences reside in the biometric channels and equipment used. In this version, the Nexus-10 hardware solution was elected, as it allows the simultaneous collection of ten distinct channels in a compact portable unit with data communication based on the wireless Bluetooth protocol, thus granting complete software independence. The biometric channels used in this version are GSR, respiration sensors for acquiring volume and rate, and skin temperature. The remaining system components are very similar to those of the previously described architecture, with the multimedia catalogue being replaced by an aeronautical simulator responsible for tridimensional environment generation and control, in this case Microsoft Flight Simulator X. In order to promote greater immersiveness levels, although not depicted in Figure 2, tridimensional goggles with three degrees of freedom (roll, pitch and yaw) from Vuzix iWear have been used. It was found useful to employ Figure 3 as a reference for the module interaction description, as it represents a running screenshot of the main BioStories application: the Emotion Classifier. First, it is necessary to establish the connection to the Collector for accessing the online biometric data feed, to set the link to the aeronautical simulator for real-time environment parameterization, and finally, optionally, to connect to a local or remote database for processing data storage. At the bottom of the screen there are two separate areas for signal monitoring: the one on the left provides the latest data collected and the one on the right allows for baseline and interval definition for each channel. In the middle of the screen there is the emotional policy control panel, where session managers are able to determine which emotional strategy shall be activated: contradicting or reinforcing the current emotional state, forcing a particular one, or touring around the four quadrants. Specifically related to the emotion assessment methodology, the dynamic chart on the right allows for real-time user emotion monitoring: the big red dot stands for the initially self-assessed emotional state and the smaller blue dot for the current one. The classification process is based on sensorial fusion on top of the bidimensional Russell's circumplex model of affect, depicted in the state-of-the-art section.
The whole process resides in the calculation of valence and arousal values, the first being responsible for the horizontal emotional movement along the x-axis from displeasure to pleasure states, and the second accountable for the vertical emotional displacement along the y-axis from low to high excitement levels. With the purpose of pre-processing data towards
real-time emotional assessment, a normalization process is conducted so that both valence and arousal values are mapped into the [-1,1] spectrum. With this approach, emotional states are treated as Cartesian points in a bidimensional environment. Considering the mentioned normalization process, it is important to detail the already superficially mentioned calibration process. Although any given Cartesian point represents a normalized, univocally defined, continuous emotional state, it can be the result of an infinite number of biosignal combinations.
Fig. 3. BioStories Running Application Screenshot
Eq. 1. Dynamic Biometric Data Scaling Model
This evidence, together with the extreme discrepancy in biosignal baseline definitions between two people, or even for the same individual across time – either due to morphologic differences or context variations such as sleep time and quality, erratic food habits or external weather conditions – leads to the absolute need for a standard calibration and biometric channel fusion. The calibration is performed through a self-assessment process at the beginning of the session, although it can also be repeated without limitation during the experimental protocol, by directly pinpointing the predominant current emotional state. This action enables the definition of the normalized baseline point according to the real-time assessed biometric information, and for each channel an initially non-binding twenty percent signal variability allowance is considered. Whenever overflow is detected, the dynamic scaling process is activated, as illustrated through Equation 1, and might be summarized as the stretching of the biometric signal scale when normalized readings go beyond the interval of [-1,1]. This process is conducted independently for each of the channels and results in a non-linear scale disruption, yielding a greater density towards the limit breach. First, the maximum value of c1 – any given biometric channel – is determined by comparing the current reading with the stored value – Equation 1 (a). If the limit is broken, the system recalculates the linear scale factor for values greater than the baseline neutral value, having as a direct consequence the increase of the interval's density – Equation 1 (b). Based on the new interval definition,
subsequent values shall be normalized accordingly – Equation 1 (c), (d). With this approach, and together with dynamic calibration and data normalization, it becomes possible for the system to perform real-time adaptations as a result of the user's idiosyncrasies and signal deviations, thus assuring continuous normalized values. Considering the aeronautical simulator as a tridimensional multimedia engine, the current emotional state and the target one, determined by the policy, influence the simulation parameters, namely weather conditions, scenery and maneuvering. The two quadrants associated with displeasure determine worse climatic conditions, ranging from thunderstorms to fog, to cope with high or low levels of arousal. Fair weather conditions are associated with the two quadrants related to pleasure. The main routes are configurable and two courses have been designed: a very simple one that consists of an oval-shaped route around an island, and a second one with many closed turns at low altitudes. Maneuvering is also controllable: for the first route, speed, heading and altitude vary swiftly and suddenly, while for the second one additional features are applied, such as maximum bank and yaw damper, which limit the maximum roll during turns and reduce rolling and yawing oscillations, making the flight smoother and calmer.
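To make the preceding normalization and dynamic rescaling description more concrete, a minimal sketch is given below; it follows the textual description of Equation 1 (a)-(d) rather than the equation itself, and the channel structure, the handling of the initial twenty percent allowance and all names are illustrative assumptions.

```typescript
// Illustrative per-channel normalization with dynamic rescaling.
// Assumption: each channel keeps a baseline (neutral) value set at calibration
// and a running maximum deviation; readings are mapped into [-1, 1] and the
// scale is stretched whenever a reading would otherwise overflow that interval.

interface ChannelState {
  baseline: number;      // neutral value pinpointed during self-assessment
  maxDeviation: number;  // largest absolute deviation observed so far
}

function initChannel(baseline: number): ChannelState {
  // Start with a non-binding 20% variability allowance around the baseline.
  return { baseline, maxDeviation: Math.abs(baseline) * 0.2 || 1 };
}

function normalize(state: ChannelState, reading: number): number {
  const deviation = reading - state.baseline;
  // (a) compare the current reading with the stored extreme value;
  // (b) if the limit is broken, recalculate the linear scale factor,
  //     which increases the density of the interval near its limits.
  if (Math.abs(deviation) > state.maxDeviation) {
    state.maxDeviation = Math.abs(deviation);
  }
  // (c), (d) map subsequent readings with the (possibly stretched) scale,
  // so the result always stays inside [-1, 1].
  return deviation / state.maxDeviation;
}

// Usage: normalized valence and arousal are then treated as Cartesian coordinates.
const gsr = initChannel(4.2);        // hypothetical baseline value
console.log(normalize(gsr, 5.1));    // early readings fall inside [-1, 1]
console.log(normalize(gsr, 7.8));    // an overflow stretches the scale
```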
4 Experimental Results In order to assess the adequacy of the proposed approach for human-computer interaction objectives, the following experiment relating to mouse control through EMG was designed. Thirty volunteers selected from fellow laboratory researchers and college students were randomly divided into two groups. Group A emulated trained and experienced end users with basic technology knowledge, and Group B represented inexperienced users without any previous knowledge of or contact with the system. The members of the first group were given the opportunity to try the prototype for ten minutes after a short theoretical introduction, while the members of the second group jumped straight to the validation experiment. The session consisted of the users performing ten intentional dominant-eye closures – or of the system classifying ten actions as winks as a result of false positives. The distribution of the thirty session results, divided into the two groups, showed the positive impact of the initial contact with the technology and system, as well as of performing parameter fine tuning. Nevertheless, it also shows that the learning curve is easily overcome, indicating high usability levels and user adaptation. The trained users group reached a mean success rate of ninety percent, IC95%(p)≈(0.84, 0.91), with a minimum value of sixty-five percent and some error-free records. The lack of training prevented such success levels, as the mean assessment success rate is less than seventy percent, with values varying between twenty-five and eighty percent, IC95%(p)≈(0.51, 0.63). These results corroborate the initial hypothesis that electromyography could be used, through a simple approach with low computer resource consumption, as a technique to implement effective, inclusive and alternative user interfaces. The conducted experiments also pointed to the positive impact of user training and technology familiarity, without requiring the overcoming of a steep learning curve.
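To make the detection step behind these sessions concrete, the sketch below illustrates the peak/duration thresholding idea described earlier for dominant-eye, non-dominant-eye and both-eye closures. It is a single-channel simplification under stated assumptions: the threshold values, sampling interval and function names are illustrative and are not taken from the prototype.

```typescript
// Illustrative threshold-based wink detector over a windowed EMG signal.
// Assumptions: one amplitude/duration pair per detection class, with lower
// thresholds for the non-dominant eye and higher ones for both-eye closures;
// the concrete numbers are placeholders, not the values used in the experiments.

interface WinkClass {
  name: "bothEyes" | "dominant" | "nonDominant";
  minPeak: number;       // minimum peak amplitude (arbitrary EMG units)
  minDurationMs: number; // minimum time the signal must stay above the peak
}

const classes: WinkClass[] = [
  { name: "bothEyes",    minPeak: 140, minDurationMs: 180 },
  { name: "dominant",    minPeak: 100, minDurationMs: 150 },
  { name: "nonDominant", minPeak: 70,  minDurationMs: 150 },
];

// Returns the first (most demanding) class whose peak and duration thresholds
// are both exceeded by the sampled window, or null if none matches.
function classifyWindow(samples: number[], sampleIntervalMs: number): WinkClass | null {
  for (const c of classes) {
    const peak = Math.max(...samples);
    const aboveMs = samples.filter(s => s >= c.minPeak).length * sampleIntervalMs;
    if (peak >= c.minPeak && aboveMs >= c.minDurationMs) return c;
  }
  return null;
}

// A detected dominant-eye wink could then be emulated as a click, e.g.
// target.dispatchEvent(new MouseEvent("click")).
```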
Considering the BioStories main track, both Table 1 and Table 2 condense, by means of confusion matrices, the results of automatic emotion induction and assessment, respectively. Both perspectives are based on the discretization into the four basic quadrants of Russell's circumplex model, and the user self-assessment is directly compared to the described automatic process – in the induction method the emotional policy is used instead, and the results refer to the IAPS-library-based prototype [17]. The overall results of the induction process point to a success rate of sixty-five percent, IC95%(p)≈(0.56, 0.74), greatly due to the lack of second-quadrant precision, as some pictures were considered context- and culture-dependent. Table 1. Automatic Emotion Induction Confusion Table
As regards the emotion assessment based on the proposed fuzzy sensorial fusion, and taking into account the tridimensional virtual aeronautical environment, the global success rate reaches almost eighty-six percent, IC95%(p)≈(0.81, 0.90), with most users facing the experience with high arousal levels, as seventy-two percent of the situations were self-assessed in the first and second quadrants. The emotion classification error was distributed in a fairly linear fashion. Table 2. Automatic Emotion Assessment Confusion Table
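Since both tables rest on discretizing continuous valence/arousal values into Russell's four quadrants and on counting agreements between self-assessment and automatic assessment, the small sketch below shows that bookkeeping; the quadrant numbering convention and the sample matrix are assumptions made for illustration and do not reproduce the reported tables.

```typescript
// Discretize a normalized (valence, arousal) pair into one of Russell's four
// quadrants and compute the overall success rate of a confusion matrix.
// The sample matrix is a placeholder and does not reproduce Table 1 or 2.

type Quadrant = 1 | 2 | 3 | 4;

function toQuadrant(valence: number, arousal: number): Quadrant {
  if (arousal >= 0) return valence >= 0 ? 1 : 2; // high-arousal half
  return valence < 0 ? 3 : 4;                    // low-arousal half
}

// confusion[selfAssessed][automatic], indices 0..3 for quadrants 1..4
function successRate(confusion: number[][]): number {
  let correct = 0;
  let total = 0;
  confusion.forEach((row, i) =>
    row.forEach((count, j) => {
      total += count;
      if (i === j) correct += count; // diagonal = agreement
    })
  );
  return total === 0 ? 0 : correct / total;
}

const example = [
  [20, 3, 1, 1],
  [4, 18, 2, 1],
  [1, 2, 15, 2],
  [0, 1, 2, 17],
];
console.log(toQuadrant(0.4, 0.7));            // -> 1 (pleasure, high arousal)
console.log(successRate(example).toFixed(2)); // overall agreement ratio
```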
Taking into account the second subject, the simulation engine acted as predicted, allowing for full context control and route definition. In direct correspondence with the emotional response and assessment, all the simulation parameters initially enunciated, such as weather conditions, routes and general maneuvering controls, were successfully and dynamically accessed and tweaked in real time, as the SimConnect API from Microsoft Flight Simulator X acted as predicted while being integrated into the projected base framework as an additional module. Between the simulation and the integration topics, a survey was conducted amongst the thirty-four subjects in order to assess the immersiveness level of the whole experiment – which consisted of three sequential stages: take-off, fifteen minutes of cruising and the final landing phase. The results showed an average
classification of four point three out of five, with a minimum classification of three, thus demonstrating a significant success in providing realistic immersive environments, greatly potentiated by the tridimensional tracking eye-wear used. As take-off and landing are traditionally associated with higher apprehension and anxiety levels amongst passengers, a specific arousal monitoring was conducted across the simulation sessions. The results are depicted in Figure 4, which exposes the predicted peak zones with high normalization latency after take-off, therefore strengthening the realism assessment conducted in the course of the referenced survey.
Fig. 4. Average Arousal Levels During Simulation
Considering the last topic, dedicated to integration and information loop completion, one must refer to the absolute framework reliability and flexibility across the latest prototype development and testing. It has accommodated the dynamic change of biometric channels with distinct hardware solutions, has coped efficiently with data communication distribution and analysis in real time, and has enabled full third-party integration and modular operability. Finally, the complete closure of the information loop has been confirmed, from initial multimedia presentation, through biosignal acquisition, distribution and processing, and real-time emotional state assessment, to online multimedia content generation and further cycle iteration. The latest prototype and approach took advantage of the multimedia presentation reformulation and greatly enhanced both multimedia realism and the proposed emotion assessment methodology by enabling continuous and discrete emotional state definition and monitoring.
5 Conclusions and Future Work The single fact that the framework design was able to cope with the distinct BioStories prototype versions, alongside its usage to support spin-off projects, proved that the initial option to develop a common support framework from scratch was a successful call. All of the initially proposed functional and non-functional requirements were totally met, thus accomplishing a stable yet flexible and dynamic test bed and an innovative standalone human-computer interaction platform. These achievements might be instantiated by means of the complete architecture modularity, as all its components are strictly compartmented, therefore enabling extreme physical and logical stakeholder distribution. On top of this, an extremely plastic biometric channel set and emotion assessment method has been attained, which allows for diverse combinations according to environment conditions while assuring manufacturer and
third-party independence. Finally, the possibility of performing real-time emotion assessment while storing the raw collected data for posterior analysis, alongside multimedia content generation and distribution, constitutes a powerful cornerstone of the whole research project. Still on the initial subject, it is worth examining the results attained by the most significant spin-off project, namely mouse control through electromyography. As a proof of concept, this prototype demonstrated that a simple, minimally invasive approach using EMG to detect intentional eye-closure actions was able to achieve high hit ratios while having a negligible impact on the user's environment. Its practical application to emulate discrete mouse movements and clicks verified the possibility of constituting a stable interaction paradigm, both for inclusive purposes for disabled people and as an extension of traditional user interfaces whenever manual usage is not advisable or already overloaded. The results presented concerning BioStories confirmed the initially enunciated hypothesis that multimedia content could be generated and/or the base storyline could be dynamically changed according to the audience's emotional state, assessed in real time by means of biometric channel monitoring. Equally, the emotional model, adapted from Russell's circumplex model of affect, confirmed its ability to realistically represent emotional states in a continuous form while enabling their discretization. Specifically concerning the emotional assessment methodology, the proposed approach, based on sensorial fusion with high levels of personalization, enhanced individual and temporal biosignal independence and adaptability. Still in this particular domain, this dynamic method allowed for high levels of flexibility in the definition of the biometric channel set, thus permitting further developments and system deployments. Taking into account the multimedia content division of the project, earlier BioStories prototypes illustrated that the usage of still images did not provide the immersiveness needed for strong and evident emotions, hence the latest option of continuous environments distributed through immersive goggles. The achieved results showed that this proposal was effective, and the method for scene generation, alongside the content visualization method, provided the levels of immersion and realism required for triggering and sustaining real emotional responses. Probably the most immediate application of the proposed technology is the videogame and virtual entertainment industry. The possibility of real-time, rich, immersive, dynamic virtual environments – as this domain is defined – in conjunction with online user emotional state retrieval constitutes a perfect fit for this approach. On top of these factors, traditional end users offer little resistance to adopting new interaction-enhancing solutions. Another positive factor resides in the fact that the system could be designed for single-user, single-machine distribution, as multiplayer platforms are largely online based. The greatest challenge in this application is believed to be biometric hardware miniaturization without signal quality loss, so that the sensors could be integrated into a single electronic consumer good. The second line of prospect resides in the cinematographic industry.
In this case the challenges are quite different, as the multimedia contents are not continuous but discrete: generating branching storylines has an economic impact, since further scenes need to be shot; but as the film depth and amplitude would be enhanced, it is believed that these hurdles could be overcome, since the content would not expire in a single
view. From a technical standpoint, it is also necessary to define whether the content distribution is to be individualized or centralized on a common screen, just like nowadays. The first option enables full individual content adaptation but prevents the traditional cinema experience, while the second does not allow for fully individualized emotional control. Regarding the emotion assessment process, as attaching physical equipment to all the audience is not feasible, the usage of intelligent seats equipped with position sensors, as well as infrared cameras for skin temperature and body position evaluation, is envisioned. As a natural extension of the previous point, digital television arises. The advent of bidirectional communication in digital television has been, until the present day, modestly explored and confined to basic interaction mechanisms such as televoting or programming guide description. It is believed that the application of the proposed approach would enable these interaction levels to be greatly expanded, allowing greater dynamism in recorded contents and, above all, promoting real-time programmatic changes to live content according to the online audience response. One can imagine its massive impact when addressing advertising, editorial line options or the political impact of declarations and news. The changes promoted by this technology would instigate the creation of a TV 2.0, replicating the huge leap forward taken by the designated Web 2.0. The technological challenges lie between those of the first and the second domains, as they can be tackled from a controlled, personalized environment, but the hardware solutions must be as minimally invasive as possible. Direct marketing applications might start precisely with its application to digital television. As soon as marketers have access to customers' emotional states, specifically designed advertisements can be developed and distributed to match a particular emotion profile in order to maximize campaign returns. In these alternative scenarios the development of non-invasive solutions based on video monitoring and real-time location systems would be needed, in a way complementary to that exposed in the cinematographic domain. To bring the identification of future work areas and application opportunities to a close, one should point out the opportunity to apply this approach to medical procedures in general, and to psychiatric procedures in particular. Knowledge of the patient's emotional state by the physician is valuable both for diagnostic and treatment purposes. This statement is even more pertinent when referring to psychiatric domains such as phobias and depression. On the other hand, there is a need for stricter and more reliable emotion assessment, even if real-time operation has to be sacrificed; thus classification methods are candidates for improvement in future research within this scope. As a final remark, it is important to highlight the quantity and diversity of potential practical applications of the proposed approach and technology, enabling several research line opportunities with a potentially colossal impact, with a special focus on the field of human-computer interaction.
References 1. Gomes, A., Vinhas, V.: Mouse Control Through Electromyography. In: BIOSIGNALS 2008 – International Conference on Bio-inspired Systems and Signal Processing, pp. 371– 376 (2008)
2. Benovoy, M., Cooperstock, J., Deitcher, J.: Biosignals Analysis and its Application in a Performance Setting - Towards the Development of an Emotional-Imaging Generator. In: Proceedings of the First International Conference on Biomedical Electronics and Devices, pp. 253–258 (2008) 3. van den Broek, E.L., et al.: Biosignals as an Advanced Man-Machine Interface. In: BIOSTEC International Joint Conference on Biomedical Engineering Systems and Technologies, pp. 15–24 (2009) 4. Ekman, P.: Emotion in the Human Face, pp. 39–55. Cambridge University Press, Cambridge (2005) 5. Wang, H., Prendinger, H., Igarashi, T.: Communicating emotions in online chat using physiological sensors and animated text. In: Conference on Human Factors in Computing Systems, pp. 1171–1174 (2004) 6. Chung, J.-w., Scott Vercoe, G.: The affective remixer: personalized music arranging. In: Conference on Human Factors in Computing Systems, pp. 393–398 (2006) 7. Teixeira, J., Vinhas, V., Oliveira, E., Reis, L.P.: General-Purpose Emotion Assessment Testbed Based on Biometric Information. In: KES IIMSS - Intelligent Interactive Multimedia Systems and Services, pp. 533–543. University of Piraeus, Greece (2008) 8. Teixeira, J., Vinhas, V., Oliveira, E., Reis, L.P.: MultiChannel Emotion Assessment Framework - Gender and High-Frequency Electroencephalography as Key-Factors. In: Proceedings of ICEIS 2008 - 10th International Conference on Enterprise Information Systems, pp. 331–334 (2008) 9. Teixeira, J., Vinhas, V., Oliveira, E., Reis, L.P.: A New Approach to Emotion Assessment Based on Biometric Data. In: WI-IAT 2008 - IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Sydney, Australia, December 9–12, pp. 505–511 (2008) 10. Katsis, C., Katertsidis, N., Ganiatsas, G., Fotiadis, D.: Towards Emotion Recognition in Car-Racing Drivers: A Biosignal Processing Approach. IEEE Trans. on Systems, Man, and Cybernetics – Part A: Systems and Humans 38(3), 502–512 (2008) 11. Kim, J., André, E.: Multi-Channel BioSignal Analysis for Automatic Emotion Recognition. In: Proceedings of the First International Conference on Biomedical Electronics and Devices (2008) 12. Picard, R.W., Vyzas, E., Healey, J.: Toward Machine Emotional Intelligence: Analysis of Affective Physiological State. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(10), 1175–1191 (2001) 13. Russell, J.A.: A Circumplex Model of Affect. Journal of Personality and Social Psychology (39), 1161–1178 (1980) 14. Russell, J.A.: Evidence for a Three-Factor Theory of Emotions. Journal of Research in Personality 11, 273–294 (1977) 15. Fairclough, S.H.: Fundamentals of Physiological Computing. Interacting with Computers 21(1-2), 133–145 (2009) 16. Vinhas, V., Oliveira, E., Reis, L.P.: Realtime Dynamic Multimedia Storyline Based on Online Audience Biometric Information. In: KES IIMSS - Intelligent Interactive Multimedia Systems and Services, pp. 545–554. University of Piraeus, Greece (2008) 17. Vinhas, V., Oliveira, E., Reis, L.P.: Dynamic Multimedia Content Delivery Based on Real-Time User Emotions – Multichannel Online Biosignals Towards Adaptative GUI and Content Delivery. In: International Conference on Bio-inspired Systems and Signal Processing, pp. 299–304 (2009) 18. Vinhas, V., Silva, D.C., Oliveira, E., Reis, L.P.: Dynamic Multimedia Environment Based On Real-Time User Emotion Assessment – Biometric User Data Towards Affective Immersive Environments. In: ICEIS 2009 – International Conference on Enterprise Information Systems (2009)
A Framework Based on Ajax and Semiotics to Build Flexible User Interfaces Frederico José Fortuna1, Rodrigo Bonacin2, and Maria Cecília Calani Baranauskas1 1
Institute of Computing at University of Campinas (UNICAMP) Caixa Postal 6176, 13083 970, Campinas, São Paulo, Brazil 2 CTI Renato Archer - Rodovia Dom Pedro I km 143,6, 13069-901, Campinas, São Paulo, Brazil [email protected], [email protected], [email protected]
Abstract. Different users have different needs; thus, user interfaces, especially in the web, should be flexible to deal with these differences. In this work we present the FAN Framework (Flexibility through AJAX and Norms) for implementation of tailorable user interfaces. The framework is based on the semiotic concept of norms and was developed with web technologies such as Ajax and PHP. The proposed framework was applied to the Vilanarede system, an inclusive social network system developed in the context of the e-Cidadania project, and tested with real users. We present the test results and a discussion about the advantages and drawbacks of the proposed solution, as well as a preliminary analysis of the use of two tailorable functionalities during the first year. Keywords: Flexible interfaces, Adjustable interfaces, Tailoring, Norms, Human-computer interaction.
Ajax is a powerful technology; however, it does not help in determining which interface elements should be adjusted, and how, to fit the user's needs. Consequently, a theoretical frame of reference, a design methodology and a development framework are necessary to encompass those adjustments. We argue that Semiotics can provide us with the theoretical and methodological fundamentals for this task [1]. This work proposes to employ the concept of norms, from MEASUR (Methods for Eliciting, Analyzing and Specifying User's Requirements) [2], to describe which elements should be adjusted, and how and in which situation they should be adjusted for every user or group of users. Norms describe the relationships between an intentional use of signs and the resulting behavior of responsible agents in a social context; they may also describe beliefs, expectations, commitments, contracts, law, culture, as well as business. Norms are useful and powerful because they may be used to describe important aspects of the user context. Our hypothesis is that if we put Ajax and norms together we can potentially create really flexible interfaces. So in this paper we present a framework to develop flexible interfaces using Ajax and norms. This framework has been applied to the Vilanarede social network system in order to validate it. Vilanarede is a social network system developed in the context of the e-Cidadania project [3] in order to help create a digital culture among as many categories of users as possible, people with different needs and interests. The paper is structured as follows: Section 2 presents the background and clarifies terms and concepts used in this work; Section 3 presents the proposed framework, including its architecture and implementation technology; Section 4 presents and discusses results from the application of the framework; Section 5 concludes this paper.
2 Background In this section we detail the main background work used to delineate the proposed framework. Section 2.1 introduces the concept of flexible interfaces, including a short overview on the research work on this field. Section 2.2 presents the Ajax concepts and features. Section 2.3 presents the concept of Norms and prior attempts to use norms in order to provide flexible interfaces, which support this work. 2.1 Flexible Interfaces According to the IEEE standard computer dictionary, flexibility is the ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed [4]. From this definition, we could expect that flexible user interfaces should be easily modified for different contexts of use. We understand that to meet this requirement, both technical and social issues must be addressed in designing the system and the interaction. Literature in Human–Computer Interaction (HCI) has pointed out different alternatives to provide flexibility in the user interface. Dourish [5] argues that we cannot dissociate the technical and social disciplines in systems design. The relationship between them is considerably more intricate than the traditional view suggests, where
requirements and constraints do not have consequences beyond the application, and sociological insights do not apply inside the system. Tailoring is a concept closely related to our notion of flexibility; it can be understood as "the activity of modifying a computer application within the context of its use" [6]. Research in tailorable systems usually presupposes that the user should be in charge of the tailoring task [7][8]. A challenge in tailoring is how to provide users with high-level interfaces in which relevant changes can be made quickly, without the need for advanced technical skills. Among other approaches, context-aware applications [9] collect and utilize information regarding the context in which the provision of services is appropriate to particular people, places, times, events, etc. Context refers to the physical and social situations in which computational devices are embedded. Fischer et al. [10] have proposed the concept of "Meta-Design", a vision in which design, learning, and development become part of the everyday working practices of end-users. 2.2 Ajax – Rich Interfaces Ajax can be understood as a new style of developing web applications, which includes several web technologies. It is not a new technology by itself; it emerged in the software industry as a way to put together some existing technologies in order to provide "richer web interfaces". The term Ajax was coined by Garrett [11] and incorporates the following technologies:
• Standards-based presentation using XHTML and CSS;
• Dynamic display and interaction using the Document Object Model;
• Data interchange and manipulation using XML and XSLT;
• Asynchronous data retrieval using XMLHttpRequest;
• JavaScript binding everything together.
Ajax substantially changes the client-server interaction. The classic synchronous model, where each user request is followed by a page reload, is replaced by an asynchronous, independent communication in which the user does not need to wait on a blank page for the server processing. Part of the processing can be executed on the client side. This aspect can reduce the server workload and, above all, the communication workload. Many applications that initially were restricted to desktop platforms due to the limitations of standard HTML development can be transposed to the web by using Ajax resources. For example, we can drag and drop images, apply filters on them and change their sizes without reloading the page. Regarding interface flexibility, Ajax makes it possible to change features and to remove or include any interface object on a web page without reloading it. Some of the first very popular applications using Ajax were Google Suggest, Google Maps and Gmail. Nowadays, Ajax is widely used in web development and many platforms and development tools support it. The World Wide Web Consortium (W3C) is also now working to standardize key Ajax technologies such as the XMLHttpRequest specification [12].
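As a brief illustration of the asynchronous model described above, the following sketch requests a fragment from the server and patches a single element without a page reload; the URL and element id are hypothetical.

```typescript
// Minimal asynchronous Ajax request: fetch a fragment and update one element
// in place, without reloading the page. URL and element id are hypothetical.
function refreshFragment(url: string, targetId: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      const target = document.getElementById(targetId);
      if (target) {
        // Only this element changes; the rest of the page stays untouched.
        target.innerHTML = xhr.responseText;
      }
    }
  };
  xhr.send();
}

refreshFragment("/fragments/latest-ads.html", "adList");
```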
2.3 Norms and Flexible Interfaces From the conceptual and theoretical points of view, the proposed framework is based on the Norm Analysis Method (NAM) from MEASUR. Norms describe the relationships between an intentional use of signs and the resulting behaviour of responsible agents in a social context; they also describe beliefs, expectations, commitments, contracts, law, culture, as well as business. "Norms can be represented in all kinds of signs, whether in documents, oral communication, or behaviour, in order to preserve, to spread and to follow them. However, one cannot always put one's hands conveniently on a norm, as one might grasp a document that carries information through an organization. A norm is more like a field of force that makes the members of the community tend to behave or think in a certain way" [13]. Besides the description of the agents' responsibilities in the organisational context, Norm Analysis can also be used to analyse the responsibilities of maintaining, adapting and personalising the system features. Norms can be represented by the use of natural language or Deontic Logic in the late stages of modelling. The following format is suitable for specifying behavioural norms [14]:
<behavioural norm> ::= whenever <condition> if <state> then <agent> is <deontic operator> to do <action>
where <deontic operator> specifies whether the action is obligatory, permitted or prohibited. The norms are not necessarily obeyed by all agents in all circumstances; they are social conventions (laws, roles and informal conventions) that should be obeyed. For example: a norm specifies that the agents are obliged to pay a tax; if an agent has no money it will not pay, but usually there is a cost when an agent does not obey the norms. Bonacin et al. [1] present the foundations, a framework, and a set of tools for personalized service provision using norm simulation. Part of their framework is used in this proposal; however, their framework is limited by its use of "thin client" solutions, since a page reload is necessary for each interface change. This work expands the possibilities from the technological and practical points of view by using the Ajax technology. In addition, an easier and more productive way to develop flexible interfaces is provided. In this work norms can be evaluated during interaction with the system and be immediately reflected in interface changes. We propose to simulate the norms and deduce the expected behavior of users and organizations during the user interaction (in real time). For example, if someone accesses an e-Government portal and is obliged to pay a tax, then he/she will probably be interested in accessing information about the payment procedure. So the system can be immediately adapted according to the user's needs and preferences. Norms are susceptible to changes in the organizational context and in the intentions of the agents in society. In the proposed approach, domain specialists, designers and users maintain the norm specifications according to the changes in their socio-pragmatic [15] context.
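As a concrete, hypothetical instance of the behavioural norm format above, a tailoring norm and one simple way of holding such norms as data might look as follows; the wording of the condition and state, the attribute names and the obligation itself are illustrative and are not taken from NBIC/ICE.

```typescript
// A hypothetical tailoring norm written in the behavioural norm format:
//   whenever the user is browsing a system page
//   if the user has difficulty in reading small text
//   then the system
//   is obliged
//   to do increase the font size of interface elements
//
// One simple way to keep such norms as data for later evaluation:
type DeonticOperator = "obliged" | "permitted" | "prohibited";

interface BehaviouralNorm {
  condition: string;        // whenever ...
  state: string;            // if ...
  agent: string;            // then ...
  operator: DeonticOperator;
  action: string;           // to do ...
}

const fontNorm: BehaviouralNorm = {
  condition: "user is browsing a system page",
  state: "user has difficulty in reading small text",
  agent: "system",
  operator: "obliged",
  action: "increase the font size of interface elements",
};

// A norm engine (NBIC/ICE in the paper) would match the state against facts
// derived from user behaviour and, when it holds, emit the corresponding action.
console.log(`${fontNorm.agent} is ${fontNorm.operator} to ${fontNorm.action}`);
```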
3 The Framework Proposal In order to help the development of flexible interfaces we propose FAN: an Ajax framework using norms. FAN stands for Flexibility through Ajax and Norms. The framework's architecture is described in Section 3.1, its execution aspects are explained in Section 3.2 and some scenarios of adjustment are illustrated in Section 3.3. 3.1 Framework's Architecture The framework we propose provides powerful capabilities, yet it has a relatively simple architecture, which is shown in Figure 1. Basically, the framework is composed of four modules: Perception, SOAP Client, Action and Users' Facts Storage. These modules are responsible for tasks such as creating, storing and loading contexts and facts about the users, capturing events on the presentation layer (the interface, usually constructed with HTML and CSS), accessing NBIC/ICE web services (which maintain and interpret norms) and customizing the interface. Regarding the development technologies, the Perception, SOAP Client and Action modules were developed using Ajax, running on the client side, because it allows developers to modify a web page without reloading it, while the Users' Facts Storage module was developed in Ajax and PHP. The PHP code runs on the server side to store facts related to every user.
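One way to picture this division of responsibilities is the set of interfaces sketched below; the method names and types are assumptions made for illustration and do not correspond to the framework's actual API.

```typescript
// Illustrative interfaces for the four FAN modules; all names are assumed.

interface Fact { predicates: string[]; }   // a fact is a collection of predicates
interface ActionPlan { xml: string; }      // XML produced by ICE

interface Perception {
  onPageLoad(userId: string, nodeId: string): void; // create context, load norms and facts
  onEvent(domEvent: Event): void;                    // build predicates from interface events
}

interface SoapClient {
  createFact(predicates: string[]): Promise<ActionPlan>; // asynchronous SOAP call to ICE
}

interface Action {
  apply(plan: ActionPlan): void; // modify interface objects through the DOM
}

interface UsersFactsStorage {
  load(userId: string, nodeId: string): Promise<Fact[]>; // per-user, per-page XML files
  store(userId: string, nodeId: string, fact: Fact): Promise<void>;
  erase(userId: string, nodeId: string): Promise<void>;
}
```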
Fig. 1. Framework architecture
The Perception module is responsible for capturing events generated by the users on the presentation layer, such as mouse clicks over buttons or images. It is also responsible for generating facts related to these events. Examples of facts: the user has difficulty using the mouse, inferred from many clicks on non-clickable regions, or the user is a sign language reader, inferred from many requests for sign language videos. The SOAP Client module is responsible for the communication between the proposed framework and the NBIC/ICE System, which is responsible for maintaining and evaluating the norms. This module handles synchronous and asynchronous requests made by the Perception module and accesses NBIC/ICE web services using SOAP calls. The third module, Action, is responsible for changing the system's interface according to an Action Plan generated by ICE. An Action Plan is an XML file that stores the IDs of the interface objects that should be modified and how the modifications should be done: which attributes of an interface object should be modified, what the new values of these attributes are, or a JavaScript code that must be executed. The last module, Users' Facts Storage, is in charge of storing, loading and erasing users' facts generated by the Perception module. It stores facts in XML files associated with each user and/or each system page. The NBIC/ICE System has two components: NBIC and ICE. NBIC stands for Norm Based Interface Configurator and is responsible for storing and managing norms. ICE means Interface Configuration Environment. This component is responsible for making inferences, using the Jess inference engine, over the norms stored by NBIC and for creating action plans, according to the inferences made, describing how the interface should be customized – in other words, which interface elements should be modified and how it should be done, or which elements should be removed or added. A proxy is used in the communication between the framework and the NBIC/ICE System in order to avoid a JavaScript-related security issue: some web browsers do not execute JavaScript code stored on a server different from the one that hosts the current web page. 3.2 How It Works Every time a page is loaded, the Perception module creates a new context by making an asynchronous call to ICE. A context stores information such as the user id, the system name (or system id) and some information about the page that the user is visiting. After the context is created or loaded, the norms related to the system are loaded in that context. After creating the context and loading the norms, the Perception module makes a request to the Users' Facts Storage module to load all the facts related to the current user and current web page (node). If the Users' Facts Storage finds facts for the current user and node, this module will build a list with these facts; otherwise it will build an empty list. This list is then returned to the Perception module. If the list is not empty, this module will start the customization process described below for each fact, in order to adjust the interface so that it looks and behaves as it did the last time the user visited or adjusted that page. When an event on the interface (e.g. a mouse click over a button) is captured by the Perception module, as shown in Figure 2, this module creates predicates related to the event and the current context.
Fig. 2. Event captured by Perception
After creating the predicates, the Perception module uses an ICE service, making an asynchronous call via the SOAP Client module, to generate a new fact based on the generated predicates, as shown in Figure 3. A fact is basically a collection of predicates. The Perception module also requests the Users' Facts Storage to store the predicates in an XML file as a new fact, so that the new adjustment may be saved.
Fig. 3. Perception making requests to Users’ Facts Storage and ICE
After creating a new fact, ICE starts to make inferences over the norms stored by NBIC and related to the context and fact, and then it builds an action plan: an XML file that stores the IDs of the interface objects that should be modified and how the modifications should be done. The action plan is then sent to the SOAP Client module, which calls the Action module to modify the interface according to the action plan generated by ICE. These steps are shown in Figure 4.
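To make the action-plan step more tangible, a hypothetical plan and the client-side loop that applies it through the DOM could look like the sketch below; the XML vocabulary and all identifiers are invented for illustration and do not reproduce the NBIC/ICE format.

```typescript
// Hypothetical action plan: which objects to change, which attributes, new values.
const planXml = `
<actionPlan>
  <modify objectId="mainMenu" attribute="class" value="circular-menu"/>
  <modify objectId="adList" attribute="style.fontSize" value="140%"/>
  <script>document.getElementById("helpBox")?.setAttribute("hidden", "hidden");</script>
</actionPlan>`;

// Parse the plan and apply each change through the DOM, in the spirit of the Action module.
function applyPlan(xml: string): void {
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  doc.querySelectorAll("modify").forEach(node => {
    const target = document.getElementById(node.getAttribute("objectId") ?? "");
    const attribute = node.getAttribute("attribute") ?? "";
    const value = node.getAttribute("value") ?? "";
    if (!target) return;
    if (attribute.startsWith("style.")) {
      // Style changes go through the style object rather than setAttribute.
      (target.style as any)[attribute.slice("style.".length)] = value;
    } else {
      target.setAttribute(attribute, value);
    }
  });
  // Any embedded script is executed as-is (eval is used here only for brevity).
  doc.querySelectorAll("script").forEach(node => eval(node.textContent ?? ""));
}

applyPlan(planXml);
```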
Fig. 4. NBIC/ICE generating an Action Plan
Fig. 5. Action module customizes the interface according to the Action Plan
The Action module parses the XML file (action plan), as shown in Figure 5, to look for interface object IDs and attribute values, or JavaScript code that should be executed. This module then accesses the found interface objects using the DOM and modifies their attributes according to the new values found in the action plan. It also executes any JavaScript code found in the Action Plan. 3.3 FAN Usage Possibilities It is hard to list all the adjustments that may be made using the framework. The following list presents some significant adjustments that the framework enables:
• Changes in attributes and style of objects;
• Drag-and-drop interface elements;
• Changes in an object's position via DOM operations;
• Insert, remove, show, hide, enable or disable interface objects;
• Permit adjustments made using JavaScript code, either using implementations already provided by the system or code contained in an Action Plan;
• Automatically modify an object after repetitive user actions.
To illustrate three possible adjustments that FAN can offer, Figure 6 shows a screenshot of Vilanarede - a social network system in which users post advertisements classified into different categories. It also shows some page customization options that FAN may enable, like changing the position of an object, hiding an element and changing the page structure by changing the display order of interface elements.
Fig. 6. Screenshot of a social network system along with three interface adjustment options enabled by the FAN framework
Figure 7 shows the result of the adjustments proposed in Figure 6. The image presents a customized page with appearance and behavior different from the original page (Figure 6). Another example feature of the framework is automatic modification of objects by perceiving repetitive user actions. If a user clicks a consecutive number of times in non-clickable areas when he or she tries to click or select a certain interface object, the framework increases the size of interface elements to help the user, as shown in Figure 8. There is a norm determining that the size of the elements should be increased if the user has difficulty in navigating the page or if the user has some sight problems.
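A simple way to realize the repetitive-action rule just described is a counter over clicks that hit no clickable element which, past a threshold, triggers the enlargement; the threshold, the selector list and the scaling factor below are illustrative assumptions, and in FAN the detection would produce a fact for ICE rather than apply the change directly.

```typescript
// Count consecutive clicks that hit no clickable element; after a threshold,
// assume the user has difficulty with the mouse and enlarge interface elements.
// Threshold, selectors and scale factor are illustrative, not the system's values.
const CLICK_THRESHOLD = 5;
let missedClicks = 0;

document.addEventListener("click", (e) => {
  const target = e.target as HTMLElement | null;
  const clickable = target?.closest("a, button, input, select, [onclick]");
  if (clickable) {
    missedClicks = 0; // a successful click resets the counter
    return;
  }
  missedClicks += 1;
  if (missedClicks >= CLICK_THRESHOLD) {
    missedClicks = 0;
    // In FAN this would become a fact sent to ICE; here the effect is applied directly.
    document.querySelectorAll<HTMLElement>("button, a, img").forEach(el => {
      el.style.transform = "scale(1.2)";
      el.style.transformOrigin = "top left";
    });
  }
});
```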
Fig. 7. Customized page with look and behavior different from the original page, according to the adjustments proposed on Figure 6
Fig. 8. FAN Framework automatically modifying the size of elements based on a system norm that determines the execution of such actions
4 Applying the Framework The framework is being used and tested in the Vilanarede system. Vilanarede is a social network system developed in the inclusive context of the e-Cidadania project. The framework for flexibility we propose should help the Vilanarede system to achieve its objectives, because it may give each user the possibility to adjust the system interface according to his or her needs or interests. As an example, the framework may enable a user to change the type of the main menu, changing the way he or she will see it. Using inference, the framework may discover that a user has some kind of sight problem, and then the framework may change the size of interface elements like images and texts, making them bigger.
The framework was tested for the first time with users in a participatory activity organized by the e-Cidadania project with Vilanarede users. During the activities, 7 users were asked to perform some tasks in the Vilanarede system or tasks related to the system. After each participatory activity, researchers analyzed the data collected. 4.1 The e-Cidadania Participatory Practices The participatory practices developed in workshops generally lasted three hours and took place at the CRJ - Centro de Referência da Juventude (in English, Youth Reference Center), a public space supported by the local government. The CRJ is composed of a Telecenter, where the community has free access to computers and the internet, a public library, and rooms for community courses. The CRJ is strategically located near a main bus station, public schools and the neighborhood associations, which favors merging different community groups. The workshops followed the practices established by the Human Research Ethics Committee and were performed with commitment and respect between all parties. The activities were developed according to the pillars of the PluRaL framework [16]. The first pillar brings out the signs of interest in the domain, related to users, devices or environment, and formalizes the interaction requirements that the tailorable system should cope with. The second pillar benefits from MEASUR [2], allowing a consistent view of the domain that assists the formalization of functional requirements. In the third pillar, the tailorable design solution is built and a norm-based structure formalizes the system's tailorable behavior. The activities culminated in a set of norms that formalize the tailorable behavior of Vilanarede. Those norms were formalized and included in FAN, and a set of tailorable functionalities, based on norms, was evaluated in an additional participatory activity. The next section presents this activity and some usage statistics. 4.2 Tests and Results In the activity organized to test the framework, practitioners used a notebook station with the Vilanarede system already loaded, as shown in Figure 9, and they were asked to fill in their profile at Vilanarede with some personal information, such as their level of education, their level of experience with web navigation and how frequently they change the font size in the system. If a user indicated that he or she makes the page font bigger most of the time or always, the framework would infer that it would be useful to make the font bigger for that user, so the font would be bigger on every page as soon as the user saved his or her profile.
Fig. 9. Station set used by Vilanarede users to test the framework
After filling profile information, users were asked to change the type of the page’s main menu from a default list of icons to a circular menu. In order to do this, users should first click on a special button in the page so that adjustment buttons would be shown on the page. These buttons would enable the user to modify some interface elements. After that, users should find and click on the correct button to change the menu type from a linear list to a circular set of options. Figure 10 shows the menu types and the buttons to change from one type to the other.
Fig. 10. Alternative ways to see the Vilanarede’s main menu – a linear way and a circular one
Everything the users said and their emotions during the activity were recorded by a digital camera and a webcam. Everything they did on the interface (mouse movements, clicks, etc.) was recorded with Camtasia Studio, a desktop screen recorder. Also, e-Cidadania researchers observed the users during the activity and wrote down on paper forms some important information regarding their interaction with the system, such as whether a user noticed the font becoming bigger (automated adjustment) depending on the information he or she filled in his or her profile, how difficult it was for each user to locate the special button that enables the adjustment buttons, how difficult it was for them to change the main menu type, and the steps the users took to complete the task. Researchers also noted users' suggestions to change the appearance and position of that special button, and which new adjustments they would like to be able to make on the interface. The data collected by the researchers, the cameras and the screen recorder showed that almost all the users who tested the adjustments provided by the framework enjoyed being able to adjust the interface according to some of their interests and needs; as one of the users said, "it's very good to have the possibility to change the interface". The data also showed that the majority of the users preferred the new type of menu offered by the framework. Despite the advantages of using the framework and the good and interesting data collected, before and during the activity it was possible to identify some problems and drawbacks, and the users pointed out some aspects of the adjustment process that could be improved. Many users did not like the appearance and position of the button they needed to click on in order to make the adjustment buttons visible. In general, they would like to see a bigger button in a different position, so that it would be easier to find, identify and click on it.
During the activity it was possible to notice that one of the drawbacks of the framework is performance. The framework may take a few seconds to adjust a system's interface if the system has many complex norms or if the NBIC/ICE server is not fast enough. The framework may also have performance problems if the user (client) has a very poor internet connection, as it may suffer from connection timeouts if packets get lost or corrupted, or if they take too much time on their way from the client to the server and back. Another drawback is that it is not easy to synchronize the requests made by the Perception module to ICE/NBIC via the SOAP Client so that ICE/NBIC returns the correct and expected data. Also, the framework enables developers to make almost all elements in the interface adjustable, using simple norms, but developing the JavaScript code to implement the adjustable behavior of certain elements may not be that simple.
Fig. 11. Number of users that made at least one adjustment to the Vilanarede’s interface
Further analysis done in September 2010 showed that, since the framework was applied to the Vilanarede system, at least 24 of approximately 293 (about 8.2%) Vilanarede users have already customized the system interface, as shown in Figure 11, considering the 2 customization options offered by the framework at that moment. About 20% of these users performed only one of the tailoring actions available; however, 30% of them performed both, and 50% changed the system several times. The number of changes in which users altered both available options (in comparison with altering only the menu or only the font size) suggests that users feel comfortable using more than one tailoring option. After the first month, the number of multiple changes (i.e., those in which the user changed the interface more than once) is growing. This seems to indicate that users take some time to get used to the tailoring options and then benefit from them as many times as they want.
5 Conclusions and Future Work Different users have different needs and it is essential that user interfaces meet these different needs. In order to deal with this diversity, interfaces should be adjustable. To help the design and development of flexible, adjustable interfaces we proposed a framework, developed using Ajax and PHP, grounded in the concept of norms. This
framework was successfully used and tested in the Vilanarede system during a participatory activity organized by the e-Cidadania project. The test helped us to identify the advantages and drawbacks of using the framework, and the users gave us important feedback to help improve it. In order to enhance the performance of the process of changing the interface according to users' needs, we could look for a more powerful and faster server with a faster broadband internet connection to run the NBIC/ICE system, as it is running, at the moment, on an old Pentium 4 server with a limited broadband connection. One powerful idea we consider as future work is to integrate the framework with concepts and technologies of the semantic web. Instead of storing norms and facts inside files or databases at the NBIC/ICE system, we may use technologies like RDFS or SWRL (Semantic Web Rule Language) to store norms and facts on the web. Queries over norms may be done using SPARQL, an RDF query language. Instead of using NBIC/ICE to make inferences over norms, we may use RDF/RDFS, OWL and SWRL to do it. That way, users may store information about their interests or needs (facts) in RDF files published on the web. The framework may use the facts stored in these files to make inferences and adjust different systems' interfaces according to users' needs. Integrating the framework with the semantic web would make it possible to distribute norms and facts on the Web, and then the framework could be used by a much larger number of users. Acknowledgements. This work was funded by the Microsoft Research - FAPESP Institute for IT Research (proc. n. 2007/54564-1 and n. 2008/52261-4).
References 1. Bonacin, R., Baranauskas, M.C.C., Liu, K., Sun, L.: Norms-Based Simulation for Personalized Service Provision. Semiotica (Berlin) 2009, 403–428 (2009) 2. Stamper, R.: Social Norms in Requirements Analysis – an outline of MEASUR. In: Jirotka, M., Goguen, J., Bickerton, M. (eds.) Requirements Engineering, Technical and Social Aspects. Academic Press, New York (1993) 3. Baranauskas, M.C.C.: e-Cidadania: Systems and Methods for the Constitution of a Culture mediated by Information and Communication Technology. Proposal for the Microsoft Research-FAPESP Institute (2007) 4. Institute of Electrical and Electronics Engineers. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries, New York, NY (1990) 5. Dourish, P.: Open implementation and flexibility in CSCW toolkits. Ph.D. Thesis, University College London (1996), ftp://cs.ucl.ac.uk/darpa/jpd/ dourish-thesis.ps.gz (last access October 17, 2007) 6. Kahler, H., Morch, A., Stiemerling, O., Wulf, V.: Special Issue on Tailorable Systems and Cooperative Work. In: Computer Supported Cooperative Work, vol. 1(9). Kluwer Academic Publishers, Dordrecht (2000) 7. Teege, G., Kahler, H., Stiemerling, O.: Implementing Tailorability in Groupware. SIGGROUP Bulletin 20(2), 57–59 (1999) 8. Morch, A., Mehandjiev, N.: Tailoring as Collaboration: the Mediating Role of Multiple Representations and Application Units. In: Kahler, H., Morch, A., Stiemerling, O., Wulf, V. (eds.) Special Issue on Tailorable Systems and Cooperative Work, Computer Supported Cooperative Work, vol. 9(1). Kluwer, Dordrecht (2000)
9. Moran, T., Dourish, P.: Introduction to the special issue on Context-Aware Computing. Human-Computer Interaction 16(2), 2–3 (2001) 10. Fischer, G., Giaccardi, E., Ye, Y., Sutcliffe, A.G., Mehandjiev, N.: Meta-Design: a Manifesto for End-User Development. Communications of the ACM 47(9) (2004) 11. Garrett, J.J.A.: A New Approach to Web Applications, Adaptive Path, February 18 (2005), http://www.adaptivepath.com/ideas/essays/archives/000385.php (last access December 21, 2009) 12. W3C, XMLHttpRequest, W3C Working Draft (November 19, 2009), http://www.w3.org/TR/XMLHttpRequest/ (last access December 21, 2009) 13. Stamper, R., Liu, K., Hafkamp, M., Ades, Y.: Understanding the roles of signs and norms in organizations – a semiotic approach to information system design. Behaviour & Information Technology 19(1), 15–27 (2000) 14. Liu, K.: Semiotics in information systems engineering. Cambridge University Press, Cambridge (2000) 15. Stamper, R.K.: Information in Business and Administrative Systems. John Wiley & Sons, Inc., NY (1973) 16. Neris, V.P.A.: Study and proposal of a framework for designing tailorable user interfaces. PhD Thesis, Institute of Computing, UNICAMP, Campinas, Brazil (2010)
A Chat Interface Using Standards for Communication and e-Learning in Virtual Worlds Samuel Cruz-Lara, Tarik Osswald, Jordan Guinaud, Nadia Bellalem, Lotfi Bellalem, and Jean-Pierre Camal LORIA / INRIA Nancy - Grand Est 615 Rue du Jardin Botanique, 54600 Villers-lès-Nancy, France {samuel.cruz-lara,tarik.osswald,jordan.guinaud, nadia.bellalem,lotfi.bellalem}@loria.fr, [email protected]
Abstract. Nowadays, many applications embed textual chat interfaces or work with multilingual textual information. The Multilingual-Assisted Chat Interfaceis an extension of the usual chat interfaces which aims at easing communication and language learning in virtual worlds. We developed it in the frame of the ITEA2 Metaverse1 Project [ITEA2 07016], using the Multilingual Information Framework (MLIF) [ISO DIS 24616]. MLIF is being designed in order to fulfill the multilingual needs of today’s applications. By developing the Multilingual-Assisted Chat Interface,we wanted to help people communicate in virtual worlds with others who do not speak the same language and to offer new possibilities for learning foreign languages. We also wanted to show the advantages of using web services for externalizing computation: we used the same web service for two virtual worlds: Second Life and Solipsis. First, we shortly analyze social interactions and language learning in virtual worlds. Then, we describe in a technical way the features, and the architecture of the Multilingual-Assisted Chat Interface. Finally, we give development indications for programming with virtual worlds. Keywords: Multilinguality, MLIF, Chat interface, Web services, Virtual worlds, Communication, Language learning, Automatic translation.
1 Introduction Today, talking to people via a textual chat interface has become very usual. Many web applications have an embedded chat interface, with a varying array of features, so that the users can communicate from within these applications. A chat interface is also easy to implement: unlike voice or video, it needs neither additional devices nor additional signal analysis algorithms. Therefore it is not surprising that brand new technologies such as virtual worlds also embed a textual chat interface. But all applications have their own peculiarities, and their chats also serve various requirements. A distinctive feature of virtual worlds is that people are more likely to converse with other people who cannot speak their native language. In such cases, the need for some sort of assistance in facilitating inter-language communication becomes obvious. A chat interface with multilingual features can meet these requirements [1].
Moreover, such an interface can be turned into an advantage for people who want to improve their foreign language skills in virtual life situations. In this paper, we first want to present some considerations about social interactions and language learning in virtual worlds. Then, we will describe chat interfaces in general and more specifically the Multilingual-Assisted Chat Interface, which we have developed within the ITEA2 Metaverse1 (www.metaverse1.org) Project [ITEA2 07016] in order to give a first answer to the question “how can we ease communication in virtual worlds and turn the multilinguality issue into an advantage?”
2 Social Interactions

Social interaction in virtual worlds constitutes a particularly rich field of research. Indeed, the objective is to offer simple and effective means of communication that approach natural communication. In this context, efforts should be made as much in improving the technical aspects as in taking the socio-cognitive aspects into account. This will improve the realism of virtual environments and increase the quality of the exchanges between avatars. Gestures, speech, direction of gaze, postures and facial expressions therefore all play a full role in the construction of social links between individuals.

Figure 1 illustrates one of the main differences between the common use of the web and the common use of virtual worlds. On the one hand, on the common web (which we call the 2D-web), one web server generates several web pages, which are displayed by a web browser installed on the visitor's computer. Usually, these web pages are not linked together (despite some exceptions), which keeps the users from interacting with each other. On the other hand, one virtual world server manages one virtual environment, which is shared by several avatars. Thus, these avatars (and consequently the users) are likely to interact socially with each other. The virtual world viewers (installed on the users' computers) are designed to display the environment and to control the avatars.

Fig. 1. Common use of the 2D-web (left) versus virtual worlds (right)

The question concerning the influence of virtual environments on the social behaviour of users constitutes a particularly interesting topic for researchers in sociology. In fact, they have highlighted a phenomenon of disinhibition and facilitation which leads to greater sociability [2][3]. As social human beings, we adjust our behaviour to social norms in face-to-face communication. We know, thanks to our education and our culture, what is socially acceptable or not. Communicating through computers strongly decreases this adjustment, because we cannot observe in real time the effects of our words and of our writings.

In order to propose credible and powerful communication between avatars, the characteristics of human communication must be taken into account: language with explicit or implicit references to the objects of the environment, gestures, postures, facial expressions. These elements of communication are all the more important since the avatar is immersed in a three-dimensional world populated by more or less characteristic objects, characters and places that sometimes echo the real world.

The models suggested by El Jed [4] try to take into account intentional as well as non-intentional communication in the interpretation of the acts of communication between avatars. In this context, the favoured mode of communication is natural language combined with deictic gestures. The difficulty, in this case, consists in using markers like vocal intensity, voice intonation, or indexical and deictic references ("I", "here", "over there") associated with the designation gesture to determine the relevant interpretation of the exchanges. The direction of the gaze can also be exploited in order to focus the visual attention of the interlocutors towards a specific place in the shared environment. Facial expressions are essential during the exchanges and constitute the first channel for communicating emotions. They can express mood, approval or disapproval, but also the whole range of human emotions (fear, joy, etc.). All these manifestations of human communication would be very useful in the domain of education, especially for the development of e-learning techniques used for the realisation of virtual campuses [5].
3 Language Learning

Language learning in virtual worlds is a new field of research which is still open to innovation. How can technology be advanced in order to create an optimal psycholinguistic environment for language learning? What makes new proposals innovative and helpful? In order to answer these questions we are currently building empirical support. We believe that one potential source of guidance may be offered by the methodological principles of Task-Based Language Teaching (TBLT) applied to distance foreign language teaching [6]. It should be noted that the TBLT aspect of our work is
currently under development. The general idea is to allow teachers to create TBLT units via the Moodle learning environment system (http://www.moodle.org), and then to use SLoodle (http://www.sloodle.org). SLoodle is an open source project which integrates Moodle components into Second Life. Our approach (i.e., the Multilingual-Assisted Chat Interface) must be considered as an innovative form of Computer-Assisted Language Learning (CALL). The Multilingual-Assisted Chat Interface can deliver MLIF data about a word's part of speech, which, together with other items such as translations and synonyms, can be saved inside a database for later use in language learning. This capacity to collect linguistic data for further reference and study turns the chat interface from a help tool into the first step of a learning environment.

However natural it may seem to resort to Parts of Speech when learning a language, it is not as straightforward as it may appear to people with a classical education. We are concerned here not with linguistic scholars, but with people who learn the language for communicative purposes. As teachers know, students have difficulty identifying parts of speech (PoS), even in their own language. While they may agree to use some PoS labels to facilitate their learning, students would be put off by an approach which relies too heavily on the use of PoS. PoS can be used, but sparingly. The classical list of grammatical word categories transmitted through education has been repeatedly reshaped by successive linguistic schools. Terms such as Determiner and Modifier, for example, have now become part of the standard list. It is customary to classify words into two broad groups: lexical terms (comprising Nouns, Verbs, Adjectives and Adverbs) and grammatical terms. But some PoS categories contain words which belong to both groups. In English, modal verbs are more grammatical words than they are lexical words, and the same could be said of some adverbs (more, here, yesterday). If PoS is used to help language learning, the list of categories has to be adapted. Some categories could be subsumed under a common heading, for example Grammar words, whereas specific categories could be created, for instance Noun, uncountable, or Adjective, attributive. Consequently, a good PoS tagger should be able to reach a degree of precision which goes beyond the superficial level of Noun/Verb/Adjective/Adverb/Grammar words. Depending on the language, the teacher will have to decide which categories to use and which must be regrouped. The learner could then use improved PoS criteria to select items of language more worthy of their efforts than others. They may decide which priority to give Grammar words, for example, or Person words if such a category is made available to them. To achieve this goal, they would not need complete mastery of all the PoS categories, but a narrowed-down selection of tags more relevant to their efforts. On the other hand, reflection on Parts of Speech may benefit the comprehension process by directing the learner's attention to the relation between words or sentence elements. A classic example in English is deciding whether a word is a noun or a verb. In this case, it is important that the learner be given the time to think before they are presented with the solution.
The ability to tag words encountered during live exchanges should be extended beyond PoS to other useful aspects of the vocabulary, such as Topic, Notion and Function, which perhaps provide more useful entries for vocabulary study and reuse than the grammatical kind alone.
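To make these ideas more concrete, the sketch below shows one possible way of regrouping fine-grained PoS tags into a small set of learner-friendly categories and of recording a clicked word together with Topic, Notion and Function annotations. It is only an illustration under our own assumptions: the tag inventory, the groupings and the record fields are invented for the example and are not the categories actually used by the Multilingual-Assisted Chat Interface.

```python
# Illustrative sketch only: the tag inventory and groupings below are assumptions,
# not the categories used by the Multilingual-Assisted Chat Interface.
from dataclasses import dataclass, field
from typing import List

# Regroup fine-grained tags (here, Penn-Treebank-like labels) into a smaller,
# learner-friendly set, as discussed above.
LEARNER_GROUPS = {
    "NN": "Noun", "NNS": "Noun", "NNP": "Noun (proper)",
    "VB": "Verb", "VBD": "Verb", "VBZ": "Verb",
    "MD": "Grammar word",                      # modal verbs treated as grammar words
    "DT": "Grammar word", "IN": "Grammar word", "PRP": "Grammar word",
    "JJ": "Adjective", "RB": "Adverb",
}

def learner_category(pos_tag: str) -> str:
    """Map a raw PoS tag to a category a learner can work with."""
    return LEARNER_GROUPS.get(pos_tag, "Other")

@dataclass
class VocabularyItem:
    """A word saved from the chat for later study, tagged beyond PoS."""
    word: str
    pos: str                                             # learner-friendly category
    topics: List[str] = field(default_factory=list)      # e.g. "food", "health"
    notions: List[str] = field(default_factory=list)     # e.g. "quantity"
    functions: List[str] = field(default_factory=list)   # e.g. "asking for help"

item = VocabularyItem("translate", learner_category("VB"), topics=["language"])
print(item.pos)  # -> "Verb"
```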
4 The Multilingual-Assisted Chat Interface

The Multilingual-Assisted Chat Interface is a tool offering new functionalities to chat users in virtual worlds. It is directly embedded in some virtual world viewers (to date: Second Life and Solipsis). In order to implement these functionalities (described in Section 4.2), we modified the source code of the viewers (see Section 4.6). The Multilingual-Assisted Chat Interface mainly relies on the Multilingual Information Framework [7], a standard currently being developed by the International Organization for Standardization (ISO).

4.1 Generalities about Chat Interfaces

It was at the Massachusetts Institute of Technology (MIT) that data processing specialists provided the theoretical foundations of the concept of Instant Messaging (IM), then referred to as a "notification service". This led to the first Instant Messaging system, called Zephyr [8]. It was initially used on the MIT campus, then in other universities in the United States, and then elsewhere. Chat systems were subsequently developed in close relationship with online services and network games, especially massively multiplayer online games.

The term "chat" refers to a real-time written dialogue using a computer. Chat presents many similarities with oral dialogue; it is arguably the written means of communication closest to it, closer than other tools such as forums and email. Most of the time (for example in Second Life), the chat is volatile: the content of a chat session, like an oral conversation, is not intended to be available to the public. Furthermore, when users log in, they do not know what has been said before their arrival; similarly, they cannot know what will be said after their disconnection. Moreover, chat users introduce into their writing some elements which are specific to oral communication.

Chat is usually based on a client-server architecture, meaning that users do not communicate directly with each other, but through a single server. All the chat users connected to one server do not necessarily communicate together. In general, a server gives access to several channels of discussion (also known as rooms), which are completely partitioned: users only see what is happening in the channel they are connected to, and they can only send messages to that channel. On most public servers, the creation of channels is completely free, and any user can open one. In general, the client chat interface is split into three areas:
– the ongoing conversation;
– the message editing and sending zone;
– the listing of the connected users.
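As a rough illustration of the client-server and channel model just described (and not of the actual Second Life or Solipsis server code), the following sketch shows how messages posted in one channel remain invisible to users connected to another channel; all class and method names here are hypothetical.

```python
# Minimal sketch of the channel ("room") model described above.
# This is an illustration, not the implementation used by Second Life or Solipsis.
from collections import defaultdict

class ChatServer:
    def __init__(self):
        self.channels = defaultdict(set)   # channel name -> connected user names
        self.history = defaultdict(list)   # channel name -> ordered (author, text)

    def join(self, user: str, channel: str) -> None:
        self.channels[channel].add(user)

    def send(self, author: str, channel: str, text: str) -> None:
        # Messages are only recorded in the channel the author is connected to.
        if author in self.channels[channel]:
            self.history[channel].append((author, text))

    def visible_messages(self, user: str, channel: str):
        # Users see nothing from channels they have not joined.
        return self.history[channel] if user in self.channels[channel] else []

server = ChatServer()
server.join("avatarA", "public")
server.join("avatarB", "market")
server.send("avatarA", "public", "Hello!")
print(server.visible_messages("avatarB", "public"))  # -> [] (different channel)
```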
A chat session can be considered as a set of messages, ordered sequentially and produced by various authors (humans or robots).

4.2 Functionalities

In this section, we describe the three main functionalities that we implemented in the Multilingual-Assisted Chat Interface.

Grammatical Analysis and Word Colouring. This is the first feature that we implemented. It consists in colouring specific words in a sentence written in the chat interface, in order to show the grammatical structure of the sentence more clearly. The settings can be modified easily in a specific menu: users can choose which grammatical categories they want to highlight, and the corresponding colour. They can also choose which sentences they want to analyze (other people's sentences, objects' sentences, all sentences, etc.). The grammatical part-of-speech analysis is performed on a remote web server, using the method described in Section 4.5. The data structure used for tagging the grammatical category of each word is MLIF.

Providing Word Translations, Synonyms and Definitions. An important feature is the ability to simply click on a word in the chat interface in order to get definitions, synonyms and/or translations of this word. In the current implementation, the definitions and synonyms are retrieved from WordNet, and the translations from Google Translate. It is important to note that other external corpora could also be used. This feature may be needed in two main situations:
– When people are reading text in a language they do not know very well, they may need some help (e.g. definitions, synonyms, translations) in order to understand even very common topics.
– When the discussion is about a rather specialized subject, where technical terms are often used, people may need additional definitions even if they are native speakers.
The obvious advantage of this feature is that it aggregates information from several web services with only one click, which is much more convenient than looking up the information in a web browser by oneself. Figure 2 shows the menu, the chat interface (with the verbs in a different colour) and the action performed when clicking on a word. Every MLIF data structure generated after clicking on a word can be stored in a database. Doing so provides users with feedback on the words they did not understand well. Writing down unknown vocabulary is very common when learning a new language, and this is exactly what we have implemented here.

Automatic Translation. When people do not know a language very well, they will be interested in having an automatic translation of every sentence. In this case, it is no longer an e-learning functionality but a way to make communication easier between people in virtual worlds.
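The sketch below illustrates this automatic-translation logic in a language-neutral way: detect the language of an incoming message, translate it into the reader's preferred language when necessary, and keep the original text alongside the translation, as detailed in the example that follows. The detect_language and translate helpers are placeholders for whatever translation back end is used (Google Translate in our prototype); they are not real API calls.

```python
# Illustrative sketch of the automatic-translation behaviour described above.
# detect_language() and translate() are placeholders for an external translation
# service (Google Translate in the prototype); they are not real API signatures.
from dataclasses import dataclass

@dataclass
class ChatMessage:
    author: str
    original_text: str
    original_lang: str = ""
    translated_text: str = ""

def detect_language(text: str) -> str:
    raise NotImplementedError  # delegated to the external translation service

def translate(text: str, target_lang: str) -> str:
    raise NotImplementedError  # delegated to the external translation service

def process_incoming(message: ChatMessage, reader_lang: str) -> ChatMessage:
    """Translate a message into the reader's language, keeping the original."""
    message.original_lang = detect_language(message.original_text)
    if message.original_lang != reader_lang:
        message.translated_text = translate(message.original_text, reader_lang)
    else:
        message.translated_text = message.original_text
    return message

def process_outgoing(text: str, last_received: ChatMessage) -> str:
    """Reply in the language of the latest received message, as described in the text."""
    return translate(text, last_received.original_lang)
```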
Fig. 2. Colouring words in the chat interface and displaying information about one word (Solipsis)
For example, user A uses the Multilingual-Assisted Chat Interface and wants to have all the messages displayed in French. The Multilingual-Assisted Chat Interface will analyse the incoming messages and translate them if they are not written in French (using two Google Translate functionalities: automatic language recognition and sentence translation). Then, the messages sent by user A will be translated with respect to the language of the latest received message in the conversation. Figure 3 shows a typical situation involving two avatars who neither understand nor speak the same language. Another important point is that the source language is automatically detected by Google Translate. Thus, users only need to enable the automatic translation functionality to be able to chat with any other avatar in the virtual world. Also, as the source language is stored in the MLIF data, it is possible to link every discussion to a pair of languages and, as a consequence, to carry out several multilingual conversations at the same time. In addition, both original and translated messages are stored in the MLIF data. Thus, users can click on a translated sentence and see the original sentence if they want some insight into the original language or if the translation does not seem very accurate.

4.3 General Architecture

Figure 4 shows the general high-level architecture of the main components of the Multilingual-Assisted Chat Interface which are used for the "grammatical analysis and word colouring" and for "providing word translations, synonyms and definitions" functionalities (see Section 4.2).
Fig. 3. Automatic translation between a Japanese and a French avatar
Fig. 4. The Multilingual-Assisted Chat Interface flowchart
Three colours are used in this scheme. Each one represents a certain category of components:
– orange (on the left): components belonging to the virtual world (especially the viewer);
– blue (in the middle): the web service components that we developed, mainly business components;
– green (on the right): external web services and corpora, and data storage.
The circled numbers represent the chronological order of the interactions between the components when a message is sent or when a word is clicked. The corresponding explanations are given below:
1. Every message sent by a user is first sent to the virtual world server. When the client (i.e. the viewer) of the addressee receives a message, it is forwarded to a Message Manager (i.e. the component set dealing with the chat messages). We needed to modify these components both in Snowglobe (a viewer for Second Life, see Section 4.6) and in Solipsis.
2. Before displaying the message on the Chat Interface, the Message Manager sends an HTTP request to the web service (the Grammatical Analyser) in order to obtain the MLIF data representing the sentence and its grammatical components.
3. The Grammatical Analyser connects to external Grammatical Corpora in order to get the grammatical part-of-speech tag for each word of the message.
4. When the Message Manager receives the MLIF data structure representing the original message with its grammatical labelling (as an HTTP response), the MLIF data is parsed and turned into a format enabling the colouring and hyperlinking of each word. The colouring depends on the settings of the user interface (see Section 4.2).
5. An action is performed: while reading the message, the addressee clicks on a word that they do not understand (in the Chat Interface) in order to retrieve synonyms, definitions and translations for this word.
6. After clicking on the word, the user is redirected to a Web Interface, which will display all the desired information.
7. Loading the web page involves calling the Word Request Manager so that it retrieves information from external web services.
8. First, the Word Request Manager retrieves definitions and synonyms from WordNet.
9. Second, the Word Request Manager connects to Google Translate in order to retrieve translations for the sentence and for the synonyms (see Section 4.4).
10. Once the Word Request Manager has received all the desired information, it is written into an MLIF data structure. This MLIF data is stored in a database, from which it can be retrieved later if required (see Section 4.2).
11. The MLIF data is then sent to an MLIF-to-HTML Parser, which transforms the MLIF data into user-friendly HTML code.
12. The HTML code thus obtained is finally displayed by the Web Interface, which makes it easy for the user to read.
It is important to note that most data exchanges are made using MLIF, since this data format exactly fits our requirements. This is very important to facilitate the development
within the web service (as we only use one data format) and for future applications (as it enhances interoperability).

4.4 Selection of Translations for Synonyms

When providing definitions, translations and synonyms for one word, we can group the definitions and the synonyms by the senses of the word (in the case of homonyms). This functionality is provided by WordNet. However, the translations provided by Google Translate do not fit these homonym sets. That is why we had to implement a method for retrieving and selecting translations for every specific sense of a word. We can distinguish six main steps in this selection:
1. For one word, we retrieve several senses from the WordNet database. Each sense also corresponds to a certain part-of-speech category.
2. Again from the WordNet database, we retrieve a set of synonyms for each sense.
3. For each set of synonyms, we retrieve a set of translations from Google Translate (several translations for each word of the set of synonyms).
4. We only consider the translations belonging to the same part-of-speech category as the word (noun, verb, adjective, adverb, etc.).
5. We then rank the translations in order to find the most frequent ones.
6. Finally, we select the first one, two or three translations depending on whether the set of synonyms contains one to two, three to four, or five or more elements.
The translations selected at the final step are translations for one of the senses of the original word. With this method, we obtained rather satisfying results, but we are aware that the selection criteria could be refined and that their effectiveness should be assessed in a deeper study.

4.5 Web Services for Virtual Worlds

The Multilingual-Assisted Chat Interface is based on a web service. While the virtual world client viewer is in charge of displaying information, the web service deals with data processing. We call our system a web service because it processes data which is going to be used by other systems (e.g. virtual world viewers). Although the web service does not rely on any existing web service protocol (e.g. SOAP), it is neither a web site nor a web application: the data we compute is not supposed to instantiate a displayable format. The text is sent by the viewer (Second Life, Solipsis, etc.) to the web service. The latter gathers the information requested by the users and puts it together into an MLIF data structure. Finally, the generated MLIF data is returned to the viewer, which turns it into a displayable format. The web service has a grammatical tagger tool [9]. It is composed of two Python-generated dictionaries: a default tag dictionary and a rule dictionary. The first one contributes to matching each word with its most likely category, the second one to correcting errors by checking the context. Interoperability is one of the most interesting features of our web service: we can use it to build new tools on any platform (other metaverse platforms,
web services, applications, etc.). Note that it is used for the Multilingual-Assisted Chat Interface both in Second Life and in Solipsis. Externalizing tools on PHP servers makes them platform-independent and saves a lot of time when applications must be adapted to different platforms. Web services may be very useful for hosting business layers, data access layers, and data storage. Many web services and web applications exist on the web and propose specific ways to process information. Most of the time, there are separate tools for translating, for finding synonyms or for getting the definition of a word (Google Translate, WordNet, ConceptNet, etc.). Our web service uses several of these tools in order to allow the user to centralize all this information in a single MLIF data structure.

4.6 Programming for Virtual Worlds

The Multilingual-Assisted Chat Interface is now available in two virtual worlds: Second Life, developed by Linden Lab, and Solipsis, developed by Orange Labs and ArtefactO. This section describes our work in a technical way and is therefore directed at people who have an advanced knowledge of programming.

Snowglobe. Snowglobe is an alternative viewer for Second Life that can be downloaded from the official web site at http://snowglobeproject.org. It runs on Linux, Mac OS and Windows. This project aims at gathering Linden Lab and open source developers in order to create an innovative viewer. Developers who want to implement new features can get the source code on the Second Life wiki (http://wiki.secondlife.com/). This page contains the source code, artwork and open source libraries for all operating systems. All the instructions concerning compilation and private linked libraries are detailed on the wiki (http://wiki.secondlife.com/wiki/Snowglobe) in the "Get source and compile" section. We are now going to explain how to modify the chat interface and what we modified in order to build the Multilingual-Assisted Chat Interface.

Modifying the Chat Interface. A user can send a message with two kinds of tools: the public chat interface and the instant messaging interface (i.e. private messages to one specific user).
– The Public Chat Interface. The text written into the chat interface can be modified in two ways: the first consists in modifying the text before sending it to the server, and the second in modifying the text upon its reception by the viewer. In the first case, the code is located in the files named llchatbar.cpp, llchatbar.h and llchat.h. The LLChatBar class is directly linked to the LLPanel class (defined in the files llpanel.cpp and llpanel.h), which is in charge of displaying the menu elements, among them the chat elements. In parallel, the text is sent to the server from the function named send_chat_from_viewer. This function has three parameters. The string parameter utf8_out_text is the text that the user has written. The type parameter (of type EChatType) indicates the range of the message: the range will depend on whether the user wants to whisper or to shout, and it will also be different if the message concerns all the members of a region, the owner of an object or just a
single user. The last parameter is the channel. There are public channels and private channels; generally, private channels are used to communicate with objects. The code for the second case is mainly defined in the LLTextEditor class (see lltexteditor.cpp and lltexteditor.h). Once the text has been written by the user, received by the LLPanel class and sent to the server by the LLChatBar class, the LLTextEditor class deals with the next step: it prepares the text for the class which is in charge of the display. LLTextEditor has three important functions: appendColoredText, appendStyledText and appendText. The appendColoredText function defines the style of the text and calls the appendStyledText function. The latter applies the style to the text and detects HTML addresses in order to replace them with clickable links. If a link is detected, a new style is defined and the appendText function is called. When the code has completed the analysis of the HTML links, the appendText function is called a second time with the previous style. Each style change requires the appendText function to be called; in other words, appendText can handle only one style at a time.
– The Instant Message Interface. There is another way to communicate, which allows the user to have private conversations with other users. This means of communication is called instant messaging and is defined in another file: llimpanel.cpp. The main function is called addHistoryLine. It can edit the history and call functions of the LLTextEditor class in order to write into the instant message panel.

The Multilingual-Assisted Chat Interface. The functions we built for the Multilingual-Assisted Chat Interface are mainly defined in the LLTextEditor class. When the viewer receives text, the buildMLIF function converts it into an MLIF data structure thanks to the web service, which provides the grammatical category for each word. Then, three functions (parseMLIFAndReturnCategories, parseMLIFAndReturnLinks and parseMLIFAndReturnWords) are called to parse the MLIF data structure; they return, respectively, the grammatical category of each word, the link to its information, and the word itself. In the following, a "grammatically tagged link" denotes a clickable, HTTP-like link which embeds the grammatical category information of the word. A grammatically tagged link allows the viewer to recognize a word category and to apply the corresponding style and URL. It is composed of a word and its grammatical tag (tag://word). For example, if you write the verb "be" and wish to colour it, the viewer will create a "verb://be" grammatically tagged link. We chose to represent the links like this for implementation reasons: it was the clearest way to enable colouring and clickability of words in Snowglobe, since it has the same structure as the usual "http://website" links. The main format is MLIF; the "tag://word" form is only a bridge between MLIF and the chat interface in Snowglobe. The addTags function matches a word with its grammatical category in order to create a grammatically tagged link and to apply the colour chosen by the user. We have defined as many grammatically tagged link types as there are grammatical categories. When user A wants an instant translation in their language of what user B says, the buildMLIF function requests the web service to return an MLIF data structure with user B's sentence in its original language and the same sentence translated into user A's
language. Both original and translated sentences are stored in the MLIF data structure. Indeed, user A can consult the original sentence if the translation is odd or if they want to learn the other language. The MLIF data is parsed by the parseMLIFAndReturnTranslatedWords function, which returns the translated words. Not all of these functions are called at the same time: a menu proposes several options (see Section 4.2). Each option is saved in an XML file (panel_preferences_linguistic_chat.xml), which is managed by the functions defined in the files llprefslingchat.cpp, llfloaterpreference.cpp and llstartup.cpp. The appendStyledLinguisticText function in the LLTextEditor class verifies whether an option is activated before calling the associated function. For instance, if the variable linguisticChatActivateInstantTranslation is activated, the viewer will use the function which allows an instant translation of a sentence to be displayed (parseMLIFAndReturnTranslatedWords).

Solipsis. Solipsis is a French virtual world (still under development), which is notable for two features. First, it plans to implement a decentralized Metaverse platform that will use a peer-to-peer protocol. Second, it integrates a web browser directly in the three-dimensional space: all the objects created can embed a web browser (as a texture applied to a surface), which is compatible with many web technologies (Flash, JavaScript, etc.). Due to many platform-dependent components, Solipsis currently runs only on Windows. The Multilingual-Assisted Chat Interface on Solipsis is roughly similar to the Snowglobe version: the general development is essentially the same, but the structure is different. The graphical chat interface is defined in an HTML file (uichat.html) and built with HTML and JavaScript. All the classes and functions we have described for the Snowglobe version are written in C++ in the files GUI_Chat.cpp and GUI_Chat.h. The HTML file contains a form linked to a JavaScript function (sendMessage). This calls a C++ function of GUI_Chat.cpp (addText). This function decides, according to the user's preference on whether to use the linguistic functionalities, whether or not to convert the text to MLIF data. In other words, it calls one of the two sending methods declared in the HTML file:
1. sendTextMessage, which writes text into the chat interface without data processing;
2. sendMLIFMessage, which converts MLIF data to human-readable text (with the function convertMlifToText), and matches the tags with the corresponding colours and links (with the function colorWordByTagAndMatchLink).
The menu is implemented in the HTML file menuLinguisticChat.html, and the actions that the user can execute with the in-world interface menu are declared with the showMenu C++ function. When a user clicks on a word, a new in-world web browser appears thanks to the showBrowser C++ function.
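The sketch below restates the "grammatically tagged link" idea from this section in a compact, language-neutral form: each word parsed from the MLIF data is paired with its grammatical tag in a tag://word link and with the colour chosen by the user. It is an illustration in Python, not the actual C++ (Snowglobe) or JavaScript (Solipsis) code, and the colour settings shown are hypothetical defaults.

```python
# Illustration of the tag://word "grammatically tagged link" scheme described above.
# This is not the viewer code itself; colours and category names are example settings.
from typing import Dict, List, Tuple

EXAMPLE_COLOURS: Dict[str, str] = {
    "verb": "#cc0000",       # e.g. the user chose red for verbs
    "noun": "#0000cc",
    "adjective": "#008800",
}

def make_tagged_link(word: str, category: str) -> str:
    """Build a clickable link embedding the word's grammatical category."""
    return f"{category}://{word}"          # e.g. "verb://be"

def render_message(tagged_words: List[Tuple[str, str]],
                   colours: Dict[str, str] = EXAMPLE_COLOURS) -> List[dict]:
    """Turn (word, category) pairs parsed from MLIF into displayable tokens."""
    rendered = []
    for word, category in tagged_words:
        rendered.append({
            "text": word,
            "link": make_tagged_link(word, category),
            "colour": colours.get(category),   # None -> keep the default text colour
        })
    return rendered

print(render_message([("I", "pronoun"), ("want", "verb"), ("to", "particle"),
                      ("be", "verb"), ("there", "adverb")]))
```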
5 Conclusions

First, we would like to highlight again that all the data exchanges are made using MLIF, because we intend to enhance interoperability through standardization. We believe standardization to be a key issue in the development and dissemination of new technologies.
Up to now, the grammatical colouring of words in our Multilingual-Assisted Chat Interface is only available in English. In fact, such an analysis has to be based on linguistic corpora, and the one we use only contains English terms. However, we plan to adapt our technology to other analysers such as TreeTagger. It would also be very interesting to improve the grammatical colouring by including aspects such as Topic, Notion, or Function. In the same way, the translations rely on the Google Translate web application. As its translation quality is improving every day (thanks to its "suggest" functionality), we can expect even better translations in the future. However, as we partly rely on its automatic language recognition functionality, we may obtain unexpected results when people mix languages. The other important point to note is that the web service we developed has been used for both Second Life and Solipsis. Externalizing applications (especially service applications) is very useful in order to minimize development time and to be able to use more suitable data structures. But the Multilingual-Assisted Chat Interface is still a prototype. In the future, we would like to improve the reliability of this interface. We intend to distribute the modified Snowglobe viewer to a large number of users for testing in order to collect feedback (to do so, we still have to solve some licence-related issues). Adding new features to the chat interface in order to facilitate social interactions and language learning is a process which still requires time, research and new technologies.
References
1. Cruz-Lara, S., Bellalem, N., Bellalem, L., Osswald, T.: Immersive 3D Environments and Multilinguality: Some Non-Intrusive and Dynamic E-learning-Oriented Scenarios Based on Textual Information. Journal of Virtual Worlds Research 2(3) (2009); Technology, Economy, and Standards
2. Suler, J.: The Online Disinhibition Effect. CyberPsychology and Behavior 7, 321–326 (2004)
3. Coleman, D.: Flame First, Think Later: New Clues to E-Mail Misbehavior. New York Times (2007)
4. El Jed, M., Pallamin, N., Pavard, B.: Vers des communications situées en univers virtuel. In: National Conference on Coopération, Innovation et Technologie (2006)
5. De Lucia, A., Francese, R., Passero, I., Tortora, G.: Development and evaluation of a virtual campus on Second Life: The case of SecondDMI. Computers & Education 52, 220–233 (2009)
6. Doughty, C.J., Long, M.H.: Optimal Psycholinguistic Environments for Distance Foreign Language Learning. Language Learning & Technology 7(3), 50–80 (2003)
7. MLIF: the MultiLingual Information Framework, http://mlif.loria.fr
8. Dellafera, C.A., Eichin, M.W., French, R.S., Jedlinsky, D.C., Kohl, J.T., Sommerfeld, W.E.: The Zephyr Notification Service. In: Winter 1988 Usenix Conference (1988)
9. Brill, E.: A Simple Rule-Based Part of Speech Tagger. In: 3rd Conference on Applied Natural Language Processing, pp. 152–155 (1992)
Prospecting an Inclusive Search Mechanism for Social Network Services Júlio Cesar dos Reis1,2, Rodrigo Bonacin2, and Maria Cecília Calani Baranauskas1 1 Institute of Computing at University of Campinas (UNICAMP) 1251 ave Albert Einstein, Cidade Universitária – 13083 970, Campinas, São Paulo, Brazil 2 CTI Renato Archer - Rodovia Dom Pedro I km 143,6, 13069-901, Campinas, São Paulo, Brazil {julio.reis,rodrigo.bonacin}@cti.gov.br, [email protected]
Abstract. Inclusive Social Network Services (ISN) represent an opportunity for ordinary people to access information and knowledge through the Web. Such systems should provide access for all, creating situations where users' diversity is respected and access difficulties are minimized. The use of search mechanisms has been the main way of finding information on the Web. However, such mechanisms are still built on lexical-syntactical processing, resulting in barriers that prevent many users from reaching correct and valuable information. In an ISN, all people must have the possibility of retrieving information that makes sense to them. To that end, search engines in ISN should take into account the meanings created, shared and used by people through the use of the system. In this paper we prospect an inclusive search mechanism for ISN grounded in Semantic Web technologies combined with Organizational Semiotics concepts and methods. The conception of the proposed solution is based on experimental results that point out how to improve search mechanisms considering aspects related to social and digital inclusion. This proposal aims to promote participatory and universal access to knowledge. Keywords: Semantic web, Semantic search, Inclusive social network services, Organizational semiotics.
the opportunity to retrieve, access and use information provided in digital media in a smooth way. SNS represent an opportunity for interaction and for access to information and knowledge through the Web. These systems primarily allow individuals to share their interests and activities, constituting communities. The e-Cidadania project [1] aims at transforming an SNS into an engine for digital inclusion and citizenship. Network systems with such characteristics can be defined as "Inclusive Social Network Services" (ISN) (see [18]). The use of search engines is one of the primary ways to find and access the information generated in these systems. However, search mechanisms are currently built on comparisons of keywords and lexical-syntactical information processing (syntactic search). These mechanisms are neither sufficient nor adequate to effectively make sense to individuals in an inclusive scenario within social networks. Based on empirical results, which will be discussed in this paper, we have observed that people organized into virtual communities bring to this space their own vocabularies and meanings, and also develop their own local vocabularies through interaction and communication using technology. The results point out the need for novel search mechanisms that consider the diversity of users' competencies and inclusion aspects. A more appropriate inclusive search solution for an ISN should reflect the semantics used by participants of the social network. In a few words, a search engine should take into account the local meanings created, shared and used by people organized into a community. In this paper we argue that the quality and response accuracy of a search mechanism are intrinsically associated with the proximity of the semantics shared by people. Thus, it is necessary to identify the meanings used in the network and to represent its semantic aspects. This could actually contribute to making information accessible to everyone, including people with low educational levels who have difficulty accessing online information due to their simple vocabulary or limited writing skills. Usually, these people use an informal (colloquial) or local vocabulary in the search. With the proposed solution they could find the correct information in an easier and more precise way, besides learning from it. In this paper we show the results of search activities within an ISN, conducted in the context of the e-Cidadania project. The goal of these activities was to observe a set of search scenarios with potential users of an ISN, and to understand how these users make sense of a search mechanism. Based on the results, we present a prospection of a more appropriate search mechanism for an ISN with foundations in Organizational Semiotics (OS) [25], [13]. In our approach, the goal is to expand and improve the search technologies and techniques of the Semantic Web field based on Reis et al. [14]. Moreover, the representational structure (semantic model) used by the search mechanism is based on data from the interaction and communication among users in the social network system. Thus, the search engine takes into account the meanings shared and created by people (including informal terms) in their interaction with the system, aiming to provide better results.
The paper is organized as follows: Section 2 presents the concept of ISN and the importance of search mechanisms for universal access to information; Section 3 presents the analysis of the empirical experiment with ISN users; Section 4 details the proposed approach; Section 5 discusses the approach and related work; and Section 6 concludes and points out further work.
2 Universal Access and Inclusive Search

According to Boyd & Ellison [5], since the beginning of social networking, sites such as MySpace, Facebook, Orkut and others have attracted millions of users, and many of them have integrated these sites into their daily practices. Online social networks or "communities of members" have great relevance on the Web, as users spend much time navigating them. According to Nielsen [19], social networks are more popular than e-mail, with 66.8% of global reach. Around the world, they represent the fourth most used resource on the Internet, and 85.2% of this penetration is in portals and communities of general interest. Additionally, 85.9% of Internet users use search engines, which is one of the most common activities. Despite these great numbers and the success of social network sites among Internet users, in social contexts such as Brazil and other developing countries there are still a lot of people without access to the Internet and consequently without opportunities to access information and knowledge. Social indicators from the PNAD (National Survey by Household Sample) produced by the Brazilian Institute of Geography and Statistics (IBGE, in its Portuguese acronym) [11] point out that in 2008, 65% of the population did not have access to the Internet. In addition, data from the Ministry of Education in Brazil [16] reveal that about 30 million Brazilians are functionally illiterate, defined as people over 15 years old with less than 4 years of schooling (21.6% of the population). Using a broader concept of functional illiteracy, according to a survey by the Paulo Montenegro Institute held in 2007 [12], the majority (64%) of Brazilians between 15 and 64 years old with more than 4 years of schooling reach no more than the level of rudimentary literacy, i.e., they are only able to locate explicit information in short texts or do simple math; they are not able to understand longer texts. These data illustrate only part of the challenge that we face in designing systems which should include all these users. In this context, it has become a major concern to allow all people to access the online content available from SNS in a more "natural" and efficient way. Thus, it is extremely important to recreate methods that permit the effective access and use of information conveyed in digital media, for all. This could be materialized with the ISN concept. We understand an ISN as a "virtual communication space" based on the concept of social networks, which is inclusive and allows the community to share knowledge about the community's know-how. This space has to facilitate "exchange" (of knowledge, goods and services) in accordance with the collaborative (project team, partners, community) system conception. It is also worth mentioning that in an ISN there are no target users: all users are relevant and should be included without discrimination. Consequently, there may be people without the skills to handle certain technological features of the system and, as a result, without the knowledge needed to find the information they need in it. Moreover, these users most often use colloquial terms to express themselves through the system. For example, they may use the term "postinho" (in Portuguese) instead of
"Basic Health Unit" (formal). They use terms that make sense to them, but in fact, these expressions semantically mean the same. So, when someone is trying to retrieve information from the ISN, these factors must be taken into account by the search engine. On the other hand, when a user searches for something in a non-formal or not refined way and, the same concept but in its formal way (cult) is returned, this represents an opportunity for learning. Accordingly, we should seek for a computational search solution that takes into consideration the meaning that is adopted or emerges in the context of use of that network; i.e. the meaning that people bring to the network, and those that are constructed by using the system over time (through interaction). This may facilitate and provide better access to the content generated by users of the network.
3 Analyzing Search Scenarios of an ISN

From a practical point of view, the e-Cidadania project resulted in the 'Vilanarede' ISN system (www.vilanarede.org.br). This system has represented an opportunity to investigate the interaction behaviour of representative users in a developing country. As a direct activity of the project, we conducted the 8th Participatory Workshop in a telecenter located in 'Vila União', a neighborhood of Campinas, Brazil. In this workshop we developed an activity related to search in the ISN. The objective was to observe some major points, including: (1) How would the users build an understanding of the search engine? (2) Which keywords would they use? (3) Would they have any difficulty in completing the proposed scenarios? and (4) What would be their satisfaction with the search results? A task sheet with 4 search scenarios was presented to each pair of participants, and a form was prepared for the observers (researchers) of the activity. Additionally, an "extra" scenario, called Scenario X, was also included in the task. We had 7 pairs of users in total. Initial instructions about the activity were given to the participants. The pairs were formed by the users themselves, and for each scenario the pair should write down the words used in the search and the titles of the announcements found. As a result of this activity, we had both the task sheets filled in by the pairs of users and the observation forms filled in by the observers. In addition, the activity was filmed and there was an audio recording of each pair during the task execution. The 4 search scenarios were:
Scenario 1: Find announcements on how to popularize the 'VilanaRede'.
Scenario 2: Find announcements of mango (fruit) in 'VilanaRede'.
Scenario 3: Find announcements related to food in 'VilanaRede'.
Scenario 4: Find announcements related to a religious item combined with handicraft in 'VilanaRede'.
Each scenario was intended to verify whether semantic capability was needed in the search mechanism. The time for the completion of the scenarios was approximately 45 minutes. After the execution of the search scenarios, a general discussion was conducted in order to get the users' overall impressions of the activity. During this discussion, several interesting stories were collected.
In Scenario 1, we wanted to observe whether users would use synonyms of "popularize" to find the announcements. Some pairs had difficulty understanding the scenario, as well as difficulty choosing the terms for the search. However, some pairs associated the word "popularize" with "divulge" and quickly found related announcements. In this scenario one pair used some unusual keywords such as: "boca-a-boca" (a popular expression used in Brazil that means "orally passing information from person to person"), "email", "phone" and "posters". By using the term "boca-a-boca" in order to find announcements about how to divulge the site, unusual results also appeared, such as an advertisement for "Bife de casca de banana" (steak of banana peel). This happened because one of the comments on this announcement reads "I'm with water in my mouth (boca)" (a Portuguese expression meaning "my mouth is watering"), in reference to the announcement of the "steak of banana peel". Search phrases like "divulgation of the 'Vilanarede'" or verbs such as "to popularize announcement" or "advertising Vila" were also used in this scenario.

In Scenario 2, we wanted to verify whether users would find any announcement related to the mango (fruit) in the application contents. There was no announcement about the mango fruit in the system. However, there was an announcement about mangá (cartoon) written without the acute accent, i.e., as 'manga', which in Portuguese is also the word for the mango fruit. In this scenario, users mainly used the following keywords (translated from Portuguese): "mango (fruit)", "fruit", "mango", "mango fruit", "mango / fruit". Some pairs were uncertain whether they should include the keyword "fruit" or not. Note that in a semantic search, the keyword "fruit" should make the application return all the announcements about mango, as long as those announcements are semantically related to fruit.

In Scenario 3, we wanted to see whether users would use the keyword "food" in the search or whether they would search for specific foods through the search engine. As a result, when users tried the keyword "food", the system returned nothing. However, there are several announcements on food in the system: the sale of "salgadinhos" (homemade snacks), "pão-de-queijo" (cheese bread) and others. Among the relevant observations, during the execution of this scenario users said that the system should relate "salgadinhos" (homemade snacks), "pão-de-queijo" (cheese bread) and "Bife de casca de banana" (steak of banana peel) to the concept of food. This makes sense, since semantically all of these are food. During the discussion phase one of the users commented: "Using food is easier because it already covers everything," i.e., all types of food in the system. Another said: "To be more 'lean' and practical for those who are starting (in terms of computer literacy), like us, when we enter 'food', it should return a variety of foods due to our difficulty." Yet another user said: "Maybe to use food does not help in the search for something more specific, but if it is something that we have no knowledge of the domain, or we do not know what to look for, the tool would be useful and helpful."
The main keywords used in this scenario (translated from Portuguese) were "food", "comida caseira" (homemade cooking), "food sale", "salgado" (homemade snacks), "salgadinhos" (small homemade snacks), "salgadinho frito" (fried homemade snacks), pies, "doces" (sweets), "pão-de-queijo" (cheese bread), "docinhos" (small sweets), cake, pastel and "brigadeiro" (chocolate sweet). Note that users employed several variations of words, such as "homemade snack", "small homemade snack" and "fried homemade snacks". With Scenario 4, we aimed to determine which keywords users would use when looking for a specific announcement. One of the observers indicated that the pair
found the "Saint Anthony" because they already knew that this announcement was in the system. The same was reported by several other observers. The vast majority of the pairs used the keywords (translated from Portuguese): "homemade craft", "Crafts saint", "holiness", "holy" and "saints". Users found the desired information successfully. But one of the pairs put keywords like (translated from Portuguese): "Orisha", "Orisha of cloth", "religious" and "sculpture" and didn’t find out any announcement. Several observers noticed that the subjects utilized terms from their own colloquial language in the search; examples can be seen as "manga rosa" (pink mango), "manga coquinho" (coconuts mango), "tutu de feijão" (tutu bean), "boca-a-boca", “small sweets”, "little homemade snack" and "Orisha". Also in several occasions the pairs discussed before reaching an agreement on which word to use in the search. Another interesting result was obtained from the interaction of a deaf user with the search mechanism. As expected this user has difficulty with the written language. We observed that he uses the same hand signal to several different words. The user had difficulty in understanding the scenario 1, since the words popularize, advertise, advertisement and disclose have the same or similar hand signals in his language. Moreover, we could see that the user has different understanding for some words that have the same meaning; his behaviour during the search was not confident neither independent; he asked a lot of questions to the observer. Additionally, general results indicate that users from the context under study (prospective users of an ISN) had difficulty with the search button; in other words, they do not have a clear concept about the act of "searching" in an application on the Internet. Some users had no idea about the scope of searching. They did not know whether that search referred only to the announcements in the ‘Vilanarede’ system. This fact is explicit in a description from a user who said: "Search fondue because it is something chic”. However another report from other user says: "Fondue is very chic, we do not have it here in our network... we will not put it in the search because the network is ours, it is “poor”... and it will not have fondue...". This statement shows that the second user has the notion of the search scope, which will be just within the announcements from that social network system; so since there were no announcements about fondue, nothing would be returned. Even with this lack of sense about the search scope, one of the observers explains that the users were surprised with the power of the search, and they explored and tested it easily. Such surprise can be explained since most of them have never used a search mechanism before. From the forms filled by the observers, approximately 80% of the pairs felt comfortable during the task. Around 60% of the pairs did not make a lot of questions to the facilitator during the task. This point out that searching using keywords can be considered for these people. However regarding the search results the currently solutions are not enough to provide information that make sense to each user in a context. An interesting fact reported by the observers is that sometimes users initiate the search by entering complete questions in natural language, or at least they think aloud in that way. This is confirmed with the scenario X. 
This scenario was described as follows: "Suppose you want to make a reservation for a medical consultation at the local hospital and go to the ‘VilanaRede’ system to get information (e.g. phone of the hospital). How would you make the search for some announcement related to it?" In this scenario a few pairs used keywords such as (translated from Portuguese): "Hospital", "Health Center", "phone of the health clinic", "scheduling of medical
appointments." However, some pairs used sentences in natural language such as: "Can anyone tell me how to make a reservation for medical consultation at PUCC5?" and also "What is the phone of the SUS6 for appointments?”. Observers reported that after trying natural language, users started to use terms and keywords, and sometimes they employed a combination of more than one word. During the final discussion, after the execution of the scenarios, users explained that they had learned that a complete phrase usually “does not work”, and frequently only one "right word", as said by a user, is sometimes enough to return search results. These practical results from the workshop show that users’ colloquial language should be considered during the development of more appropriate search engines. Moreover, people in a social network can create their vocabulary, sharing meanings in the community. The results showed us that it is necessary to construct computationally tractable models from the semantic point of view that come out from the network itself. Semantics here is understood as the interpretation of signs [20] by individuals and their association with real world elements. This interpretation is socially contextualized; i.e. individuals and communities may have different interpretations for the same sign and a sign may connote different meanings depending on the context applied.
4 Toward an Adequate Search Mechanism for ISN

The general difficulties faced by users in getting information on the Web can be explained mainly by: (1) the overload of information presented in the system; and (2) problems related to the contextualization of the meaning of the terms used. As an attempt to solve this problem, we have investigated an approach that can result in better and more appropriate search engines for ISN. In a social network, the "emergence" of meaning is an ongoing process in which meanings and interpretations are constructed, used and shared through the system based on the interactions and expressions of users. These interpretations expressed by users in the system could be computationally represented. Several improvements could be achieved, such as semantic models that represent the social network context more faithfully, resulting in more adequate search engines. In order to accomplish that, we have proposed a search engine informed by a Semiotic approach [32]. We have developed a semi-automatic method to model the semantics of the ISN using the Semantic Analysis Method (SAM); the outcome of this process is intended to be used in the search engine.

4.1 Semantic Analysis Method

This section presents a brief overview of the main concepts of the Semantic Analysis Method (SAM) as a theoretical-methodological background for this paper. SAM assists users or problem owners in eliciting and representing meanings in a formal and precise model. The meanings are specified in an Ontology Chart (OC) that represents
5 PUCC is a hospital in the city of Campinas, run by the Pontifical Catholic University of Campinas.
6 SUS is the Unified Health System in Brazil.
an agent-in-action ontology. In SAM, "the world" is socially constructed by the actions of agents, on the basis of what is offered by the physical world itself [13]. It is worth mentioning that SAM's concepts of ontology and agent differ from those in use by the Semantic Web community. An OC represents a domain under study, which can be described by the concepts, the ontological dependencies between the concepts, and the norms detailing the constraints at both the universal and the instance levels. Moreover, the OC describes a view of the responsible agents in the focal domain and their patterns of behaviour, named affordances [13]. Some basic concepts of SAM adopted in this paper are based on Liu [13] and are briefly presented as follows. "Affordance", a concept introduced by Gibson [30], expresses invariant repertoires of behaviour of an organism made available by some combined structure of the organism and its environment; in SAM, Gibson's concept was extended by Stamper [31] to include invariants of behaviour in the social world. An "Agent" can be defined as something that has responsible behaviour; an agent can be an individual person, a cultural group, a language community, a society, etc. (an employee, a department, an organization, etc.). An "Ontological dependency" is formed when an affordance is possible only if certain other affordances are available: the affordance "A" is ontologically dependent on the affordance "B" when "A" is only possible while "B" is also possible. The OC represents these concepts graphically.
4.2 Modeling Ontologies for ISN
In the 'VilanaRede' system, users express themselves through their profiles and through the announcements of products, services and ideas they post; they communicate mainly through comments on the announcements and chats between members of the network. These data are stored in the ISN system database, and from them we represent the semantics used in the social network in a structure called 'Semiotic Web ontology' [14]. According to Reis et al. [14], this structure is a semantic model (a computationally tractable ontology) in which SAM is used in conjunction with other technologies from the Semantic Web field to describe computationally tractable ontologies using the Web Ontology Language (OWL)7. In this paper such a semantic model is constructed through a semi-automatic process from the vocabularies shared in the social network. The idea is to incorporate the concepts of particular Agents (roles) and Affordances (patterns of behaviour) arising from SAM into an expanded and more representative Semantic Web ontology. It is worth mentioning that the goal is not to create a "perfect ontology" from a theoretical point of view, but to produce practical and immediate results for search in ISN. Therefore some properties of the OC may not be fully transcribed to OWL, while other aspects, such as the agent-affordance relationship, are emphasized.
This approach is justified from a Semiotic perspective, since signs are socially constructed. Thus, a computational model that represents the semantics of a SNS should contain the agents that interpret the socially shared concepts. With this
7 http://www.w3.org/TR/owl-features
approach we take into account both the concerns of Semantic Web ontologies and the possible representations arising from an Ontology in the Semiotic perspective. Besides agents and affordances, we have observed that Semantic Web ontologies do not incorporate (at least explicitly) ontological dependency relations, an existential relation in the model. The approach is also justified by the representational limitations reported in the literature (e.g. [26], [14]) regarding the use of ontologies in computing and their expressivity. Within the conceptual model of a 'Semiotic Web ontology', agents have behaviour(s) (affordances) related to a concept. For instance, a seamstress, who is an agent, can sew a "manga" (sleeve in English); sewing is a pattern of behaviour of a seamstress, in other words an affordance. "Manga" is a concept that can have several different meanings in Portuguese (it can mean sleeve, fruit, color, etc.), but in this context, due to the affordance and the agent's ontological dependence, the meaning of "manga" is more likely associated with a shirt sleeve than with the "manga" fruit (mango in English), which can also be represented in the model, as suggested by Reis et al. [14].
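This agent-dependent disambiguation can be sketched in code. The following minimal Python fragment is our own illustration and not part of the authors' implementation; all class and instance names are hypothetical. It encodes agents, their affordances and the concepts they act on, and shows how the agent linked to a term suggests its most plausible meaning.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    label: str    # the sign as used in the network, e.g. "manga"
    meaning: str  # one contextual interpretation, e.g. "sleeve" or "mango fruit"

@dataclass
class Agent:
    name: str                                        # e.g. "seamstress", "grocer"
    affordances: dict = field(default_factory=dict)  # behaviour -> list of concepts

# The seamstress/grocer scenario (Figure 1): the same sign "manga" means
# different things depending on the agent.
seamstress = Agent("seamstress", {"sew": [Concept("manga", "sleeve")]})
grocer = Agent("grocer", {"sell": [Concept("manga", "mango fruit"),
                                   Concept("manga rosa", "variety of mango")]})

def likely_meanings(term, agent):
    # Return the meanings of `term` reachable through the agent's affordances.
    return [c.meaning
            for concepts in agent.affordances.values()
            for c in concepts
            if c.label == term]

print(likely_meanings("manga", seamstress))  # ['sleeve']
print(likely_meanings("manga", grocer))      # ['mango fruit']

In the full approach these elements are not hand-coded but derived from the network data and serialized to OWL, as discussed below.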
Fig. 1. Modeling meanings according to the 'Semiotic Web ontology'
Following the approach described by Reis et al. [14] for an SNS context, imagine a scenario as illustrated in Figure 1. The grocer and the seamstress are agents that have affordances connected to specific concepts. The model can also include specific 'is-a' relationships; e.g. 'manga rosa' is a specific kind of mango. This also shows that concepts can be related to several agents and affordances as well as to other concepts, constituting relations and representations that yield more complete ontologies than conventional ontologies described for a domain. For example, 'manga' can also mean a color for a painter who is searching for something in the network, and 'manga' can have any synonym that makes sense for an agent 'Y' modeled from the data of the social network. In order to develop this representation for the inclusive search mechanism, we propose an assisted (semi-automatic) method with several distinct steps; the method is illustrated in Figure 2 (a). It includes: (1) the extraction of terms and possible semantic relations from the database of the ISN system; (2) the creation of an OC (from SAM); and (3) the creation of the final OWL ontology.
Fig. 2. (a) An illustration of the Semi-Automatic Method; (b) An illustration of the proposed inclusive search mechanism
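As a rough illustration of step (3), the sketch below shows how OC elements could be mapped onto an OWL ontology. This is an assumption on our part, written with the rdflib library; the actual heuristics and transformation rules extend the SONAR CASE tool [24] and the heuristics of Bonacin et al. [3], and all names and the namespace are hypothetical.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/semiotic-web#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Base vocabulary of the 'Semiotic Web ontology'.
for cls in (EX.Agent, EX.Affordance, EX.Concept):
    g.add((cls, RDF.type, OWL.Class))
for prop in (EX.hasAffordance,            # agent -> affordance
             EX.actsOn,                   # affordance -> concept
             EX.ontologicallyDependsOn):  # affordance A possible only if B
    g.add((prop, RDF.type, OWL.ObjectProperty))

# Illustrative transformation: each OC agent becomes a subclass of ex:Agent,
# each affordance an individual linking that agent to the concept it acts on.
oc_elements = [("Seamstress", "sew", "Manga_sleeve"),
               ("Grocer", "sell", "Manga_fruit")]
for agent, affordance, concept in oc_elements:
    g.add((EX[agent], RDFS.subClassOf, EX.Agent))
    g.add((EX[affordance], RDF.type, EX.Affordance))
    g.add((EX[concept], RDF.type, EX.Concept))
    g.add((EX[agent], EX.hasAffordance, EX[affordance]))
    g.add((EX[affordance], EX.actsOn, EX[concept]))

print(g.serialize(format="turtle"))

A real transformation would also carry over norms and ontological dependencies where OWL allows it, which, as noted above, is only partially possible.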
In this assisted method, the first step deals with the data from the system database. It takes into account the social relations in the network and must provide the necessary well-defined data (a list of concepts, agents, affordances, etc.) to build the semantic model. The next step involves the building of an OC (from SAM) by an ontology engineer. This intermediate ontology diagram is important for identifying the possible agents in the ISN and their patterns of behaviour. In the third step, a set of specific heuristics and transformation rules must be applied to the OC to create an initial, computationally tractable OWL ontology, extending the computational development of the SONAR CASE tool [24]. Bonacin et al. [3] proposed heuristics to transform OCs into system design diagrams; however, those heuristics must be adapted to our purpose. During the modeling of the meanings the ontology engineer can also be supported by existing tools for Ontology Learning and Engineering.
4.3 Extracting Information from the ISN to Build the 'Semiotic Web Ontology'
Regarding search in ISN, the semantic mechanism in a knowledge domain representation must consider the activity of the social network, including the local and everyday language people use in the network. This requires tools and techniques for extraction and text mining over the system database in order to discover and model the semantics shared by people in the social network. Thereby, the first step of the proposed method is to extract relevant information from the database of the ISN. The objective of this step is to support the ontology modeling with data created and shared by ISN users. We have conducted a study [21] of tools and techniques for the identification of concepts and semantic relationships that emerge from the ISN data. The objective was to create a design
strategy to assist the modeling of ontologies that represent the semantics shared in the social network, based on the idea of Semiotic Web ontologies. To accomplish that, we investigated tools and techniques that could aid in this step; we wanted to know which tools could support the discovery of relevant concepts and of semantic relations between concepts. The main challenge is the heterogeneity of the content available in the ISN. The "VilanaRede" content (i.e. its announcements) was used to conduct this study, in which we investigated several text mining tools. Among the tools described in the literature, we chose the keyphrase extractor KEA [17], tools for term extraction in the Portuguese language such as ExATOlp [23], and clustering algorithms such as CLUTO8. Results showing positive and negative cases of the outcomes were elicited. Moreover, procedures were created to verify the intersection of the results produced by each tool, and tables were compiled to show the extracted terms with their relative and absolute frequencies. The results indicate that the most adequate approach to analyzing the network information is to examine both the data captured from individual announcements and data independent of any announcement. The keyphrases extracted by KEA from each announcement inform about the subjects discussed in the network, while the approach used by ExATOlp provides a general view of all considered announcements. Post-processing the terms organized by semantic categories proved useful, as did analyzing the terms extracted by all tools, since these can indicate the concepts that are mandatory during the ontology modeling. Furthermore, the results point out that not only the tools and techniques themselves are important, but also how they are organized and used by ontology engineers to make decisions based on different information and perspectives. Finally, the results obtained by applying the tools and techniques to real data from "VilanaRede" proved promising in supporting the building of ontologies that represent the meanings used in the ISN.
4.4 Outlining an Inclusive Search Engine
The main objective of the proposed method is to create valuable information to inform the inclusive search mechanism. When a user is logged into the ISN and enters one or more search terms in the search engine, the system starts a process of finding relationships between these keywords and the available 'Semiotic Web ontology'. For example, suppose the user types the term "small snack". If there is nothing in the system with this expression, the system may, from the analogies and semantic relations available, return other types of food that are semantically close. Likewise, if the user enters the word 'food', all announcements related to food should be returned. There are several architectural proposals for semantic search solutions, such as those described by Mangold [15]; Reis et al. [22] give an overview of semantic search solutions applied to SNS. The decisions and architectural strategies for semantic search in this implementation are made in accordance with the requirements of an ISN and follow the recommendations proposed by Reis et al. [22], [32]. The main difference in the search solution of this proposal is to take into
8 http://glaros.dtc.umn.edu/gkhome/views/cluto/
account information regarding both the user who is making the search (from his/her profile) and the user who produced the content; Figure 2 (b) illustrates this idea. In this strategy, the user profile is important because it makes it possible to discover a context for the search terms. Mainly from the user profile, we aim to identify adequate agent(s) represented in the ontology and perform a user-agent modeling as described in [32]. Thus we delineate (or even limit) the search space, establishing a relation between the user and the generated semantic model. For instance, imagine that a biologist is logged into the system (we could determine that a user is a biologist from his/her profile) and searches for the keyword 'crane'. If there is a relation between the 'biologist' agent and the term 'crane' in the ontology, the results (announcements) returned first (ranked first) should most likely be related to the concept of crane as a 'bird', not to the other meaning(s) of this word. For a civil engineer who searches for the same word, however, the results of most interest probably refer to the construction equipment and not to the bird. Semantic Web Rule Language (SWRL)9 rules, as illustrated in [32], can be used to relate agents to certain meanings, enabling such situations to be handled. This does not mean that other results are not required or may not be returned in response to the search (the engineer may want to know about this kind of bird), but the announcements from the social network that relate 'crane' to construction equipment must have greater relevance in the ranking of results. The agent-affordance relation is also used to indicate the probable meaning of the words in an announcement. For example, based on the user who entered a particular announcement mentioning the word 'crane', we could verify whether 'crane' refers to a 'bird' or to 'construction equipment': if the user who submitted the announcement is a 'biologist' agent, 'crane' most likely means 'bird' in this announcement; similarly, if the advertiser is a civil engineer, 'crane' probably means 'construction equipment'. According to Reis et al. [32], we could also have relationships between agents, verify how semantically close one agent is to another, and indicate the probable meaning based on this aspect.
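The re-ranking idea behind the 'crane' example can be sketched as follows. This is our illustration, not the system's actual implementation; it assumes that the ontology can be queried for the meaning a term has for a given agent role, and all function names and data are hypothetical.

def meaning_for(term, agent_role, agent_meanings):
    # Meaning that the ontology associates with `term` for this agent role, if any.
    return agent_meanings.get((agent_role, term))

def rank(results, query_term, searcher_role, agent_meanings):
    # Announcements whose inferred meaning of `query_term` (derived from the
    # advertiser's agent role) matches the searcher's likely meaning come first;
    # the remaining announcements are still returned, just ranked lower.
    preferred = meaning_for(query_term, searcher_role, agent_meanings)
    def score(announcement):
        return 1 if preferred and announcement["term_meaning"] == preferred else 0
    return sorted(results, key=score, reverse=True)

# Illustrative data for the 'crane' example.
agent_meanings = {("biologist", "crane"): "bird",
                  ("civil engineer", "crane"): "construction equipment"}
announcements = [
    {"text": "Crane rental for building sites", "term_meaning": "construction equipment"},
    {"text": "Guided birdwatching: cranes at the wetland", "term_meaning": "bird"},
]
print(rank(announcements, "crane", "biologist", agent_meanings)[0]["text"])
# -> Guided birdwatching: cranes at the wetland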
5 Discussion
In the proposed approach, supporting better results from the search engine demands a careful modeling procedure. Different signs with the same meaning (synonyms), coming from different virtual communities of the social network, can be discovered and represented in the ontology; such signs and meanings can be purely regional and thus may not be present in the formal dictionaries or thesauri generally used by conventional search mechanisms. Furthermore, because they are cultural expressions emerging from the social network, the ontology would potentially provide smarter and richer search results than ontologies based on domains or formal definitions. The approach provides means to discover and distinguish the meanings used in the SNS, representing them through the agents in the 'Semiotic Web ontology'. Differently from conventional computing ontologies and other approaches to semantic
9 http://www.w3.org/Submission/SWRL/
representation, our proposal involves adding the concepts of agents and affordances to the search. This addition can contribute to richer search results by treating the polysemy problem in the unrestricted, uncontrolled language contexts of an ISN. Moreover, the inclusion of agents and other concepts from SAM in the Web ontology can help improve the search mechanism, generating results more adequate to an ISN context. Considering users with limited literacy and with difficulty in dealing with technological artifacts (the digitally illiterate), it is important to let them perform the search using their everyday language, since that is usually what makes sense to them, and to provide search results that are more natural and adequate to their lives. Thus, the search engine should reflect the semantic reality of the social network users. A search engine with such characteristics could create opportunities for inclusion, since the method for building the semantic model, as well as the strategies for using the ontology, suggests that the returned search results will tend to make more sense to the user who searches. Some recent studies in the literature address search solutions for SNS (e.g. [28], [9], [8]). These works focus on searching only the users' profiles in the network; the work of Choudhari et al. [6] makes progress in the development of semantic search in SNS, but has the same limitation and does not use ontologies to perform the search. Regarding semantic search not strictly related to the SNS context, several proposals and solutions are covered by the surveys of Mangold [15] and Wei et al. [29]. Ontology-based semantic search solutions (e.g. [4], [7]) as well as ontology-based query expansion (e.g. [10], [2]) have enhanced the techniques available for semantic search applications. In order to implement a solution and improve the search engine of an ISN, future research includes a detailed examination of further ontology-based query expansion approaches that could use the 'Semiotic Web ontology' method. Other approaches (e.g. [27]) have tried to take advantage of the 'faceted browsing paradigm', employing a solution that integrates semantic search and visual navigation in graphs using the idea of social networks. Previous work by Reis et al. [22] discussed the challenges related to search in ISN, and the authors proposed recommendations for a search engine better suited to this kind of system. Furthermore, the proposal of Reis et al. [32] for a search informed by a Semiotic approach in SNS is the main work on which we have based the prospect of an inclusive solution. To the best of our knowledge, no investigation in the literature so far has specifically focused on semantic search in SNS considering aspects of accessibility and inclusion. We argue that the development of a search engine more suitable for an ISN should address these new challenges and must be informed by a Semiotic approach. Also, the approach developed in this paper can methodologically and technologically improve and expand Semantic Web techniques, such as Web ontologies, illustrating immediate and practical results for better ISN search engines. This approach differs from others in that the outlined search solution tries to derive both the meaning of the search terms and the meaning of the terms in the ISN content produced by users, based on the agents and affordances.
Future experiments with real users should be conducted to verify whether our approach brings the expected benefits, revealing search results that are more suitable to the context of social and digital inclusion, and also to SNS in general.
6 Conclusions and Further Work
Social network systems may provide inclusive access to digital information, creating situations where users' diversity is respected and access difficulties are minimized. This is the purpose of Inclusive Social Network Services (ISN). In this context it is important to provide information retrieval in a way that is more natural from the user's point of view, with results that make sense to people. Therefore, more appropriate search mechanisms should take into account the meanings created, shared and used by people in the social network. This paper presented new perspectives for search in social network systems that consider the inclusive social context. It showed the outcomes of an analysis of how to improve a search mechanism considering aspects related to digital and social inclusion. We verified with real users that semantic aspects can make a difference in people's ability to reach information, and that current syntactic search engines are not enough for an ISN context. Inspired by the practical context of ISN users, and based on the 'Semiotic Web ontology' approach, this paper outlined an inclusive search mechanism for SNS. As further work, the goal of this research is to improve, in the implementation sense, the ideas drawn up for the search mechanism described in this paper. To this end, we aim to develop the semi-automatic tool for building the 'Semiotic Web ontology', as an extension of the SONAR tool, including the heuristics and transformation rules to build the OWL ontology aided by the OC. Furthermore, we intend to develop a pilot implementation of this search engine based on the 'Semiotic Web ontology' in the 'VilanaRede' system, using and improving the strategies mentioned in this paper. The work also involves new practical experiments in a case study with real users, utilizing this novel search mechanism in order to evaluate and validate the solution with empirical results.
Acknowledgements. This work was funded by the Microsoft Research - FAPESP Institute for IT Research (proc. n. 2007/54564-1) and by CNPq/CTI (680.041/2006-0). The authors also thank colleagues from CTI, IC – Unicamp, NIED, InterHAD and Casa Brasil for insightful discussions.
References 1. Baranauskas, M.C.C.: e-Cidadania: Systems and Methods for the Constitution of a Culture mediated by Information and Communication Technology. Proposal for the Microsoft Research-FAPESP Institute (2007) 2. Bhogal, J., Macfarlane, A., Smith, P.: A review of ontology based query expansion. Information Processing & Management 43(4), 866–886 (2007) 3. Bonacin, R., Baranauskas, M.C.C., Liu, K.: From Ontology Charts to Class Diagrams: semantic analysis aiding systems design. In: Proc. of the 6th International Conference on Enterprise Information Systems, Porto, Portugal (2004) 4. Bonino, D., Corno, F., Farinetti, L., Bosca, A.: Ontology Driven Semantic Search. Transaction on Information Science and Application 1(6), 1597–1605 (2004) 5. Boyd, D.M., Ellison, N.B.: Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication (13), 210–230 (2008)
6. Choudhari, A., Jain, M., Sinharoy, A., Zhang, M. (2008), Semantic Search in Social Networks, http://www.cc.gatech.edu/projects/disl/courses/8803/ 2008/project/project_deliverables/group22/proposal.pdf (accessed on January 2010) 7. Fang, W., Zhang, L., Wang, Y., Dong, S.: Toward a Semantic Search Engine based on Ontologies. In: Proc. of the 4th International Conference on Machine Learning and Cybernetics, Guangzhou, China, vol. 3, pp. 1913–1918 (2005) 8. Gürsel, A., Sen, S.: Improving search in social networks by agent based mining. In: Proc. of the 21st International Joint Conference on Artificial intelligence Table of Contents, Pasadena, California, USA, pp. 2034–2039 (2009) 9. Haynes, J., Perisic, I.: Mapping Search Relevance to Social Networks. In: Proc. of 3rd Workshop on Social Network Mining and Analysis. International Conference on Knowledge Discovery and Data Mining, Paris, France (2009) 10. Hoang, H.H., Tjoa, M.: The State of the Art of Ontology-based Query Systems: A Comparison of Existing Approaches. In: Proc. of IEEE International Conference on Computing and Informatics (ICOCI), Kuala Lumpur, Malaysia (2006) 11. IBGE. Acesso à Internet e posse de telefone móvel celular para uso pessoal (2008), http://www.ibge.gov.br/home/estatistica/populacao/ acessoainternet2008 (retreived from January 2010) 12. IPM. Indicador Nacional de Analfabetismo Funcional, http://www.smec.salvador.ba.gov.br/site/documentos/ espaco-virtual/espaco-dados-estatisticos/ indicador%20de%20analfabetismo%20funcional%202007.pdf (retrieved from January 2010 ) 13. Liu, K.: Semiotics in information systems engineering. Cambridge University Press, Cambridge (2000) 14. Reis, J.C., Bonacin, R., Baranauskas, M.C.C.: A Semiotic-based Approach to the design of Web Ontologies. In: Proc. of 12th International Conference on Informatics and Semiotics in Organisations – ICISO 2010, Reading, UK, pp. 60–67 (2010) 15. Mangold, C.: A survey and classification of semantic search approaches. Int. J. Metadata, Semantics and Ontology 2(1), 23–34 (2007) 16. MEC (2007), http://portal.mec.gov.br (retrieved November 2009) 17. Medelyan, O., Witten, I.H.: Domain-independent automatic keyphrase indexing with small training sets. Journal of the American Society for Information Science and Technology 59(7), 1026–1040 (2008) 18. Neris, V.P.A., Almeida, L.D., Miranda, L.C., Hayashi, E., Baranauskas, M.C.C.: Towards a Socially-constructed Meaning for Inclusive Social Network Systems. In: Proc. of International Conference on Informatics and Semiotics in Organisations - ICISO, Beijing, China, pp. 247–254 (2009) 19. Nielsen: Global Faces and Networked Places. The Nielsen Company USA (2009), http://blog.nielsen.com/nielsenwire/ wp-content/uploads/2009/03/nielsen_globalfaces_mar09.pdf (retrieved from July 2009 ) 20. Peirce, C.S.: Collected Papers. Harvard University Press, Cambridge (1931-1958) 21. Reis, J.C., Bonacin, R., Baranauskas, M.C.C.: Identificando Semântica em Redes Sociais Inclusivas Online: Um estudo sobre ferramentas e técnicas. IC-10-28 (2010) (in portuguese), http://www.ic.unicamp.br/~reltech
22. Reis, J.C., Baranauskas, M.C.C., Bonacin, R.: Busca em Sistemas Sócio-Culturais Online: Desafios e Recomendações. In Seminário Integrado de Software e Hardware (SEMISH). In: Proc of the XXX Congress of the Computer Brazilian Society, vol. 1, pp. 380–394. SBC, Belo Horizonte (2010) (in Portuguese) 23. Lopes, L., Fernandes, P., Vieira, R., Fedrizzi, G.: ExATOlp: An Automatic Tool for Term Extraction from Portuguese Language Corpora. In: Proc. of the 4th Language and Technology Conference, pp. 427–431 (2009) 24. Santos, T.M., Bonacin, R., Baranauskas, M.C.C., Rodrigues, M.A.: A Model Driven Architecture Tool Based on Semantic Analysis Method. In: Proc. of 10th International Conference on Enterprise Information Systems, Barcelona, Spain, vol. 2, pp. 305–310 (2008) 25. Stamper, R.K.: Organisational Semiotics: Informatics without the Computer? In: Liu, K., Clarke, R., Andersen, P.B., Stamper, R.K. (eds.) Information, Organisation and Technology: Studies in Organisational Semiotics. Kluwer Academic Publishers, Dordrecht (2001) 26. Tanasescu, V., Streibel, O.: Extreme Tagging: Emergent Semantics through the Tagging of Tags. In: International Workshop on Emergent Semantics and Ontology Evolution (ESOE 2007), Busan, Korea (2007) 27. Tvarožek, M., Barla, M., Frivolt, G., Tomša, M., Mária, B.: Improving semantic search via integrated personalized faceted and visual graph navigation. In: Geffert, V., Karhumäki, J., Bertoni, A., Preneel, B., Návrat, P., Bieliková, M. (eds.) SOFSEM 2008. LNCS, vol. 4910, pp. 778–789. Springer, Heidelberg (2008) 28. Vieira, M.V., Fonseca, B.M., Damazio, R., Golgher, P.B., Reis, D.C., Neto, B.R.: Efficient Search Ranking in Social Networks. In: Proc. of 16th ACM Conference on information and knowledge management, Lisbon, Portugal, pp. 563–572 (2007) 29. Wei, W., Barnaghi, P.M., Bargiela, A.: The Anatomy and Design of a Semantic Search Engine. Tech. Rep., School of Computer Science, University of Nottingham Malaysia Campus (2007) 30. Gibson, J.J.: The Theory of Affordances. In: Shaw, R., Bransford, J. (eds.) Perceiving, Acting, and Knowing (1977) 31. Stamper, R.K.: Social Norms in requirements analysis - an outline of MEASUR. In: Jirotka, M., Goguen, J., Bickerton, M. (eds.) Requirements Engineering, Technical and Social Aspects, New York (1993) 32. Reis, J.C., Bonacin, R., Baranauskas, M.C.C.: Search Informed by a Semiotic Approach in Social Network Services. In: Proc of the IEEE Computer Society Press. Workshop Web2Touch - living experience Through Web, Tozeur, pp. 321–326 (2010)
Towards Authentication via Selected Extraction from Electronic Personal Histories Ann Nosseir1,2 and Sotirios Terzis3 1
Institute of National Planning, Salah Salem Street, Nasr City, Egypt 2 British University in Egypt, El Shorouk City, Egypt 3 Department of Computer and Information Sciences, University of Strathclyde 26 Richmond Street, Glasgow, U.K. [email protected], [email protected]
Abstract. Authentication via selected extraction from electronic personal histories is a novel question-based authentication technique. This paper first presents a study using academic personal web site data that investigated the effect of using image-based authentication questions. By assessing the impact on both genuine users and attackers, the study concluded that from an authentication point of view (a) an image-based representation of questions is beneficial; (b) a small increase in the number of distracters/options in closed questions is positive; and (c) the ability of attackers close to the genuine users to answer the genuine users' questions correctly with high confidence is limited. Second, the paper presents the development of a web-based prototype for the automated generation of image-based authentication questions. The prototype makes clear that although it is possible to largely automate the generation of authentication questions, this requires significant engineering effort and further research. These results are encouraging for the feasibility of the technique. Keywords: Security usability, Usable authentication, Question-based authentication, User study.
presentation step, in which users are challenged by (some of) their questions and are required to provide the registered answers for successful authentication [9]. However, the effectiveness of current question-based authentication is limited. The number of personal facts used is kept quite small and fairly generic, and the answer registration phase is rarely repeated [9]. The main idea of our research is to improve the effectiveness of question-based authentication by replacing answer registration with an automated process that constructs questions and answers via selected extraction from the electronic personal histories of users. Our approach is motivated by two main observations: first, detailed electronic records of users' personal histories are already available, and second, the use of personal history information in question-based authentication is appealing from both a security and a usability point of view. In today's world, users constantly leave behind trails of digital footprints [10]. These trails consist of data captured by information and communication technologies when users interact with them. For example, trails of shopping transactions are captured by the information systems of credit card companies, while mobile phone networks capture trails of visited areas, etc. Cheap storage space makes it easy to keep lasting records of these trails [10]. As the use of information and communication technologies becomes pervasive, people's digital footprints get closer to an electronic record of their personal history. This personal history information grows continuously over time. This allows an increasing set of personalized questions to be generated for authentication purposes. So, authentication can also be dynamic, using a different set of questions and answers at each authentication. All of this makes it more difficult for an attack to succeed. Moreover, as electronic personal histories comprise trails from various sources, it is also more difficult for attackers to compromise the mechanism with a successful attack against an individual data source, or by impersonating the user in a particular context. More importantly, these characteristics can be provided with minimal impact on usability, as users are likely to know their personal history quite well. This paper first presents a study that was carried out using academic personal web site data as a source of personal history information. The study is part of a series of studies that aim to establish the appropriateness of personal history information as a basis for question-based authentication. The study explores the extent to which participants could recall events from their personal academic history and successfully answer questions about them. At the same time, it explores to what extent their colleagues are able to answer the same questions. The study investigates the impact of an image-based formulation of questions on the ability of the participants to answer them correctly.
The main conclusions of the study are: (a) an image-based formulation of questions has a significant, positive impact on the ability of genuine users to answer the questions correctly and no significant impact on attackers; (b) a small increase in the number of distracters in closed questions has a significant negative impact on the ability of attackers to answer them correctly and no significant impact on genuine users; (c) despite the closeness of attackers to genuine users, their ability to answer the genuine users' questions correctly with high confidence is quite limited. The paper also presents the development of a web-based prototype to explore the automated generation of image-based authentication questions. The prototype generates 'images of places' and map questions using online resources and services. The prototype shows that although it is possible to largely automate the generation of questions, this requires significant engineering effort, while there are a number of areas where
further research is necessary to improve the authentication quality of generated questions. The rest of the paper starts with a review of related work, focusing on previous work on usable authentication mechanisms. This review motivates our overall approach and identifies some ideas for exploration. This is followed by an outline of our general research methodology that sets the context for the study. Then, the study details are presented, including procedures followed, results, and conclusions drawn. Building on the study conclusions, the development of a web-based prototype is presented for the automated generation of authentication questions. The paper concludes with a summary and directions for future research.
2 Related Work
A number of studies have shown that the overwhelming majority of users find strong passwords difficult to remember [1, 2]. These studies have also shown that memorability problems are exacerbated by infrequent use of passwords and when the passwords change frequently, i.e. when authentication is dynamic. Failed authentication attempts prevent users from accessing the services they need, and after a number of failed attempts most systems point them towards password recovery schemes. These schemes require checks to be carried out before the current password is recovered or a new one is set. All of this frustrates users. To avoid frustration, users often take actions that undermine security [2, 11]. They often choose passwords that are weak. Even when they choose strong passwords, users may write them down. They are also very reluctant to change strong passwords, as significant effort has been invested in memorizing them. They may also use the same password in a number of systems, increasing the negative impact of a successful attack. To accommodate users, service providers also take steps that undermine security, e.g. they do not enforce strong passwords that are changed regularly, and they tend to implement recovery mechanisms with few checks. It should be clear that for improved security it is essential to address the password memorability problems and improve overall password usability. A number of usable authentication schemes have been proposed that aim to address the usability problems of passwords. These schemes can be roughly classified into two categories: those that use questions about personal attributes (facts and opinions), which are easy to recall, and those that use images, which are easy to recognize and recall. The former category includes schemes that either use questions about personal attributes that are easy to recall (personal facts and opinions), or provide cues that facilitate recall. They include schemes like question-based authentication [9], cognitive passwords [11] (a type of question-based authentication that focuses on personal facts and opinions), pass-phrases [12] and associative words [13]. The latter category includes mechanisms that exploit the fact that the recognition process of the human brain is more powerful than memory recall, particularly in the case of images. They focus on image-based schemes where the password takes the form of either a single image or a series of images, or a selection of locations or points on specific images [14]. Examples of recognition-based schemes include random art [4], Passfaces [5], personal images [6], Awas-E [15], and VIP [3], which differ in the type and origin of images and in the way these are used, as explained below.
All these schemes involve a registration phase, where the "password" is set up, and an authentication phase, where the user is challenged to provide the "password". At registration, different approaches require different levels of user involvement. In some schemes the system selects the "password" on the user's behalf. In others, users select a password from a list of system-provided ones. Some even allow users complete freedom in choosing their "password". During authentication, some schemes challenge users to provide the complete "password", while others require only part of it. Others go further and challenge users with a series of challenges/questions. In some schemes the questions are open, where the user has to provide their "password", while in others they are closed, where users have to choose the password from a list of provided options. In general, it seems that the greater the user involvement in the "password" registration phase, the more memorable and applicable the "password" is, while closed questions remove any repeatability problems. Applicability refers to the extent to which the chosen questions apply to the user population, while repeatability refers to the extent to which the correct answer does not have multiple syntactic representations and its semantic value remains the same over time [9]. Improving usability is not by itself the end goal, as this improvement may be at the expense of security. It is interesting to note that most usable authentication schemes have been proposed without an analysis of their robustness against attacks. More specifically, a closed question only makes sense from a security point of view if measures are taken to mitigate brute-force attacks, e.g. a large number of distracters and multiple independent questions, but the impact of such measures on usability has not been fully assessed. Moreover, question-based schemes have been shown to be vulnerable to guessability attacks by close friends and family [8]. These problems can be worse if the questions used have not been carefully chosen to ensure that the correct answer cannot be easily uncovered or is not publicly available [9]. Similarly, for image-based schemes, studies have shown that users exhibit biases in image selection [16], while the use of personal images is fraught with even more difficulties and dangers [17]. Image-based schemes, due to their nature, are also more vulnerable to shoulder surfing and observation attacks. We believe that a question-based authentication scheme using electronic personal history information has the potential to strike the right balance between security and usability. As personal history information grows over time, it should become more and more difficult for others, even close family and friends, to know it fully. The increase in the number of potential questions should also make the scheme more robust to guessability attacks and less vulnerable to shoulder surfing, observation and brute-force attacks. By asking questions about events from the user's personal history, the scheme ensures maximum applicability. It also has a potentially strong personal link essential for memorability. In order to establish that these properties hold in practice, an in-depth empirical investigation of personal history information is necessary.
3 Methodology For personal history information to be deemed appropriate for authentication purposes one needs to establish that users can easily recall events from their personal history and can successfully answer questions about them. One also needs to show that others
cannot successfully answer the same questions. Of particular concern are those who share parts of a user's personal history. Our work carries out a number of studies, using different types of electronic personal history data to generate questions about particular events of the participants' personal histories. Each study combines a study of genuine user behavior, in which participants are asked to answer questions about their own personal history, with a study of attacker behavior, in which participants are asked to answer questions about the personal history of other participants. The focus is on establishing a statistically significant difference in the ability of genuine users and attackers to answer the same set of questions. Each study carries out experiments that explore whether certain parameters can improve genuine users' performance, deteriorate attackers' performance, or both. We have already carried out two studies. The first [18] used electronic personal calendar data. It showed that only questions about events that are recent, repetitive, pleasant, or strongly associated with particular locations produce a statistically significant difference in the ability of genuine users and attackers to answer them correctly. The second study used sensor data generated by an instrumented research laboratory [19]. The study built on the earlier results, focusing only on events that are recent and repetitive, and also showed a difference in the ability of genuine users and attackers to answer the questions correctly. An interesting aspect of this study was that personal history events had to be inferred from the underlying sensor data. The present study uses academic personal web site data, which are easily accessible and usually rich in information about the personal academic history of the person. This allows the generation of a variety of personal history events, covering teaching, research, study and even leisure activities. Within an academic department some of these activities are shared, so participants have some shared history, providing us with attackers who are close to the genuine users. Despite these advantages, we should make clear that personal web sites, academic or otherwise, are not an appropriate source of information for authentication purposes, because all their information is publicly accessible. They can be used to draw conclusions about the appropriateness of personal history information for authentication purposes only if, during the attack part of the study, participants do not have access to the web.
4 Academic Personal Website Study
4.1 Participants and Procedures
The study was conducted with twenty-four members of staff who agreed to take part: three women and twenty-one men, with an average age of forty-one. For all participants, we analyzed the information on their academic personal web sites in order to identify events from their history. Building on the results of the earlier studies, we focused on events that are recent, repetitive, pleasant or strongly associated with particular locations. Selected events were associated with teaching (e.g. taught lectures and tutorials), research (e.g. research group and project meetings, paper publications and conference attendance), studies (e.g. degrees awarded), and leisure activities (e.g. attendance or participation in sporting events).
The study consisted of a genuine-user part and an attacker part. Its main goals were, first, to see whether there is any difference in the ability of genuine users and attackers to correctly answer questions about the identified personal history events, and second, to see what the effect of using an image-based formulation of questions is. The latter goal was motivated by the fact that image-based schemes have been shown to offer certain advantages with respect to usability. With these goals in mind, from the identified events for each participant we constructed four text-based questions and two image-based ones. Unfortunately, for a lot of the questions it was not possible to construct reasonable image-based formulations. The image-based questions were of three types: (a) images of people, (b) images of places, and (c) maps. The images used were all taken from the web. The images of people showed co-authors of published papers, research project collaborators or the chairs of attended conferences (see Fig. 1). In all cases, the people were external to the department.
Fig. 1. Example of "image of people" question
Fig. 2. Example of "image of place" question
The images of places were pictures of distinctive buildings from university campuses and conference venues, or landmarks in the vicinity of conference venues (see Fig. 2). The maps were university campus maps or conference venue maps (see Fig. 3). All questions generated were closed, in order to avoid repeatability problems in providing the answers. The questions were either true/false or four-part multiple choice, as in some cases it was not possible to identify an adequate number of reasonable distracters. The large number of both true/false and four-part multiple-choice questions, for both text-based and image-based formulations, presented us with the opportunity to also explore the effect that the number of options has on the ability of genuine users and attackers to answer the questions.
Fig. 3. Example of map question
The study involved a series of interviews, one with each participant. To facilitate the attack part of the study and keep the overall effort required of participants reasonable, they were divided into three groups of eight. The groups were formed alphabetically according to participants' surnames. At the beginning of the interview each participant was presented with a questionnaire, and the aims of the study and the procedures followed were explained. The questionnaire consisted of eight parts. In the first part the participant played the role of a genuine user answering her own questions, while in the remaining seven parts she played the role of an attacker answering the questions of the other participants of her group. In each attack, the attacker was made aware of who the target of the attack was. In addition to answering the questions, participants were asked to indicate how confident they were about their answers. A five-point Likert scale was used, ranging from 'very unconfident' to 'extremely confident'. The aim was to gain deeper insight into the behavior of genuine users and attackers when answering questions. The investigator conducting the interview also noted the time it took to answer each question. The purpose of this was to see whether text-based and image-based questions require different amounts of effort to answer. Finally, one of the participants who initially agreed to participate failed to attend the interview.
4.2 Results
At first we focus on text-based questions. As we can see in Fig. 4, genuine users answered 74% of text-based questions correctly, while attackers answered 49% correctly. Comparing the genuine users' answers against the attackers', we found that the difference is statistically significant (Chi-square = 19.439, df = 1, p < 0.001, two-tailed). These results are comparable to our earlier study [18], with 78% and 47% respectively. We take this as an indication that there is no fundamental difference between the two types of data as a source of personal information. These results, although encouraging for genuine users, are disappointing with respect to attackers. A closer look reveals that, taking into account the mix of true/false and four-part multiple-choice text-based questions, we would expect attackers to answer approximately 40% of the questions correctly (derived as a weighted sum of the true/false chance of 50% and the four-part multiple-choice chance of 25%). We conjecture that the difference between the expected and the observed percentages is most likely due to the shared history between participants. We come back to this issue later on when we examine the participants' confidence in their answers.
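The comparison reported above corresponds to a standard chi-square test on a 2x2 contingency table of correct versus wrong answers for genuine users and attackers, which can be reproduced with SciPy as sketched below. The counts are illustrative placeholders chosen only to roughly match the reported percentages, since the paper gives the percentages and resulting statistics but not the raw counts; the question-mix weight in the second part is chosen so that the example reproduces the 40% expected guessing rate quoted above.

from scipy.stats import chi2_contingency

# Rows: genuine users, attackers; columns: correct, wrong (illustrative counts only).
table = [[68, 24],
         [316, 328]]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"Chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")

# Expected attacker accuracy under blind guessing: a weighted sum of the
# true/false chance (50%) and the four-part multiple-choice chance (25%).
share_true_false = 0.6  # assumed question mix; 0.6*0.5 + 0.4*0.25 = 0.40
expected_guess_rate = share_true_false * 0.5 + (1 - share_true_false) * 0.25
print(f"Expected guessing accuracy: {expected_guess_rate:.0%}")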
Fig. 4. Genuine user and attacker performance in text and image based questions
Next we look into image-based questions. As we can see in Fig. 4, genuine users answered 93% of image-based questions correctly and attackers 48%. Comparing the genuine users' answers against the attackers', we found that the difference is statistically significant (Chi-square = 39.033, df = 1, p < 0.001, two-tailed). From these results there appears to be an improvement in the genuine users' ability to answer the questions correctly. Statistical tests confirm that this improvement is significant (Chi-square = 7.793, df = 1, p < 0.01, two-tailed). The attackers' ability to answer questions correctly at first appears almost unchanged. A closer look reveals that, taking into account the mix of true/false and four-part multiple-choice image-based questions, we would expect attackers to answer approximately 34% of the questions correctly. As a result, it may be the case that attackers' performance improved; statistical tests show that there is in fact no significant difference. From these results, we conclude that using an image-based formulation of questions is beneficial for authentication purposes, as it significantly improves genuine users' performance without any significant impact on attackers. Next we look at the different types of image-based questions, i.e. images of people, images of places and maps. As we can see in Fig. 5, the performance of genuine users is quite similar across all types of image-based questions, with 92%, 94% and 92% correct answers for images of people, images of places and maps respectively. The performance of attackers shows some noticeable variation: although it is quite similar for images of places and maps, with 53% and 54% correct answers respectively, for images of people it drops to 40%. In fact, it is their performance on images of people that drops their overall performance to 48%. The reasons for this variation in the performance of attackers are not clear, so this issue needs further investigation in the future. We now focus on the confidence with which participants answered their questions. In our analysis we considered the top two levels of the Likert scale as high confidence, and the lower three as low confidence. The first observation from Fig. 6 is that genuine users answer many more questions correctly with high confidence than attackers. We believe this is because they often really know the correct answer. The second observation is that attackers also appear to give the correct answer with high confidence in 15% of the text-based questions and 14% of the image-based ones.
Fig. 5. Genuine user and attacker behavior per type of image-based question
Fig. 6. Genuine user and attacker confidence in their answers for text and image-based questions
Surprisingly, these percentages are quite low, despite the close academic relationships between the participants. These results also largely explain the relatively high overall percentage of correct answers given by attackers. They confirm our supposition that a person's personal history is difficult for others, even those close to her, to know fully. The third observation is that there are a number of cases where both genuine users and attackers give wrong answers with high confidence. Based on informal discussions with the participants, we attribute this to confusion with some of the questions. It is interesting to note that there appears to be a lot less confusion with image-based questions than with text-based ones. Another interesting point is that when participants have low confidence in their answers, genuine users tend to give correct answers more often than not, 52% and 79% for text-based and image-based questions respectively.
Attackers, in contrast, tend to give wrong answers more often than not, 61% and 60% for text-based and image-based questions respectively. The reasons behind this are not clear and require further investigation. We then turn our attention to the time taken to answer the questions. As we can see in Table 1, attackers take on average less time than genuine users to answer their questions; this difference is not statistically significant. Another interesting observation is that on average both genuine users and attackers take longer to answer image-based questions; again, the difference is not statistically significant. We also analyzed the time taken to answer each type of image-based question. As we can see in Fig. 7, map questions take on average noticeably longer to answer for both genuine users and attackers. This difference is not statistically significant. It is also interesting to note that map questions also have the greatest standard deviation.

Table 1. Time taken for genuine users and attackers to answer questions

Time                      Mean      Std Dev
Genuine users – text      7.5287    9.50990
Genuine users – image    11.5926   15.94273
Attackers – text          5.2278    5.62078
Attackers – image         7.8517    8.23032
Fig. 7. Time taken to answer different types of image-based questions
The final issue we examine is how the number of options affects the ability of participants to answer questions correctly. As we can see in Table 2, for genuine users there is a drop in the percentage of correct answers for four-part multiple-choice compared to true/false questions, for both text-based and image-based formulations. Statistical tests show that these differences are not significant. The situation is different for attackers. As we can see in Table 2, in this case there is a noticeable drop in the percentage of correct answers as we move from true/false to four-part multiple-choice questions, for both text-based and image-based formulations. Statistical tests confirm that in both cases this difference is significant, with (Chi-square = 13.909,
df = 1, p < 0.001, two-tailed) for the text-based and (Chi-square = 9.500, df = 1, p < 0.005, two-tailed) for the image-based formulation. From these results we conclude that increasing the number of options is beneficial for authentication purposes, as it significantly reduces attackers' performance without any significant impact on genuine users. Table 2. Genuine user and attacker performance in true/false and multiple-choice questions
The first conclusion is that an image-based formulation of questions is certainly beneficial. It provides a significant improvement in genuine user performance. In fact, it brings genuine user performance within the range of being definitely appropriate for authentication purposes, with more than 90% correct answers. It also has some impact in reducing user confusion. More importantly, all these advantages are realized without any improvement in attacker performance. The second conclusion is that a small increase in the number of distracters in closed questions is also beneficial. As would be expected, it makes things more difficult for attackers by reducing their chances of guessing the correct answer. More importantly, it does not have any significant impact on genuine user performance. In general, the more distracters we use, the worse the performance of attackers should be; however, it is not clear how far we can go before genuine user performance begins to suffer. We should point out that a similar effect on attacker performance could also be achieved by increasing the number of questions. Again, it is unclear what the impact of this would be on genuine user performance. It is also unclear how the two approaches could be combined to improve overall authentication performance. These are points we intend to explore in the future. The third conclusion is that the ability of attackers to answer genuine users' questions correctly with high confidence is in general quite low. This is true even though the study was carried out within a single academic department and focused on personal history information from academic personal web sites. This is really encouraging for the appropriateness of personal history information for authentication purposes. However, it would also be interesting to see whether these results hold if the attackers are selected based on their closeness to the genuine user. The final conclusion is that despite the positive impact of increasing the number of distracters, and the encouraging results with respect to the ability of attackers to answer genuine users' questions correctly with high confidence, attackers' performance is quite good overall. Identifying techniques that could make attacker performance more acceptable should be the focus of future work. In this respect, the primary aim is to investigate the impact of using multiple questions. Another idea is to use an
inference mechanism in the generation of questions, one that tests the knowledge acquired by experiencing an event rather than just knowledge of the event details. This would make things considerably more difficult for attackers, but additional research is needed to find good ways of doing this.
5 Automated Generation of Authentication Questions

The study above concluded that image-based representations of questions are beneficial from an authentication point of view. This raises the question of whether the generation of such questions could be automated. To investigate this issue we developed a web-based prototype that automatically generates such questions. As our focus was on the generation of the questions and not on how relevant data would be extracted from electronic personal histories, we decided to base our prototype on specific pieces of personal data provided by users. This also allowed us to explore whether it is possible to generate different questions based on the provided data. This is important from a security point of view as it can reduce guessability and mitigate the risk of observation attacks. For reasons that are explained below, we focused only on 'images of places' and map questions and asked users to provide us with two pieces of information: (a) a city that they are quite familiar with and that holds a personal connection for them, e.g., the city they grew up in or lived most of their life in, and (b) a postcode for an area that they know very well, e.g., their home or work address.

As was the case in the study, the prototype generates closed questions. We followed a similar approach to our sensor data study [19] and developed a number of question templates, one for each type of question, i.e., images of places and maps. Each template contains a question and a grid of placeholders for the options. To further mitigate observation attacks, the position of the correct answer on the grid is chosen randomly. Filling in the placeholders with appropriate images first requires a collection of images. Note that although it is possible to use different collections for correct answers and distracters, this is undesirable from a security point of view as it increases the risk of intersection attacks, i.e., attackers can exploit the differences between the two collections to eliminate, select or prioritize certain images [17].

There are three approaches to the creation of image collections. The first is for developers to create a collection using sources such as image libraries and the Internet, an approach taken by Passfaces [5]. The second collects user-generated images, as in the personal images scheme [6], which required users to submit a set of their photographs during registration. The third approach algorithmically generates a collection of images, e.g., random art [4]. In any case it is important to ensure that the images included are of good quality, i.e., they are not blurred, missing detail, or showing too much detail, in order to avoid user confusion. The first and third approaches allow for more control of image quality. In the case of user-generated images some form of filtering may be used to remove poor-quality images. However, despite some interesting proposals, like the use of Canny edge detection in [17], automating such filtering is largely an open research issue. For this reason, for our prototype we decided to exclude the use of user-generated images at this stage.
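To make the template mechanism concrete, the sketch below shows one way such a closed question could be assembled: the option placeholders on the grid are filled from a single image collection and the position of the correct answer is chosen at random. This is a minimal illustration only; the function and field names are hypothetical and are not taken from the prototype.

```python
# A minimal sketch, assuming one shared image collection for correct answers
# and distracters (to limit intersection attacks) and random placement of the
# correct answer on the option grid (to limit observation attacks).
import random
from dataclasses import dataclass

@dataclass
class ClosedQuestion:
    prompt: str
    options: list        # image identifiers laid out on the option grid
    correct_index: int   # randomly chosen grid position of the correct answer

def build_question(prompt, correct_image, collection, n_options=4):
    """Fill a question template with one correct image and random distracters."""
    distracters = random.sample(
        [img for img in collection if img != correct_image], n_options - 1)
    options = distracters + [correct_image]
    random.shuffle(options)  # randomize the grid position of the correct answer
    return ClosedQuestion(prompt, options, options.index(correct_image))

# Example usage with purely illustrative identifiers:
collection = ["glasgow.jpg", "leeds.jpg", "york.jpg", "bath.jpg", "cardiff.jpg"]
question = build_question("Which of these places do you know well?",
                          "glasgow.jpg", collection)
print(question.prompt, question.options, question.correct_index)
```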
Fig. 8. Example of a map question generated by the prototype
For authentication mechanisms where the "password" is chosen by the system or by the user from a collection of system-provided ones, like Passfaces [5] and random art [4] mentioned above, all that is required is the existence of an image collection. However, in our mechanism the "password" is the data extracted from the personal history. So, an additional step that translates the personal data into an image is necessary. In the case of postcodes this translation is straightforward. There are online services, like www.streetmap.co.uk or maps.google.com, that, given a postcode, can return a map of the surrounding area. Our prototype uses Streetmap (see Fig. 8).

For images of places and images of people this is more complicated. Using the image search of www.google.com does not provide sufficiently reliable results, as they often include irrelevant or poor-quality images. Using resources like www.flickr.com or www.picasa.com, which contain user-tagged and even geo-tagged images, also provides results of variable quality, as both the images and their tags are user generated. Although a manual selection of images can overcome these problems, it is undesirable due to the effort it requires. As a pragmatic choice, for images of places we constructed a collection of images in a semi-automated manner. Instead of relying on manually selected images, we rely on a manually selected list of web resources that provide good-quality images of identified locations, covering all the cities identified by our users (see Fig. 9). More specifically, we developed a program that, given a list of URLs for such resources, automatically extracts the images of the identified locations. The generated collection includes multiple images for each city, showing that under certain circumstances this approach may be feasible. For images of people, it proved a lot more difficult to follow this approach. As a result, our prototype does not include such questions. On this front, an interesting possibility for future work is
to investigate the effectiveness of a mechanism similar to the one used by Facebook (www.facebook.com). Facebook adds an extra authentication step when a user logs in from an unusual location; this step asks the user to identify people they are likely to know, based on the name tags of photographs uploaded by the user's friends. To mitigate the effect of unreliable tags or poor-quality photographs, the system allows the user to skip a number of challenges, in a manner similar to that used in CAPTCHAs.
Fig. 9. Example of ‘image of place’ question generated by the prototype
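The semi-automated collection step described above could, assuming a manually curated list of pages holding good-quality images of known locations, be approximated with a small extraction script along the following lines. The URL list, labels and parsing logic are illustrative only and do not reproduce the program actually used for the prototype.

```python
# A minimal sketch: given a curated list of pages per city, collect the image
# URLs found on those pages. Standard requests/BeautifulSoup calls are used;
# the source URLs below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

curated_sources = {
    "Glasgow": ["https://example.org/glasgow-landmarks"],    # hypothetical
    "Istanbul": ["https://example.org/istanbul-landmarks"],  # hypothetical
}

def collect_images(sources):
    collection = {}
    for city, urls in sources.items():
        images = []
        for url in urls:
            page = requests.get(url, timeout=10)
            soup = BeautifulSoup(page.text, "html.parser")
            for img in soup.find_all("img"):
                src = img.get("src")
                if src:
                    images.append(urljoin(url, src))  # resolve relative links
        collection[city] = images  # multiple candidate images per city
    return collection
```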
Selecting good distracters is also challenging. Good distracters are ones that cannot be easily eliminated by attackers, yet are not so similar to the correct answers that they confuse genuine users. For this purpose, two approaches have been explored. VIP [3] categorizes images into semantic categories and selects distracters from categories different from that of the correct answer, keeping these categories always the same. Dynahand [20] uses an automated filtering algorithm that removes handwriting samples that are too similar from the distracters. However, as mentioned above, in the general case filtering is still an open research issue. For the 'images of places' questions we followed a similar approach to VIP [3]. For each correct city we used the same randomly chosen list of other user-provided cities for the distracter image selection (a sketch of this selection policy appears at the end of this section). For map questions, all options are maps of randomly selected different postcodes. Although using potentially changing distracters with a correct answer increases the vulnerability to intersection attacks, maps mitigate this, as they are not particularly vulnerable to chance observations. In general, irrespective of how distracters are selected, using "meaningful" images may still suffer from social engineering attacks, where the attacker uses knowledge about the genuine user to eliminate, select or prioritize certain images. How to select distracters to minimize this risk is largely an open research issue. From the development of the prototype we conclude that although it is possible to largely automate the generation of authentication questions, this requires significant
engineering effort. Moreover, there are a number of areas where further research is necessary to ensure that generated questions are of good quality, i.e., robust against attacks.
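As referenced earlier in this section, the sketch below illustrates the distracter policy described above: for 'image of place' questions the distracter cities for a given correct city are drawn at random once and then kept fixed (in the spirit of VIP [3]), while map questions simply use maps of randomly selected other postcodes. Names and structures are illustrative, not the prototype's code.

```python
import random

_fixed_distracters = {}  # correct city -> fixed list of distracter cities

def distracter_cities(correct_city, user_cities, n=3):
    """Pick distracter cities once per correct city and reuse them,
    so repeated challenges do not leak information through changing options."""
    if correct_city not in _fixed_distracters:
        candidates = [c for c in user_cities if c != correct_city]
        _fixed_distracters[correct_city] = random.sample(candidates, n)
    return _fixed_distracters[correct_city]

def distracter_postcodes(correct_postcode, known_postcodes, n=3):
    """Map questions use maps of randomly selected different postcodes."""
    candidates = [p for p in known_postcodes if p != correct_postcode]
    return random.sample(candidates, n)
```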
6 Summary and Future Work

This paper presented a study that is part of a wider investigation into the appropriateness of electronic personal history information for authentication purposes. The study used academic personal web site data as a source of personal history information. The main aim of the study was to examine the effect of using an image-based formulation of questions about personal history events. In contrast to most other work in this area, the study followed a methodology that assesses the impact on both genuine users and attackers (others close to the genuine users). The study concluded that an image-based representation of questions is certainly beneficial from an authentication point of view. It also concluded that a small increase in the number of distracters used in closed questions has a positive effect on authentication performance. The study also showed that, despite the closeness of the attackers, their ability to answer questions about the genuine users correctly with high confidence is limited. These conclusions contribute positive results to the wider investigation into the appropriateness of electronic personal history information for authentication purposes.

The paper also presented the development of a web-based prototype for the automated generation of image-based authentication questions. The prototype shows that although it is possible to largely automate the generation of authentication questions, this requires significant engineering effort, and further research is needed to ensure that generated questions are of good quality from an authentication point of view.

Besides the points identified above, moving this investigation forward requires that we adopt a wider perspective. So far, our investigation has focused exclusively on the performance of genuine users and attackers in answering the generated questions correctly. As the results on this front are encouraging, it is now time to consider additional aspects that determine the appropriateness of the approach for authentication. The main issue is to determine to what extent and in which contexts such an authentication mechanism would be acceptable to users. User acceptance relies primarily on two aspects: how effective and efficient the mechanism is to use, and whether this use of personal data is acceptable to users. In doing so, we need to study the mechanism in specific contexts, e.g., for authentication to online services, or authentication to users' personal mobile devices.

In conclusion, authentication via selected extraction from electronic personal histories seems very promising in comparison to other usable authentication schemes. Our initial studies and prototype show some encouraging results for its feasibility; however, further research is necessary before a concrete authentication mechanism can be produced.

Acknowledgements. We wish to thank Prof. Richard Connor for his invaluable input and advice in developing this work.
References

1. Brostoff, A.: Improving Password System Effectiveness. PhD Thesis, Department of Computer Science, University College London (UCL) (2004)
2. Yan, J., Blackwell, A., Anderson, R., Grant, A.: Password Memorability and Security: Empirical Results. IEEE Security & Privacy 5(2), 25–31 (2004)
3. De-Angeli, A., Coutts, M., Coventry, L., Johnson, G., Cameron, D., Fischer, M.: VIP: A Visual Approach to User Authentication. In: Proc. Advanced Visual Interfaces (AVI), pp. 316–323. ACM Press, New York (2002)
4. Dhamija, R.: Hash Visualization in User Authentication. In: Proc. CHI, pp. 279–280. ACM Press, New York (2000)
5. Passfaces: Real-User Passfaces™, http://www.passfaces.com
6. Pering, T., Sundar, M., Light, J., Want, R.: Photographic Authentication through Untrusted Terminals. IEEE Security & Privacy 2(1), 30–36 (2003)
7. Wiedenbeck, S., Waters, J., Birget, J.C., Brodskiy, A., Memon, N.: Authentication Using Graphical Passwords: Effects of Tolerance and Image Choice. In: Proc. Symposium on Usable Privacy and Security (SOUPS), pp. 1–12. ACM Press, New York (2005)
8. Zviran, M., Haga, W.: Cognitive Passwords: the Key to Easy Access Control. Computers and Security 9, 723–736 (1990)
9. Just, M.: Designing and Evaluating Challenge Question Systems. IEEE Security & Privacy: Special Issue on Security and Usability 2(5), 32–39 (2004)
10. Harper, R., Rodden, T., Rogers, Y., Sellen, A. (eds.): Being Human: Human-Computer Interaction in the Year 2020. Microsoft Research Ltd., Cambridge (2008)
11. Zviran, M., Haga, W.: A Comparison of Password Techniques for Multilevel Authentication Mechanisms. The Computer Journal 36(3), 227–237 (1993)
12. Porter, S.: A Password Extension for Improved Human Factors. Computers and Security 1(1), 54–56 (1982)
13. Smith, S.L.: Authenticating Users by Word Association. Computers & Security 6, 464–470 (1987)
14. De-Angeli, A., Coventry, L., Johnson, G., Renaud, K.: Is a Picture Really Worth a Thousand Words? Exploring the Feasibility of Graphical Authentication Systems. International Journal of Human-Computer Studies 63(2), 128–152 (2005)
15. Takada, T., Koike, H.: Awase-E: Image-based Authentication for Mobile Phones Using User's Favourite Images. In: Chittaro, L. (ed.) Mobile HCI 2003. LNCS, vol. 2795, pp. 347–351. Springer, Heidelberg (2003)
16. Davis, D., Monrose, F., Reiter, K.: On User Choice in Graphical Password Schemes. In: Proc. 13th USENIX Security Symposium, pp. 151–164 (2004)
17. Dunphy, P., Heiner, A.P., Asokan, N.: A Closer Look at Recognition-based Graphical Passwords on Mobile Devices. In: Symposium on Usable Privacy and Security (SOUPS), Redmond, WA, USA (2010)
18. Nosseir, A., Connor, R., Dunlop, M.: Internet Authentication Based on Personal History – A Feasibility Test. In: Workshop on Customer Focused Mobile Services at WWW 2005 (2005)
19. Nosseir, A., Connor, R., Revie, C., Terzis, S.: Question-Based Authentication Using Context Data. In: ACM Nordic Conference on Human Computer Interaction (NordiCHI 2006), Oslo, Norway (2006)
20. Renaud, K., Olsen, E.: Dynahand: Observation-resistant Recognition-based Web Authentication. IEEE Technology and Society Magazine 26(2), 22–31 (2007)
Search in Context

Hadas Weinberger
HIT - Holon Institute of Technology, Holon, Israel
[email protected]
Abstract. This chapter presents a model and a method that inform the design of a search interface supporting query formulation in context and recommending the appropriate search engine accordingly. Five recommendation principles guide the design of this user-friendly interface towards enhanced precision. While information about search engines abounds, what is still lacking is a framework for guiding users towards efficient and effective utilization of the spectrum of search engines available on the Web. This chapter suggests guidelines for the design of a context-aware search architecture, namely CASA. A use case illustrating the need for the suggested framework is described, and findings from an empirical investigation conducted with students in two distinct academic institutions in two different countries are discussed to establish the feasibility and the usefulness of CASA. We conclude with a summary and recommendations for further research directions.

Keywords: Search engines, User interface, Adapted search, Exploratory search, Context-aware system, Usefulness, Recommendation principles.
1 Introduction

For redesigning the user's experience with the search interface, we incorporate the notion of the dynamic and active role conferred upon users in the Web 2.0 arena. Specifically, in view of the wide spectrum of search engines (SEs) available on the Web (e.g., popularity-based SEs, social SEs, semantic SEs, hybrid SEs, domain-specific SEs), it is surprising that there is no interface guiding the personalized, context-aware and methodical utilization of search engines, for instance in a manner that considers the user's query as part of the user's context, against the user's model. While there is much research effort aiming to bridge the gap between search engines' methods and the user's model [16, 12], this chapter takes a different perspective by aiming to bridge the gap between the user's query on the one hand and the search engine's qualities on the other, by introducing principles for the design of a Context-Aware Search Architecture (namely, CASA). The goal set for this research is to enhance the search interface by facilitating a) a query modification interaction that is based on b) the user's input and her feedback, towards c) a search engine recommender system.

Our approach to the design of CASA follows the design science paradigm [7, 13]. Of the research activities outlined by design-science research in IS, this chapter covers the building of a model (a set of principles) and a system (a Web-based search interface that is a prototype of a recommender system), while for the evaluation of this artefact we report qualitative results of an empirical investigation. Of the four design artefacts (i.e., constructs, models, methods, and instantiations) outlined in these frameworks, this research is about a model (i.e., the method informing the recommender system's principles), which in turn informs a methodology (i.e., the techniques for supporting the user's requirements elicitation and query modification processes) and an instantiation (a prototype of the search interface).

Following this introduction, Section 2 holds a brief discussion of search engines. Section 3 describes the need for a context-aware search architecture and Section 4 describes CASA. Section 5 is focused on the methodology used in this research and Section 6 is dedicated to the discussion of the findings of the empirical investigation and evaluation. We conclude in Section 7 with a summary and discussion.
2 Search Engines in Context

The lack of a consistent methodological approach to Web information seeking research [2, 16] might be attributed to the dynamic nature of the field, characterized by frequent innovations in search engine technology and the correspondingly often-revised classification of search engines. Consequently, best practices of the field are often altered [20], a notion which might explain users' tendency to follow search trends. Against this background, several leading practices in search engine technology are mentioned: a) popularity-based SEs (e.g., Google), which also utilize a host of algorithms (e.g., statistical measures, Web-genre analysis, clustering and categorization); b) inclusive meta SEs (e.g., Myriad, Quintura); c) social SEs that focus on users' contributions (e.g., Hakia, FreeBase); and d) Semantic Web SEs (e.g., Hakia) and analytic SEs (e.g., WolframAlpha). Other navigation and information retrieval methods follow notions of Web genre (e.g., Google Scholar), domain (e.g., geospatial), structure (e.g., Wikipedia) or the 'long tail of search' (e.g., FeedMil).
Taking the HCI perspective, several SEs include features that support the user's interaction with the results obtained, through activities such as providing feedback or by allowing navigation and negotiation of the results based on data visualization. Examples are the navigation of interactive maps (e.g., Kartoo), user voting (FeedMil), and cluster negotiation and categorization (Clusty). With the advancement of Web 3.0, there are indeed innovative technologies embedded in search technologies [3, 7, 5] that assist in incorporating users' annotations [1], also for the purpose of informing the user model [4]. However, for the most part users are captivated by what could be named the 'ease of search' syndrome, which prevents them from using multiple search engines and the options these offer.

Just as HCI research should address current practices [9], search engine technology should advance beyond current context-building methods, such as a) structural attributes, b) syntactical features, and c) semantic analysis, towards the user's context [6, 10, 17, 19] in order to reflect the user's perspective. To this end, users ought to be considered as actors, allowing them more freedom of action and of choice. Considering the classic criteria for information retrieval evaluation, namely precision and recall, we argue that while the prevailing practices do not necessarily promote precision, the user's enhanced involvement should not be underestimated as an agent of precision. Along these lines, an algorithm could be considered that responds to a bi-dimensional view of the search operation, including a) search engine typology on the one hand, and b) means to analyse and advise on the user's query and its context on the other. This way, search activities would facilitate an efficient and effective search in the context of the user model as best represented by her query.
3 The Need for Context-Aware Search Architecture

This section brings forward the issue of the user perspective motivating this research. A concise discussion paves the way towards the introduction of a hypothesis describing the user's perspective. We conclude this section with a use case illustrating the need for a context-aware search architecture and explain the relationship between the three elements that constitute the search interaction: the user, the query and the search engine.

3.1 The User Perspective

Of the two most common tasks which best represent Web HCI, users' contribution to online communities is mentioned alongside search processes. For the former, Preece and Shneiderman [18] identified several distinct types of users participating in online communities. Their work illustrates a typology following the classification of users' contributions by task and role, consequently identifying three user types: a reader, a contributor and a leader. The user classification is motivated by the value assigned to attributes of a user's contribution, such as the frequency of the interaction, the complexity of the issue at hand, and the context of the contribution. Similarly to this user classification, search processes are classified into three search types: simple search, learning and investigation [14].
Against this background, this chapter assumes value for the contributor and the leader engaged in either learning or investigation, as they are likely to be motivated towards the cultivation of adapted search habits yielding useful results. This assumption is illustrated by the research hypothesis that concerns the user perspective.

Hypothesis 1: Search processes are mostly run using popular search engines, while users' navigation between search engines is not a common phenomenon. This is not because search engines are all alike, nor because different search engines would not yield different results, but because adequate tools for recommending an adapted use of search engines in context are still lacking.

This deficiency is especially noticeable against the prevailing adapted and personalized Web 2.0 practices. One possible path towards a solution would be for the search domain to develop a business model similar to the one adopted for Web 2.0 tools, hence providing users with search services in context. An example in this context is users' tendency to search inside social networks, thereby depriving themselves of a large volume of Web content and consequently creating a wall inside the Web, as noted in The Economist (September 2010). This example demonstrates how a problem that remains unresolved may lead to the generation of an even larger problem.

3.2 Search in Context: An Example Use Case

A user involved in an exploratory search session faces two challenges that concern the 'how' and the 'where' of the search process: 1) how to search refers to several activities related to syntactic and semantic search features, such as choosing keywords, query structuring and modification, and the identification of the domain and the genre to be explored; 2) where to search is about which search engine to use. While the latter might appear to be a decision motivated by the technological perspective, there are other relevant perspectives to be included; for instance, we mention the SE's scope and HCI features such as usability and a user-centered approach. As part of selecting a search engine, the user is required to meet challenges that concern his interests as well as challenges related to Web proficiency. While there are a host of interface-embedded syntactical, structural, semantic and statistical features that support query formulation, there are no interface features directing search engine selection.

Selecting and contacting a search engine can be a challenging task. In view of the spectrum of search technologies available on the Web, there is not only much promise that is as yet unexploited, but also a serious challenge for the user. While users might occasionally be aware of the plethora of search technologies, they still need a good reason to use these tools. For instance, a social search engine might yield different results than a popularity-based search engine, since each uses different tracking and indexing methods. As an example, consider the case of a user seeking information about blogging, more specifically 'how to write a successful blog'. While a popularity-based search engine is likely to return the most popular results, a semantic search engine such as Hakia is likely to return results that originate with users' (recent) input, indicating an innovative guide, tool or practice. This is not
to say that one result is preferable to the other, but to point out the differences that prevail. In the context of the design of business, research or learning environments, we would like the user, whether a reader or a contributor, to be aware of her options in a profound way. In other words, there is reason to revisit the mechanism guiding search engine usage. This use case serves to a) demonstrate the need for a context-aware search architecture, b) describe the relationships between the three elements that are part of a Web search interaction, and c) anchor the former two as part of a wider perspective on current HCI challenges.
4 The Context-Aware Search Architecture (CASA)

CASA comprises two elements: 1) a two-faceted model supporting query definition and modification, and a method advised by 2) a set of recommendation principles guiding the process of search engine selection. The model and the method are described below.

4.1 The Search Interface

In CASA the interface is designed to respond to a) the user's query by suggesting an adequate use of b) a search engine. For the design of the user interaction (building the query's context) we follow the Ontology for Social Knowledge Applications [23], intended for aiding users with the annotation of user-generated content and context in Web 2.0 tools. Since tagging and search are considered two sides of the same coin [25], we assume the ontological construct can be followed for user requirements elicitation and for query modeling. The prototype interface (Figure 1) demonstrates the support available to the user in a) determining the query's current focus (there, Query type) and b) choosing an ontological extension (there, Question type). Based on this ontological analysis the system c) recommends the search engine that is likely to yield results of the highest precision, in accordance with the recommendation principles (described below). The mechanism for identifying the Query type responds to the three-perspective view identified for the Ontology for Social Knowledge Applications (i.e., content, task and technology). The mechanism for the identification of the Question type follows the WH-questions scheme also used in the IS field for the evaluation of information systems.

4.2 Searching for Precision: Five Recommendation Principles

This section is dedicated to the five recommendation principles identified in this research. The description of each recommendation principle (RP) is anchored in the context of a search engine type in relation to the WH question (i.e., aspect) to which it best responds. For each search engine type we provide an example evidence description, annotated A-E, followed by a recommendation principle, formatted as a bullet.
Fig. 1. Context-aware search architecture – an example interface
A. Popularity-based search engine, e.g., Google: the results tend to spread across several aspects of the query element(s), answering questions such as 'what', hence facilitating an introduction to the subject domain. Based on this finding the following RP was formulated:
• RP1: A search for general information that spreads across several aspects (i.e., responding to WH questions such as 'what') is likely to prove useful by means of a popularity-based search engine, e.g., Google.
B. Social-semantic search engine, e.g., Hakia: the results tend to focus upon example instances of the query's element(s), answering questions such as 'what', 'how' and 'where', hence enabling the study of example applications. Based on this finding the following RP was formulated:
• RP2: A search for information describing attributes assigned to a certain concept (i.e., responding to WH questions such as 'what' and 'how') is likely to prove useful by means of a social-semantic search engine such as Hakia.
C. Semi-semantic and visualized search engine, e.g., Kartoo, Clusty: the results tend to spread across three instance-level aspects, answering questions such as 'who', 'how' and 'where', facilitating the comprehension of a phenomenon. Based on this finding the following RP was formulated:
• RP3: A search for instance-level responses to questions in relation to a specific domain (i.e., responding to WH questions such as 'who', 'how' and 'where') is likely to prove useful by means of a semi-semantic, clustering, visual or interactively enabled search engine such as Kartoo or Clusty.
D. Analytic, semantic, social-semantic or hybrid search engine, e.g., FreeBase, FeedMil: the results tend to focus on several practical aspects, answering questions such as 'where' and 'how' and responding to the technology perspective, hence offering the user a wealth of user-contributed information to guide the investigation of a subject domain. Based on this finding the following RP was formulated:
• RP4: A search for instance-level information based on user input (i.e., responding to WH questions such as 'where' and 'how') is likely to prove useful by means of an analytic, semantic, social-semantic or hybrid search engine such as FreeBase or FeedMil.
E. Analytic search engine, e.g., WolframAlpha: the results tend to focus on several aspects, presented as a report on the subject of the investigation, based on a dialogue with the user. This search engine will prove particularly useful for a user undertaking a search motivated by both depth and breadth. Based on this finding the following RP was formulated:
• RP5: A search for a wide-perspective view of a domain is likely to prove useful by means of an analytic, semantic and hybrid search engine such as WolframAlpha, which is empowered by artificial intelligence, amongst other features. This search engine compiles a categorized report on the subject matter, not only introducing the user to a host of information in various forms but also allowing him the negotiation and analysis of the presentation of the findings.

4.3 Towards a User-Centered Search

In this section we illustrate an example use case of interacting with CASA through the three stages of this interaction. For each stage, the user's role and the system's response are described.

Stage 1 – search initialization: the user introduces query elements in the search box. For example, the query element may be the expression 'blog'. The system then identifies the query's dimension (there, query type).

Stage 2 – query modification: two dimensions are defined for this stage; the first concerns the system perspective and the second the user's perspective. The system perspective prescribes two complementary actions and corresponding decisions. The first concerns the query type and the second the question type. The first is feasible provided an adequate lexical ontology is available, which would allow the system the automated identification of the query type. Second is the identification of the question type. In this context, the system is designed to respond to three types, corresponding to the three ontological dimensions identified in OSKA [23]: subject (i.e., scope), activity (i.e., task) and media (i.e., technology). The user's perspective also involves two action lanes and corresponding decisions. First, the user has to choose a question type, following which the system suggests an extension aspect. For instance, if the query includes a term such as 'blog' that is identified by the system as 'subject', corresponding WH questions (there, question type) are suggested to further focus the query, for instance the 'how' or the 'where' extensions. This procedure is an example of a user requirements elicitation process that is followed by a corresponding query modification process on the system's part.

Stage 3 – adapted-personalized search: based on the previous two stages, the system uses the recommendation principles mechanism to offer the user results that originate from the search engine most appropriate for the query type and in accordance with the question type.
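As a rough sketch of how the recommendation step in Stage 3 could be wired to principles RP1-RP5, the mapping below keys on the WH extensions chosen during query modification and returns the best-matching search engine family. The table entries paraphrase the principles and the function names are illustrative; this is not the code of the actual prototype.

```python
# A minimal sketch, assuming query modification has already produced the set of
# WH extensions the user selected. The mapping paraphrases RP1-RP5.
RECOMMENDATIONS = [
    # (WH aspects covered, recommended search engine family, principle)
    ({"what"},                        "popularity-based SE (e.g., Google)",       "RP1"),
    ({"what", "how"},                 "social-semantic SE (e.g., Hakia)",         "RP2"),
    ({"who", "how", "where"},         "semi-semantic / visual SE (e.g., Clusty)", "RP3"),
    ({"where", "how"},                "analytic / hybrid SE (e.g., FreeBase)",    "RP4"),
    ({"what", "who", "how", "where"}, "analytic SE (e.g., WolframAlpha)",         "RP5"),
]

def recommend(question_types):
    """Return the SE family whose covered WH aspects best match the query."""
    asked = set(question_types)
    best = max(RECOMMENDATIONS,
               key=lambda entry: len(entry[0] & asked) - len(entry[0] - asked))
    return best[1], best[2]

# Example: a 'blog' query focused with the 'how' and 'where' extensions.
print(recommend(["how", "where"]))  # points at the RP4 family in this sketch
```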
5 Methodology

The iterative development of CASA, described herein, follows the four stages of system development: planning, analysis, design and implementation, assuming an iterative evaluation process.

Planning & Analysis: this stage involved the consideration of 1) the search engines to be included in this research and 2) the search terms to be used for query formulation. Specifically, we aimed at qualified search engines and heterogeneity of query elements. For the latter we followed the Ontology for Social Knowledge Applications [23]. Eight search engines were finally selected based on the distinct definition of each: Google as a popularity-based SE; Hakia as a semantic and social-semantic SE; Kartoo (which was still active at the time of development) and Clusty as visualized and clustering SEs; and FreeBase as an analytic and social-semantic SE. Last but not least are FeedMil and WolframAlpha; the former is a social, hybrid and long-tail search engine and the latter an analytic and semantic search engine.

Design: the design effort was focused on query formulation [21]. For each search engine three queries were introduced, using a) an element of the content perspective (e.g., Web 2.0 tools, Web 2.0 software, social media applications), b) an element of the task perspective (e.g., collaboration, participation, publishing, editing, reporting) and c) an element of the technology perspective (e.g., bookmarks, blog, Wiki, microblogging, database). The following methods were used as a basis for query construction: a) including different parts of speech (verb, object), b) assembling various elements in accordance with object-oriented modeling techniques (entities, processes), and c) including elements of different domains. All in all, the design process involved more than 300 queries, based on which the recommendation principles were devised.

Implementation: this stage involved 1) the definition of the recommendation principles based on the analysis of the previous activities and their results. Consequently, the recommendation principles were used for 2) the design of the prototype Web interface. Last but not least, we mention the integration of the proposed method into the curriculum, as reported in this paper.

Evaluation: aiming to balance rigor and relevance, evaluation in this research followed two lanes. The first is evaluation through design and the second is empirical evaluation of the resulting design product. Both lanes aimed at assessing the feasibility and the usefulness of CASA (i.e., the RPs and the query modification technique). Evaluation involved students of two Web Information Systems courses (introductory and advanced) taught by the author of this chapter. Two student groups were involved: undergraduate students of instructional design at HIT, Israel, and graduate students from the business school at the University of Nicosia, Cyprus. In these courses students were instructed about the recommendation principles at the core of CASA, their rationale and their practice. Additionally, class assignments included practicing the ECHO methodology [22], which guides the development of a mashup designed for a Web-based (organizational) learning environment. The design of this environment involved search tasks and the usage of Web 2.0 tools for knowledge management, knowledge sharing and representation (e.g., Bibsonomy, Google apps, Diigo and WordPress).
For the evaluation through design, quantitative and qualitative methods were applied. Questionnaires were used to evaluate research assumptions, validate findings and verify design specifications. Additionally, we used short interviews, seeking students' insights into lessons learnt and best practices. The process of evaluation through design demonstrated the feasibility and usefulness of CASA, both for efficient retrieval and for coherent search [21]. The method advised by the recommendation principles guided students' information retrieval tasks towards the harvesting of precise and concrete resources, using search engines they had not previously been aware of. The findings of these iterations formed the grounds for the design outcome, i.e., the recommendation principles and the Web interface. The next section is dedicated to the presentation and discussion of the findings of this empirical investigation.
6 Findings from Empirical Investigation

In this section the feasibility and the usefulness of CASA are demonstrated based on findings from the empirical investigation, moderated by a six-question questionnaire introduced to our students (N=52). In this questionnaire we asked the students for their opinion about a) the usefulness of the recommendation principles and b) the design of the Web-based interface (Figure 1). Additionally, to assure objective results, a Web-based poll was conducted. Information about the poll was 'virally' distributed by our students through their online community networks to people to whom CASA was not previously known. The remainder of this section focuses on a discussion of the findings obtained for three of these six questions, followed by reference to the findings of the Web poll.

The usefulness of CASA for different types of search tasks. This question was intended to assess the relative contribution of the search interface (Figure 1) in its role as a search aid. Specifically, we aimed to assess the contribution of the design guidelines for four search types: 1) compound search, which concerns a query of several elements that might be previously known to the user; 2) simple search, which is focused upon a basic fact; 3) investigation; and 4) learning, the latter two both demanding several search iterations, probably also over a period of time. All the respondents were familiar with the design principles of the interface and with the referenced search types, as both variables were studied as part of the Web Information Systems class. A five-point Likert scale was used for respondents to comment on the relative contribution of the design for the different search types. The findings (Figure 2) indicate a very high percentage of appreciation, i.e., significant contribution and very much (37.5% and 50%, respectively), for the learning and investigation options. This finding could be seen to be verified by the attitude towards the compound search, indicating a relatively high percentage of lower satisfaction. There is more that can be learnt from these findings. For instance, it is evident from the results that the model guiding the design of CASA was understood by participants, who noted the difference in its relevance to investigation and learning, as evidenced by the higher scores for significant contribution, while showing only a mild appreciation of its contribution to a simple search.
Fig. 2. The usefulness of CASA for different types of search tasks
Fig. 3. Preferences for search interface design
Preferences for search interface design. The goal set for this question was to measure the usability of CASA by focusing on the design rationale. Participants were asked to express their opinion about three interface design options. The first design accommodates a single search engine. The second design option offers the user analytic techniques to assist her through the query analysis procedure, including query modification and search engine recommendation, as instructed by the CASA model. The third option concerned an interface that includes unstructured access to several search engines. The results, as shown in Figure 3, indicate that the favoured option is the inclusion of analytical capabilities to support the user with query modification and the search operation (62.5%). Next comes the option of a single search engine interface (20.8%), with the multiple search engine option gaining only 12.5% of the votes. The latter, the low ratio given to the multiple search engine option, could be understood in relation to the former, the favouring of the analytic option, hence dismissing a multiple yet non-intelligent choice. The ratio between the latter two (single vs. multiple) may also be interpreted against the findings for the analytic option. In this case, acknowledging the relative value of making a rationalized choice between search engines overruled a less structured approach.
The recommendation principles in practice. The goal set for this question was to evaluate the feasibility and the usefulness of the recommendation principles for routine search processes, as best practice. Figure 4 presents the findings, indicating a significant shift in practice, considering that prior to their studies participants were all using Google (i.e., lacking knowledge of the other search engines and their relative value). Apparently, 40% of the respondents appreciated the usefulness of the popularity-based search engine RP alongside the utilization of the semantic search engine-related RPs (20%-30%). Respondents also expressed their adherence to the social search engine RP (e.g., often: 34.5%). These findings indicate that users actually employ the suggested RPs and embed them in practice. Moreover, it could be assumed that there is evident benefit in different kinds of search engines, and that users recognize the relative value of each search engine type. All in all, these findings bring forward the feasibility and the usefulness of the recommendation principles.
Fig. 4. The recommendation principles in practice
To assure anonymous results, a Web-based poll (N=72) was conducted, which included a short explanation and one question: "Would you rather have a structured approach embedded as part of the user interface?" The findings indicate that 68% of the respondents were enthusiastic about this option and expressed this attitude by choosing the 'much' option. Notably, 20% gave their vote to the 'very much' option and only 6% marked the 'not so much' and 'not at all' options. A concluding remark is warranted in view of the consistency between the results of the questionnaire and the findings of the Web poll, suggesting a balanced perspective between users familiar with CASA and those who were not previously familiar with it, since both groups were found to be in favour of a structured approach and the redesign of the Web search interface.
7 Conclusions

In this chapter a model was suggested, together with a method embedded in the design of a Web interface, forming the Context-Aware Search Architecture, namely CASA. Our aim here was a) to support the identification and the modification of the user query, and b) to guide search engine selection, based on c) five recommendation principles. The
suggested framework, including the model, the method and the recommendation principles, responds to variables of the user model and to enhanced precision in view of the prevailing usability hurdles. There are three deliverables of the research reported here. The first is the method, prescribing guidelines for context-aware query modification. Second are the recommendation principles directing the utilization of search engines in context. The third deliverable builds on the former two to suggest a user-adapted search experience. With CASA, users may practice a domain-independent, context-aware search interaction that is structured and yet adaptable and dynamic. The findings of this research indicate a relationship between a) the search engine type and the ontological perspectives of the query on the one hand, and b) the precision of the search results on the other. Based on these findings the feasibility and the usefulness of CASA are established.

There are several limitations to this research. Further empirical experience is encouraged, which could best be supported by the design of an IT artefact that follows CASA's framework. Additionally, further investigation can be carried out to extend the scope of the search engines included here. We believe that the findings from our study have implications beyond this immediate setting. Several further research directions may be pursued based on this research. First, we mention the design of an IT artefact, which could only be supported provided that adequate ontologies are available; the latter is the second research direction suggested here. Last but not least is the inclusion of Semantic Web (i.e., Web 3.0) technologies, such as artificial intelligence and natural language processing, in the design of the envisioned IT artefact. This work should prove useful to anyone considering the development of a Web search architecture, or to individuals seeking to enhance their exploratory search experience. CASA can improve and expand the current Web search experience of individual users on the one hand, while on the other hand inspiring web developers' views towards innovative design.

Acknowledgements. This research builds on my teaching experience and on a student project carried out as part of meeting the requirements for a graduate degree in our department. These students are Oleg Guzikov and Keren Rabi. I wish to thank all participating students, in our institution and beyond, for their participation, motivation, contribution and cooperation.
References

1. Bao, S., Wu, X., Fei, B., Xue, G., Su, Z., Yu, Y.: Optimizing Web Search Using Social Annotations. In: International World Wide Web Conference (IW3C2), WWW 2007, Banff, Alberta, Canada (2007)
2. Baeza-Yates, R.: Information Retrieval in the Web: Beyond Current Search Engines. International Journal of Approximate Reasoning 34(2-3), 97–104 (2003)
3. Carmagnola, F., Cena, F., Cortassa, O., Gena, C.: Towards a Tag-based User Model: How Can User Model Benefit from Tags? In: Conati, C., McCoy, K., Paliouras, G. (eds.) UM 2007. LNCS (LNAI), vol. 4511, pp. 445–449. Springer, Heidelberg (2007)
4. Ding, L., Pan, R., Finin, T.W., Joshi, A., Peng, Y., Kolari, P.: Finding and Ranking Knowledge on the Semantic Web. In: Gil, Y., Motta, E., Benjamins, V.R., Musen, M.A. (eds.) ISWC 2005. LNCS, vol. 3729, pp. 156–170. Springer, Heidelberg (2005)
5. Dey, A.: Understanding and Using Context. Personal and Ubiquitous Computing 4, 4–7 (2001)
6. Finin, T., Ding, L.: Search Engines for Semantic Web Knowledge. In: Proceedings of XTech 2006: Building Web 2.0, Amsterdam, pp. 16–19 (2006)
7. Hevner, A., March, S., Park, J., Ram, S.: Design Science in Information Systems Research. MIS Quarterly 28(1), 75–105 (2004)
8. Hochheiser, H., Lazar, J.: HCI and Societal Issues: A Framework for Engagement. International Journal of Human-Computer Interaction 23(3), 339–374 (2007)
9. Kobsa, A.: Generic User Modeling Systems. User Modeling and User-Adapted Interaction 11, 49–63 (2001)
10. Kritiquo, Y.: User Profile Modeling in the Context of Web-based Learning Management Systems. Journal of Network and Computer Applications (2007)
11. Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web. Scientific American (2001), http://www.scientificamerican.com/article.cfm?id=the-semantic-web
12. Maamar, Z., AlKhatib, G., Mostefaoui, S.K., Lahkim, M., Mansoor, W.: Context-based Personalization of Web Services Composition and Provisioning. In: Proc. EUROMICRO. IEEE Computer Society, Los Alamitos (2004)
13. March, S.T., Smith, G.F.: Design and Natural Science Research on Information Technology. Decision Support Systems 15, 251–266 (1995)
14. Marchionini, G.: Exploratory Search: From Finding to Understanding. Communications of the ACM 49(4), 41–46 (2006)
15. Marchionini, G., White, R.: Find What You Need, Understand What You Find. International Journal of Human-Computer Interaction 23(3), 205–237 (2007)
16. Martzoukou, K.: A Review of Web Information Seeking Research: Considerations of Method and Foci of Interest. Information Research 10(2), paper 215 (2004), http://InformationR.net/ir/10-2/paper215.html
17. Midwinter, P.: Is Google a Semantic Search Engine? ReadWriteWeb (2007), http://www.readwriteweb.com/archives/is_google_a_semantic_search_engine.php
18. Preece, J., Shneiderman, B.: The Reader-to-Leader Framework: Motivating Technology-Mediated Social Participation. AIS Transactions on Human-Computer Interaction 1(1), 13–32 (2009)
19. Shen, X., Tan, B., Zhai, C.: Implicit User Modeling for Personalized Search. In: Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pp. 824–831 (2005)
20. Vossen, G., Hagemann, S.: Unleashing Web 2.0: From Concepts to Creativity. Morgan Kaufmann, San Francisco (2007)
21. Weinberger, H., Guzikov, O., Rabi, K.: Context-Aware Search Architecture. In: Cordeiro, J., Filipe, J. (eds.) ICEIS 2010. LNBIP, vol. 73, pp. 587–599. Springer, Heidelberg (2010)
22. Weinberger, H.: ECHO: A Layered Model for the Design of a Context-Aware Learning Experience. In: Murugesan, S. (ed.) Handbook on Web 2.0, 3.0 and X.0: Technologies, Business, and Social Applications. IGI Global, Hershey (2010)
23. Weinberger, H.: Tagging Web 2.0 Content in Context. Technical paper, HIT, Holon Institute of Technology (2010)
24. White, R., Roth, R.: Exploratory Search: Beyond the Query-Response Paradigm. Synthesis Lectures on Information Concepts, Retrieval & Services. Morgan & Claypool Publishers, San Francisco (2009)
25. White, R.W., Kules, B., Bederson, B.: Exploratory Search Interfaces: Categorization, Clustering and Beyond. Communications of the ACM (2005)
A Virtual Collaborative Environment Helps University Students to Learn Maths

Araceli Queiruga-Dios1, Ascensión Hernández-Encinas1, Isabel Visus-Ruiz1, and Ángel Martín del Rey2
1 Department of Applied Mathematics, E.T.S.I.I. de Béjar, University of Salamanca, C/ Fernando Ballesteros, 37700-Béjar, Salamanca, Spain
2 Department of Applied Mathematics, E.P.S. de Ávila, University of Salamanca, C/ Hornos Caleros 50, 05003-Ávila, Spain
{queirugadios,ascen,ivisus,delrey}@usal.es
Abstract. The technological advances of recent times have led us to consider the use of computers in classrooms as an indispensable tool. This paper details last year's experience related to the use of different forms of interaction among students and new technological tools. The proposed learning system allows university students to get to know computers and specific software in depth. The use of new tools can enhance the teaching-learning process and adapt it to the new situation of the Spanish university system. We have used both an online platform and specifically designed programs to solve mathematical problems and learn mathematics.

Keywords: eLearning, Active learning.
The teaching of mathematics is no exception to the use of these methods [14]. The mathematics subjects required for obtaining a degree in industrial engineering include the linear algebra course on which this study focuses. The integration of eLearning as a model of further education is being realized, with its use increasing significantly in recent years [11]. The University of Salamanca has a virtual environment called Studium [6], and also holds campus licenses for various software packages specifically suited for general calculations. Without going any further, we might cite Mathematica [17] or Matlab [12] for solving equations, operating with matrices or performing various simulations useful for practical classes.

As we know, mathematics is generally very close to the computational sciences. The area of mathematics is therefore one of the best suited for the incorporation of new technologies, since it is not only a purely formative subject, but also a scientific tool that must help students solve the problems they encounter throughout their degree and in their professional development.

This paper is organized as follows: in Section 2 we describe the working environment in which the experience detailed in this paper was carried out, the linear algebra course is detailed in Section 3, and in Section 4 the conclusions of the study are presented.
2 Working Environment

The working environment in mathematics classes is diverse and is now extended with the possibility of using computers in class. So far, traditional classes have consisted of lectures in which most of our students attend as auditors, without participating, only taking notes. In recent years, due to the changes leading to the common European space, we have used computers to complement the classes and provide students with different activities and new tools.

2.1 Online Platform

Studium, the virtual campus of the University of Salamanca, offers teachers and students at the University a technological structure that channels education through the Internet, and provides different online tools that bring student-teacher interaction processes to the network. Studium is based on the web platform Moodle (Modular Object-Oriented Dynamic Learning Environment), a virtual teaching environment that allows contents and tasks to be placed on the network and provides online communication tools. The Moodle environment falls within the constructivist learning model, a pedagogical approach to teaching that tries to establish a consistent relationship between what is to be learned (new knowledge) and what is already known (prior knowledge); thus the new information that students obtain is integrated into their background and acquires an appropriate significance with respect to what is already known [16]. This platform is currently used by many Spanish universities, since in 2004 the University Jaume I of Castellón and the University of Las Palmas de Gran Canaria institutionally adopted Moodle. On this platform the students construct their learning through the use of
different activities and collaborative tools proposed by the teacher, and communication tools between participants (students and teachers). This online platform is based on free software and is available, along with manuals and installation software, at http://www.moodle.org.

One advantage of the use of online learning platforms is that they are built on social constructivist pedagogy [3], which helps to achieve a collaborative model of learning, where students share their experiences and work, and can communicate among themselves or with the course tutors at any time to answer questions or ask for help, thus becoming a space for social interaction that connects individuals through a computer. As is known, one of the main purposes of a virtual campus is to facilitate and enrich the interaction between all members of the community. This platform provides a graphical user interface that is intuitive enough for students to use without prior knowledge of the environment. Another important feature of this virtual campus is that it allows the integration of interaction tools like chats or forums, and of activities designed specifically for teaching, such as WebQuests (http://webquest.org), which represent a new methodology for teaching via the Internet, and the freely distributed Hot Potatoes software (http://hotpot.uvic.ca), which allows exercises to be developed interactively and is especially useful for proposing questionnaires, games, online exercises or any other web activities for students; this is why the Moodle platform is considered a set of Web 2.0 tools [13].

In recent years we have used the eLearning environment to post activities for students. Every month we propose a questionnaire related to the topics included in the subjects. For the algebra subject the questionnaires involved aspects like matrices, determinants, linear equations, vector spaces, eigenvalues and eigenvectors. We limited the time allowed to solve these activities depending on the difficulty of the topic, but students usually have enough time to complete them.
Last year we asked the students to prepare a report related to mathematics, searching for information about a mathematical topic of their choice. Among the most representative reports we can mention those on electrical circuits, the representation of images as matrices, and the Google search engine. The Internet helps students to find the information they need and the explanations they want about any such issue.

2.3 Activities and Useful Tools

ICT give students the possibility of simulating experiences and of setting up very different situations and comparing them, something that done by hand can be difficult, or at least tedious, in most cases. It allows them, for example, to understand the true scope of a problem or the effectiveness of an algorithm by analysing the results obtained when the hypotheses, initial conditions, etc. are varied. In addition, we can mention other possibilities that are useful for mathematics subjects:

- graphically representing the successive curves that show the different solutions of a problem;
- modifying, or suggesting that the student modify, the data of the proposed problem and observing the repercussions of these modifications on the solution;
- the power of conviction that comes from carrying out the calculations with the computer in front of the students;
- graphically representing, in seconds, all the solutions of different problems.

Thus, by using computers in the classroom it is possible to offer students an increasingly complete and deep education in mathematics, one that allows them to face their future professional activity and to adapt to the advances of science and technology.

Since the appearance of computer science, one of the main uses of the computer in relation to mathematical work has been numerical calculation. Nevertheless, the mathematical applications in engineering require not only numerical calculation but rather a mixture of numerical calculations and algebraic manipulations of mathematical formulas. Symbolic calculation is precisely the technology specialized in the automatic manipulation of formulas, vectors, matrices, etc., with numerical and/or symbolic elements. It is in this specialty where specialized software is especially useful, such as the symbolic calculation packages Mathematica [17], Maple [2] or Matlab [12]. These software packages have a syntax that is easy to learn, since their commands recall the mathematical operations they execute, so learning them is fast and intuitive. In addition, the help these packages offer is very complete and is illustrated with numerous examples. The University of Salamanca owns a "Campus" licence for some versions of this mathematical software.

The Mathematica package is a computer algebra system originally developed by Stephen Wolfram. It is also a powerful programming language that emulates multiple paradigms by means of term rewriting. It uses code packages (libraries) to extend its capabilities and to adapt the computation. The Mathematica programming language supports functional as well as procedural programming (although, in general, the functional style is more efficient). It is implemented in a variant of the object-oriented programming language C, but the bulk of its extensive library code
is in fact written in the Mathematica language itself, which can be used to extend the algebraic system. Unlike other languages such as Maple or Matlab, Mathematica tries to apply, at every moment, as many of the transformation rules it knows as possible, attempting to reach a stable point. At a basic level, Mathematica can be used to carry out numerical and symbolic calculations, as well as to produce graphical representations of functions. At more advanced levels it can also be used as a programming language of great versatility, since it has built-in functions and instructions that in traditional languages would require additional routines.

At the School of Industrial Engineering in Béjar we use both Mathematica and Matlab. The former has the great advantage that it can be used over the network through the WebMathematica environment; we are not aware of a comparable possibility for Matlab. On the other hand, Matlab together with Simulink is the mathematical software preferred by engineers for its versatility and usefulness in simulation processes and for the availability of toolboxes.
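To illustrate the kind of mixed symbolic and numerical work described above, the following minimal Mathematica sketch reproduces a typical classroom session; the parameterized matrix and the numerical values are our own illustrative choices, not taken from the actual course questionnaires:

  (* A parameterized symmetric matrix, as in the eigenvalue questionnaires *)
  A = {{2, a}, {a, 3}};

  Det[A]            (* symbolic determinant: 6 - a^2 *)
  Eigenvalues[A]    (* closed-form eigenvalues as functions of a *)

  (* Numerical work: fix the parameter and solve the linear system A x = b *)
  b = {1, 2};
  LinearSolve[A /. a -> 1., b]

  (* Graphical check: how the two eigenvalues vary with the parameter *)
  Plot[Evaluate[Eigenvalues[A]], {a, -3, 3}]

A few lines like these let students compare the symbolic answer with its numerical counterpart and see the result graphically, which is precisely the mixture of numerical and symbolic calculation discussed in this section.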
3 Linear Algebra Subject

An important task for engineers is to learn how everyday events and objects can be modelled with mathematical tools. Problem solving is considered an important part of their education: by interpreting such problems as a context within which they can apply mathematics in general, and linear algebra in particular, to aspects of the "real world", students can learn how to make practical use of their mathematical skills [10].

Industrial engineering studies are based on a solid initial technical education, complemented with specializations that, depending on the choice of the student, go deeper into the study of specific technical problems and problems in industrial organization. However, in the training of industrial engineers, emphasis tends to be placed on the generalist, polyvalent and integrative nature of the teaching, and on the development of skills to learn and solve the technical, organizational and management problems that are inherent to any industrial or services company. Owing to their multidisciplinary education, industrial engineers are able to develop a career in any technical industrial activity.

Within this type of training for engineers, it is of special interest to solve the different activities proposed in class and on the virtual campus. We ask the students to solve several questionnaires, one for each module of the course. Besides the questionnaires, students must work on an application of linear algebra to daily life, and to carry out this work they use the Internet as their best resource.

Our teaching programmes allow us not only to clarify, for our students, the processes they should follow in order to understand the mathematical concepts they are taught and to consult and take maximum advantage of the literature, but also to instil a solid grounding in how to work with mathematics. This, we hope, allows students to address new problems and research, and hence to appreciate that both intuition and the real world can become bottomless sources of new studies and results in the field of mathematics. Additionally, the students must attempt to solve as many problems as possible, understanding by problems not only those that
require an approach in which a rule or an algorithm is applied for their immediate resolution, but also those in which the students must gauge the appropriateness of different strategies in order to tackle the problem at hand and must choose a method to solve it.

Engineers working in industry must acquire and develop basic knowledge and skills that combine the ability to work in interdisciplinary teams, an ongoing interest in new knowledge gained through their own work, the ability to address the different problems of the company for which they work, and criteria for using the mathematics necessary to solve specific problems. One way to tackle the problems that arise for engineers in their job is to learn how to find solutions on the Internet or with colleagues, so the activity that we propose, a piece of work carried out in groups, involves learning algebra as well as other skills such as teamwork, information search, and the use of tools like Mathematica or other open-source packages that can be found on the Internet. As well as having knowledge of applied mathematics, engineers must be able to apply it from the perspective of a true engineering professional. Knowledge of mathematics helps engineers to systematize logical and analytical thinking, with the help of other disciplines that will in turn help them to structure their synthetic thinking and awaken their creativity.
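As a concrete illustration of such an application of linear algebra to daily life, the Google search engine chosen by some students for their reports (Sect. 2.2) reduces essentially to an eigenvector computation. The following Mathematica sketch is our own illustrative example; the tiny four-page web, its link matrix and the damping factor 0.85 are assumptions made here for the sake of exposition and are not taken from the students' work:

  (* Toy web of 4 pages; L[[i, j]] = 1 if page j links to page i *)
  L = {{0, 0, 1, 1},
       {1, 0, 0, 0},
       {1, 1, 0, 0},
       {0, 1, 0, 0}};

  (* Divide each column by its sum to obtain a column-stochastic matrix *)
  S = Transpose[Normalize[#, Total] & /@ Transpose[L]];

  (* "Google matrix" with damping factor d *)
  d = 0.85; n = Length[L];
  G = d*S + ((1 - d)/n)*ConstantArray[1, {n, n}];

  (* The ranking vector is the eigenvector of the dominant eigenvalue 1 *)
  r = First[Eigenvectors[N[G]]];
  r/Total[r]    (* normalized importance of the four pages *)

In this toy example page 1, which receives links from pages 3 and 4, obtains the highest rank; discussing why the dominant eigenvalue of a stochastic matrix equals 1 connects the students' report topic directly with the eigenvalue and eigenvector questions of the course.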
4 Conclusions

After using the eLearning platform and the activities described in this study, which complement traditional teaching, we carried out a survey among the students at the end of the course and found the following features of the online teaching and learning system [8]: students and teachers can access the application at any time, both for help and for all the information and documentation of the course, and students have all the resources, such as questionnaires, problems and practical exercises, available and accessible, together with the means to make optimum use of the subject. The flexibility of the platform fosters a cooperative, rather than an independent, environment [1], allowing students to interact and to organize the course as needed. The online platform also allows continuous contact with the students' work: they can solve the questionnaires, the exercises and the proposed application work, and use forums or chats to stay in contact with teachers and other students.

Acknowledgements. This work has been supported by Project FS/5-2009 of the "Fundación Memoria D. Samuel Solórzano Barruso" (University of Salamanca).
References

1. Aliev, R.A., Aliev, R.R.: Soft Computing and Its Applications. World Scientific, Singapore (2001)
2. Bauldry, W.C., Evans, B., Johnson, J.: Linear Algebra with Maple. John Wiley & Sons, Chichester (1995)
3. Brooks, J., Brooks, M.: In Search of Understanding: The Case for Constructivist Classrooms, revised edn. ASCD (1999)
4. Díaz Len, R., Hernández Encinas, A., Martín Vaquero, J., Queiruga Dios, A., Ruiz Visus, I.: A Collaborative Learning Case Study: Mathematics and the Use of Specific Software. In: V International Conference on Multimedia and Information Communication Technologies in Education (m-ICTE 2009) (2009)
5. García-Valcárcel Muñoz-Repiso, A., Tejedor Tejedor, F.J.: Current Developments in Technology-Assisted Education. In: Méndez-Vilas, A., Solano Martín, A., Mesa González, J.A., Mesa González, J. (eds.) FORMATEX (2006)
6. Hernández Encinas, L., Muñoz Masqué, J., Queiruga Dios, A.: Maple Implementation of the Chor-Rivest Cryptosystem. In: Alexandrov, V.N., van Albada, G.D., Sloot, P.M.A., Dongarra, J. (eds.) ICCS 2006. LNCS, vol. 3992, pp. 438–445. Springer, Heidelberg (2006)
7. Hernández Encinas, A., Queiruga Dios, A., Queiruga Dios, D.: Interacción, aprendizaje y enseñanza: trabajo colaborativo en el aula. In: Proceedings of the IX Congreso Internacional de Interacción Persona-Ordenador (2008)
8. Kearsley, G.: Online Help Systems: Design and Implementation. Ablex, Norwood (1998)
9. Kirschner, P.A.: Using Integrated Electronic Environments for Collaborative Teaching/Learning. Research Dialogue in Learning and Instruction 2(1), 1–10 (2001)
10. Lantz-Andersson, A., Linderoth, J., Säljö, R.: What's the Problem? Meaning Making and Learning to Do Mathematical Word Problems in the Context of Digital Tools. Instructional Science (2008)
11. Mason, R.: Models of Online Courses. ALN Magazine 2(2) (1998)
12. Moler, C.: Numerical Computing with MATLAB. SIAM, Philadelphia (2004)
13. Oberhelman, D.D.: Coming to Terms with Web 2.0. Reference Reviews 21(7), 5–6 (2007)
14. Queiruga Dios, A., Hernández Encinas, L., Espinosa García, J.: E-Learning: A Case Study of the Chor-Rivest Cryptosystem in Maple. In: Proceedings of the International Conference on Information Technologies (InfoTech 2007), Varna, vol. 1, pp. 107–114 (2007)
15. Queiruga Dios, A., Hernández Encinas, L., Queiruga, D.: Cryptography Adapted to the New European Area of Higher Education. In: Bubak, M., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds.) ICCS 2008, Part II. LNCS, vol. 5102, pp. 706–714. Springer, Heidelberg (2008)
16. Weiss, J., et al. (eds.): The International Handbook of Virtual Learning Environments, vol. 14. Springer, Heidelberg (2006)
17. Wolfram, S.: The Mathematica Book, 4th edn. Wolfram Media/Cambridge University Press (1999)
Cacoveanu, Silviu 280
Camal, Jean-Pierre 541
Capel Tuñón, Manuel I. 388
Cappiello, Cinzia 464
Carneiro, Davide 237
Chatzinikolaou, Nikolaos 164
Clarizia, Fabio 357
Cohn, Anthony G. 222
Collet, Pierre 151
Costa, Nuno 237
Cruz-Lara, Samuel 541
da Silva, Mario Freitas 421
de Araujo, Renata Mendes 311
del Rey, Ángel Martín 600
Dengel, Andreas 177
Desconnets, Jean-Christophe 76
de Souza Gimenes, Itana Maria 421
de Toledo, Maria Beatriz Felgar 421
dos Reis, Júlio Cesar 555
dos Santos, Wanderson Câmara 372
Duin, Robert P.W. 15
Dusch, Thomas 151
Fabbri, Sandra 90, 104
Garcia, Alessandro Fabricio 421
Ghodous, Parisa 151
Ginige, Jeewani Anupama 48
Greco, Luca 357
Guinaud, Jordan 541
Haas, Jan 177
Habich, Dirk 31, 60
Hähne, Ludwig 436
Hartel, Rita 451
Hascoët, Mountaz 404
Hernandes, Elis C. Montoro 90, 104
Hernández-Encinas, Ascensión 600
Huster, Jessica 193
Keim, Daniel 495
Kesharwani, Subodh 3
Kikuchi, Shinji 299
Kisilevich, Slava 495
Koch, Wolfgang 135
Kolell, André 48
Kuhn, Olivier 151
Kulesza, Uirá 372
Lamersdorf, Winfried 477
Lasry, Amit 495
Lehner, Wolfgang 31, 60
Lemnaru, Camelia 264, 280
Libourel, Thérèse 76
Lin, Joe Y.-C. 326
Lin, Yuan 76
Lopes, Expedito Carlos 340
Matera, Maristella 464
Matsumoto, Yasuhide 299
Maus, Heiko 177
Mendoza Morales, Luis E. 388
Messinger, Christian 451